reduction of maternal mortality will remain an important global developmental goal in the upcoming years . having a monetary approximation on the value of these losses may have important implications in the allotting financial and technical resources to reduce it .
reducing population - level rates of maternal morbidity and mortality is an important developmental goal for many countries of the world .
ecuador s maternal mortality ratio is 87 deaths per 100,000 live births . the united nations and governments around the world targeted a 75% reduction in maternal mortality by 2015 through two main strategies : ( i ) increasing antenatal care and ( ii ) training skilled birth attendants .
ecuador s specific strategy to meet this goal is the cone program ( spanish acronym for essential obstetric and neonatal care ) .
cone is implemented through the public network of health services and private partnerships for patient referrals .
maternal health specifically is delivered at the first level of care . according to the world health organization s ( who )
3 delays model , maternal mortality can be attributed to inadequate health care provision . in brief ,
the 3 delays framework groups the factors that lead to death from obstetric emergencies into the following delays : i ) delay in the woman s and family s decision to seek health care ; ii ) delay in reaching a medical facility , due to transportation barriers , poor roads or other issues ; and iii ) delay in receiving adequate and appropriate care .
universal coverage is undoubtedly a critical component to reducing maternal mortality and improving the general health of a nation . in ecuador , universal coverage
was first applied to maternal health care through the free maternity and child care law ( lmgai ) .
it was intended to improve maternal and child health care outcomes for ecuador s most vulnerable populations . however , even with the implementation of the lmgai , maternal mortality did not decrease , and coverage of maternal services remained partial .
for example , around 28% of deliveries nationwide took place without the presence of a skilled birth attendant , of which , 75% occurred in rural areas . in ecuador , little is known about how society values maternal health , particularly whether that societal value is greater than the resources expended for free access to services through the universal health care system .
one way to assess the societal value given to the prevention of maternal deaths in particular , is to ask society about their willingness to pay ( wtp ) to prevent these deaths .
this approach , called contingent valuation , is a survey - based method whereby respondents are asked to trade off mortality risk for wealth or income .
the resulting estimate , averaged across a population and divided by the risk reduction , represents the value that society places on preventing a statistical death .
this estimate can then be used as the measure in a cost - benefit analysis where the costs of the free access to maternal health services through the universal healthcare system can be compared to the benefits of preventing maternal deaths . the purpose of this research was to investigate ecuadorians wtp to prevent maternal death and disabilities due to complications of care during childbirth in the context of universal coverage .
to our knowledge , this is the first study to bring the question to individual citizens of how much they value universal coverage in relation to one of the most critical health problems in ecuador .
evidence from wtp studies on alternatives to prenatal care includes a comparison of general practitioner / midwife - led care versus obstetrician - led care , which found no significant differences between them and a wtp of 2,500 euros . in tanzania , a group of researchers investigated the willingness of patients and households to pay for rural district hospital services in the north - western region , finding significant differences between outpatient and in - patient services , ranging from 358 tsd for a one - day admission to 2,218 tsd for a hernia operation . in the united states
a wtp study was used to inform the united states preventive services task force on the frequency and wtp of pregnant women to receive a sonogram during their pregnancy .
their results indicate that most women want a sonogram during pregnancy , and many are willing to pay for the examination .
ecuador is a country of 16,144,000 people with a close to equal distribution of men and women , a life expectancy of 74 years for men and 79 years for women , and a total expenditure on health per capita of 1,040 usd .
under - five mortality is 57 per 1,000 live births , with congenital and other conditions as the main causes ; the main causes of adult deaths are ischaemic heart disease , stroke and lower respiratory diseases .
the results of this study have the potential to influence future assessments of the returns on investment in the ecuadorian healthcare infrastructure to prevent maternal mortality .
this study is part of a research endeavor on intentional and unintentional violence that compared methods and results between a sample in georgia , united states , and a sample in ecuador .
the parent project estimated the monetary value that individuals place on maternal mortality and child maltreatment preventive programs .
specifically , this paper presents the results of the contingent valuation to prevent maternal mortality and morbidity through universal coverage in ecuador .
data collection was conducted between february and june 2012 with a convenience sample of adult residents living in the two largest cities in ecuador , quito and guayaquil .
the only inclusion criterion for this study was being older than 18 years of age .
study participants were recruited in utility centers where ecuadorians pay their utility bills such as water , electricity and municipality services located at shopping malls , community centers , and small shops .
the study obtained university of san francisco de quito and the university of georgia s institutional review board approvals .
each survey took between 15 and 20 minutes , and participants received a $ 15 phone card to compensate for their time .
the study sample was randomly split to address separately the question on the value placed on reduction of maternal mortality and morbidity , between a hypothetical market that included a 50% reduction in the risk of maternal mortality from 100 to 50 per 100,000 , and a market that included a 50% reduction in the risk of maternal morbidity from 4,000 to 2,000 per 100,000 .
the survey included verbal protocols to establish the contingent market which included maternal and infant data on morbidity and mortality outcomes , specific to ecuador s epidemiologic profiles .
following research by corso et al 2011 and best practices to address denominator neglect , we provided a visual aid ( a laminated page of 100,000 dots ) to illustrate the hypothetical population , with dots highlighted in red to indicate those at risk of maternal death .
respondents were then asked whether they would pay for the program ( yes ) or would not pay for the program ( no ) , based on a randomly selected wtp value ( between $ 10 and $ 300 in us$ , ecuador s currency ) .
randomization was accomplished by placing the bid values in a bag and randomly drawing a paper with a bid value . if the response was no ,
a second question was posed using a wtp value that was $ 25 lower than the initial bid ; if the response was yes , the second question used a value $ 25 higher than the initial bid .
this process was done one time . after completing the contingent valuation task described above ,
respondents were asked to rate their confidence in their ability to pay this amount if the opportunity arose .
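the bidding procedure above maps each pair of yes / no responses to an interval known to contain the respondent s true wtp , which is the form of data the interval regression described below consumes . a minimal sketch of that mapping , assuming the $ 25 step and the zero lower bound stated in the text ( the function name and return convention are ours ) :

```python
def wtp_interval(initial_bid, first_yes, second_yes, step=25):
    """Map the two dichotomous responses to the interval (lower, upper)
    containing the respondent's true WTP; None marks an unknown upper
    bound (right-censored), and 0 is the imposed lower bound."""
    if first_yes:
        follow_up = initial_bid + step    # accepted: probe $25 higher
        if second_yes:
            return (follow_up, None)      # WTP >= bid + 25 (right-censored)
        return (initial_bid, follow_up)   # WTP in [bid, bid + 25]
    follow_up = initial_bid - step        # declined: probe $25 lower
    if second_yes:
        return (follow_up, initial_bid)   # WTP in [bid - 25, bid]
    return (0, follow_up)                 # lower bound set to zero per the text

# e.g. a respondent who declines $100 but accepts $75:
# wtp_interval(100, False, True) -> (75, 100)
```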
the study used a dichotomous approach to elicit wtp values in contingent valuation surveys , and tested respondents wtp based on payment mechanism . to do this ,
half of the sample was asked about wtp in annual taxes and the other half was asked about wtp in annual donations .
this split was intended to compare confidence in government provision of services ( payment through taxes ) versus provision by the private sector ( payment through donations ) .
we also collected data on participants confidence in their response , using a 5 - point likert scale ( very confident , confident , neutral , somewhat confident , not at all confident ) , and several socio - demographic characteristics that have been shown to influence wtp in other studies , including age , gender , race / ethnicity , years of education , marital status ( categorized as single , married , divorced or widowed ) , income , and self - reported general health ( measured using the sf-12 health - related quality of life question : in general , how would you rate your health ) . for the data analysis we assumed a wtp value yi* represented by the model yi* = xi'b + ei , where the error terms ei are normally distributed with mean zero and the xi represent individual respondent characteristics .
while yi* was not directly observed for respondent i , it is known to lie in the interval [ yi1 , yi2 ] based on the responses elicited in the contingent valuation survey , and the corresponding likelihood contribution is F( ( yi2 - xi'b ) / s ) - F( ( yi1 - xi'b ) / s ) , where F is the standard normal cumulative distribution function and s the standard deviation . when the upper bound is unknown ( right - censored data ) the likelihood contribution is 1 - F( ( yi1 - xi'b ) / s ) . when the lower bound is unknown ( left - censored data ) , we set a lower bound of zero , and the likelihood contribution is F( ( yi2 - xi'b ) / s ) - F( ( 0 - xi'b ) / s ) . the maximum likelihood function was estimated with interval regression using the intreg command in stata version 13 ( stata corp , college station , tx , usa ) .
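as an illustrative sketch ( not the actual stata estimation ) , the interval - censored log - likelihood can be written out directly ; all variable names here are ours , and right - censored rows are marked with an infinite upper bound :

```python
import numpy as np
from scipy.stats import norm

def interval_loglik(beta, sigma, X, y1, y2):
    """Total log-likelihood for interval-censored WTP responses,
    mirroring what Stata's intreg maximizes. y1/y2 are per-respondent
    interval bounds; y2 = np.inf marks a right-censored observation,
    and left-censored rows carry y1 = 0 (the imposed lower bound)."""
    xb = X @ beta
    z1 = (y1 - xb) / sigma
    ll = np.empty(len(y1))
    right = np.isinf(y2)
    ll[right] = norm.logsf(z1[right])   # log P(y* > y1) = log(1 - F(z1))
    ok = ~right                         # interval and left-censored rows
    ll[ok] = np.log(norm.cdf((y2[ok] - xb[ok]) / sigma) - norm.cdf(z1[ok]))
    return ll.sum()
```

maximizing this function over beta and sigma ( e.g. with scipy.optimize ) reproduces the interval regression fit in principle.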
the primary independent variable of interest for the model was the indicator of whether the respondent was asked to respond to a question about mortality or morbidity .
independent analyses were conducted for the mortality and morbidity samples for each independent variable . at the beginning of the model building process , we decided to include age and gender in the model regardless of significance or model fit . to build the model , we used a forward stepwise procedure with the potential independent variables , including a variable if it significantly improved model fit as measured by the akaike information criterion ( aic ) .
use of the aic allows for a trade - off between improvements in the goodness of fit and increasing complexity from adding additional independent variables .
we estimated the mean wtp value for both the mortality and morbidity samples using the final model .
bootstrapped standard errors ( 1,000 replications ) were used to calculate bias - corrected 95% confidence intervals ( cis ) on the mean wtp .
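the bootstrap step can be sketched as follows ; this simplified version resamples point values and reports percentile intervals , whereas the actual analysis re - fits the interval regression on each of the 1,000 replicates and applies the bias correction . all names here are ours :

```python
import numpy as np

def bootstrap_mean_ci(values, n_rep=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI on the mean WTP (a simplified stand-in
    for the bias-corrected and accelerated intervals in the paper)."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    # resample with replacement and record the mean of each replicate
    means = np.array([rng.choice(values, size=len(values)).mean()
                      for _ in range(n_rep)])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return values.mean(), (lo, hi)
```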
the study s goal was to reach a sample of 400 people ; of those approached , 99% agreed to participate .
those who declined to participate did so because of lack of time or lack of interest .
the final study sample consisted of a total of n=398 people , n=223 who were asked to respond to a risk reduction in maternal mortality risk , and n=175 who were asked to respond to a risk reduction in maternal morbidity risk .
a description of study participants is presented in table 1 ( social demographic description of study participants ) . health status was measured using the sf-12 question : in general , how would you rate your health . the question asking for respondents confidence in paying for the program used the following scale : 1=very confident , 2=somewhat confident , 3=not too confident , 4=not at all confident .
the responses were recoded into a single variable with the following values ( 0=codes 2 - 4 ; 1=code 1 ) . the study participants descriptive characteristics included age , gender , race , marital status , cumulative years of education , perceived health status and annual income .
there were no significant differences between the two sub - samples ( mortality and morbidity ) except for income , the only variable that differed statistically significantly between them .
fewer than 20% of participants responded no to both bid amounts presented . all participants agreed to the initial bid of $ 10 , 84% to the initial bid of $ 50 , and 46% to the initial bid of $ 100 , with acceptance decreasing thereafter .
table 2 presents the results of the interval regression for reducing the risk of maternal morbidity or mortality , including only those covariates that were significant in the model ; the indicator variable equals 1 for the mortality sample and 0 for the morbidity sample .
the model lr chi2 was 42.7 ( df=7 ; p<0.001 ) , with a cox - snell r2 of 0.102 . a scope test suggested a significant difference between the valuation of morbidity and mortality .
economic theory suggests that willingness - to - pay ( wtp ) should be significantly higher for reducing the risk of a more severe outcome ( death ) than of a less severe one ( morbidity ) .
our scope test determined that people were willing to pay more to reduce the risk of mortality than for morbidity .
overall , this model with covariates was statistically significantly better than the base model without any covariates .
however , the overall model fit was poor based on the r2 value of 0.102 , indicating that very little of the variance in the estimates was actually explained by the model . income was nevertheless statistically significant in the model ( p < 0.05 ) , consistent with the expectation that the higher the income , the higher the wtp .
table 3 shows the average amount participants were willing to pay to prevent maternal mortality in the context of universal coverage , from a model with no other covariates .
the unadjusted mean wtp for a reduction in the maternal morbidity risk was $ 135 ( 95% ci=$132 , $ 139 ) . ( table 3 : estimated mean wtp for reducing maternal mortality risk ; all wtp values are expressed in us dollars , ecuador s currency ; confidence intervals are bias - corrected and accelerated . )
value of statistical life ( vsl ) is a summary measure of the willingness - to - pay for a mortality risk reduction , and a key input into the calculation of the benefits of policies or projects that affect mortality risk or excess death .
the mortality benefits of a policy are computed as the expected number of deaths avoided by the policy change multiplied by the vsl ; the vsl itself is defined as the rate at which people are prepared to trade off income for risk reduction .
translated into value of statistical life , the wtp estimates produced in this study suggest that participants valued the prevention of one statistical maternal death at usd $ 352,000 .
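the vsl arithmetic implied above can be checked directly : the mean annual wtp divided by the individual risk reduction purchased , using the figures stated in the text :

```python
# Mean annual WTP to halve maternal mortality risk (from the text)
mean_wtp = 176.0
# Risk falls from 100 to 50 deaths per 100,000
risk_reduction = 50 / 100_000
# VSL = WTP per person / risk reduction per person
vsl = mean_wtp / risk_reduction
print(round(vsl))  # 352000 -> USD $352,000 per statistical maternal death
```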
there is no need to convert this finding to international dollars since ecuador dollarized the economy in 1999 .
in this study , respondents were willing to pay a considerable amount of money , us$176 per year , to prevent maternal mortality , or $ 132 to prevent maternal morbidity risk .
the scope test performed confirmed people s willingness to pay more to reduce the risk of mortality than for morbidity .
these results may be related to a national campaign to reduce maternal mortality by the ministry of health ( moh ) and to the global pressure to meet the united nations millennium development goals . despite the moh s push for universal coverage , out - of - pocket per person health expenditure
( as a share of private expenditure on health ) in ecuador stood at us$83.68 per month in 2011 , according to the world bank , representing close to 20% of ecuadorians disposable income .
the total of national expenditures in maternal health care services is difficult to estimate given the multiple programs , levels of services and delivery partners .
data from the free maternity and child care law ( lmgai ) program indicate that the central government invested close to 29 million dollars a year to provide the following services : prenatal check - ups , care for normal and at - risk births , cesarean sections , post - partum care , obstetric emergencies , intra - family violence prevention , laboratory services and medicines , in addition to prevention programs in family planning methods , and hiv . in 2012 ,
the lmgai reported covering the costs of 315,000 births , with an average cost of close to usd $ 100 per birth . despite the monies invested in the provision of maternal and infant health care services , these are not enough to prevent maternal mortality . among other efforts , the moh must strengthen the directorate of health quality services to improve the delivery of maternal health services and reduce the number of maternal deaths . to do so ,
it needs to prioritize the re - assignment of functions of community - centered committees to study maternal deaths in each territorial zone of the country .
the results of this study suggest that the costs of maternal care do not outweigh the benefit of prevention , and that ecuadorians are willing to pay a significant amount to reduce the risk of maternal mortality .
first , the data were collected from a convenience sample in two large urban cities , which may not be representative of the total population of ecuador or of the rural and ethnic diversity of the country . however ,
this sample does include a majority of lower - ses people and covers the two largest cities of the country , which represent close to 40% of the national population ; the wtp estimates can therefore be considered a lower - bound estimate of society s true wtp to prevent maternal mortality .
the second limitation is common to many cv studies : the initial bid can introduce bias because of the hypothetical nature of the questioning .
however , these biases were partially controlled by randomizing participants to the initial bid values and using dichotomous choice responses . with such a small sample , we were not able to test the validity of responses by varying the magnitude of the risk reduction itself . in sum , as financial resources become more restricted and public health threats are at an all - time high , economic evaluations can bring information and analysis to help decision makers make the necessary comparisons and reach informed decisions .
the results from this study put a monetary value on an intangible loss , a mother , through the estimate of her value of statistical life , and also provide an estimate of how much an average citizen values preventing maternal mortality .
this estimate can have a potential implication on how low - income countries , such as ecuador , collect monies from citizens to pay for public health and prevention programs .
this study suggests that everyone has a role to play in preventing maternal mortality , and that the economic burden of prevention strategies and adequate health services can be shared among all ecuadorians .
what are the implications of the study findings for other countries in the region ?
this is the first study to bring the question to individual citizens of how much they value universal coverage in relation to one of the most critical health problems in ecuador .
study respondents were willing to pay a considerable amount of money , us$176 per year , to prevent maternal mortality , or $ 132 to prevent maternal morbidity risk .
although the ecuadorian ministry of health offers free health services through a public network of health care providers , the facilities are difficult to reach and out - of - pocket payments still continue to constitute a significant percentage of health care expenditures .
study shows that ecuadorians may be willing to pay for a stronger system of universal coverage if , at a minimum , maternal deaths are prevented .
many xenobiotics produce hepatic injury due to their metabolism in the liver to highly reactive electrophilic intermediates , which form covalent conjugates with nucleophilic cellular constituents .
this presentation describes studies indicating that the production of chemically reactive metabolites by pulmonary metabolism of xenobiotics can also play a fundamental role in the pathogenesis of chemically induced lung disease .
GLENDIVE, Mont. (AP) — Truckloads of drinking water were being shipped to the eastern Montana city of Glendive on Monday after traces of a major oil spill along the Yellowstone River were detected in public water supplies, raising concerns about a potential health risk.
Preliminary tests at the city's water treatment plant indicated that at least some oil got into a water supply intake along the river, according to state and federal officials. About 6,000 people are served by the intake, Glendive Mayor Jerry Jimison said.
Officials stressed that they were bringing in the shipments of drinking water as a precaution and did not know yet whether there was any health threat. Results of further tests to determine the scope of the danger were expected in coming days.
Up to 50,000 gallons of oil spilled in the pipeline accident Saturday. Cleanup crews trying to recover the spilled crude were hampered by ice that covered most of the river, making it hard to find the oil.
Initial tests of water supplies Saturday and Sunday revealed no evidence of oil. But by late Sunday, residents began complaining that the water coming from their taps had an unusual odor, officials said.
An advisory against ingesting water from the city's treatment plant was issued late Monday. After hearing about it, Glendive resident Ed Miller, 67, picked up an extra gallon of water from the fast-dwindling supplies at a convenience store.
Miller hadn't noticed any odors from his own tap water. But his neighbors had, and Miller said he wouldn't be drinking any city water until the advisory was lifted.
Glendive City Councilman Gerald Reichert said he first noticed an odor in the water at his house Sunday night. He said it smelled like diesel fuel.
Officials with Bridger Pipeline LLC of Casper, Wyoming, have said the break in the 12-inch steel pipe happened in an area about 5 miles upstream from Glendive, an agricultural community in east-central Montana near the North Dakota border.
Bridger spokesman Bill Salvin said Monday that the company is confident that no more than 1,200 barrels — or roughly 50,000 gallons — of oil spilled during the hour-long breach.
An oil sheen was seen near Sidney, almost 60 river miles downstream from Glendive, said Paul Peronard, the on-scene coordinator for the U.S. Environmental Protection Agency.
Booms were being placed in areas of open water to try and trap oil. Near Crane, which is about 30 miles downstream from the spill, crews were chopping holes into the ice in hopes that they will be able to vacuum up crude as it comes down the river in coming days.
"These are horrible working conditions to try to recover oil," Peronard said Monday. "Normally you at least see it, but you can't see it, you can't smell it. ... We're going to have to hunt and peck through ice to get it out," Peronard said.
Bridger Pipeline crews were still working Monday to determine exactly where the breach occurred.
If it happened on the bank, some of the oil may be trapped in the soil near the river. If it was beneath the river, "then it's all in the river," Peronard said.
Montana Gov. Steve Bullock toured the spill site Monday afternoon. He said he expected Bridger to continue its cleanup efforts "until it's cleaned up to our standards."
"The water's a concern," Bullock said. "I expect Bridger to continue and provide all the resources needed."
The Poplar Pipeline system runs from Canada to Baker, Montana, and carries crude oil from the Bakken oil producing region in Montana and North Dakota. It remained shut down Monday while crews planned to pump out any remaining oil from the section of the pipeline where the breach occurred.
The pipeline receives oil at the Poplar Station in Roosevelt County, Fisher and Richey stations in Richland County, and at Glendive in Dawson County, all in Montana. It was last inspected in 2012, Salvin said, and is at least 8 feet below the Yellowstone River bed where it crosses the river near Glendive.
Bridger Pipeline, a subsidiary of True Cos., also owns and operates the Four Bears Pipeline System in North Dakota along with the Parshall Gathering System and the Powder River System in Wyoming, according to the company's website.
Bridger Pipeline Vice President Tad True said the company apologizes for the spill and has taken responsibility for the cleanup.
The company will not be able to restart the pipeline until it receives approval from the U.S. Department of Transportation's Pipeline and Hazardous Materials Safety Administration. Inspectors from the federal agency were at the spill site and also planned to inspect Bridger Pipeline's control room in Casper, Wyoming, to gather more information, PHMSA spokeswoman Susan Lagana said.

[Photo: Crews work to contain an oil spill from Bridger Pipeline's broken pipeline near Glendive, Mont., in this aerial view on Monday. Larry Mayer / The Billings (Mont.) Gazette]
Fresh water being trucked into Glendive, Mont., after almost 50,000 gallons of Bakken crude oil spills into Yellowstone River
GLENDIVE, Mont. -- Truckloads of water are being brought into Glendive after a spill of close to 1,200 barrels of oil, roughly 50,000 gallons, has officials concerned about the town’s water supply.
Montana officials have notified Sidney, Mont., and Williston, N.D., both downstream from the leak, and municipal water systems there are being tested for contamination, too, according to the Montana Department of Environmental Quality.
The Environmental Protection Agency said in a statement Monday evening that elevated levels of hydrocarbons have been found in Glendive’s water supply.
“This is a significant spill, and the coordination of various response activities at the spill site, the city of Glendive, and downstream locations will be a priority over the next several days,” the EPA said in its statement.
Bridger Pipeline’s Poplar line leaked oil Saturday near where it crosses the Yellowstone River, the source of Glendive’s water supply. The leak occurred roughly 9 miles south -- upstream -- from the town of about 5,000 along Interstate 94 in eastern Montana.
As crews worked to find the cause of the leak, officials closed water intakes in the river and brought semi-loads of fresh water here Monday evening after 20 to 30 residents reported a smell or taste to their drinking water.
“We don’t know 100 percent yet that there’s contamination on the system but we are going to put out warnings to the residents of Glendive that they probably shouldn’t be drinking the water until we get definite results back,” Mayor Jerry Jimison said earlier Monday.
As of about 5 p.m. MST Monday, as the EPA reported elevated hydrocarbons in initial test results, responders were placing containment structures across the Yellowstone River at Sidney, Mont., about 30 miles downstream from the leak, according to the EPA.
More specific test results are expected in the next couple days.
After passing Sidney, the river enters North Dakota and shortly thereafter joins the Missouri River, near Williston. The state of North Dakota has dispatched an official to watch for signs of the oil on its side of the border, according to the Montana Department of Environmental Quality.
Governor declares emergency
When Gerald Reichert first heard reports of smelly drinking water Sunday, he thought it could all be psychological, residents nervous after hearing about an oil spill.
Then he smelled it himself in his Glendive home.
“Suddenly at our house there was a definite smell. It was a diesel smell,” Reichert, a member of the Glendive City Council, said Monday afternoon.
Reichert was one city official getting calls Sunday from residents who smelled something funny in the water.
Bridger Pipeline planned to continue bringing in a semi-load of water each day until the system is clear, Jimison said. Officials from EPA, Bridger, the state of Montana and the city of Glendive are developing a plan to flush the water distribution system, according to the EPA.
The U.S. Fish and Wildlife Service, the U.S. Department of Transportation and the U.S. Coast Guard are also responding.
Officials said earlier Monday that contamination was unlikely because the water intake is 14 feet below the water surface and the oil tends to float.
Montana Gov. Steve Bullock visited the site Monday afternoon for a briefing, Jimison said. The governor’s office issued an executive order declaring a state of emergency in Dawson and Richland Counties. The river’s frozen state hinders response efforts, Bullock said in the order.
The leak began at 10 a.m. Saturday and Bridger shut down the line by 11 a.m., according to a company statement. The spill has wound up being on the higher end of the company’s initial estimate of 300 to 1,200 barrels.
“Our primary concern is to minimize the environmental impact of the release and keep our responders safe as we clean up from this unfortunate incident,” Tad True, vice president of Bridger Pipeline, said in a statement.
A spokesman for Bridger didn’t return a call for comment. Bridger Pipeline is under the umbrella of the True Companies, which also owns Belle Fourche Pipeline Co., Black Hills Trucking and other energy businesses.
The spill is the second in the river in recent years. In 2011, Exxon Mobil Corp.’s 40,000 barrel-per-day Silvertip pipeline in Montana ruptured underneath the river, releasing more than 1,000 barrels of crude and costing the company about $135 million to clean up.
The price of Bakken crude was little changed on the Martin Luther King, Jr. holiday despite the shutdown of the Poplar line, which carries Bakken crude to Baker, Mont. Bakken crude narrowed slightly to $5.40 per barrel below the West Texas Intermediate benchmark, according to Shorcan Energy brokers, compared with a settlement of $5.80 under the benchmark on Friday.
One trader in Calgary said he did not expect the outage to have a significant impact on differentials as the pipeline is not a major conduit for crude in the area.
Reuters contributed to this report.

Ten groundwater wells have also been sampled. The wells were selected due to their shallow depth and vicinity to the break. Results for all ten wells sampled were non-detect for VOCs.
Permanent monitoring equipment was installed at the water plant. This equipment detects VOCs and other oil constituents entering the system, providing continuous data to operators, and sounding an alarm that will trigger a shutdown of the treatment plant if benzene levels reach 2 ppb (less than half of the benzene maximum contaminant level, or MCL). On March 14, a higher than normal level of VOCs was detected by the equipment during the ice breakup. The situation was planned for and the plant shut down to preserve the clean water in the storage system. A water conservation request was made to the City of Glendive for the weekend of March 14-15. An aeration system was put in place and the plant began treating water again on March 15. The conservation request was lifted on March 16 and there have been no detections since.
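The shutdown rule described above amounts to a simple threshold check on a continuous sensor feed. A minimal sketch of that logic, assuming nothing about the plant's actual control software (the function and names here are hypothetical; the 2 ppb trigger and the 5 ppb federal MCL for benzene are as reported):

```python
# Illustrative sketch only -- not the Glendive plant's actual control code.
# The 2 ppb benzene trigger comes from the article; the federal MCL for
# benzene is 5 ppb, so the alarm fires at under half the regulatory limit.

BENZENE_SHUTDOWN_PPB = 2.0  # alarm threshold from the article

def action_for_reading(benzene_ppb: float) -> str:
    """Decide the plant action for one benzene reading from the intake monitor."""
    if benzene_ppb >= BENZENE_SHUTDOWN_PPB:
        # Shut down to preserve the clean water already in the storage system.
        return "shutdown"
    return "continue"
```

Under this rule, a routine reading stays in normal operation, while a spike like the one detected during the March 14 ice breakup would take the plant offline before contaminated water reached storage.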
Water samples have been taken along the river at the site of the release and select points downstream. Sediment and surface water sampling at the irrigation intake located approximately 17 miles downstream of Glendive has been conducted at the request of the Lower Yellowstone Irrigation District. There has been no visual or other evidence of crude oil at the intake's irrigation structures. There was some light oil staining at the I-94 bridge that was cleaned up. Additional sediment sampling will take place.
Crews recovered approximately 490 barrels of oil from the pipeline after it was shut down, and about 60 barrels from the river.
On April 10, 2015 at 7:00 a.m., the Unified Command dissolved and operations moved into a more traditional response under the authority of the Montana Water Quality Act (WQA) and the Comprehensive Environmental Cleanup and Responsibility Act (CECRA). Bridger and its consultants will coordinate directly with DEQ regarding work plans, reports, additional cleanup, other remediation work, confirmation sampling and reclamation work.
The pipeline was 12 inches in diameter and one-half-inch thick in the area where it crossed under the Yellowstone River just upstream from Glendive. A breach in the line occurred between two block valves approximately 6,800 feet apart where the line crosses the river. The point of the release was determined to be under the river. The line was carrying Bakken crude oil at the time of the release. The damaged pipeline was pulled from the river and sent to a lab in Oklahoma for metallurgical testing.
April 13, 2015
Montana Fish, Wildlife and Parks has lifted its consumption advisory for fish caught on the Yellowstone River near the spill. After the ice left the river in March, FWP fisheries biologists were able to catch 213 fish representing species known to live in the river between the spill site and the North Dakota border. Laboratory tests of those fish showed no detectable levels of petroleum contamination in the edible muscle tissues.

The following updates came from information provided by the Unified Command.

April 10, 2015

At 7:00 a.m. today, the Unified Command was dissolved. Work will continue with oversight from DEQ under the authority of the Montana Water Quality Act and the Comprehensive Environmental Cleanup and Responsibility Act. Bridger and its consultants will coordinate with DEQ on work plans, reports, additional cleanup and other work regarding remediation, confirmation sampling and reclamation work.

April 8, 2015

The Unified Command has been notified that an eight-foot section of damaged pipeline has been successfully withdrawn from the Yellowstone River. The linked image shows a break along a section weld line. Go to the Poplar Pipeline Response website for more information.

April 3, 2015

The Montana Department of Environmental Quality has received Bridger Pipeline's work plan to remove the exposed section of damaged pipeline in the Yellowstone River. The plan calls for dive vessels and support craft to be launched on Monday and Tuesday, April 6 and 7, while divers locate and survey the pipeline. A small, temporary hydro-dam diversion wall will allow for some protection against river flow as survey and withdrawal work is underway. "Pigs" remain in the broken section of the line and were used to purge remaining oil toward the gate valves where it was recovered. These devices may be pushed out of the pipeline after sectioning the line, or withdrawn manually.
A 10 to 15-foot section of the line containing the rupture is expected to be removed to the north shore on Wednesday, April 8. Questions about the pipe removal should be addressed to Bridger.

March 27, 2015

The Montana Department of Environmental Quality has issued comments on the Poplar Pipeline Response for Sediment and Co-located Water Sampling Work Plan and Schedule. For the Comments document, click here.

March 25, 2015

The Unified Command has transitioned from emergency response and oil recovery under the Incident Command System to long-term remediation and monitoring. The EPA will issue its final report and DEQ will assume the role of lead agency. Cleanup, monitoring and data collection will continue. The Montana DNRC and FWP will continue to serve in support roles for the State. The EPA will remain available for consultation and assistance as necessary. The last of the ice in the Yellowstone River melted during the week of March 13. Extensive reconnaissance by boat and air is being conducted from the spill site to the Montana/North Dakota border. Staining and sheen have been noted, but no recoverable oil has been found. For the full press release, click here.

March 16, 2015

The City of Glendive has lifted its "conserve water" advisory for residents on the City's water system. The water treatment plant is up and running and the water being produced is clean and safe to drink.

March 15, 2015

The Glendive Water Treatment plant is back online and making water. Workers completed the aeration system late this morning and it is working as expected. Readings of Volatile Organic Compounds (VOCs) have been zero throughout the morning. The aeration system will allow the plant to continue operations as the last of the ice melts on the river and releases whatever amount of oil remains from the Poplar Pipeline spill of January 17.
Officials at the Glendive Water Department are asking residents to continue to conserve water until Monday so that the water plant can refill its reserve tanks. The level in the reserve tanks dropped while the system was shut down for the last 30 hours. Workers from Dawson County provided more than 750 gallons of bottled water over the last two days to local residents, the prison and the local hospital. It is important to point out that at no time did any contamination make its way into the Glendive water system. The system was shut down because of higher than normal levels of VOCs at the intake for the system. The new monitoring system installed after the January 17 breach of the Poplar Pipeline worked as designed and allowed workers to keep the city's water supply safe. The water in the system remains clean and safe to drink.

March 14, 2015 - City of Glendive Water Conservation Advisory

The City of Glendive is asking residents on the city water system to conserve water this weekend. On Saturday morning, due to the ice break-up in the river, the city water plant detected a higher than normal level of Volatile Organic Compounds (VOCs) at the intake. This situation was planned for and the plant is currently shut down to preserve the clean water in the system. Currently, all of the water in the city's water system is clean and safe to drink and water is available for emergency fire response. The main concern is that if current conditions remain and no action is taken, the water plant may need to be shut down for a longer period of time. The Unified Command is taking actions to ensure the residents of the city of Glendive continue to have an uninterrupted supply of clean drinking water: Installing an aeration system at the water plant to remove traces of VOCs from the water. That will be completed as soon as possible, but no later than Sunday, and will enable the plant to produce clean water regardless of the conditions on the river.
Asking residents to conserve water during the weekend of March 14-15. Providing bottled water to assist with water conservation for anyone who needs an extra supply of drinking water. Bottled water can be picked up from 1-5 p.m. Saturday at the Dawson County Emergency Operations Center. Testing the water at the intake and distribution point to ensure the city's water supply is protected.

March 13, 2015

Warm weather has made on-ice recovery of oil unsafe. Members of the Unified Command are monitoring conditions and responding to reports of oil on the river. Spill responders will resume assessing the impact of the spill on the river once the ice is clear. In the meantime, local residents can report oil sightings or odor complaints by calling the Poplar Response Hotline at 888-959-8351.

A joint press release notes additional resources have been deployed to Glendive in anticipation of the ice break-up. Air monitoring and water sampling equipment will be on hand for quick response. Specialized monitoring equipment has been installed at the Glendive Water Treatment Plant that will detect crude oil contaminants at the intake. Should Volatile Organic Compounds (VOCs) be detected, the intakes will automatically close to prevent contaminants from entering the system.

March 6, 2015
The U.S. Department of Transportation's Pipeline and Hazardous Materials Safety Administration has approved Bridger Pipeline's request to reopen a 49-mile portion of the Poplar System beyond the rupture point that occurred on January 17 under the Yellowstone River. The approved section does not cross any major waterways and will be restarted under reduced operating pressure and enhanced surveillance.
March 5, 2015
DEQ continues to have a presence in Glendive as the Poplar Pipeline spill response is in its interim phase.
Weather conditions are still hampering the recovery of oil. Surveillance continues and no new visible impacts downstream have been observed.
Information is being developed and disseminated to landowners along the river with instructions on who to contact and what to do if any oil is spotted.
Bridger Pipeline, LLC has responded to DEQ's Notice of Potential Liability Letter under the Comprehensive Environmental Cleanup and Responsibility Act and the Water Quality Act, sent February 12.
Bridger Pipeline, LLC Response to DEQ Notice of Potential Liability Letter
DEQ Notice of Potential Liability Letter to Bridger Pipeline, LLC
February 26, 2015
The long-term monitoring equipment, or Total VOC Analyzer, is now online at the Glendive Water Treatment Plant. The daily sampling of water at the intake and output will be discontinued today. The EPA mobile lab will demobilize on February 27.
February 23, 2015
On Friday, Montana Fish, Wildlife & Parks updated its fish consumption advisory. Detectable levels of petroleum were found in tests of fish pulled from the Yellowstone River downstream from the Poplar Pipeline break near Glendive. See the full press release here: http://fwp.mt.gov/news/newsReleases/fishing/nr_0887.html
February 20, 2015
The results for the ten groundwater wells sampled on February 11 and 12 came back non-detect for volatile organic compounds. These wells were selected due to their shallow depth and vicinity to the pipeline break.
February 18, 2015
Daily updates coming from the Unified Command continue to be much the same. Reconnaissance has been conducted from the incident location to the confluence of the Yellowstone and Missouri Rivers. On February 17, there were no visible impacts observed. A flyover from Glendive to Savage noted some open water and no sheen. Oil recovery continues to be hampered by weather conditions.
February 13, 2015
Cold temperatures have drastically slowed oil recovery again. However, 3.5 barrels of oil have been recovered from the river this week. Booms have been deployed on the ice to help capture and collect oil on the surface.
February 12, 2015
Five shallow groundwater wells in the vicinity of the break (from the pipeline to Glendive) were sampled yesterday and came back non-detect for crude oil or crude oil constituents. An additional four to five will be sampled today.
The long-term monitoring equipment for the water treatment plant has arrived and is being installed today. It should be operational in the coming days.
Oil recovery has been limited due to weather and ice conditions. A small amount of oil was recovered Monday and Tuesday. Estimates of how much was recovered are still being calculated.
Wildlife agencies are still waiting for the fish tissue sample results to come back. There have been no reports of oiled wildlife.
February 9, 2015
Response crews are back out on the ice today recovering oil from the Yellowstone River. Over the weekend, weather conditions became more favorable and crews were able to use oil spill boom to keep oil on the ice from spreading, and cut slots into the ice in some places. Confirmed oil recovered from both the pipeline and the river stands at about 23,000 gallons. Updated recovery numbers will be provided as they are available.
Water sampling continues daily at the Glendive Water Treatment Plant. All samples continue to confirm the water in Glendive is safe to drink. Environmental specialists continue their sampling efforts at various locations along the Yellowstone River.
February 6, 2015
A spill response crew continues to monitor the river every day. With warmer weather in the coming days, there is a chance that more oil may be recovered.
Unified Command has signed a groundwater sampling plan and sampling will begin next week.
We continue to encourage people to call the hotline at 888-959-8351 if they have questions or concerns regarding the spill and response.
February 4, 2015
The Unified Command has received comments asking why individual home water sampling is not taking place. The Glendive public water supply is a regulated entity and two samples per day are taken: one before the water goes through the treatment process and one after. These are the compliance points, and we now have a consistent data set showing no elevated levels of contaminants. (See Water Treatment Plant Sampling Results in the Maps and Documents tab.) The public water supply is in compliance and safe.
Residents need to make sure they have flushed their systems properly. See: Steps Glendive residents should take to flush their systems.

If anyone still has questions, call the hotline at 888-959-8351.
If anyone detects odors in their water, call the hotline. The hotline is manned from 8:00 a.m. to 8:00 p.m. If you reach the hotline after hours, leave a message and someone will call you back.
February 2, 2015
Oil recovery activities were suspended over the weekend due to weather conditions. The current oil recovery estimate is 548 barrels (23,016 gallons) from both the pipeline and the river. Water samples are still being taken at the Glendive Water Treatment Plant and continue to show no detection of contaminants.
Response teams are staged in Glendive to quickly recover oil should it be observed or reported. Detailed incident plans for both Phase II (Interim) and Phase III of the spill response can be found here.
January 31, 2015
January 30, 2015
The Unified Command is shifting to an interim phase now that all of the oil remaining in the pipeline has been recovered. On-ice recovery of oil from the Yellowstone River will continue as conditions allow.
During the interim phase, workers will continue water sampling at the Glendive Water Treatment Plant and environmental specialists will take water samples along the river at the site of the release and at select points downstream. Additional environmental sampling will also be conducted to determine the extent of the spill's environmental impact and to guide future response plans once the ice breaks up.
Bridger Pipeline will have a team of oil spill response specialists stationed in Glendive to support environmental monitoring and to collect any residual, recoverable oil from the January 17 release. Both the U.S. Environmental Protection Agency and the Montana Department of Environmental Quality will have on-scene coordinators to oversee response efforts.
To date, response crews have collected 536.6 barrels of oil (about 22,537 gallons) out of more than 1,200 barrels that could have been released. Most of the oil recovery was from within the pipeline after it was shut down. Workers have been collecting about 400 gallons of oil a day from their on-ice recovery efforts.
Air monitoring concluded in the city of Glendive on January 28. Seven days of continuous monitoring showed no elevated levels of hydrocarbon components in the air. All drinking water sampling continues to show the water in Glendive is safe to drink. Scientists from the National Oceanic and Atmospheric Administration (NOAA) Emergency Response Division analyzed sampling data and their consensus was that the levels of contaminants were "well below public health concern thresholds and may in fact be near background levels."
At the height of the response, more than 125 workers from Bridger Pipeline, the State of Montana, and the Federal government were responding to the pipeline breach. During the interim operations, approximately 20 people will be working daily on environmental monitoring, oil recovery, and claims processing.
January 29, 2015
Crews will continue oil recovery on the Yellowstone River today. Safety is a top priority and river conditions are constantly monitored; however, temperatures have cooled and river conditions have stabilized.
January 28, 2015
Oil Recovery Update: 490 barrels (20,580 gallons) recovered from pipeline, 41 barrels (1,722 gallons) recovered from river, 694 barrels (29,148 gallons) un-recovered from river.
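The paired barrel and gallon figures quoted throughout these updates follow the standard conversion of 42 US gallons per oil barrel. A quick sketch of that arithmetic (the helper function is just for illustration, not part of any official reporting tool):

```python
# Barrels-to-gallons conversion used throughout these updates
# (1 oil barrel = 42 US gallons).
GALLONS_PER_BARREL = 42

def barrels_to_gallons(barrels: float) -> float:
    return barrels * GALLONS_PER_BARREL

# The January 28 figures check out against this conversion:
assert barrels_to_gallons(490) == 20580  # recovered from pipeline
assert barrels_to_gallons(41) == 1722    # recovered from river
assert barrels_to_gallons(694) == 29148  # un-recovered from river
```

The same conversion reproduces the other totals in the log, such as the 536.6 barrels (about 22,537 gallons) reported on January 30.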
High resolution sonar did not reveal any new information on the pipeline; however, it did confirm what the lower resolution sonar identified over the last couple of days.
The river flow rate and turbidity under the ice are too high for any camera results.
The warm weather has created difficult river ice conditions hampering recovery and investigation at the pipeline crossing.
Planned Pipeline Activities:
The pipeline crew is now working on a camera that can be inserted into the pipe to attempt to capture information on the pipeline breach from inside the pipeline. First attempts will be to insert the camera from the northwest side of the river.
January 27, 2015
Oil Recovery Update: Another 10 barrels of oil were recovered from the river today, bringing the total recovered to 518 barrels (21,756 gallons).
January 26, 2015
Oil Recovery Update: 490 barrels (20,580 gallons) recovered from pipeline, 28 barrels (1,176 gallons) recovered from river, 707 barrels (29,694 gallons) un-recovered from river.
EPA Pollution Report: River conditions are hampering access to the spilled oil. There is extensive ice cover on the Yellowstone River, but the ice is not sound enough in many locations to conduct response efforts. Significant thawing has occurred in the past two days and is increasing the risks associated with oil assessment and recovery efforts.
Current reconnaissance indicates that there is not much oil remaining in the operational theater. More than 160 man-hours per day and an extensive array of equipment are being used to recover oil within the first three miles downstream of the pipeline break. Given the unsafe working conditions and the limited oil recovered, as discussed earlier in the pollution report, the response is rapidly approaching the point of diminishing returns. This is especially relevant given evidence of the physical damage caused to the river by activities of the response crews and equipment. Representatives from Montana Department of Natural Resources & Conservation and Fish, Wildlife & Parks have expressed concern that this is likely to be the most significant source of damages to the riverine system.
It has been brought to the attention of the Unified Command by the U.S. Fish & Wildlife Service that much of the river corridor hosts nesting sites used by bald eagles; golden eagles; least terns; piping plovers; and great blue herons that are protected, threatened, or endangered. Eagles have already returned to the area and it is expected that they will begin nesting as early as February. Other significant species include the endangered pallid sturgeon and the spiny softshell turtle. The UC is weighing the impacts of the response (airboats, helicopters, and vehicular and foot traffic) along the shoreline versus the limited oil recovery and limited product remaining.
Water Sampling Update: Drinking water sampling in the community of Glendive has been conducted daily since Montana environmental officials declared the water safe to drink on January 22.
Environmental specialists have sampled water at the intake of the water treatment plant, as well as treated water from the plant, each day. All results have been well within drinking water standards.
Additionally, 24 surface water samples have been taken on the river, including at the spill site, and all of them have been within standards.
Workers took potable water samples from fire hydrants, residences, and public buildings and all samples have been below Maximum Contaminant Levels. Water is tested for Volatile Organic Compounds, including benzene, and other components of crude oil. Based on these samples, Glendive residents should feel comfortable continuing to use their water as normal.
Sampling of water at the intake and treated water at the water treatment plant will continue. Equipment is expected to be put in place to detect and alert water treatment plant operators of anything abnormal entering the system long-term.
Unified Command Update: January 24, 2015
Air Monitoring Update: The air in the community of Glendive has been monitored 24 hours a day since Sunday afternoon, January 18. Monitoring has been for benzene, Volatile Organic Compounds, and other compounds associated with crude oil. None of these compounds have been detected in the air anywhere in the community. Based on these results, the residents of Glendive should feel comfortable with their normal activities, including allowing their children to play outside. Any questions about air monitoring can be referred to the Poplar Response Hotline 1-888-959-8351.
Pipeline Update: A sonar survey of the Poplar Pipeline where it crosses the Yellowstone River near Glendive shows that the pipeline is exposed on the river bed for approximately 100-110 feet near the site of the breach. At one point, the bottom of the river bed is about one foot below the pipeline. Bridger Pipeline last confirmed the depth of the pipeline under the river in September 2011. At that time, the pipeline was about eight feet below the river bed at its shallowest point. The sonar survey did not identify a cause of the pipeline breach that occurred January 17, but this data will assist investigators in determining the cause of the spill.

Current estimates show the pipeline could have leaked up to 925 barrels of oil (38,850 gallons) into the Yellowstone River. Responders recovered a significant amount of oil from the pipeline on January 23 and January 24. A final tally of the total oil recovered is being made and a more precise estimate of the volume lost is being calculated. These numbers will be released in the next few days.

On Sunday, workers from Ballard Marine Construction will continue to examine the pipeline to further assess the line. Oil recovery continues on the river. The weather will play a big factor in the coming days. Crews also continue to try to determine the cause of the pipeline breach. The information center has been closed; however, if people still have questions, they may call the hotline.
Unified Command Update: January 23, 2015
Bottled Water Distribution Update: Bottled water distribution is being discontinued for residents on the City of Glendive's water system as the city's water has been certified safe to drink by the Montana Department of Environmental Quality. As a precaution, local authorities have stockpiled a two-day supply of bottled water at the Dawson County Disaster and Emergency Services office in Glendive.
Residential Water System Update: Some residents have reported a dark brown to black material coming out of taps at or near the end of the flushing process. The EPA has evaluated several of these incidents. Environmental Protection Agency Incident Commander Paul Peronard says the material is not related to the spill but rather is naturally occurring sediment that built up when the water system was not in use.
If you encounter sediment during the flushing process, please continue to flush and wash the material down the drain. No further action is needed.
If you encounter odor in your water after the flushing process, please report that to the hotline at 888-959-8351.
Unified Command Update: Glendive municipal water supply now meeting standards set by the federal Safe Drinking Water Act, according to DEQ. Read More
January 22, 2015
Unified Command Update: Steps Glendive residents should take to flush their systems.
Public Meeting: A Glendive Water System Public Meeting Update will be held this evening, Thursday, January 22, at 7:00 p.m. at the Glendive High School, 900 North Merrill Avenue. This is an informational meeting for all users of the municipal water system on the steps needed to flush any remaining contamination from the system.
January 21, 2015
Unified Command Update: The Glendive water treatment plant has been decontaminated. Preliminary sampling shows all of the contaminants that were elevated in water samples earlier this week are now below federal clean water standards. Confirmation testing is being done overnight, with certified test results available tomorrow. The main water distribution lines have been flushed through the fire hydrants and samples have been taken. If those samples also show levels within safe drinking water limits, workers will begin the process of instructing residents how to flush the water in their homes and businesses.
Montana Public Radio offers interviews with EPA on-site coordinator Paul Peronard, Bridger on-site public information officer Bill Salvin, and Glendive Mayor Jerry Jimison.
The Montana Department of Fish Wildlife & Parks has issued a press release calling for a fish consumption advisory for the Yellowstone River between the spill site just upstream from Glendive and the North Dakota state line.
January 20, 2015
Unified Command Update: Oil spill response workers recovered approximately 240 barrels of crude oil from the Poplar Pipeline Tuesday. Workers recovered the oil from the south side of the Yellowstone River where the pipeline crosses about six miles upstream from Glendive.
Responders earlier had calculated the pipeline breach to have a worst-case discharge of up to 1,200 barrels (50,000 gallons) of crude oil. Today’s recovery reduces the total estimated escaped crude in the river to about 960 barrels (or 40,000 gallons).
Drinking water is still being made available due to concerns about the safety of the city’s water supply. Testing Monday showed elevated levels of Volatile Organic Compounds (VOCs), predominantly benzene, in the water.
Workers continue installing additional treatment capability at the Glendive Water Treatment Plant to clean the system and bring it back online.
The Unified Command is sampling water from the plant so that the water system can be restarted. Residents will be able to resume using their water once the water quality is determined safe.
In the meantime, 16,500 gallons of drinking water is available for residents to pick up at the Eastern Plains Event Center at 313 South Merrill Avenue in Glendive.
A community center has been established at the Dawson County Courthouse at 207 W. Bell in Glendive. Representatives of Bridger Pipeline will be available to answer questions about the response. A representative from the Governor’s office will also be on hand. The center will be open from 9:00 a.m. to 5:00 p.m. daily.
11:00 a.m.: The City of Glendive is advising residents to not drink or cook with water from the city's municipal water system. Drinking water is being distributed at the Eastern Plains Event Center at 313 South Merrill Avenue in Glendive. Water is being delivered daily and will be available. Please monitor the Dawson County website at www.dawsoncountymontana.org for updates on water arrival.
Work at the Glendive Water Treatment Plant is underway to remove the contamination and bring the system back on line. Those actions include:
Increasing the dose of activated carbon, which removes contaminants.
If the activated carbon does not prove adequate, workers will add air stripping equipment at the plant inlet to pretreat water coming into the facility.
The increased activated carbon treatment began at 9:00 a.m. Tuesday and testing will be done later today to determine if this treatment is effective. If workers need to install the air stripping equipment at the plant inlet, it will take an additional day to complete that work.
Once DEQ certifies the water safe at the plant, the system will be flushed so that residents can resume using their water.
Drinking water will continue to be made available until the water system is certified safe to drink. Residents can pick up the water at Eastern Plains Event Center at 313 South Merrill Avenue in Glendive.
7:45 a.m.: A hotline has been established for drinking water concerns: 888-959-8351
7:30 a.m.: Results from the first water sample taken from the Glendive Municipal Water Treatment Plant have come back; the sample showed an elevated level of volatile organic compounds (VOCs), predominantly benzene. The presence of benzene would account for reports of adverse odor in the local water supply. This test result confirms findings from field samples taken Monday at several locations in the city.
While the elevated levels are above the level for long-term consumption, the scientists who reviewed the data at the Centers for Disease Control have told the Unified Command that they “do not see that domestic use of this water poses a short term public health hazard.”
Because of the public concern over the safety of the Glendive municipal water supply, the Unified Command has made arrangements to provide drinking water to Glendive residents on the city’s municipal water supply.
The Unified Command is taking two additional actions to confirm these test results and to remove the contamination from the Glendive Municipal Water System.
First, plans are being put in place to fully decontaminate the Glendive Municipal Water system.
Also, responders will continue to sample the water from multiple locations for testing in both field sites and laboratories. Those results will be released as they become available.
The Unified Command was established in response to a release from the Poplar Pipeline System owned by Bridger Pipeline, LLC. The command is operating out of the Dawson County Disaster and Emergency Service Center in Glendive. Officials believe up to 1,200 barrels of crude oil (approximately 50,000 gallons) leaked from the Poplar Pipeline near where the line crosses the Yellowstone River near Glendive.
January 19, 2015
Unified Command Update: On January 17th at 3:00 pm, Bridger Pipeline, LLC notified local authorities of a potential release from a pipeline that crosses the Yellowstone River approximately five miles upstream from Glendive. Dawson County has received complaints of odor in drinking water from people who use the municipal water supply.
Water samples were taken from the municipal drinking water supply on Monday morning and were expedited to Energy Labs in Billings for analysis. Until more definitive information is made available, the Centers for Disease Control (CDC) recommends that residents not ingest municipal water and instead use bottled water for drinking and cooking. The Incident Management Team has ordered bottled water for public distribution at the EPEC building located at 313 South Merrill Avenue – time to be announced – for individuals seeking assistance. Water conservation is encouraged to preserve water capacity for emergency response.
Additional information about testing and drinking water will be posted as soon as it becomes available. For the most current information please visit the DEQ Bridger Pipeline Spill website at http://www.deq.mt.gov/yellowstonespill2015.mcpx. Updates will also be posted on the DEQ Twitter and Facebook.
For more information, contact the Dawson County Health Department at (406) 377-5213
An Executive Order has been issued by the Governor's office proclaiming an emergency to exist in the counties of Dawson and Richland along the Yellowstone River.
GIS Map of Bridger Pipeline Oil Spill
Water samples will be taken in Glendive and tested to determine any potential impact to drinking water supply. DEQ and EPA are responding to concerned citizens to sample their water using portable hand-held devices.
January 18, 2015
BILLINGS, Mont. (AP) — Montana officials said Sunday that an oil pipeline breach spilled up to 50,000 gallons of oil into the Yellowstone River near Glendive, Montana, but they said they are unaware of any threats to public safety or health.
The Bridger Pipeline Co. said the spill occurred about 10 a.m. Saturday. The initial estimate is that 300 to 1,200 barrels of oil spilled, the company said in a statement Sunday.
Some of the oil did get into the water, but the area where it spilled was frozen over and that could help reduce the impact, said Dave Parker, a spokesman for Gov. Steve Bullock.
"We think it was caught pretty quick, and it was shut down," Parker said. "The governor is committed to making sure the river is cleaned up."
Bridger Pipeline Co. said in the statement that it shut down the 12-inch-wide pipeline shortly before 11 a.m. Saturday. "Our primary concern is to minimize the environmental impact of the release and keep our responders safe as we clean up from this unfortunate incident," said Tad True, vice president of Bridger.
The EPA and state Department of Environmental Quality have responded to the area about 9 miles upriver from Glendive, Parker said.
An Exxon Mobil Corp. pipeline broke near Laurel during flooding in July 2011, releasing 63,000 gallons of oil that washed up along an 85-mile stretch of riverbank.
Montana officials are trying to determine if oil could have been trapped by sediment and debris and settled into the riverbed.
Exxon Mobil is facing state and federal fines of up to $3.4 million from the spill. The company has said it spent $135 million on the cleanup and other work.
Montana and federal officials notified Exxon that they intend to seek damages for injuries to birds, fish and other natural resources from the 2011 spill. The company also is being asked to pay for long-term environmental studies and for lost opportunities for fishing and recreation during and since the cleanup.
Montana Gov. Steve Bullock declared a state of emergency for two counties yesterday after a 12-inch oil pipeline burst Saturday, pouring up to 50,000 gallons of oil into the Yellowstone River near the town of Glendive, the state's Department of Environmental Quality reports. Bridger Pipeline Co. noticed the breach Saturday around 10am and shut the pipeline down by 11am, per a company statement. A spokesman for Bullock initially told the AP that "we think it was caught pretty quick" and noted the frozen river may minimize damage. But cleanup crews are struggling through the ice, the AP notes. And although initial tests revealed no oil in the drinking water, residents started noting Sunday that they "smelled something funny" in the water; the EPA said in a statement last night that elevated hydrocarbon levels were detected in Glendive's water, the Grand Forks Herald reports. Drinking water is now being trucked into Glendive "as a precaution," say officials; an advisory against drinking water from the local treatment plant was also issued last night, the AP notes.
Officials had said earlier yesterday that contamination was unlikely because of oil's tendency to float (the water intake is nestled 14 feet below the surface). The DEQ notes a few oil sheens have been seen, with the AP reporting one 60 miles downstream from Glendive. In addition to the EPA and DEQ, the Coast Guard, Fish and Wildlife Service, and DOT are all assessing the mess and assisting with cleanup, notes the Herald. "Our primary concern is to minimize the environmental impact of the release and keep our responders safe as we clean up from this unfortunate incident," Bridger VP Tad True said in the statement. (An Exxon Mobil pipeline spilled into the Yellowstone River nearly four years ago.)
Inflammatory breast cancer (IBC) is a rare form of breast cancer, accounting for only 5% of breast cancer cases annually in the United States. Survival outcomes are improving for patients with non-inflammatory breast cancer (non-IBC) but remain poor for patients with IBC despite aggressive multimodal treatment 1-3. IBC, the most lethal form of breast carcinoma, is characterized by distinct clinicopathologic features, including rapid disease progression and onset of swelling, enlargement of the breast, skin tenderness, induration, edema, warmth, and erythema, commonly combined with peau d'orange 4-8. The 5-year survival rate for patients presenting with non-metastatic IBC is only about 40%, even with modern multidisciplinary therapy 1,7,9-12. Treatment factors associated with improved patient survival include use of multimodality therapeutic strategies, including chemotherapy, modified radical mastectomy, and postmastectomy radiation 12-14. Although excellent rates of locoregional control are achievable 6, mortality is usually related to systemic recurrence. Because of the rarity of IBC and the inherent difficulty in obtaining tumor tissue from IBC patients, who may lack a tumor mass at presentation and typically receive upfront systemic chemotherapy, few studies have been performed to characterize its molecular biology 15-17. Understanding the distinct biologic and molecular behavior of IBC is likely to provide insight into carcinogenic mechanism(s) and aid discovery of novel targets for future treatment interventions.
Human mammary tumor virus (HMTV), a human homologue of the mouse mammary tumor virus (MMTV), has been proposed by Pogo et al. to play a role in IBC 18. MMTV predictably leads to tumor formation in mice 18; however, it remains unresolved whether HMTV is associated with human cancers 18-26. Recently, Pogo et al. reported that HMTV sequences were detected in 71% of IBC cases in American women and, in turn, were associated with a more malignant breast cancer phenotype than non-IBC 26. Importantly, these findings have not been independently validated, and the significance of these putative viral DNA sequences in humans remains unclear 18-26.
The biology of IBC is distinct from that of non-IBC, in that IBC progresses much more rapidly (weeks to months, rather than months to years) and has unique clinical features, such as skin erythema, warmth, lack of a discrete mass, and often the presence of dermal lymphatic invasion. Given the markedly different clinical presentation of IBC (as compared with non-IBC), along with the diverse viral etiologies implicated in other types of cancer, we hypothesized that IBC may have a viral cause, possibly involving putative HMTV infection.
Given the scarcity of human IBC tissue samples available for research at any single institution, we chose to leverage the two commercially available immortalized IBC cell lines SUM149 (the most widely used cell line model in IBC studies 27-29) and SUM190 as appropriate models for testing our hypothesis. Cell culture experiments were performed using SUM149 because of the extremely fastidious nature of manipulating SUM190 cells in culture. Because HMTV is still a putative virus, we assessed our in vitro IBC model for viral infection by multiple modalities.
First, we sought to define single nucleotide polymorphisms (SNPs) for the ribonuclease (RNase) L gene, the product of which combats viral infection by degrading viral RNA and inducing apoptosis of infected cells 30-33. Several non-synonymous coding SNPs have already been associated with high risk of prostate cancer 30,31 and further reported to be associated with a putative oncogenic viral infection 32. We chose to investigate two common missense variants, R462Q (rs486907) and E541D (rs627928), reported in several studies to be associated with the incidence rate of sporadic prostate cancer. Given the utility of these RNase L SNPs as high-risk biomarkers for susceptibility to prostate cancer, we sought to determine whether IBC cell lines contained the same high-risk genotypes, potentially serving as an indicator of genetic susceptibility to viral infection and IBC carcinogenesis. Moreover, the RNase L gene is a downstream effector of the type 1 interferon pathway, which mitigates viral infections (Figure 1), and altered function of this gene product may render cells more susceptible to cancer development, as the normal function of type 1 IFN-induced RNase L expression is to trigger destruction of viral RNA 30-32. An allelic discrimination assay for the downstream interferon (IFN) effector RNase L was used to detect the two SNP variants associated with cancer risk, affecting amino acids 462 and 541 34. We hypothesized that variations in the genotypic frequency of these RNase L SNPs may indicate increased risk of viral infection as a potential etiology in IBC versus non-IBC cell lines. We also searched the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) website to assess differences in RNase L expression between IBC and non-IBC tumor tissues 35.
In our second approach to determining the possibility of viral infection of IBC cells, we assayed for a selective decrease in proliferation of SUM149 cells in response to treatment with interferon-alpha (IFN-α), a naturally occurring antiviral cytokine. Interferons (IFNs) are effective molecules for studying the biology of a viral cause: in vertebrates, they are produced naturally by many nucleated cells in response to viral, parasitic, and tumor-derived challenges. IFNs assist the immune response by inhibiting viral replication within host cells, activating natural killer cells, and increasing antigen presentation to lymphocytes 36,37. Moreover, the IFN-mediated 2-5A pathway is a key innate response to viral infection via RNase L-induced viral RNA degradation, as well as a mediator of apoptosis 38,39. Lastly, to investigate the findings of Pogo et al. 18, we sought to determine whether HMTV elements are present in the IBC cell line SUM149 using RT-PCR and Southern blot analysis.
Allelic discrimination assay
We tested for the presence of RNase L variants R462Q and E541D as potential indicators of a genetic risk factor for IBC, using the TaqMan allelic discrimination assay (Life Technologies, Carlsbad, CA). The primers and probes for R462Q were: forward primer 5'-GGAAGATGTGGAAAATGAGGAAGA-3', reverse primer 5'-TGCAGATCCTGGTGGGTGTA-3', and probes 5'-VIC-CAGGACATTTCGGGCAA-MGB and 5'-FAM-CAGGACATTTTGGGCAA-MGB. The primers and probes for E541D were: forward primer 5'-TCTATGTGGTAAAGAAGGGAAGCA-3', reverse primer 5'-TTGAACCACCTCTTCATTACTTTGAG-3', and probes 5'-VIC-TTTCAGATCCTCAAAT-MGB and 5'-FAM-TTTCAGCTCCTCAAAT-MGB 31. We extracted genomic DNA from 12 cell lines using the QIAamp DNA purification kit (Qiagen, Valencia, CA). All reactions were conducted in triplicate using an ABI 7500 Fast RT-PCR system and analyzed using SDS 2.0 software (Life Technologies). P values for the cell line SNP comparisons were calculated using the online contingency table from VassarStats (http://vassarstats.net).
Bioinformatics analysis
A comparative search was performed on the NCBI GEO site (http://www.ncbi.nlm.nih.gov/geo/) to assess differences in RNase L expression between IBC and non-IBC tumor tissues. The GSE5847 entry, originally published by Boersma et al., provided a suitable data set with 15 IBC and 35 non-IBC tumor samples, plus 2 normal breast tissue samples as a control 35. The decision to use this data set was based on the lack of available GEO entries for IBC samples, as well as the consistent nature of the expression profile of stromal tissue relative to tumor samples. Moreover, because we were looking for a DNA-based marker of genetic susceptibility, and tumor cells are highly heterogeneous, we selected the stromal data set for this analysis.
Statistical considerations
To assess SNP prevalence between IBC and non-IBC breast stromal tissue samples, a computational script was written in the language R based on the sample size analysis recommendations made by Pfeiffer et al. The script was then independently verified using the Bioinformatics Institute's (BII) online sample size estimator (http://osse.bii.a-star.edu.sg/). Using a case-control design, we based these estimates on the MAF for SNP rs486907, because its lower minor allele frequency (MAF) requires the larger sample size. We calculated the sample sizes required to test a significant risk ratio between normal samples and IBC or non-IBC samples, as well as the risk ratio between IBC and non-IBC samples.
Cell culture
SUM149, BT474, and MDA-MB-231 cells were used in this study. All cell lines were acquired from the American Type Culture Collection (ATCC, Manassas, VA) except the SUM149 cell line, which was sourced from Asterand (Detroit, MI). All cell lines were subjected to genotyping with the ABI Identifiler assay (Life Technologies) for validation of cell line identity. SUM149 cells were grown in Ham's F-12 medium supplemented with 5% heat-inactivated fetal bovine serum (FBS) and 1% antibiotics/antimycotics (Invitrogen, Carlsbad, CA). BT474 cells were grown in Roswell Park Memorial Institute (RPMI) medium supplemented with 10% FBS and 1% antibiotics/antimycotics (Invitrogen). MDA-MB-231 cells were grown in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% FBS and 1% antibiotics/antimycotics (Invitrogen). All cell lines were maintained in a humidified incubator with a 5% CO2 atmosphere at 37°C. Cells were plated in triplicate at a density of 5,000 cells per well in 96-well tissue culture plates and grown to at least 30% confluence at the time of treatment.
IFN-α treatment
IFN-α (IMGENEX, San Diego, CA) was dissolved in phosphate-buffered saline (PBS) with 5% fetal bovine serum to a final stock concentration of 100 µg/mL. Before treatment, the complete medium was removed and the cell monolayers were rinsed once with PBS. Cells were then treated with 0, 500, 1000, 2500, and 5000 U/mL of IFN-α for 24 hours and 48 hours. At each time point, we removed the IFN-α, rinsed the cells with PBS, and evaluated cell proliferation. For IFN-α block experiments, we pre-incubated the cells for 15 minutes with 1 µg/mL IFN-α-specific antibody (Sigma, St. Louis, MO).
Proliferation assays
Cell proliferation was evaluated in triplicate for all treatments with the CyQUANT cell proliferation assay kit (Invitrogen), per the manufacturer's instructions. 100 µL of CyQUANT working reagent was added to the cells and incubated for 1 hour at 37°C, and fluorescence emission (directly proportional to proliferation) was then measured. Proliferation data were analyzed by one-way analysis of variance with the Tukey multiple comparisons test, using GraphPad Prism software (GraphPad Software, Inc.). All assays were performed at least in triplicate and were analyzed together at the same time point.
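The one-way ANOVA with Tukey's multiple comparisons described above can also be run outside Prism. The sketch below uses SciPy on illustrative synthetic triplicate readings (not the study's data) to show the two-step analysis: an omnibus F-test across dose groups, then pairwise Tukey HSD comparisons.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

# Illustrative synthetic fluorescence readings (arbitrary units) for
# triplicate wells at three IFN-alpha doses; NOT the study's raw data.
ctrl = np.array([1000.0, 980.0, 1020.0])  # 0 U/mL
mid  = np.array([820.0, 790.0, 805.0])    # 2500 U/mL
high = np.array([680.0, 660.0, 700.0])    # 5000 U/mL

f_stat, p_anova = f_oneway(ctrl, mid, high)  # omnibus one-way ANOVA
res = tukey_hsd(ctrl, mid, high)             # pairwise Tukey HSD
print(p_anova < 0.05, res.pvalue[0, 2] < 0.05)  # → True True
```

With clearly separated dose groups, both the omnibus test and the control-vs-5000 U/mL comparison come out significant; with real assay data the same two calls would be applied to the measured triplicates.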
Primer design and PCR
Putative HMTV sequences were obtained from the National Center for Biotechnology Information website (http://www.ncbi.nlm.nih.gov/sites/entrez). The primers used for PCR of the env/LTR region were: 5'-TCT GCG TTA CAC CAC TAC CG-3' and 5'-TGA ACT CGA CCT TCC TCC TG-3'. The primers used for PCR of the late LTR region were: 5'-ACC TTC CTC CTG AGC CTA GC-3' and 5'-TTT ATT AGC CCA ACC TTG CG-3'. For reverse-transcription polymerase chain reaction (RT-PCR), total RNA was isolated from SUM149, BT474, and MDA-MB-231 cells using the RNeasy purification kit (Qiagen), and cDNA was produced using the First Strand cDNA Synthesis Kit from MBI Fermentas (Glen Burnie, MD). PCR was conducted with MBI Fermentas reagents, and products were evaluated on a 1% Tris-borate-EDTA (TBE)/agarose gel. Gel images were acquired using a Gel Logic 200 imaging system with Kodak 1D 3.6 software (Carestream Molecular Imaging, Rochester, NY).
Cloning and sequencing
PCR products of interest were excised from agarose gels and the DNA purified using the QIAquick gel extraction kit (Qiagen). Purified PCR products were then ligated into the pGEM-T Easy vector (Promega, Madison, WI) overnight at 4°C and transformed into TOP10 chemically competent E. coli (Invitrogen). Blue-white colony selection was used to screen for recombinant plasmids containing ligated PCR fragments (recombinants result in disrupted β-galactosidase function, preventing metabolism of the X-gal substrate). Recombinant plasmids were evaluated by Sanger sequencing, performed by the Genomic Analysis and Technology Core facility at the BIO5 Institute at the University of Arizona (Tucson, AZ).
Western blot analysis
Proteins were resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) on a 4% to 20% gradient minigel (Bio-Rad, Hercules, CA), using the Mini-PROTEAN 3 cell, run at 100 V for 1.5 hours at 25°C. 15 µg of total protein from cytoplasmic/membrane extracts was resolved and transferred to nitrocellulose membranes using a Mini Trans-Blot electrophoretic transfer cell (Bio-Rad). Efficient transfer of proteins was confirmed with SYPRO Ruby protein blot stain (Bio-Rad) and Kaleidoscope molecular weight markers (Bio-Rad). Nitrocellulose membranes were blocked for 2 hours in blocking buffer (4.0% bovine serum albumin [BSA], 10 mM PBS, 0.05% Triton X-100, pH 7.4) at 25°C. Membranes were next incubated with a human-specific rabbit monoclonal antibody against the IFN receptor alpha IFNAR1 (ab45172, Abcam, Cambridge, MA), diluted 1:20,000 v/v in blocking buffer, overnight at 4°C with gentle agitation. Following incubation with primary antibody, membranes were washed 3 times in 10 mM PBS, 0.05% Triton X-100, pH 7.4, then incubated in donkey anti-rabbit secondary antibody with alkaline phosphatase (AP) conjugate (1:1000 v/v; Jackson ImmunoResearch, West Grove, PA) at room temperature for 1 hour. Nitrocellulose membranes were washed 3 times in 10 mM PBS, 0.05% Triton X-100, pH 7.4, and protein products were visualized 1 to 5 minutes after addition of AP substrate (MBI Fermentas). Densitometric quantification of specific protein bands was performed with GelQuant.NET software provided by biochemlabsolutions.com.
Southern blot analysis
10 µg of purified SUM149 genomic DNA was digested with 5 U FastDigest BamHI restriction enzyme (MBI Fermentas) at 37°C for 30 minutes, then heat-inactivated at 80°C for 5 minutes. DNA was subsequently extracted with an equal volume of isopropyl alcohol and resuspended in 10 µL nuclease-free water at room temperature for 15 minutes. The entire volume was electrophoresed through a 1% agarose gel in 1x TBE buffer for 6 hours at 3 V/cm (until the bromophenol blue marker reached the bottom of the gel). Halfway through the gel electrophoresis, we loaded a positive-control synthetic fragment (IDT Technologies, San Diego, CA) encoding a 172-base-pair region of the HMTV env region: 5'-TAT GAT TTT ATC TGC GTT ACA CCA CTA CCG TAT AAT GCT TCT GAG AGC TGG GAA AGA ACC AAG GCT CAT TTA CTG GGC ATT TAA AAT AAC AAT GAG ATT TCA TAT AAC ATA CAA AAA TTA ACC AAC CTA ATT AGT GAT ATG AGC AAA CAA CAT ATT GAC GCA GTG GAC CTT A-3'. Before blotting, the gel was rinsed in deionized water, incubated in denaturing solution for 30 minutes at room temperature with shaking, rinsed again in deionized water, and incubated in neutralization solution for 15 minutes at room temperature with shaking. We repeated this procedure and then transferred the DNA by traditional upward capillary action for 18 hours at room temperature. After transfer, the membrane was washed in 2x SSC solution to remove any residual agarose, dried at room temperature, and fixed by UV crosslinking for 2 minutes.
Probe synthesis
HMTV env DNA (500 ng) was labeled using the Biotin DecaLabel DNA labeling kit (MBI Fermentas). The HMTV env DNA template was combined with 5x decanucleotide reaction buffer and nuclease-free water. The tube was vortexed, pulse-spun for 5 seconds, incubated in a boiling water bath for 10 minutes, and quickly cooled on ice. Biotin labeling mix and 5 U of Klenow fragment were added, and the reaction was incubated for 1 hour at 37°C. The reaction was stopped by adding 1 µL of 0.5 M EDTA, pH 8.0.
Hybridization and detection
The membrane was incubated in a pre-hybridization solution containing 5x SSC/5x Denhardt's, 0.5% SDS, and 100 µg/mL nonspecific DNA (Sigma) at 42°C for 4 hours with agitation in a ProBlot 12 hybridization oven (Labnet International, Woodbridge, NJ). During this time, the biotin-labeled probe was denatured at 100°C for 5 minutes and chilled on ice. The denatured probe was added to the pre-hybridization solution to a final probe concentration of 100 ng/mL and incubated overnight at 42°C with shaking. After hybridization, the membrane was washed twice with 2x SSC, 0.1% SDS for 10 minutes at room temperature, then twice with 0.1x SSC, 0.1% SDS for 20 minutes at 65°C. Excess liquid was removed from the membrane by briefly placing it on filter paper. The biotin-labeled DNA was detected using the Biotin Chromogenic Detection Kit (MBI Fermentas), according to the manufacturer's directions. We acquired both gel and membrane images using the Gel Logic 200 imaging system with Kodak 1D 3.6 software (Carestream Molecular Imaging, Rochester, NY).
SNP genotyping was first performed for RNase L variants R462Q and E541D in 2 IBC and 10 non-IBC cell lines, with results shown in Figure 2 and Table 1. The SUM149 and SUM190 IBC cell lines were homozygous G at rs486907 (homozygous arginine at residue 462) and homozygous G at rs627928 (homozygous glutamic acid at residue 541). The 541 GG and 462 AA genotypes are the same as those previously reported to be associated with increased risk for sporadic prostate cancer 30,31. Of note, both IBC cell lines displayed 462 GG and 541 GG homozygous genotypes, of which 462 GG is not associated with prostate cancer development, whereas 541 GG is. However, these genotypes did differ significantly from non-IBC genotypes at these residues, suggesting the possibility of a novel risk allele for IBC. All but two of the non-IBC cell lines were either heterozygous or homozygous for the A allele at rs486907 (residue 462), and all 10 non-IBC cell lines were heterozygous or homozygous for the T allele at rs627928 (residue 541). Analysis by Fisher's exact probability test yielded a two-tailed P value of 0.09 for the 462 variant and 0.015 for the 541 variant, as calculated using VassarStats (vassarstats.net) contingency tables (Table 1).
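The two-tailed Fisher P values reported above (0.09 and 0.015) can be reproduced from the 2x2 genotype counts, since both IBC lines are GG at each variant, versus 2 of 10 (rs486907) and 0 of 10 (rs627928) non-IBC lines. A sketch using SciPy in place of the VassarStats contingency tables:

```python
from scipy.stats import fisher_exact

# 2x2 genotype counts; rows = [IBC lines, non-IBC lines],
# columns = [GG homozygotes, A/T-allele carriers].
_, p_462 = fisher_exact([[2, 0], [2, 8]])   # rs486907: 2/2 vs 2/10 GG
_, p_541 = fisher_exact([[2, 0], [0, 10]])  # rs627928: 2/2 vs 0/10 GG
print(round(p_462, 2), round(p_541, 3))     # → 0.09 0.015
```

`fisher_exact` defaults to the two-sided alternative, matching the two-tailed P values in the text.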
an evaluation of the genotype frequencies of these rnase l snps in ibc tumors is required to define a statistically significant correlation with potential risk of ibc in patient samples . because ibc is rare and large biorepositories for this disease are lacking , we calculated a statistical estimate of how many cases and controls would be required to perform a comprehensive analysis of this nature .
the dbsnp database lists the minor allele frequency ( maf ) of rs486907 as 24% and that of rs627928 as 48% . using a case - control design based on the maf of rs486907 ( the lower allele frequency dictates the larger sample size ) , we calculated the sample sizes required to test for a significant risk ratio ( estimated for a 2-fold increased risk ) between healthy ( cancer - free ) samples and ibc or non - ibc patient samples , as well as a 2-fold risk ratio between ibc and non - ibc samples . the minimum sample size required to have 80% power to detect at least a 2-fold increase in risk between ibc and non - ibc is 356 samples of each cancer ; further , testing for at least a 2-fold difference in risk between either cancer group and healthy individuals would necessitate 160 healthy ( control ) samples .
this resulted in a total of 872 defined cancer type / control samples being required to evaluate the significance of these snps in ibc .
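the text does not state which power formula was used , so as an illustration only , here is the classic normal - approximation sample - size calculation for comparing two proportions ( python stdlib ; it will not necessarily reproduce the 356/160 figures above , since assumptions such as allele vs. genotype counting and continuity correction are unspecified , and the scenario below is our own hypothetical ) :

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Minimum n per group for a two-sided comparison of two proportions
    (textbook normal-approximation formula, no continuity correction)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# hypothetical scenario: carrier frequency 24% in controls (the rs486907
# maf quoted above) vs. a 2-fold increase in odds among cases
odds = 2 * 0.24 / (1 - 0.24)
print(n_per_group(0.24, odds / (1 + odds)))  # 155 per group under this formula
```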
this analysis could be accomplished if major ibc investigators chose to pool case / control resources . separately , a bioinformatic query of ncbi geo data set gse5847 for differences in rnasel expression between human ibc and non - ibc samples found no significant difference between the two groups within the stroma of human breast tissue ( figure 3 ) .
we next sought to determine whether sum149 cells showed an altered proliferative potential following ifn-α stimulation compared to non - ibc cell lines ( figure 4 ) . we observed a dose- and time - dependent decrease in the proliferation of sum149 cells treated with ifn-α . at 24 hours , cell proliferation decreased by 32% relative to controls at the highest dose of 5000 u / ml ifn-α ( p < 0.001 ) ; at 48 hours , proliferation at this dose had decreased further , by 41% relative to controls ( p < 0.01 ) . we did not observe a corresponding decrease in sum149 proliferation after pre - incubation of ifn-α with a specific neutralizing antibody ( figure 5 ; proliferation decreased by only 5% at this dose relative to controls ) . the results of the same ifn-α response proliferation assay performed with 2 non - ibc cell lines ( bt474 and mda - mb-231 ) are shown in the lower panel of figure 4 . in contrast to the dose- and time - dependent decrease in sum149 proliferation , we saw no decrease in proliferation in the mda - mb-231 cells and only a weak decrease in the bt474 cells at 24 hours that was not evident at 48 hours . moreover , only the sum149 cell line demonstrated a direct and specific response to ifn-α treatment ( figure 5 ) . note that each of the 3 cell lines expressed the ifn-α receptor ifnar1 at the rna ( figure 6a ) and protein ( figure 6b ) levels , indicating that the absence of an ifn-α treatment response in the bt474 and mda - mb-231 cell lines was not due to a lack of the receptor .
the western blot results showed that all three cell lines expressed ifnar1 ; however , the expression level was much higher in sum149 cells than in the other two cell lines .
correspondingly , the relative ifnar1 expression ratios for sum149 , mda - mb-231 and bt474 cell lines were fold changes of 2.05 ( 59,601/29,037 ) , 0.51 ( 107,956/212,049 ) , and 0.17 ( 26,893/149,868 ) , respectively , based on quantification of the western blot shown in figure 6b .
notably , there were subtle differences in the amount of protein loaded into each well , but this was accounted for in the densitometry analysis by normalizing each lane to the amount of β-actin .
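as a sketch of that normalization step ( band intensities are the numerators and denominators quoted above for figure 6b ; the dictionary layout and names are ours ) :

```python
# densitometry readings quoted in the text for figure 6b:
# (ifnar1 band, beta-actin loading-control band) per cell line
bands = {
    "sum149":     (59_601, 29_037),
    "mda-mb-231": (107_956, 212_049),
    "bt474":      (26_893, 149_868),
}

# dividing each ifnar1 band by its lane's beta-actin corrects for
# unequal protein loading before comparing cell lines to one another
ratios = {line: ifnar1 / actin for line, (ifnar1, actin) in bands.items()}
for line, ratio in ratios.items():
    print(f"{line}: {ratio:.2f}")
# note: 26,893/149,868 = 0.179, so the 0.17 quoted in the text appears
# to be truncated rather than rounded
```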
the results of our pcr - based analysis to detect the presence of hmtv sequences are shown in figure 7 .
pcr analysis of genomic dna ( figure 7a ) and rt - pcr analysis ( figure 7b ) with 4 primer sets for the hmtv env / ltr and late ltr regions detected none of these viral elements in the sum149 cell line . in addition , our southern blot analysis using a probe specific to the env region of hmtv did not detect hmtv integrated in the sum149 genome ( figure 8) , consistent with our pcr findings . although we were able to clone and sequence pcr amplicons generated by our analyses , none of these fragments revealed homology to hmtv ( data not shown ) .
our experimental analysis of the ibc cell lines sum149 and sum190 revealed 2 snps in the rnase l gene with known associations to prostate carcinogenesis , particularly cancers of possible viral etiology . the consistent ibc homozygous variants 462r and 541e were infrequent and absent in non - ibc cell lines , respectively , suggesting that these variants may represent novel risk alleles for ibc onset .
this finding warrants investigation in patient samples . in an effort to determine whether these snps could portend a genetic predisposition to ibc , we first compared rnase l transcript levels between ibc and non - ibc breast tissue samples using publicly available data . within the stroma of breast tissue , there was no significant difference in rnase l expression between ibc and non - ibc tissue samples ( figure 3 ) .
however , the expected differences in prevalence , if there are any , should be in the somatic tissues , such as breast parenchyma .
these data suggest that altered activity of rnase l is not likely due to altered abundance . in 2012 , jin et al . 33 reported using two sequence homology - based computational tools [ sorting intolerant from tolerant ( sift ) and polymorphism phenotyping ( polyphen ) ] to predict the functional contributions of several non - synonymous rnase l variants . their analysis suggested that the r462q and e541d variants are ' tolerated ' changes ; however , functional studies on these snps have yet to be conducted , particularly in the context of prior viral infection .
further , despite the prediction of a tolerated change , variants at rnase l amino acid residues 462 and 541 have been definitively associated with altered cancer predisposition .
notably , next - generation sequencing would yield the most appropriate dataset for this comparison , but such datasets are not currently available . a significant confounding factor for a study of this nature is the paucity of archived ibc samples , owing to the rare clinical presentation of ibc and particular difficulties with biospecimen collection for this tumor type , which typically lacks a mass lesion and is generally treated with systemic therapy prior to surgery 13 . further validation with patient samples is required to evaluate the frequency of rnase l variants in ibc and whether they are indeed a biomarker of genetic risk for ibc . of note , samples could be collected by patient cheek swabs , since the polymorphism would likely affect all somatic tissues . in turn , our findings suggest that the 541 g variant may serve as a susceptibility factor in the development of ibc , but larger sample sizes are needed to better assess the role of 462 g , which may potentially serve as an ibc biomarker .
in contrast to our negative findings for hmtv in ibc dna , our finding of a selective response to ifn-α treatment may indicate the potential to respond to viral infection in ibc , as the 2 - 5a pathway is a key mediator of the innate response to viral infection 37 . additionally , rnase l 462r and 541e have normal enzymatic functionality ( both the 462 and 541 variants are in the rnase l protein kinase domain ) and may , in turn , decrease the likelihood of viral infection 31 .
it is not without precedent for cell lines to carry a constitutive viral load : the cervical cancer cell line caski has been shown to have an average viral load of 600 particles per cell 43 . however , a constitutive viral load has yet to be demonstrated in ibc cell lines . of note , reuben et al . recently suggested that epstein - barr virus ( ebv ) may also play a potential role in the pathogenesis of ibc 44 . they demonstrated that 20% of ibc patients have been exposed to ebv , as determined by the detection of ebv - specific immunoglobulin g ( igg ) antibody in peripheral blood 44 . however , no causal relationship between ebv and ibc has been proven , so the role of ebv in ibc remains to be determined , despite the more clearly defined role of ebv in the carcinogenesis of diseases such as nasopharyngeal carcinoma and burkitt 's lymphoma 45 .
concomitantly , in our endeavor to detect hmtv , we did not detect hmtv env sequences in sum149 cells by either rt - pcr or southern blot analysis . pogo et al . recently reported an increased detection of hmtv in ibc samples and increased expression of hmtv envelope ( env ) and capsid ( ca ) proteins in 10 primary cultures of human breast cancer containing hmtv sequences ( mssm ) 46 . these cells were derived from discarded ascitic fluids or pleural effusions obtained from patients with metastatic breast cancer 46 . pogo et al . also reported that , using nested priming pcr and southern blot techniques , they detected mmtv - like env sequences in 71.5% of the 67 human tissue samples evaluated 46 - 48 .
although these findings are thought - provoking , a causal relationship has yet to be definitively established , given the conflicting nature of the available reports on the prevalence of mmtv - like sequences in ibc patient specimens 49,50 . in our study , both pcr and southern blot analysis failed to detect hmtv - like env sequences in the sum149 cell line , despite the lower limit of detection of our southern blot assay being at the femtogram level 51 . although the findings of pogo et al . conflict with ours in that they found hmtv in tumors from ibc patients , some marked differences between the 2 models could explain this discrepancy 25 . in our study , we used authenticated sum149 cells , which were derived from a primary inflammatory ductal carcinoma of the breast and established as a well - characterized immortalized cell line 27 - 29 . to our knowledge , no other group has investigated the presence of putative viral hmtv elements in this cell line , so our results provide novel insights , as sum149 is the most widely used in vitro model for ibc . even though sum149 cells serve as an excellent ibc model , the dynamics of disease progression may differ in this established cell line compared with the primary tumors evaluated by pogo et al . given the rarity of ibc clinical specimens , we were not able to examine tumors for viral sequences in our in vitro study .
our power calculation showed that a minimum of 356 samples of each cancer ( ibc and non - ibc ) , with 160 healthy ( control ) samples , is required to have 80% power to detect at least a 2-fold increase in risk between ibc and non - ibc . given that few institutions are likely to have 356 banked ibc specimens or access to 356 ibc patients currently being followed in the clinic who might participate in an evaluation of the prevalence of these rnase l snps , validation of these snps becomes problematic .
moreover , clinical biomarker validation studies with fewer than the calculated number of patients may be misleading or even futile 52 .
however , given the large number of ibc specimens required to establish whether these snps correlate with ibc , this study would be best performed using either multi - institutional datasets or in the context of a clinical trial 's biospecimen collection .
one such approach would be genome wide association studies ( gwas ) of relevant datasets to query the prevalence of rnase l snps in ibc ; unfortunately , no such dataset exists at this time .
however , since sum149 is the most widely studied ibc cell line model , our finding that it may be discordant with a cohort of ibc primary tumors with regard to the presence of the hmtv genome calls into question its appropriateness as a model for ibc , should the findings of pogo et al . be validated . until pogo et al . 's findings that hmtv is prevalent in ibc are validated , sum149 remains the commercially available in vitro model of choice for ibc . separately , our in vitro study has identified 2 snps whose specific genotypic variants may identify important genetic risk determinants for ibc .
future studies investigating the genotypic frequency of these snps within human ibc tumors are warranted to validate our in vitro findings .

background : inflammatory breast cancer ( ibc ) is a rare , highly aggressive form of breast cancer .
the mechanism of ibc carcinogenesis remains unknown .
we sought to evaluate potential genetic risk factors for ibc and whether or not the ibc cell lines sum149 and sum190 demonstrated evidence of viral infection . methods : we performed single nucleotide polymorphism ( snp ) genotyping for 2 variants of the ribonuclease ( rnase ) l gene that have been correlated with the risk of prostate cancer due to a possible viral etiology .
we evaluated the dose - response to treatment with interferon - alpha ( ifn-α ) and assayed for evidence of the putative human mammary tumor virus ( hmtv , which has been implicated in ibc ) in sum149 cells .
a bioinformatic analysis was performed to evaluate expression of rnase l in ibc and non - ibc . results : 2 of 2 ibc cell lines were homozygous for rnase l common missense variants 462 and 541 , whereas 2 of 10 non - ibc cell lines were homozygous positive for the 462 variant ( p = 0.09 ) and 0 of 10 non - ibc cell lines were homozygous positive for the 541 variant ( p = 0.015 ) .
our real - time polymerase chain reaction ( rt - pcr ) and southern blot analyses for sequences of hmtv revealed no evidence of the putative viral genome . conclusion : we discovered 2 snps in the rnase l gene that were homozygously present in ibc cell lines .
the 462 variant was absent in non - ibc lines .
our discovery of these snps present in ibc cell lines suggests a possible biomarker for risk of ibc .
we found no evidence of hmtv in sum149 cells .
a query of a panel of human ibc and non - ibc samples showed no difference in rnase l expression .
further studies of the rnase l 462 and 541 variants in ibc tissues are warranted to validate our in vitro findings . |
hepatitis c virus ( hcv ) infection is the leading cause of hepatocellular carcinoma ( hcc ) in
japan .
for this reason , the medical community and the japanese government have made it a
priority to offer all japanese patients with chronic hepatitis c ( chc ) the option of
antiviral therapy .
the current standard therapy is a combination of pegylated interferon and
ribavirin ( rbv ) .
high response rates to the combination therapy have been reported in large clinical trials ,
in which patients were closely managed by hepatologists and treatment compliance was high .
the management of treatment - related adverse effects requires experience and expertise . in real life , however , most patients with chc are treated by primary care physicians ( pcps ) . treatment by specialists could improve the therapy outcome ; however , there are few hepatologists in japan ( 3.4/100,000 population ) , especially in ibaraki prefecture ( 2.3/100,000 population ) , where tsuchiura city is located .
few studies have assessed whether a collaboration
between hepatologists and pcps is a valid treatment alternative to treat patients with chc .
the purpose of this study was to assess the treatment outcome in patients with chc using the
current standard antiviral therapy when patients were treated in collaboration between
hepatologists and pcps .
between may 2005 and july 2008 , 110 japanese patients with chc were treated with a
combination therapy of peginterferon - alpha 2b and ribavirin at tsuchiura kyodo general
hospital ( tkgh ) , tsuchiura , japan . among them , 25 patients were treated by a collaboration
between hepatologists and pcps ( collaboration group ) , whereas 85 patients were treated
exclusively by hepatologists ( noncollaboration group ) .
all patients were positive for both
anti - hcv antibody by a third - generation enzyme immunoassay and hcv - rna at the start of
treatment and showed elevated serum alanine transaminase ( alt ; above the upper limit of normal ) for the past 6 months .
exclusion criteria included decompensated liver disease ,
coexisting serious medical or psychiatric illness , other forms of liver disease
( drug - induced liver disease , alcoholic liver disease , autoimmune hepatitis ) , a neutrophil
count less than 1500/mm³ , a platelet count less than 8 × 10⁴/mm³ , a hemoglobin of less than 12 g / dl , a serum creatinine
greater than 1.5 times the upper limit of the normal range and co - infection with hepatitis b
virus or human immunodeficiency virus .
the duration of therapy was 48 weeks for difficult - to - treat patients ( genotype 1 with a high load of hcv - rna ; 1h patients ) and 24 weeks for the remaining patients ( non-1h patients ) . in the 1h patients , however , therapy was discontinued if hcv - rna was still detectable at week 24 . all patients were treated with pegifna2b ( 1.5 μg / kg subcutaneously ) once weekly plus rbv at a dose adjusted for body weight ( patients over 80 kg in weight received 1000 mg , those weighing from 60 - 80 kg received 800 mg and those under 60 kg received 600 mg ) .
safety assessment included red blood cell , white
blood cell and platelet counts in response to therapy .
pegifna2b was reduced to half of the original dose in patients with < 750/mm³ neutrophils , whereas it was withdrawn in patients with < 500/mm³ . the same dose reductions were applied if platelets fell below 8 × 10⁴/mm³ , whereas pegifna2b was withdrawn when the threshold of 5 × 10⁴/mm³ was reached . the rbv dose was tapered by 200 mg / day in patients with hemoglobin < 10 g / dl , whereas it was discontinued in patients with hemoglobin < 8.5 g / dl .
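the weight bands and the cytopenia / anemia stopping rules above can be summarized as a small decision sketch ( the function names and return strings are ours ; the thresholds are those stated in the protocol ) :

```python
def rbv_start_dose(weight_kg):
    """Initial daily ribavirin dose (mg) by body weight band."""
    if weight_kg > 80:
        return 1000
    if weight_kg >= 60:  # the 60-80 kg band
        return 800
    return 600

def pegifn_action(neutrophils_per_mm3, platelets_per_mm3):
    """Dose decision for pegIFN-alpha-2b from the cytopenia thresholds."""
    if neutrophils_per_mm3 < 500 or platelets_per_mm3 < 5e4:
        return "withdraw"
    if neutrophils_per_mm3 < 750 or platelets_per_mm3 < 8e4:
        return "reduce to half dose"
    return "continue full dose"

def rbv_action(hemoglobin_g_dl):
    """Dose decision for ribavirin from the hemoglobin thresholds."""
    if hemoglobin_g_dl < 8.5:
        return "discontinue"
    if hemoglobin_g_dl < 10:
        return "taper by 200 mg/day"
    return "continue"

print(rbv_start_dose(85), pegifn_action(700, 9e4), rbv_action(9.2))
```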
a posttreatment follow - up period of 24 weeks was also included in the
study .
when both the patient and the pcp wished , the therapy was done in collaboration between a
hepatologist and the pcp . otherwise , the therapy was done exclusively by hepatologists at
tkgh .
hepatologists initiated the
antiviral therapy and carefully monitored tolerability during the first four weeks of
treatment .
thereafter , the weekly administration of pegifna2b was performed by the pcp after a careful interview and/or physical examination .
the doses of
pegifna2b and/or rbv were adjusted , if needed , by hepatologists every four weeks at tkgh at
the time of routine laboratory tests , including hcv - rna determinations . in the
noncollaboration group , on the other hand , 85 patients were exclusively treated by
hepatologists weekly at tkgh .
the doses of pegifna2b and/or rbv were adjusted , if needed , by
hepatologists at least every four weeks at the time of routine laboratory tests , including
hcv - rna determinations .
the serum hcv rna level was measured with a quantitative hcv rna assay ( cobas amplicor hcv monitor ver . 2.0 ; roche diagnostic systems , tokyo , japan ) during and after therapy . when the measured serum hcv rna level was lower than 0.5 kiu / ml , hcv rna was also determined by a qualitative pcr assay ( amplicor hcv v2.0 , roche diagnostic systems , tokyo , japan ) , which had a detection limit of 50 iu / ml .
a high viral load was defined as a serum hcv - rna level of more than 100 kiu / ml
serum .
assessment of efficacy was based on sustained virologic
response ( svr ) , i.e. , undetectable hcv - rna at week 24 post treatment .
informed consent was obtained from all patients . clinical characteristics and treatment outcomes were compared between these two groups .
values were expressed as means ± sd . the mann - whitney u test or fisher 's exact probability test was used for statistical analyses , and p values less than 0.05 were considered statistically significant . patients who discontinued treatment for any reason were categorized as nonresponders ( intention - to - treat analysis ) .
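for reference , the mann - whitney u statistic named above can be computed directly from the pairwise comparisons ( a minimal sketch ; the p - value lookup against exact tables or a normal approximation is omitted ) :

```python
def mann_whitney_u(xs, ys):
    """U statistic (the smaller of U_x and U_y) by direct pairwise
    comparison; each tie contributes 0.5. practical for the small
    group sizes compared in this study."""
    u_x = sum(1.0 if x > y else 0.5 if x == y else 0.0
              for x in xs for y in ys)
    return min(u_x, len(xs) * len(ys) - u_x)

# complete separation of the two samples gives the extreme value U = 0
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0
```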
twenty - three patients were referred to a hepatologist by their pcps . among them , eleven patients had been receiving care for underlying diseases from their pcps . the remaining two patients were referred to pcps by a hepatologist for the patients ' convenience . the distance from tkgh to the offices of the pcps ranged from 2.5 to 25.3 ( mean 12.7 ± 6.2 ) km .
the baseline characteristics of the 1h patients are shown in table 1 .

table 1 . characteristics of the 1h patients at the time of inclusion

                                     collaboration group ( n=12 )   noncollaboration group ( n=46 )   p value
age ( years )                        58.8 ± 7.7                     53.3 ± 9.8                        0.1266
gender , n ( % )                                                                                      0.4791
  male                               10 ( 83 )                      32 ( 70 )
  female                             2 ( 17 )                       14 ( 30 )
baseline hcv rna ( kiu / ml )        1024 ± 489                     1702 ± 2585                       0.8755
underlying diseases , n ( % )
  diabetes mellitus                  5 ( 42 )                       5 ( 11 )                          0.0239
  hypertension                       2 ( 17 )                       7 ( 15 )                          >.9999
prior interferon therapy , n ( % )   2 ( 17 )                       17 ( 37 )                         0.3018
baseline labs
  alt ( iu / ml )                    56.8 ± 17.6                    84.9 ± 50.1                       0.0823
  hemoglobin ( g / dl )              14.3 ± 1.4                     14.5 ± 1.4                        0.8703
  white blood cell count ( /mm³ )    5140 ± 1282                    5444 ± 1783                       0.7297
  platelet count ( 10⁴/mm³ )         16.3 ± 4.9                     17.1 ± 5.5                        0.6728

values are expressed as means ± sd . alt , alanine aminotransferase .

no significant difference was present between the collaboration group and the noncollaboration group except for an underlying disease ( diabetes mellitus ) .
the baseline characteristics of the non-1h patients are shown in table 2 .

table 2 . characteristics of the non-1h patients at the time of inclusion

                                     collaboration group ( n=13 )   noncollaboration group ( n=39 )   p value
age ( years )                        51.1 ± 13.1                    51.3 ± 13.7                       0.8078
gender , n ( % )                                                                                      >.9999
  male                               8 ( 62 )                       22 ( 56 )
  female                             5 ( 38 )                       17 ( 44 )
hcv genotype , n ( % )                                                                                0.0588
  genotype 1                         2 ( 15 )                       0
  genotype 2                         11 ( 85 )                      39 ( 100 )
baseline hcv rna ( kiu / ml )
  genotype 1 patients                46.5 ± 61.5                    -
  genotype 2 patients                2859 ± 3138                    1464 ± 2123                       0.0830
underlying diseases , n ( % )
  diabetes mellitus                  2 ( 15 )                       4 ( 10 )                          0.6323
  hypertension                       1 ( 8 )                        5 ( 13 )                          >.9999
  completed stroke                   1 ( 8 )                        0                                 0.2500
prior interferon therapy , n ( % )   0                              9 ( 23 )                          0.0910
baseline labs
  alt ( iu / ml )                    58.5 ± 54.5                    92.5 ± 68.7                       0.0265
  hemoglobin ( g / dl )              …                              …                                 …
  white blood cell count ( /mm³ )    … ± 1179                       5611 ± 1758                       0.4660
  platelet count ( 10⁴/mm³ )         17.5 ± 3.6                     17.8 ± 5.1                        0.8991

values are expressed as means ± sd .

no significant difference was present between the two groups except for hcv genotypes and baseline alt levels . in the collaboration group , the hcv genotype was 1 in two patients and 2 in the remaining 11 patients ; in the noncollaboration group , all 39 patients had genotype 2 .
the safety and tolerability profile of the 1h patients is shown in table 3 .

table 3 . rates of safety and tolerability in the 1h patients

                                                  collaboration group ( n=12 )   noncollaboration group ( n=46 )   p value
serious adverse events , n ( % )                  0                              1 ( 2 )                           >.9999
treatment modification , n ( % )
  discontinuation for 24 weeks rule               1 ( 8 )                        4 ( 7 )                           >.9999
  discontinuation for safety reasons              1 ( 8 )                        9 ( 20 )                          0.6700
  discontinuation for reasons other than safety   0                              4 ( 7 )                           0.5707
completed therapy , n ( % )                       10 ( 80 )                      29 ( 63 )                         0.3018
depression , n                                    1                              2 ( 4 )                           0.5080
other adverse effects , n ( % )
  influenza - like syndrome                       7 ( 58 )                       24 ( 52 )                         0.7556
  gastrointestinal symptoms                       1 ( 8 )                        1 ( 2 )                           0.3739
  psychiatric symptoms                            0                              3 ( 7 )                           >.9999
  dermatologic symptoms                           5 ( 42 )                       18 ( 39 )                         >.9999
  retinopathy                                     0                              1 ( 2 )                           >.9999
hematologic effect , n ( % )
  anemia ( 8.5 g / dl < hb < 10 g / dl )          6 ( 50 )                       16 ( 35 )                         0.5053
  anemia ( hb < 8.5 g / dl )                      0                              1 ( 2 )                           >.9999
  neutropenia < 750/mm³                           2 ( 17 )                       9 ( 20 )                          >.9999
  neutropenia < 500/mm³                           0                              4 ( 7 )                           0.5707
  thrombocytopenia < 8 × 10⁴/mm³                  4 ( 33 )                       11 ( 24 )                         0.4867
  thrombocytopenia < 5 × 10⁴/mm³                  0                              0                                 >.9999
pegifna2b dose reduction , n ( % )                5 ( 42 )                       22 ( 48 )                         0.7556
rbv dose reduction , n ( % )                      8 ( 67 )                       24 ( 52 )                         0.5178

discontinuation for the 24 weeks rule : therapy was discontinued because hcv - rna was still detectable at week 24 .

serious adverse effects ( cerebral hemorrhage ) occurred in one patient belonging to the noncollaboration group .
therapy was discontinued because hcv - rna was still
detectable at week 24 in one patient of the collaboration group and four patients of the
noncollaboration group .
therapy was discontinued for safety reasons in one patient
( depression ) of the collaboration group and nine patients of the noncollaboration group
( cerebral hemorrhage , n=1 ; depression , n=2 ; gi symptoms , n=1 ; psychiatric symptoms , n=3 ;
retinopathy , n=1 ; and dermatologic symptoms , n=1 ) .
therapy was discontinued for reasons
other than safety in four patients of the noncollaboration group ( lost to follow - up , n=3 ;
economic reason , n=1 ) .
the number of blood tests performed was 15.8 ± 3.6 in the collaboration group and 18.8 ± 4.3 in the noncollaboration group ( p=0.0517 ) .
no significant
difference was present between the 2 groups in the rates of hematologic toxicities .
the
rates of the dose reduction of pegifna2b and rbv were also comparable between the two
groups . as a result , 6/12 ( 50.0% ) patients of the collaboration group and 14/46 ( 30.4% ) patients of the noncollaboration group received ≥ 80% of the recommended dosage of both pegifna2b and rbv for ≥ 80% of the intended duration of therapy ( p=0.3065 ) .
the safety and tolerability profile of the non-1h patients is shown in table 4 .

table 4 . rates of safety and tolerability in the non-1h patients

                                                  collaboration group ( n=13 )   noncollaboration group ( n=39 )   p value
serious adverse events , n ( % )                  0                              0                                 >.9999
treatment modification , n ( % )
  discontinuation for safety reasons              1 ( 8 )                        3 ( 8 )                           >.9999
  discontinuation for reasons other than safety   1 ( 8 )                        2 ( 5 )                           >.9999
completed therapy , n ( % )                       11 ( 84 )                      34 ( 87 )                         >.9999
depression , n                                    0                              2 ( 5 )                           >.9999
other adverse effects , n ( % )
  influenza - like syndrome                       5 ( 38 )                       14 ( 36 )                         >.9999
  gastrointestinal symptoms                       0                              1 ( 3 )                           >.9999
  psychiatric symptoms                            1 ( 8 )                        0                                 0.2500
  dermatologic symptoms                           4 ( 31 )                       13 ( 33 )                         >.9999
hematologic effect , n ( % )
  anemia ( 8.5 g / dl < hb < 10 g / dl )          2 ( 15 )                       6 ( 15 )                          >.9999
  anemia ( hb < 8.5 g / dl )                      0                              1 ( 3 )                           >.9999
  neutropenia < 750/mm³                           0                              6 ( 15 )                          0.3172
  neutropenia < 500/mm³                           0                              0                                 >.9999
  thrombocytopenia < 8 × 10⁴/mm³                  0                              7 ( 18 )                          0.1715
  thrombocytopenia < 5 × 10⁴/mm³                  0                              0                                 >.9999
pegifna2b dose reduction , n ( % )                1 ( 8 )                        13 ( 33 )                         0.1554
rbv dose reduction , n ( % )                      3 ( 23 )                       18 ( 46 )                         0.1977
therapy was discontinued for safety
reasons in one patient ( dermatologic symptoms ) of the collaboration group and three patients
of the noncollaboration group ( depression , n=2 , gi symptoms , n=1 ) .
therapy was discontinued
for reasons other than safety ( lost to follow - up ) in one patient of the collaboration group
and two patients of the noncollaboration group .
the number of blood tests performed was 8.0 ± 1.8 in the collaboration group and 13.4 ± 4.8 in the noncollaboration group ( p=0.0004 ) .
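the blood - test counts are reported as mean ± sd per group , and the paper does not state which test produced p=0.0004 . as an illustrative sketch only , welch's unequal - variance t statistic can be recovered from those summary statistics :

```python
from math import sqrt

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """welch's t statistic and welch-satterthwaite degrees of freedom
    computed from summary statistics (mean, sd, n) of two groups."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean2 - mean1) / sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# blood tests in the non-1h patients: 8.0 ± 1.8 (n=13) vs 13.4 ± 4.8 (n=39)
t, df = welch_t(8.0, 1.8, 13, 13.4, 4.8, 39)
print(round(t, 2), round(df, 1))  # 5.89 49.1
```

a t of this size on ~49 degrees of freedom indicates a clearly significant difference ; the authors may well have used a different ( e.g. nonparametric ) test , so the exact reported p value is not reproduced here .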
no
significant difference was present between the two groups in the rates of hematologic
toxicities .
as a result , the rates of the dose reduction of pegifna2b and rbv were also
comparable between the two groups .
treatment responses of the patients in the intention - to - treat analysis are shown in table 5 .

table 5 . treatment responses of the patients in the intention - to - treat analysis

                              collaboration group   noncollaboration group   p value
1h patients
  total number of patients    12                    46
  svr , n ( % )               5 (42)                18 (39)                  >.9999
non-1h patients
  total number of patients    13                    39
  svr , n ( % )               8 (62)                25 (64)                  >.9999

svr , sustained virologic response .

for the 1h patients , the svr rate in the collaboration group was 42% , which
was similar to that ( 39% ) in the noncollaboration group . for the non-1h patients , the
svr rate in the collaboration group was 62% , which was also similar to that ( 64% ) in the
noncollaboration group .
the increase in deaths from hepatocellular carcinoma ( hcc ) is mostly attributed to chronic hcv
infection . to reduce the deaths from hcc ,
a national 5-year project identifying hcv carriers
in the general japanese population was started in april 2002 . hcv testing was performed as part of a regular
preventive physical examination program offered every five years to subjects 40 years and older .
additionally , hcv testing was also offered to individuals with increased risk
of hcv infection . during the first year , the project detected hcv rna in 1.1% of subjects
tested during their regular physical examination and in 2.7% of the high - risk individuals .
among the newly diagnosed hcv - positive patients , a considerable proportion ( 48% ) had visited
nonhepatologists , such as their pcps . however , most pcps have limited experience in treating
patients with chc with an interferon - based therapy .
japan has relatively few hepatologists , and this is
especially the case in ibaraki prefecture , where tkgh is located .
additionally , self - injection of pegifna2b is
not permitted in japan , and weekly injections in a pcp's office are more convenient for
patients . to our knowledge , this is the first study to assess an interferon - based therapy in patients
with chc managed in collaboration between hepatologists and pcps , although the sample size is
small .
genotype 1 patients
require 48 weeks of the combination therapy for 50% successful viral elimination , while
genotype 2 patients require 24 weeks of therapy for 80% or 90% viral elimination .
in the present study ,
the two groups showed similar rates of treatment - related serious
adverse effects and dropout rates for adverse effects .
svr rates were also similar between
the two groups . moreover , for the 1h patients , the svr rate ( intention - to - treat analysis ) in
the collaboration group was 42% , which was similar to that ( 121/254 , 48% ) reported in a
large clinical trial , where
the patients were managed completely by hepatologists ( p=0.7781 ) .
the results also showed a
similar discontinuation rate ( 1/12 , 8% vs. 52/254 , 20% , p=0.4689 ) and safety profile . among
the non-1h patients , the svr rate for the genotype 2
patients ( intention - to - treat analysis , 8/11 , 73% ) was also similar to that
( 168/250 , 67% ) obtained in a large clinical trial ( p>0.9999 ) .
the rate of therapy
discontinuation ( 1/11 , 9% , vs. 37/250 , 15% , p>0.9999 ) and the safety profile were also
comparable . in a retrospective study ,
significantly better treatment response rates were found in those patients who visited a
specialist regularly , at least once every three months , compared with those who visited a
specialist irregularly .
difficult - to - treat patients , i.e. ,
those infected with hcv genotypes 1 and 4 , may benefit more from close therapy supervision
by a specialist to achieve higher treatment success rates ; in that study , response rates were
lower in irregular visitors compared with regular visitors .
treatment success is highly influenced by adherence to therapy in genotype-1-infected
patients . as the authors
mentioned , therapy - associated adverse effects may lead , in the absence of a specialist , to
more premature dose reductions and/or unnecessary treatment discontinuations . in the present
study ,
the adherence to the therapy in the 1h patients was comparable between the two
groups .
moreover , the discontinuation rate in the collaboration group was low and similar to
that in the noncollaboration group .
the low discontinuation rates in the collaboration group
could contribute to the comparable svr rates to those reported in the large clinical trials .
in the present study , the hepatologists supervised treatment of the patients every four
weeks .
whether supervision by hepatologists every four weeks is superior to that done every
3 months needs to be assessed .
large - scale studies are also needed to confirm the usefulness
of the collaboration between hepatologists and pcps . in conclusion
, collaboration between hepatologists and primary care physicians may be a
valid treatment alternative to treat patients with chronic hepatitis c using current
standard antiviral therapy . | objective : the purpose of this study was to assess the treatment outcome in
patients with chronic hepatitis c ( chc ) using the current standard antiviral therapy when
patients were treated in collaboration between hepatologists and primary care physicians
( pcps).patients and methods : one hundred and ten patients with chc were treated
with a combination therapy of peginterferon - alpha 2b and ribavirin . among them , 25
patients were treated by a collaboration between hepatologists and pcps ( collaboration
group ) , whereas 85 patients were treated with exclusively by hepatologists
( noncollaboration group ) .
the duration of the therapy was 48 weeks for 58
difficult - to - treat patients ( genotype 1 with a high load of hcv - rna ; 1h patients ) and 24 weeks for
the remaining 52 patients ( non-1h patients ) . in the collaboration group ,
antiviral therapy
was initiated and adjusted , if needed , by hepatologists ( visits every four weeks ) , whereas
the weekly administration of peginterferon - alpha 2b was performed by pcps .
clinical
characteristics and the treatment outcome were compared between these two groups . results : the two groups had similar baseline characteristics . by intention
to treat , the two groups showed similar rates of treatment - related serious adverse effects
( 0% vs. 1% , respectively ) and dropout rates for adverse effects ( 8% vs. 13% ,
respectively ) .
sustained virologic response rates were also similar between the two
groups , being 42% vs. 39% in the 58 1h patients ( ns ) and 62% vs. 64% in the 52 non-1h
patients ( ns ) , respectively . conclusions : collaboration between hepatologists and pcps may be a valid
treatment alternative to treat patients with chc using the current standard antiviral
therapy . |
given the harms that can ensue from cancer screening procedures , people's decisions as to whether to undergo cancer screening should be based on a realistic knowledge of its benefits .
face - to - face - interviews were conducted among a representative sample of men and women in nine european countries , who were asked to choose among estimates of the number of fewer cancer - specific deaths ( per 1000 individuals screened ) by prostate - specific antigen and mammography screening , respectively .
this study found dramatic ( by an order of magnitude or more ) overestimation of the benefits ( absolute cancer - specific mortality reduction ) of mammography and prostate - specific antigen testing in the vast majority of women and men , respectively , in all countries surveyed .
frequent consultation of sources of medical information ( including physicians ) was not associated with more realistic knowledge of the benefits of screening . a basis for informed decisions by people about participation in screening for breast and prostate cancer is largely nonexistent in europe , suggesting inadequacies in the information made available to the public .
the influence of the public 's overestimation of screening benefits on actual participation in screening was not addressed in this study , and the work was restricted to european countries .
| making informed decisions about breast and prostate cancer screening requires knowledge of its benefits .
however , country - specific information on public knowledge of the benefits of screening is lacking .
face - to - face computer - assisted personal interviews were conducted with 10 228 persons selected by a representative quota method in nine european countries ( austria , france , germany , italy , the netherlands , poland , russia , spain , and the united kingdom ) to assess perceptions of cancer - specific mortality reduction associated with mammography and prostate - specific antigen ( psa ) screening .
participants were also queried on the extent to which they consulted 14 different sources of health information .
correlation coefficients between frequency of use of particular sources and the accuracy of estimates of screening benefit were calculated .
ninety - two percent of women overestimated the mortality reduction from mammography screening by at least one order of magnitude or reported that they did not know .
eighty - nine percent of men overestimated the benefits of psa screening by a similar extent or did not know .
women and men aged 50 - 69 years , and thus targeted by screening programs , were not substantially better informed about the benefits of mammography and psa screening , respectively , than men and women overall .
frequent consulting of physicians ( r = .07 , 95% confidence interval [ ci ] = 0.05 to 0.09 ) and health pamphlets ( r = .06 , 95% ci = 0.04 to 0.08 ) tended to increase rather than reduce overestimation .
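the 95% confidence intervals quoted for these correlation coefficients can be reproduced with the standard fisher z transformation , assuming each coefficient used all 10 228 respondents ( the per - coefficient n is not stated , so this is an approximation ) :

```python
from math import atanh, tanh, sqrt

def pearson_r_ci(r, n, z_crit=1.959964):
    """95% confidence interval for a pearson correlation via the
    fisher z transformation: atanh(r) ± z_crit / sqrt(n - 3)."""
    z = atanh(r)
    se = 1 / sqrt(n - 3)
    return tanh(z - z_crit * se), tanh(z + z_crit * se)

# r = .07 for physicians and r = .06 for health pamphlets, n = 10 228
lo, hi = pearson_r_ci(0.07, 10228)
print(round(lo, 2), round(hi, 2))  # 0.05 0.09

lo2, hi2 = pearson_r_ci(0.06, 10228)
print(round(lo2, 2), round(hi2, 2))  # 0.04 0.08
```

both intervals round to the values reported in the abstract , which is consistent with the full sample contributing to each coefficient .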
the vast majority of citizens in nine european countries systematically overestimate the benefits of mammography and psa screening . in the countries investigated
, physicians and other information sources appear to have little impact on improving citizens perceptions of these benefits . |
SECTION 1. SHORT TITLE.
This title may be cited as the ``FHA Manufactured Housing Loan
Modernization Act of 2007''.
SEC. 2. FINDINGS AND PURPOSES.
(a) Findings.--The Congress finds that--
(1) manufactured housing plays a vital role in providing
housing for low- and moderate-income families in the United
States;
(2) the FHA title I insurance program for manufactured home
loans traditionally has been a major provider of mortgage
insurance for home-only transactions;
(3) the manufactured housing market is in the midst of a
prolonged downturn which has resulted in a severe contraction
of traditional sources of private lending for manufactured home
purchases;
(4) during past downturns the FHA title I insurance program
for manufactured homes has filled the lending void by providing
stability until the private markets could recover;
(5) in 1992, during the manufactured housing industry's
last major recession, over 30,000 manufactured home loans were
insured under title I;
(6) in 2006, fewer than 1,500 manufactured housing loans
were insured under title I;
(7) the loan limits for title I manufactured housing loans
have not been adjusted for inflation since 1992; and
(8) these problems with the title I program have resulted
in an atrophied market for manufactured housing loans, leaving
American families who have the most difficulty achieving
homeownership without adequate financing options for home-only
manufactured home purchases.
(b) Purposes.--The purposes of this Act are--
(1) to provide adequate funding for FHA-insured
manufactured housing loans for low- and moderate-income
homebuyers during all economic cycles in the manufactured
housing industry;
(2) to modernize the FHA title I insurance program for
manufactured housing loans to enhance participation by Ginnie
Mae and the private lending markets; and
(3) to adjust the low loan limits for title I manufactured
home loan insurance to reflect the increase in costs since such
limits were last increased in 1992 and to index the limits to
inflation.
SEC. 3. EXCEPTION TO LIMITATION ON FINANCIAL INSTITUTION PORTFOLIO.
The second sentence of section 2(a) of the National Housing Act (12
U.S.C. 1703(a)) is amended--
(1) by striking ``In no case'' and inserting ``Other than
in connection with a manufactured home or a lot on which to
place such a home (or both), in no case''; and
(2) by striking ``: Provided, That with'' and inserting ``.
With''.
SEC. 4. INSURANCE BENEFITS.
(a) In General.--Subsection (b) of section 2 of the National
Housing Act (12 U.S.C. 1703(b)), is amended by adding at the end the
following new paragraph:
``(8) Insurance benefits for manufactured housing loans.--
Any contract of insurance with respect to loans, advances of
credit, or purchases in connection with a manufactured home or
a lot on which to place a manufactured home (or both) for a
financial institution that is executed under this title after
the date of the enactment of the FHA Manufactured Housing Loan
Modernization Act of 2007 by the Secretary shall be conclusive
evidence of the eligibility of such financial institution for
insurance, and the validity of any contract of insurance so
executed shall be incontestable in the hands of the bearer from
the date of the execution of such contract, except for fraud or
misrepresentation on the part of such institution.''.
(b) Applicability.--The amendment made by subsection (a) shall only
apply to loans that are registered or endorsed for insurance after the
date of the enactment of this Act.
SEC. 5. MAXIMUM LOAN LIMITS.
(a) Dollar Amounts.--Paragraph (1) of section 2(b) of the National
Housing Act (12 U.S.C. 1703(b)(1)) is amended--
(1) in clause (ii) of subparagraph (A), by striking
``$17,500'' and inserting ``$25,090'';
(2) in subparagraph (C) by striking ``$48,600'' and
inserting ``$69,678'';
(3) in subparagraph (D) by striking ``$64,800'' and
inserting ``$92,904'';
(4) in subparagraph (E) by striking ``$16,200'' and
inserting ``$23,226''; and
(5) by realigning subparagraphs (C), (D), and (E) 2 ems to
the left so that the left margins of such subparagraphs are
aligned with the margins of subparagraphs (A) and (B).
(b) Annual Indexing.--Subsection (b) of section 2 of the National
Housing Act (12 U.S.C. 1703(b)), as amended by the preceding provisions
of this Act, is further amended by adding at the end the following new
paragraph:
``(9) Annual indexing of manufactured housing loans.--The
Secretary shall develop a method of indexing in order to
annually adjust the loan limits established in subparagraphs
(A)(ii), (C), (D), and (E) of this subsection. Such index shall
be based on the manufactured housing price data collected by
the United States Census Bureau. The Secretary shall establish
such index no later than one year after the date of the
enactment of the FHA Manufactured Housing Loan Modernization
Act of 2007.''.
(c) Technical and Conforming Changes.--Paragraph (1) of section
2(b) of the National Housing Act (12 U.S.C. 1703(b)(1)) is amended--
(1) by striking ``No'' and inserting ``Except as provided
in the last sentence of this paragraph, no''; and
(2) by adding after and below subparagraph (G) the
following:
``The Secretary shall, by regulation, annually increase the dollar
amount limitations in subparagraphs (A)(ii), (C), (D), and (E) (as such
limitations may have been previously adjusted under this sentence) in
accordance with the index established pursuant to paragraph (9).''.
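The new dollar limits in subsection (a) and the annual indexing in subsection (b) amount to simple proportional arithmetic. The sketch below is illustrative only: the Act leaves construction of the index to the Secretary and specifies no rounding rule, so none is applied here.

```python
# Illustrative only: the Act directs the Secretary to build an index from
# Census Bureau manufactured-housing price data and raise the limits
# annually in line with it; the formula below is an assumed reading.
def adjust_limit(limit, old_index, new_index):
    """Scale a loan limit by the change in the price index (no rounding)."""
    return limit * new_index / old_index

# The Section 5 increases apply one uniform factor (~1.4337) to the 1992 limits.
pairs = [(17500, 25090), (48600, 69678), (64800, 92904), (16200, 23226)]
for old, new in pairs:
    assert abs(new / old - 1.4337) < 0.001  # same inflation factor throughout

print(adjust_limit(25090, 100.0, 103.0))  # 25842.7
```

A 3 percent rise in the index would thus lift the $25,090 limit to about $25,843, before any rounding convention the Secretary might adopt.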
SEC. 6. INSURANCE PREMIUMS.
Subsection (f) of section 2 of the National Housing Act (12 U.S.C.
1703(f)) is amended--
(1) by inserting ``(1) Premium Charges.--'' after ``(f)'';
and
(2) by adding at the end the following new paragraph:
``(2) Manufactured Home Loans.--Notwithstanding paragraph (1), in
the case of a loan, advance of credit, or purchase in connection with a
manufactured home or a lot on which to place such a home (or both), the
premium charge for the insurance granted under this section shall be
paid by the borrower under the loan or advance of credit, as follows:
``(A) At the time of the making of the loan, advance of
credit, or purchase, a single premium payment in an amount not
to exceed 2.25 percent of the amount of the original insured
principal obligation.
``(B) In addition to the premium under subparagraph (A),
annual premium payments during the term of the loan, advance,
or obligation purchased in an amount not exceeding 1.0 percent
of the remaining insured principal balance (excluding the
portion of the remaining balance attributable to the premium
collected under subparagraph (A) and without taking into
account delinquent payments or prepayments).
``(C) Premium charges under this paragraph shall be
established in amounts that are sufficient, but do not exceed
the minimum amounts necessary, to maintain a negative credit
subsidy for the program under this section for insurance of
loans, advances of credit, or purchases in connection with a
manufactured home or a lot on which to place such a home (or
both), as determined based upon risk to the Federal Government
under existing underwriting requirements.
``(D) The Secretary may increase the limitations on premium
payments to percentages above those set forth in subparagraphs
(A) and (B), but only if necessary, and not in excess of the
minimum increase necessary, to maintain a negative credit
subsidy as described in subparagraph (C).''.
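The premium schedule in new paragraph (2) is an up-front charge of at most 2.25 percent of the original insured principal plus an annual charge of at most 1.0 percent of the remaining insured balance. The following sketch applies those statutory caps to a hypothetical loan; the dollar amounts and balance schedule are invented for illustration.

```python
UPFRONT_RATE = 0.0225  # cap in subparagraph (A): 2.25% of original insured principal
ANNUAL_RATE = 0.01     # cap in subparagraph (B): 1.0% of remaining insured balance

def title_i_premiums(principal, yearly_balances):
    """Premiums at the statutory caps for a manufactured-home loan.

    yearly_balances: remaining insured principal at each annual premium date,
    already excluding any financed portion of the up-front premium, as
    subparagraph (B) requires.
    """
    upfront = UPFRONT_RATE * principal
    annual = [ANNUAL_RATE * bal for bal in yearly_balances]
    return upfront, annual

# hypothetical $50,000 loan with three annual premium dates
up, ann = title_i_premiums(50_000, [50_000, 40_000, 30_000])
print(round(up, 2), round(sum(ann), 2))  # 1125.0 1200.0
```

Under subparagraphs (C) and (D), the Secretary sets (and may raise) the actual rates only as needed to keep the program at a negative credit subsidy, so the caps above are ceilings, not fixed charges.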
SEC. 7. TECHNICAL CORRECTIONS.
(a) Dates.--Subsection (a) of section 2 of the National Housing Act
(12 U.S.C. 1703(a)) is amended--
(1) by striking ``on and after July 1, 1939,'' each place
such term appears; and
(2) by striking ``made after the effective date of the
Housing Act of 1954''.
(b) Authority of Secretary.--Subsection (c) of section 2 of the
National Housing Act (12 U.S.C. 1703(c)) is amended to read as follows:
``(c) Handling and Disposal of Property.--
``(1) Authority of secretary.--Notwithstanding any other
provision of law, the Secretary may--
``(A) deal with, complete, rent, renovate,
modernize, insure, or assign or sell at public or
private sale, or otherwise dispose of, for cash or
credit in the Secretary's discretion, and upon such
terms and conditions and for such consideration as the
Secretary shall determine to be reasonable, any real or
personal property conveyed to or otherwise acquired by
the Secretary, in connection with the payment of
insurance heretofore or hereafter granted under this
title, including any evidence of debt, contract, claim,
personal property, or security assigned to or held by
him in connection with the payment of insurance
heretofore or hereafter granted under this section; and
``(B) pursue to final collection, by way of
compromise or otherwise, all claims assigned to or held
by the Secretary and all legal or equitable rights
accruing to the Secretary in connection with the
payment of such insurance, including unpaid insurance
premiums owed in connection with insurance made
available by this title.
``(2) Advertisements for proposals.--Section 3709 of the
Revised Statutes shall not be construed to apply to any
contract of hazard insurance or to any purchase or contract for
services or supplies on account of such property if the amount
thereof does not exceed $25,000.
``(3) Delegation of authority.--The power to convey and to
execute in the name of the Secretary, deeds of conveyance,
deeds of release, assignments and satisfactions of mortgages,
and any other written instrument relating to real or personal
property or any interest therein heretofore or hereafter
acquired by the Secretary pursuant to the provisions of this
title may be exercised by an officer appointed by the Secretary
without the execution of any express delegation of power or
power of attorney. Nothing in this subsection shall be
construed to prevent the Secretary from delegating such power
by order or by power of attorney, in the Secretary's
discretion, to any officer or agent the Secretary may
appoint.''.
SEC. 8. REVISION OF UNDERWRITING CRITERIA.
(a) In General.--Subsection (b) of section 2 of the National
Housing Act (12 U.S.C. 1703(b)), as amended by the preceding provisions
of this Act, is further amended by adding at the end the following new
paragraph:
``(10) Financial soundness of manufactured housing
program.--The Secretary shall establish such underwriting
criteria for loans and advances of credit in connection with a
manufactured home or a lot on which to place a manufactured
home (or both), including such loans and advances represented
by obligations purchased by financial institutions, as may be
necessary to ensure that the program under this title for
insurance for financial institutions against losses from such
loans, advances of credit, and purchases is financially
sound.''.
(b) Timing.--Not later than the expiration of the 6-month period
beginning on the date of the enactment of this Act, the Secretary of
Housing and Urban Development shall revise the existing underwriting
criteria for the program referred to in paragraph (10) of section 2(b)
of the National Housing Act (as added by subsection (a) of this
section) in accordance with the requirements of such paragraph.
SEC. 9. REQUIREMENT OF SOCIAL SECURITY ACCOUNT NUMBER FOR ASSISTANCE.
Section 2 of the National Housing Act (12 U.S.C. 1703) is amended
by adding at the end the following new subsection:
``(j) Requirement of Social Security Account Number for
Financing.--No insurance shall be granted under this section with
respect to any obligation representing any loan, advance of credit, or
purchase by a financial institution unless the borrower to which the
loan or advance of credit was made, and each member of the family of
the borrower who is 18 years of age or older or is the spouse of the
borrower, has a valid social security number.''.
SEC. 10. GAO STUDY OF MITIGATION OF TORNADO RISKS TO MANUFACTURED
HOMES.
The Comptroller General of the United States shall assess how the
Secretary of Housing and Urban Development utilizes the FHA
manufactured housing loan insurance program under title I of the
National Housing Act, the community development block grant program
under title I of the Housing and Community Development Act of 1974, and
other programs and resources available to the Secretary to mitigate the
risks to manufactured housing residents and communities resulting from
tornados. The Comptroller General shall submit to the Congress a report
on the conclusions and recommendations of the assessment conducted
pursuant to this section not later than the expiration of the 12-month
period beginning on the date of the enactment of this Act.
Passed the House of Representatives June 25, 2007.
Attest:
LORRAINE C. MILLER,
Clerk. | FHA Manufactured Housing Loan Modernization Act of 2007 - Amends the National Housing Act with respect to Federal Housing Administration (FHA) housing loan insurance for manufactured homes (or lots for such homes).
(Sec. 3) Exempts such loans from certain financial institution portfolio limits, increasing an allowable claim for loss from 10% to 90% of an institution's total amount of such loans, credit advances, and purchases.
(Sec. 4) Makes any new contract of insurance for such loans, credit advances, or purchases conclusive evidence of an institution's insurance eligibility. (Thus requires each loan to be insured individually instead of as part of a bundle of such loans.)
(Sec. 5) Increases loan limits, requiring annual indexing.
(Sec. 6) Prescribes requirements for payment by a borrower of premium charges for credit insurance, including an up-front premium of up to 2.25% and an annual premium of up to 1%.
(Sec. 7) Revises requirements for the handling and disposal of any real or personal property conveyed to or acquired by the Secretary of Housing and Urban Development (HUD), and the pursuit of all claims against mortgagors assigned to the Secretary by mortgagees.
(Sec. 8) Directs the Secretary of HUD to: (1) establish underwriting criteria for loans and credit in connection with a manufactured home, or a lot for one, that will ensure the manufactured housing program's financial soundness; and (2) revise within six months existing criteria to accord with those established under this Act.
(Sec. 9) Prohibits any grant of credit insurance to a financial institution unless the borrower to which a housing renovation or modernization loan or advance of credit was made, and each member of the borrower's family age 18 years or older, including the borrower's spouse, has a valid Social Security number.
(Sec. 10) Directs the Comptroller General to assess, and report to Congress on, how the Secretary of HUD utilizes the FHA manufactured housing loan insurance program, the community development block grant program, and other programs and resources to mitigate the risks to manufactured housing residents and communities resulting from tornados. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Retirement Security for Life Act of
2004''.
SEC. 2. EXCLUSION FOR LIFETIME ANNUITY PAYMENTS.
(a) Lifetime Annuity Payments Under Annuity Contracts.--Section
72(b) of the Internal Revenue Code of 1986 (relating to exclusion
ratio) is amended by adding at the end the following new paragraph:
``(5) Exclusion for lifetime annuity payments.--
``(A) In general.--In the case of lifetime annuity
payments received under one or more annuity contracts
in any taxable year, gross income shall not include 50
percent of the portion of lifetime annuity payments
otherwise includible (without regard to this paragraph)
in gross income under this section. For purposes of the
preceding sentence, the amount excludible from gross
income in any taxable year shall not exceed $20,000.
``(B) Cost-of-living adjustment.--In the case of
taxable years beginning after December 31, 2005, the
$20,000 amount in subparagraph (A) shall be increased
by an amount equal to--
``(i) such dollar amount, multiplied by
``(ii) the cost-of-living adjustment
determined under section 1(f)(3) for the
calendar year in which the taxable year begins,
determined by substituting `calendar year 2004'
for `calendar year 1992' in subparagraph (B)
thereof.
If any amount as increased under the preceding sentence
is not a multiple of $500, such amount shall be rounded
to the next lower multiple of $500.
``(C) Application of paragraph.--Subparagraph (A)
shall not apply to--
``(i) any amount received under an eligible
deferred compensation plan (as defined in
section 457(b)) or under a qualified retirement
plan (as defined in section 4974(c)),
``(ii) any amount paid under an annuity
contract that is received by the beneficiary
under the contract--
``(I) after the death of the
annuitant in the case of payments
described in subsection
(c)(5)(A)(ii)(III), unless the
beneficiary is the surviving spouse of
the annuitant, or
``(II) after the death of the
annuitant and joint annuitant in the
case of payments described in
subsection (c)(5)(A)(ii)(IV), unless
the beneficiary is the surviving spouse
of the last to die of the annuitant and
the joint annuitant, or
``(iii) any annuity contract that is a
qualified funding asset (as defined in section
130(d)), but without regard to whether there is
a qualified assignment.
``(D) Investment in the contract.--For purposes of
this section, the investment in the contract shall be
determined without regard to this paragraph.''.
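Mechanically, subparagraphs (A) and (B) reduce to two computations: exclude half of the otherwise-includible lifetime annuity payments up to a $20,000 cap, and index the cap by the cost-of-living adjustment with round-down to a $500 multiple. A sketch under those readings (the COLA factor used below is invented for illustration):

```python
def excluded_amount(includible_payments, cap=20_000):
    """Amount excluded from gross income under new paragraph (5)(A):
    50% of the otherwise-includible lifetime annuity payments, capped."""
    return min(0.5 * includible_payments, cap)

def indexed_cap(base=20_000, cola=0.0):
    """Subparagraph (B): increase the cap by the cost-of-living adjustment,
    then round down to the next lower multiple of $500."""
    raised = base * (1 + cola)
    return int(raised // 500) * 500

print(excluded_amount(30_000))  # 15000.0 -- half of the payments, under the cap
print(excluded_amount(60_000))  # 20000 -- the cap binds
print(indexed_cap(cola=0.083))  # 21500 -- 21660 rounded down to a $500 multiple
```

So a taxpayer with $30,000 of includible lifetime annuity payments would exclude $15,000, while one with $60,000 would exclude only the capped $20,000 (as adjusted for taxable years after 2005).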
(b) Definitions.--Subsection (c) of section 72 of the Internal
Revenue Code of 1986 is amended by adding at the end the following new
paragraph:
``(5) Lifetime annuity payment.--
``(A) In general.--For purposes of subsection
(b)(5), the term `lifetime annuity payment' means any
amount received as an annuity under any portion of an
annuity contract, but only if--
``(i) the only person (or persons in the
case of payments described in subclause (II) or
(IV) of clause (ii)) legally entitled (by
operation of the contract, a trust, or other
legally enforceable means) to receive such
amount during the life of the annuitant or
joint annuitant is such annuitant or joint
annuitant, and
``(ii) such amount is part of a series of
substantially equal periodic payments made not
less frequently than annually over--
``(I) the life of the annuitant,
``(II) the lives of the annuitant
and a joint annuitant, but only if the
annuitant is the spouse of the joint
annuitant as of the annuity starting
date or the difference in age between
the annuitant and joint annuitant is 15
years or less,
``(III) the life of the annuitant
with a minimum period of payments or
with a minimum amount that must be paid
in any event, or
``(IV) the lives of the annuitant
and a joint annuitant with a minimum
period of payments or with a minimum
amount that must be paid in any event,
but only if the annuitant is the spouse
of the joint annuitant as of the
annuity starting date or the difference
in age between the annuitant and joint
annuitant is 15 years or less.
``(iii) Exceptions.--For purposes of clause
(ii), annuity payments shall not fail to be
treated as part of a series of substantially
equal periodic payments--
``(I) because the amount of the
periodic payments may vary in
accordance with investment experience,
reallocations among investment options,
actuarial gains or losses, cost of
living indices, a constant percentage
applied not less frequently than
annually, or similar fluctuating
criteria,
``(II) due to the existence of, or
modification of the duration of, a
provision in the contract permitting a
lump sum withdrawal after the annuity
starting date, or
``(III) because the period between
each such payment is lengthened or
shortened, but only if at all times
such period is no longer than one
calendar year.
``(B) Annuity contract.--For purposes of
subparagraph (A) and subsections (b)(5) and (w), the
term `annuity contract' means a commercial annuity (as
defined by section 3405(e)(6)), other than an endowment
or life insurance contract.
``(C) Minimum period of payments.--For purposes of
subparagraph (A), the term `minimum period of payments'
means a guaranteed term of payments that does not
exceed the greater of 10 years or--
``(i) the life expectancy of the annuitant
as of the annuity starting date, in the case of
lifetime annuity payments described in
subparagraph (A)(ii)(III), or
``(ii) the life expectancy of the annuitant
and joint annuitant as of the annuity starting
date, in the case of lifetime annuity payments
described in subparagraph (A)(ii)(IV).
For purposes of this subparagraph, life expectancy
shall be computed with reference to the tables
prescribed by the Secretary under paragraph (3). For
purposes of subsection (w)(1)(C)(ii), the permissible
minimum period of payments shall be determined as of
the annuity starting date and reduced by one for each
subsequent year.
``(D) Minimum amount that must be paid in any
event.--For purposes of subparagraph (A), the term
`minimum amount that must be paid in any event' means
an amount payable to the designated beneficiary under
an annuity contract that is in the nature of a refund
and does not exceed the greater of the amount applied
to produce the lifetime annuity payments under the
contract or the amount, if any, available for
withdrawal under the contract on the date of death.''.
(c) Recapture Tax for Lifetime Annuity Payments.--Section 72 of the
Internal Revenue Code of 1986 is amended by redesignating subsection
(w) as subsection (x) and by inserting after subsection (v) the
following new subsection:
``(w) Recapture Tax for Modifications to or Reductions in Lifetime
Annuity Payments.--
``(1) In general.--If any amount received under an annuity
contract is excluded from income by reason of subsection (b)(5)
(relating to lifetime annuity payments), and--
``(A) the series of payments under such contract is
subsequently modified so any future payments are not
lifetime annuity payments,
``(B) after the date of receipt of the first
lifetime annuity payment under the contract an
annuitant receives a lump sum and thereafter is to
receive annuity payments in a reduced amount under the
contract, or
``(C) after the date of receipt of the first
lifetime annuity payment under the contract the dollar
amount of any subsequent annuity payment is reduced and
a lump sum is not paid in connection with the
reduction, unless such reduction is--
``(i) due to an event described in
subsection (c)(5)(A)(iii), or
``(ii) due to the addition of, or increase
in, a minimum period of payments within the
meaning of subsection (c)(5)(C) or a minimum
amount that must be paid in any event (within
the meaning of subsection (c)(5)(D)),
then gross income for the first taxable year in which such
modification or reduction occurs shall be increased by the
recapture amount.
``(2) Recapture amount.--
``(A) In general.--For purposes of this subsection,
the recapture amount shall be the amount, determined
under rules prescribed by the Secretary, equal to the
amount that (but for subsection (b)(5)) would have been
includible in the taxpayer's gross income if the
modification or reduction described in paragraph (1)
had been in effect at all times, plus interest for the
deferral period at the underpayment rate established by
section 6621.
``(B) Deferral period.--For purposes of this
subsection, the term `deferral period' means the period
beginning with the taxable year in which (without
regard to subsection (b)(5)) the payment would have
been includible in gross income and ending with the
taxable year in which the modification described in
paragraph (1) occurs.
``(3) Exceptions to recapture tax.--Paragraph (1) shall not
apply in the case of any modification or reduction that occurs
because an annuitant--
``(A) dies or becomes disabled (within the meaning
of subsection (m)(7)),
``(B) becomes a chronically ill individual within
the meaning of section 7702B(c)(2), or
``(C) encounters hardship.''.
(d) Lifetime Distributions of Life Insurance Death Benefits.--
(1) In general.--Section 101(d) of the Internal Revenue
Code of 1986 (relating to payment of life insurance proceeds at
a date later than death) is amended by adding at the end the
following new paragraph:
``(4) Exclusion for lifetime annuity payments.--
``(A) In general.--In the case of amounts to which
this subsection applies, gross income shall not include
the lesser of--
``(i) 50 percent of the portion of lifetime
annuity payments otherwise includible in gross
income under this section (determined without
regard to this paragraph), or
``(ii) the amount in effect under section
72(b)(5).
``(B) Rules of section 72(b)(5) to apply.--For
purposes of this paragraph, rules similar to the rules
of section 72(b)(5) and section 72(w) shall apply,
substituting the term `beneficiary of the life
insurance contract' for the term `annuitant' wherever
it appears, and substituting the term `life insurance
contract' for the term `annuity contract' wherever it
appears.''.
(2) Conforming amendment.--Section 101(d)(1) of such Code
is amended by inserting ``or paragraph (4)'' after ``to the
extent not excluded by the preceding sentence''.
(e) Effective Date.--
(1) In general.--The amendments made by this section shall
apply to amounts received in calendar years beginning after the
date of the enactment of this Act.
(2) Special rule for existing contracts.--In the case of a
contract in force on the date of the enactment of this Act that
does not satisfy the requirements of section 72(c)(5)(A) of the
Internal Revenue Code of 1986 (as added by this section) (or
requirements similar to such section 72(c)(5)(A) in the case of
a life insurance contract), any modification to such contract
(including a change in ownership) or to the payments thereunder
that is made to satisfy the requirements of such section (or
similar requirements) shall not result in the recognition of
any gain or loss, any amount being included in gross income, or
any addition to tax that otherwise might result from such
modification, but only if the modification is completed prior
to the date that is 2 years after the date of the enactment of
this Act. | Retirement Security for Life Act of 2004 - Amends the Internal Revenue Code to allow an exclusion from gross income for 50 percent of the amount otherwise includible in gross income as guaranteed payments from certain annuity or life insurance contracts. Limits the amount of such exclusion to $20,000 in any taxable year. Provides for an inflation adjustment of the $20,000 limitation beginning in 2006. |
lacidipine ( lcdp ) is chemically a 1,4-dihydropyridine derivative as shown in figure 1a , which is pharmacologically a calcium channel blocker used as an anti - hypertensive drug .
lcdp works by blocking calcium channels present in the muscle cells of the arterial wall .
calcium is needed by muscle cells in order for them to contract ; so , by depriving them of calcium , lcdp causes the muscle cells to relax . relaxing and widening of the small arteries decreases the resistance that the heart has to push against in order to pump the blood around the body , which reduces the pressure within the blood vessels .
lcdp is completely absorbed from the gastrointestinal tract ( git ) provided it dissolves completely .
the quandary , however , is that lcdp has very low aqueous solubility , which presents a challenge to formulation scientists .
when an active agent is administered orally , it must first dissolve in gastric and/or intestinal fluids before it permeates the membranes of the gi tract to reach systemic circulation .
therefore , a drug with poor aqueous solubility will typically exhibit dissolution rate limited absorption .
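the dissolution - rate - limited absorption described above is commonly modeled with the noyes - whitney equation . the following minimal sketch ( with purely hypothetical parameter values , not data from this study ) illustrates how a low saturation solubility caps the dissolution rate and how increasing surface area , as a solid dispersion does , compensates :

```python
# Minimal sketch of the Noyes-Whitney model, illustrating why a poorly
# soluble drug such as LCDP shows dissolution-rate-limited absorption.
# All parameter values below are hypothetical, chosen only for illustration.

def noyes_whitney_rate(D, A, h, Cs, C):
    """Dissolution rate dM/dt = (D * A / h) * (Cs - C).

    D  : diffusion coefficient of the drug (cm^2/s)
    A  : surface area of the dissolving solid (cm^2)
    h  : diffusion layer thickness (cm)
    Cs : saturation solubility (mg/mL)
    C  : bulk concentration (mg/mL)
    """
    return (D * A / h) * (Cs - C)

# A low Cs (poor solubility) caps the driving force (Cs - C), so the rate
# stays small; increasing the exposed surface area A (smaller particles,
# as in a solid dispersion) raises the rate proportionally.
rate_low_Cs = noyes_whitney_rate(D=5e-6, A=10.0, h=5e-3, Cs=0.005, C=0.0)
rate_high_A = noyes_whitney_rate(D=5e-6, A=100.0, h=5e-3, Cs=0.005, C=0.0)
```

the tenfold larger surface area gives a tenfold faster rate , which is the same lever that particle size reduction and solid dispersion formation pull .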
chemical structure of ( a ) lacidipine and ( b ) polyvinylpyrrolidone ( pvp )
one of the most important tasks in drug discovery and development is to enhance oral bioavailability by improving the dissolution of poorly water - soluble drugs .
salt formation , solubilization , particle size reduction , and solid dispersion formation are the approaches most often used to reach this goal .
the formulation of hydrophobic drugs as solid dispersions is a significant area of research aimed at improving their dissolution and thus enhancing their bioavailability . in a solid dispersion system , the solubility of the drug can be improved by converting the drug from a crystalline to an amorphous state and by reducing the particle size for better wettability .
the rationale behind such a strategy is that a highly disordered amorphous state has a lower energetic barrier to overcome in order to enter a solution than a regularly structured crystalline state . on the other hand
, the presence of the carrier might also cause the creation of a micro - environment in which the drug solubility is improved .
the high molecular weight compound , polyvinylpyrrolidone ( pvp ) , as shown in figure 1b , is a synthetic polymer made up of linear groups of 1-vinyl-2-pyrrolidone monomers .
pvp ( povidone ) , a polymeric lactam , has low toxicity , strong hydrophilic properties , and physiological tolerance which offers enhancement of drug release ( dr ) and bioavailability of drugs with very low solubility .
one of the outstanding properties of the soluble pvp products is their universal solubility in hydrophilic and hydrophobic solvents .
so , it is extensively employed as a carrier for the preparation of solid dispersions to improve the solubility of hydrophobic drugs .
enhancement of dr is caused by the inhibition of crystallization of drugs , which is mostly offered by the anti - plasticizing effect of pvp and by pvp 's surface adsorption and efficient steric hindrance for nucleation and crystal growth .
understanding the basic forces that hold molecules together is important for understanding how molecules interact with each other .
taking all the above into account , the solid dispersions of lcdp in pvp k29/32 were prepared using solvent evaporation technique .
the main focus of this research work is on the molecular level analysis of the drug molecular state in solid dispersions and interactions between drug and carrier by combination of differential scanning calorimetry ( dsc ) , x - ray powder diffraction ( xrd ) , and fourier transform infrared ( ftir ) spectroscopy with hot stage microscopy ( hsm ) imaging technique .
lcdp , an active pharmaceutical ingredient , was procured from cadila pharmaceuticals limited , india .
pvp ( plasdone k29/32 ) was purchased from isp pharmaceuticals , mumbai , india . absolute alcohol ( ethanol 99.6% v / v ) was procured from sigma aldrich , bangalore , india and was used as a common solvent . unless otherwise stated , all other materials were of analytical grade .
solid dispersions of lcdp in pvp k29/32 ( mass ratio of lcdp to pvp k29/32 is from 1:1 to 1:14 ) were prepared by solvent evaporation method . in brief , accurately weighed quantities of pvp k29/32 were dissolved in ethanol , followed by addition of accurately weighed quantities of lcdp to the solution , which was allowed to dissolve completely by ultrasonication at room temperature for about an hour .
the resulting product was dried under vacuum in a desiccator over anhydrous cacl2 to a constant weight for at least 24 h at room temperature .
the dried product was ground in a mortar and passed through a sieve bss 60 # and stored in a desiccator until further evaluation .
the dissolution studies were performed using electrolab dissolution tester based on usp method ii ( paddle ) .
the samples ( lcdp drug powder per se , the lcdp / pvp physical mixture ( pm ) , and the solid dispersion ( sd ) at lcdp : pvp ratios of 1:1 , 1:2 , 1:4 , 1:6 , 1:8 , 1:10 , 1:12 , and 1:14 , each containing the equivalent of 4 mg of lcdp ) were sealed in hard gelatin capsules , then placed into 500 ml of purified water with 1% w / v polysorbate 20 , thermostatically maintained at 37 ± 0.5°c at a rotation speed of 50 rpm in the usp type ii dissolution apparatus . at appropriate time intervals , 5 ml of sample was withdrawn from a zone midway between the surface of the medium and the top of the rotating paddle , not less than 1 cm from the vessel wall , and filtered ( millex ap , millipore , 0.4 μm ) .
the same volume of solution which was withdrawn was replaced by fresh medium for correction .
the absorbance of the standard preparation and the sample test preparation was measured on a suitable spectrophotometer ( systronic , 2201 ) at 284 nm against dissolution medium ( purified water with 1% w / v polysorbate 20 ) as blank .
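as a rough sketch of the data handling implied by this sampling scheme , the code below converts absorbance to concentration with a hypothetical calibration slope for the 284 nm assay and applies the standard correction for the drug removed with each replaced 5 ml sample ; the slope and absorbance values are illustrative assumptions , not measured values :

```python
# Sketch of the cumulative-release correction used when each 5 mL sample
# withdrawn is replaced with fresh medium. The calibration slope below is
# a hypothetical value for the 284 nm assay, not a measured one.

V_MEDIUM = 500.0   # mL, dissolution medium volume
V_SAMPLE = 5.0     # mL, volume withdrawn (and replaced) per time point
DOSE_MG  = 4.0     # mg LCDP per capsule
SLOPE    = 0.125   # absorbance per (ug/mL); hypothetical calibration

def concentrations(absorbances, slope=SLOPE):
    """Beer-Lambert style conversion: C (ug/mL) = A / slope."""
    return [a / slope for a in absorbances]

def cumulative_release(conc_ug_ml):
    """Percent of dose released at each time point, correcting for the
    drug removed in earlier samples:
        C_corr[n] = C[n] + (Vs / Vm) * sum(C[0..n-1])
    """
    released = []
    prior_sum = 0.0
    for c in conc_ug_ml:
        c_corr = c + (V_SAMPLE / V_MEDIUM) * prior_sum
        mg_released = c_corr * V_MEDIUM / 1000.0  # ug/mL * mL -> mg
        released.append(100.0 * mg_released / DOSE_MG)
        prior_sum += c
    return released
```

without the correction term , the later points of every profile would be biased low , since part of the dose has already left the vessel in earlier samples .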
hot stage microscopy ( hsm ) was performed using a polarizing optical microscope ( tokyo , japan ) equipped with a linkam thms600 hot stage ( linkam scientific instruments ltd . , surrey , england ) and a linkam tms94 programmable temperature controller .
samples ( lcdp , pvp k29/32 , their pm , and their sd in 1:10 w / w ) were heated at 2°c / min from room temperature to 200°c .
thermal analysis of the samples ( lcdp , pvp k29/32 , their pm , and their sd in 1:10 w / w ) was carried out with a dsc-8 ( perkin elmer , massachusetts , usa ) .
about 10 mg of sample was weighed into a non - hermetically sealed aluminum pan .
the samples were heated from 25 to 250°c at a heating rate of 5°c / min .
all dsc measurements were made in a nitrogen atmosphere at a flow rate of 100 ml / min .
powder x - ray diffractometry ( pxrd ) of the samples ( lcdp , pvp k29/32 , their pm , and their sd in 1:10 w / w ) was performed with an xpert pro x - ray diffractometer ( panalytical , almelo , the netherlands ) over a 5 to 60° 2θ range at a scan rate of 1° per min , where the tube anode was cu with kα of 0.154 nm monochromatized with a graphite crystal .
the pattern was collected with a tube voltage of 40 kv and a tube current of 40 ma in step scan mode ( step size of 0.05° , counting time of 1 s / step ) .
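from these scan settings , the interplanar spacing behind each reflection can be recovered with bragg 's law ; the short sketch below applies it , using the cu kα wavelength of 0.154 nm stated above , to the 2θ peak positions reported for crystalline lcdp :

```python
# Sketch: interplanar d-spacings from the 2-theta positions reported for
# crystalline LCDP, via Bragg's law (n * lambda = 2 * d * sin(theta)) with
# Cu K-alpha radiation (lambda = 0.154 nm), as used in the PXRD runs.
import math

WAVELENGTH_NM = 0.154  # Cu K-alpha

def d_spacing_nm(two_theta_deg, wavelength_nm=WAVELENGTH_NM, n=1):
    """First-order (n=1) d-spacing in nm for a peak at the given 2-theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength_nm / (2.0 * math.sin(theta))

# 2-theta peak positions (degrees) reported for LCDP in the diffractogram
lcdp_peaks = [7.61, 13.31, 14.71, 17.22, 23.54, 25.32, 26.35, 27.95]
d_values = [d_spacing_nm(p) for p in lcdp_peaks]
# Lower-angle reflections correspond to larger interplanar spacings.
```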
ftir spectra of the samples ( lcdp , pvp k29/32 , their pm , and their sd in 1:10 w / w ) were obtained on a spectrum gx ftir spectrophotometer ( systronics , ahmedabad , india ) .
the ground samples were mixed thoroughly with kbr , an ir - grade transparent matrix .
scans were then obtained from 4000 to 400 cm-1 at a resolution of 1 cm-1 .
as the dissolution medium in this study was 500 ml of purified water with 1% w / v polysorbate 20 , it was sufficient to provide sink conditions for the dissolution of as much as 4 mg of lcdp .
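the sink - condition argument above can be checked with a simple rule of thumb ( the medium should be able to dissolve roughly three times the dose ) ; in the sketch below , the lcdp solubility in the surfactant medium is a hypothetical placeholder , since the measured value is not given in the text :

```python
# Sketch of a sink-condition check for the dissolution setup: a common rule
# of thumb is that the saturated medium should hold at least 3x the dose,
# i.e. solubility * volume >= 3 * dose. The LCDP solubility in the
# polysorbate 20 medium below is a hypothetical value for illustration only.

def is_sink(dose_mg, volume_ml, solubility_mg_per_ml, factor=3.0):
    """True if the saturated medium could hold `factor` times the dose."""
    return solubility_mg_per_ml * volume_ml >= factor * dose_mg

# 4 mg LCDP in 500 mL of 1% w/v polysorbate 20 medium; assumed solubility
assumed_cs = 0.05  # mg/mL, hypothetical
sink_ok = is_sink(dose_mg=4.0, volume_ml=500.0, solubility_mg_per_ml=assumed_cs)
```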
dissolution profiles of the lcdp drug powder , the lcdp / pvp pm at lcdp / pvp ratios of 1:1 , 1:2 , 1:4 , 1:6 , 1:8 , 1:10 , 1:12 , and 1:14 , and the sd at the same ratios were recorded under the same dissolution conditions .
the dissolution rate of lcdp drug powder was limited , with a total of less than 8% dissolution after 45 min .
the lcdp / pvp pm showed an enhanced dissolution rate as compared to lcdp per se due to the solubilization effect of pvp , as represented in figure 2a .
after being formulated into sd , the dissolution at different lcdp / pvp ratios improved greatly . at an lcdp / pvp ratio of 2:1 , the dissolution was only comparable to that of the 1:2 pm . however , when the lcdp / pvp ratio increased from 1:1 to 1:14 , an abrupt increase in dissolution rate , to over 75% within 45 min , was observed , and there was little difference among the ratios from 1:8 to 1:14 .
the comparison between the 2:1 sd and the 1:2 pm highlighted the effect of incorporation of lcdp into sd .
the increase in the dissolution rate of the sd was attributed to changes in the solid state during the formation of the dispersion .
several authors have reported abrupt change in dissolution rate as the content of pvp in a sd kept increasing .
this phenomenon has been studied by doherty and york , and a change from crystalline drug - controlled to polymer - controlled mechanism was found to explain the abrupt increase in dissolution rate when pvp content increased over a critical point . in this study , special attention has been paid to the abrupt increase region , i.e. the lcdp / pvp ratios between 1:6 and 1:8 , as represented in figure 2b .
there was a gradual increase in dissolution rate from the lcdp / pvp ratio of 1:1 to 1:6 , at which point significantly improved lcdp dissolution was achieved , as represented in figure 2b .
beyond the 1:8 ratio , dissolution of the sd should be polymer controlled , and up to 1:6 w / w , it should be crystalline drug controlled .
the results with the 1:6 and 1:8 sd did not support a single transition point from crystalline drug - controlled to polymer - controlled dissolution , but rather a transition range .
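one conventional way to quantify how close two dissolution profiles are , e.g. the 1:6 and 1:8 sd curves within the transition range , is the f2 similarity factor ; the sketch below uses hypothetical percent - released values , not the measured profiles :

```python
# Sketch: the f2 similarity factor, a standard metric for comparing two
# dissolution profiles (f2 >= 50 is conventionally taken as "similar").
# The two profiles below are hypothetical percent-released values at
# matched time points, not the measured 1:6 and 1:8 SD data.
import math

def f2_similarity(ref, test):
    """f2 = 50 * log10( 100 / sqrt(1 + mean squared difference) )."""
    assert len(ref) == len(test) and ref
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

profile_1_6 = [20.0, 45.0, 62.0, 70.0, 74.0]   # hypothetical, % released
profile_1_8 = [28.0, 55.0, 70.0, 78.0, 82.0]   # hypothetical, % released

f2 = f2_similarity(profile_1_6, profile_1_8)
# Identical profiles give f2 = 100; widely different profiles fall below 50.
```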
scatter plots for % drug release from ( a ) lcdp + pvp k29/32 physical mixture ( in x : y w / w ) and ( b ) lcdp + pvp k29/32 solid dispersion ( in x : y w / w )
hsm is used to characterize the interaction of many drugs with the polymer .
characteristic drug crystals , dispersed or adhered to the surface of spherical pvp particles , were detectable only in the pm , whereas the original morphology of both drug and pvp disappeared in the co - evaporated and co - ground sd system at 1:10 w / w , which appeared as aggregates of glassy flakes , making it impossible to differentiate the two components .
when this system was heated at 2°c / min from room temperature to 200°c , both an increase in crystal size and drug crystallization at the rim of the pvp plates were detected ; finally , melting of the drug was seen at about 180°c , as represented in figure 3 .
the increase in crystal size during heating is attributed to the opening of the drug - pvp intermolecular binding .
hot stage microscopy of lcdp - pvp sd ( 1:10 w / w )
the dsc thermogram of lcdp , represented as a in figure 4 , exhibited a sharp endothermic peak at 180.6°c , indicating the melting point of lcdp . during scanning of pvp , a broad endotherm ranging from 70°c to 130°c , represented as b in figure 4 , was observed , indicating the loss of water due to the extremely hygroscopic nature of pvp .
the pm of lcdp / pvp ( 1:10 ) showed the broad endothermic peak of lcdp around 180°c together with the broad endotherm belonging to pvp , represented as c in figure 4 , with about an 11°c decrease in the melting onset temperature of the drug , suggesting that the drug melted gradually in the polymer matrix .
this was because pvp k29/32 had a glass transition temperature of 169.0°c , with the result that the drug melted into the liquid phase of pvp k29/32 as the temperature increased .
the lcdp / pvp sd ( 1:10 ) showed endothermic peaks that may be ascribed to pvp ; however , the endothermic peak of lcdp disappeared in the lcdp / pvp ( 1:10 ) sd , seen as d in figure 4 .
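a single glass transition intermediate between those of the pure components is the classical dsc signature of a miscible amorphous dispersion , and it is often estimated with the gordon - taylor equation ; in the sketch below , only the pvp k29/32 tg of 169°c comes from the text , while the tg of amorphous lcdp and the gordon - taylor constant are hypothetical placeholders :

```python
# Sketch: the Gordon-Taylor equation, often used to estimate the single
# glass transition temperature of an amorphous drug-polymer dispersion
# such as LCDP/PVP. The Tg of amorphous LCDP and the constant K below are
# hypothetical; only the PVP K29/32 Tg (169 C) is taken from the text.

def gordon_taylor_tg(w1, tg1, w2, tg2, k):
    """Tg_mix = (w1*Tg1 + K*w2*Tg2) / (w1 + K*w2), temperatures in Kelvin."""
    return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

tg_pvp_k = 169.0 + 273.15   # K, from the DSC data in the text
tg_drug_k = 45.0 + 273.15   # K, hypothetical Tg for amorphous LCDP
k_gt = 0.3                  # hypothetical Gordon-Taylor constant

# 1:10 w/w drug:polymer -> drug weight fraction 1/11
w_drug = 1.0 / 11.0
tg_mix_k = gordon_taylor_tg(w_drug, tg_drug_k, 1.0 - w_drug, tg_pvp_k, k_gt)
# With so little drug, the predicted Tg stays close to that of PVP.
```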
dsc scans of : ( a ) lcdp per se ; ( b ) pvp k29/32 per se ; ( c ) lcdp + pvp k29/32 physical mixture ( in 1:10 w / w ) ; and ( d ) lcdp + pvp k29/32 solid dispersion ( in 1:10 w / w )
crystallinity is indicated by the presence of sharp peaks , which are absent in the case of amorphous drugs . the inhibitory effect of pvp on the crystallization of many drugs may be due to the interaction of the drugs with pvp , resulting in a change in the molecular mobility of the drugs and ultimately leading to an amorphous form . in the x - ray diffractogram , lcdp shows characteristic peaks at 7.61° , 13.31° , 14.71° , 17.22° , 23.54° , 25.32° , 26.35° , and 27.95° ( 2θ ) , represented as a in figure 5 , while pvp does not show any characteristic peak within the observed range of 5 to 40° ( 2θ ) , represented as b in figure 5 .
the diffractogram of the lcdp / pvp pm shows the characteristic peaks of lcdp , represented as c in figure 5 , and appears as a superimposition of those of pvp and lcdp .
the diffractogram of the lcdp / pvp ( 1:10 w / w ) sd , represented as d in figure 5 , is more like that of pvp , indicating the absence of the crystalline form of lcdp .
xrd scans of : ( a ) lcdp per se ; ( b ) pvp k29/32 per se ; ( c ) lcdp + pvp k29/32 physical mixture ( in 1:10 w / w ) ; and ( d ) lcdp + pvp k29/32 solid dispersion ( in 1:10 w / w )
as seen in a in figure 6 , the characteristic absorption peaks of lcdp appeared at 3348.2 , 2977.4 to 2808 , 1705 to 1652 , and 1627 cm-1 , respectively , denoting the stretching vibrations of the nh , ch , c = o , and c = c functional groups .
the characteristic absorption peaks of pvp appeared at 3454.32 and 1654.48 cm-1 , corresponding to the oh and the carbonyl on the pyrrolyl ring stretching vibrations , represented as b in figure 6 .
in the pm spectrum , the characteristic peaks of both lcdp and pvp could be observed , and the spectrum can be regarded as a simple superimposition of those of lcdp and pvp , seen as c in figure 6 .
however , obvious changes occurred in the feature and fingerprint regions of the ftir spectra of the lcdp / pvp ( 1:10 ) sd , represented as d in figure 6 . in the feature region , the 3350 cm-1 nh stretching vibration peak of lcdp disappeared in the sd .
it seemed that an intermolecular hydrogen bond between the nh of lcdp and the c = o of pvp had formed .
the ch3 and ch2 stretching vibrations ( approximately at 2890 cm-1 ) were also influenced by the formation of a hydrogen bond between the n atom on the pyridine ring and the o atom on the carbonyl group of pvp , and their peaks were hardly discernible in the sd spectrum .
in general , the variation in the absorption peaks of all functional groups was within 20 wavenumbers , which strongly suggested the formation of intermolecular hydrogen bonds between lcdp and pvp rather than more stable chemical bonds .
the formation of hydrogen bonds between drugs and pvp , with an energy below 42 kj / mol ( far less than that of a covalent bond ) , was beneficial not only for improving the dissolution rate but also for enhancing the stability and slowing the aging of solid dispersions .
pvp is capable of forming a hydrogen bond through either the nitrogen or the carbonyl group on the pyrrole ring .
however , steric hindrance precludes the involvement of the nitrogen atom in intermolecular interactions , thus making the carbonyl group more favorable for hydrogen bonding .
ftir scans of : ( a ) lcdp per se ; ( b ) pvp k29/32 per se ; ( c ) lcdp + pvp k29/32 physical mixture ( in 1:10 w / w ) ; and ( d ) lcdp + pvp k29/32 solid dispersion ( in 1:10 w / w )
the enhancement of dissolution rate was obtained by sd containing 1:10 mass ratio of lcdp : pvp k29/32 .
the results of dsc and xrd indicate that lcdp was present in an amorphous or molecular state in the sd and the presence of hydrogen bonding interaction between the > nh of lcdp and c = o of pvp k29/32 in sd was confirmed by combining ftir .
the amorphous state of lcdp coupled with the presence of hydrogen bond between lcdp and pvp k29/32 was the main cause for the marked enhancement of dissolution rate .
pvp k29/32 sd prepared by the solvent evaporation could be used as a means of enhancing lcdp dissolution rates . | background : lacidipine ( lcdp ) is a 1,4-dihydropyridine derivative categorized as an anti - hypertensive ca2 + channel blocker having very low solubility , and thus very low oral bioavailability , which presents a challenge to the formulation scientists .
homogeneous distribution of poorly water - soluble drugs like lcdp in polyvinylpyrrolidone ( pvp ) , a hydrophilic carrier , is definitely a suitable way to improve the bioavailability of such drugs.materials and methods : the aim of the study was to develop a combined thermal , imaging , and spectroscopic approach , and characterize physical state , dissolution behavior , and elucidation of drug pvp interaction in lcdp / pvp solid dispersion ( sd ) using differential scanning calorimetry ( dsc ) , x - ray diffractometry ( xrd ) , fourier transform infrared ( ftir ) spectroscopy , and hot stage microscopy ( hsm ) , which is the prerequisite for the development of a useful drug product.results:dissolution studies of lcdp and its physical mixture with pvp showed less than 50% release even after 60 min , whereas sd of lcdp / pvp ratio of 1:10% w / w showed complete dissolution within 45 min .
dsc and powder xrd proved the absence of crystallinity in lcdp / pvp sd at a ratio of 1:10% w / w .
the ftir spectroscopy indicated formation of hydrogen bond between lcdp and pvp . in the sd ftir spectra
, the nh stretching vibrations and the c = o stretch in esteric groups of lcdp shift to free nh and c = o regions , indicating the rupture of intermolecular hydrogen bond in the crystalline structure of lcdp.conclusion:solid-state characterization by hsm , dsc , xrd , and ftir studies , in comparison with corresponding physical mixtures , revealed the changes in solid state during the formation of dispersion and justified the formation of high - energy amorphous phase . |
similar document detection is the problem of finding similar documents of two parties , alice and bob , and it has been widely used in version management of files , copyright protection , and plagiarism detection@xcite .
recently , secure similar document detection(ssdd)@xcite has been introduced to identify similar documents while preserving privacy of each party s documents as shown in figure [ fig : fig1 ] .
that is , ssdd finds similar document pairs whose cosine similarity@xcite exceeds the given tolerance while not disclosing document vectors to each other party .
ssdd is a typical example of privacy - preserving data mining(ppdm)@xcite , and has the following applications@xcite .
first , in two or more conferences that are not allowing double submissions , ssdd finds the double - submitted papers while not disclosing the papers to each other conference .
second , in the insurance fraud detection system , ssdd searches similar accident cases of two or more insurance companies while not providing sensitive and private cases to each other company .
+ jiang et al.@xcite have proposed a novel solution for ssdd by exploiting secure multiparty computation(smc)@xcite in a semi - honest model .
their solution has preserved privacy of two parties by using the secure scalar product in computing cosine similarity between document vectors . as the secure scalar product ,
they have suggested random matrix and homomorphic encryption methods@xcite . in this paper , we use the random matrix method as a base protocol , and we call it ssdd - base . however , ssdd - base has a critical problem of incurring severe computation and communication overhead .
let alice s and bob s document sets be @xmath2 and @xmath3 , respectively , then ssdd - base requires @xmath4 secure scalar products . in many cases ,
the dimension @xmath5 of document vectors reaches tens of thousands or even hundreds of thousands , and ssdd - base incurs a very high complexity of @xmath6 , which is not practical to support a large volume of document databases .
in particular , if there are many parties or frequent changes in document databases , the overhead becomes much more critical . to alleviate the computation and communication overhead of ssdd - base , in this paper we present a 2-step protocol that exploits the feature selection of lower - dimensional transformation .
the feature selection transforms high dimensional document vectors to low dimensional feature vectors , and in general it selects tens to hundreds dimensions from thousands to tens of thousands dimensions .
we call the feature selection _ fs _ in short .
representative fs includes rp(random projection)@xcite , df(document frequency)@xcite , and lda(linear discriminant analysis)@xcite . in this paper , we use rp and df since they are known as simple but efficient feature selections@xcite . to devise a 2-step protocol , we need to find an upper bound of cosine similarity for the filtering process
thus , we first present an upper bound of fs and formally prove its correctness . using the upper bound property of fs ,
we then propose a generic 2-step protocol , called ssdd - fs .
the proposed ssdd - fs works as follows : in the first _ filtering _ step , it converts @xmath5-dimensional vectors to @xmath7-dimensional vectors and applies the secure protocol to @xmath8-dimensional vectors to filter out non - similar @xmath5-dimensional vectors ; in the second _ post - processing _ step , it applies the base protocol ssdd - base to the non - filtered @xmath5-dimensional vectors . in the filtering step , ssdd - fs prunes many non - similar _ high _ dimensional vectors by comparing _
low _ dimensional vectors with relatively less complexity of @xmath9 , and thus , it significantly improves the performance compared with ssdd - base . to make ssdd - fs be efficient , fs should be highly discriminative , i.e. , fs should filter out as many high dimensional vectors as possible if they are non - similar . in this paper , we analyze ssdd protocols in detail and propose four different techniques as the discriminative implementation of fs .
we can think rp first as an easiest way of implementing fs .
rp randomly selects @xmath8 dimensions from @xmath5 dimensions .
rp is easy , but its filtering effect will be very low due to the randomness . to solve the problem of rp , we exploit df that selects feature dimensions based on frequencies in all document vectors . in particular , by referring the concept of df , we present three variants of df , called lf(local frequency ) , gf(global frequency ) , and hf(hybrid frequency ) . first , lf considers term frequencies of alice s current querying vector(we call it the _ current vector _ ) , and it selects dimensions whose frequencies higher than the others in the current vector .
lf focuses on the _ locality _ , which means that considering the current vector only might be enough to decrease the upper bound of cosine similarity .
second , gf means df itself , that is , gf counts the number of documents containing each term(dimension ) , constructs a frequency vector from those counts(we call it the _ whole vector _ ) , and selects high frequency dimensions from the whole vector .
gf focuses on the _ globality _ since it considers all the document vectors . to implement gf ,
however , we need to make a secure protocol for obtaining the whole vector from both alice s and bob s document sets . for this
, we propose a secure protocol securedf as a secure implementation of df .
third , hf takes advantage of both locality of lf and globality of gf .
hf computes a _ difference vector _ between current and whole vectors and selects high - valued dimensions from the difference vector .
this is because hf tries to maximize the value difference between alice s and bob s vectors for each selected dimension and eventually decrease the upper bound of cosine similarity .
table [ tbl : tbl1 ] summarizes these four feature selections and their corresponding ssdd protocols , ssdd - rp , ssdd - lf , ssdd - gf , and ssdd - hf , to be proposed in section [ sec : sec4 ] .
.feature selection methods to be used for ssdd - fs .
[ cols="^,<",options="header " , ] in this paper , we empirically evaluate the base protocol , ssdd - base , and our four ssdd - fs protocols(ssdd - rp , ssdd - lf , ssdd - gf , ssdd - hf ) using various data sets .
experimental results show that the ssdd - fs protocols significantly outperform ssdd - base .
this means that the proposed 2-step protocols effectively prune a large number of non - similar sequences early in the filtering step .
in particular , ssdd - hf that takes advantage of both locality of ssdd - lf and globality of ssdd - gf shows the best performance .
compared with ssdd - base , ssdd - hf reduces the execution time of ssdd by three or four orders of magnitude .
the rest of this paper is organized as follows .
section [ sec : sec2 ] explains related work and background of the research .
section [ sec : sec3 ] presents the fs - based 2-step protocol , ssdd - fs , and proves its correctness .
section [ sec : sec4 ] introduces four novel feature selections , rp , lf , gf , and hf , and it proposes their corresponding secure protocols . section [ sec : sec5 ] explains experimental results on various data sets .
we finally summarize and conclude the paper in section [ sec : sec6 ] .
we use cosine similarity as the basic operation of similar document detection .
the cosine similarity of two @xmath5-dimensional vectors @xmath10 and @xmath11 is computed as @xmath12 , where @xmath13 is the scalar product of @xmath14 and @xmath15 , that is , @xmath16 . if we can compute @xmath13 securely in two parties , we can also compute @xmath17 securely .
there are two representative methods for the secure scalar product@xcite .
the first one is the random matrix method@xcite , where two parties share the same random matrix and compute the scalar product securely using the matrix .
the second one is the homomorphic encryption method@xcite , where two parties use the homomorphic probability key system for the secure computation of scalar products . in this paper
, we use the random matrix method since it is more efficient than the homomorphic encryption one , but we can also instead use the homomorphic encryption method for the protocols to be discussed later . without loss of generality , we assume that vectors @xmath14 and @xmath15 are normalized to size @xmath18 .
that is , @xmath19 , and thus , simply @xmath20 .
figure [ fig : fig2 ] shows the protocol of ssdd - base , the recent solution of ssdd by jiang et al.@xcite .
ssdd - base uses the random matrix method@xcite for secure scalar products , where alice and bob share the same matrix @xmath21 and securely determine whether two vectors @xmath14 and @xmath15 are similar or not . for the correctness and detailed explanation on * protocol * ssdd - base ,
readers are referred to @xcite . in ssdd , we perform ssdd - base for each pair of document vectors . more formally , if @xmath2 and @xmath3 are sets of document vectors owned by alice and bob , respectively , we perform ssdd - base for each pair @xmath22 , where @xmath23 and @xmath24 . as we mentioned in section
[ sec : sec1 ] , however , ssdd - base incurs the severe computation and communication overhead of @xmath25 , which will be much serious if there are several parties , or a large number of documents are changed dynamically . to alleviate this critical overhead , in this paper
we discuss the 2-step solution for ssdd .
+ in text mining and time - series mining , many lower - dimensional transformations have been proposed to solve the dimensionality curse problem@xcite of high dimensional vectors .
we can classify lower - dimensional transformations into feature extractions and feature selections@xcite .
first , the feature extraction _ creates _ a few _ new _ features from an original high dimensional vector .
representative examples of feature extractions include lsi(latent semantic indexing)@xcite , lpi(locality preserving indexing)@xcite , dft(discrete fourier transform)@xcite , dwt(discrete wavelet transform)@xcite , and paa(piecewise aggregate approximation)@xcite . in contrary
, the feature selection _ selects _ a few _ discriminative _ features from an original ( or transformed ) high dimensional vectors .
representative examples of feature selections include rp , df , lda , and pca(principal component analysis)@xcite . in this paper
, we use rp and df with appropriate variations .
this is because rp and df are much simpler than other transformations , and accordingly , they are easily applied to ssdd with low complexity ; on the other hand , lsi , lpi , lda , and pca may provide very accurate feature vectors , but they are too complex to be applied to ssdd . for the detailed explanation on lower - dimensional transformations for text mining ,
readers are referred to @xcite .
there have been many efforts on ppdm@xcite .
ppdm solutions can be classified into four categories : data perturbation , @xmath26-anonymization , distributed privacy preservation , and privacy preservation of mining results@xcite .
ssdd can be regarded as an application of distributed privacy privation . for the detailed explanation on problems and solutions of data perturbation and @xmath26-anonymization ,
readers are referred to survey papers@xcite .
in this paper , we use fs , feature selection , for the secure 2-step protocol . to transform an @xmath5-dimensional vector to an @xmath8-dimensional vector
, fs chooses randomly or highly frequent @xmath8 dimensions from @xmath5 dimensions , and thus , its transformation process is very simple . in this section
, we first assume that fs can select @xmath8 dimensions from @xmath5 dimensions in a secure manner , and we then propose the secure 2-step protocol of ssdd by using the secure fs . to use a lower - dimensional transformation @xmath27 for ssdd , we need to find an upper bound function @xmath28 that satisfies eq .
( [ eq : eq1 ] ) , where @xmath29 and @xmath30 are @xmath8-dimensional feature vectors selected from @xmath5-dimensional vectors , @xmath14 and @xmath15 , respectively , by the transformation @xmath27 . in eq .
( [ eq : eq1 ] ) , @xmath10 , @xmath11 , @xmath31 , and @xmath32 .
@xmath33 the reason why the transformation @xmath27 should satisfy eq .
( [ eq : eq1 ] ) is that ssdd of using @xmath27 should not incur any false dismissal , and this is known as parseval s theorem(the lower bound property of euclidean distances ) in time - series matching@xcite . to obtain an upper bound of the lower - dimensional transformation @xmath27
, we first define an upper bound of @xmath27 as follows .
[ def : def1 ] if a lower - dimensional transformation @xmath27 transforms @xmath5-dimensional vectors , @xmath14 and @xmath15 , to @xmath8-dimensional vectors , @xmath29 and @xmath30 , respectively , we define an _ upper bound function _ of @xmath27 , denoted by @xmath28 , as eq .
( [ eq : eq2 ] ) .
@xmath34 where @xmath35 is the squared euclidean distance between @xmath29 and @xmath30 , i.e. , @xmath36 .
@xmath37 in this paper , we want to use fs as a lower - dimensional transformation @xmath27 , and thus , we formally prove that the upper bound function of fs satisfies eq .
( [ eq : eq1 ] ) , the upper bound property of cosine similarity .
[ th : th1 ] if a feature selection fs transforms @xmath5-dimensional vectors , @xmath14 and @xmath15 , to @xmath8-dimensional vectors , @xmath38 and @xmath39 , respectively , @xmath40 is an upper bound of @xmath17 , that is , eq . ( [ eq : eq3 ] ) holds .
proof : first , let @xmath10 , @xmath11 , @xmath42 , and @xmath43 . then ,
( [ eq : eq4 ] ) and ( [ eq : eq5 ] ) hold for @xmath14 and @xmath15 .
@xmath44 @xmath45 we note that all entry values of @xmath14 and @xmath15 are non - negative , and fs constructs @xmath38 and @xmath39 by choosing @xmath8 features from @xmath14 and @xmath15 .
based on this property , eq .
( [ eq : eq6 ] ) holds .
@xmath46 finally , eq . ( [ eq : eq7 ] ) holds by eqs .
( [ eq : eq5 ] ) , ( [ eq : eq6 ] ) , and eq . ( [ eq : eq2 ] ) of definition [ def : def1 ] .
@xmath47 therefore , @xmath40 is an upper bound of @xmath17 .
@xmath37 by using the upper bound property of fs , we now propose a generic 2-step protocol ssdd - fs .
figure [ fig : fig3 ] shows * protocol * ssdd - fs . as shown in the protocol , ssdd - fs maintains @xmath8-dimensional @xmath38 and @xmath38 as well as @xmath5-dimensional @xmath14 and @xmath15 of ssdd - base . also , alice and bob share an @xmath48 matrix @xmath49 as well as an @xmath50 matrix @xmath21 of ssdd - base .
lines 1 to 7 of ssdd - fs are the first step of discarding non - similar @xmath5-dimensional vectors in the @xmath8-dimensional space . first , lines 1 to 4 securely compute the scalar product @xmath51 for @xmath8-dimensional vectors @xmath38 and @xmath39 . except using @xmath8-dimensional vectors instead of @xmath5-dimensional vectors ,
these steps are the same as those of ssdd - base .
the only difference from ssdd - base is that bob additionally sends @xmath52 to alice in line 3 for computing @xmath53 . in line 5 ,
alice computes @xmath54 by using eq .
( [ eq : eq8 ] ) .
@xmath55 after then , alice computes an upper bound function of fs , @xmath40 , in line 6 . in line 7
, we perform the filtering process by comparing the upper bound(@xmath56 ) and the given tolerance(@xmath57 ) .
if the upper bound is less than the tolerance , i.e. if @xmath58 , the actual cosine similarity will also be less than the tolerance , and we do nt need to compute it in the next @xmath5-dimensional space .
that is , if @xmath58 , we can skip line 8 of the second step .
thus , line 8 is executed only if @xmath5-dimensional vectors of @xmath22 are not filtered out by the upper bound . in line 8
, we compute the actual cosine similarity for @xmath22 by using ssdd - base .
+ we here note that how ssdd - fs improves the performance compared with ssdd - base depends on how many @xmath5-dimensional vectors are discarded in the first step .
this filtering effect largely depends on the discriminative power of the feature selection , i.e. , efficiency of fs . in other words ,
if fs exploits the filtering effect largely , ssdd - fs can reduce the computation and communication overhead from @xmath59 to @xmath60 .
based on this observation , we need to maximize the filtering effect of fs , and this can be seen a problem of how we choose @xmath8 dimensions from @xmath5 dimensions for maximizing the discriminative power of fs .
therefore , we propose efficient fs variants and their ssdd protocols in section [ sec : sec4 ] and evaluate their performance in section [ sec : sec5 ] .
in this section , we propose four methods to implement fs of * protocol * ssdd - fs .
figure [ fig : fig4 ] shows the procedure of ssdd - fs including the feature selection step . as shown in the figure , we first obtain @xmath38 and @xmath39 from @xmath14 and @xmath15 through the feature selection which should also be done securely . as mentioned in section [ sec : sec1 ] , we present rp , lf , gf , and hf as the feature selection method , and we explain how they work in detail in sections [ ssec : sec41 ] to [ ssec : sec44 ] . in figure
[ fig : fig4 ] , the secure feature selection corresponds to line ( 1 ) of * protocol * ssdd - fs , and the other two steps correspond to the first step(lines 1 to 7 ) and the second step(line 8) , respectively .
+ rp is an easiest way of implementing fs , which selects @xmath8 dimensions randomly from @xmath5 dimensions .
we can think two different methods in applying rp to ssdd - fs .
the first one selects @xmath8 dimensions dynamically for each document pair @xmath22 ; the second one first determines @xmath8 dimensions and then uses those pre - determined dimensions for all document pairs . to use the first rp method ,
alice and bob should share @xmath8 indexes , @xmath61 , of randomly selected @xmath8 dimensions for each @xmath22 before starting the first step of ssdd - fs .
this sharing process can be implemented as alice randomly selects @xmath8 dimensions and sends their indexes to bob , or alice and bob share the same seed of the random function .
that is , we can implement the first rp method by modifying line ( 1 ) of * protocol * ssdd - fs as lines ( 1 - 1 ) to ( 1 - 3 ) of figure [ fig : fig5 ] .
+ the second rp method uses the same @xmath8 dimensions for all @xmath22 pairs .
we can easily implement this method as alice and bob share the same @xmath8 indexes only once before starting ssdd - fs .
these first and second rp methods do not disclose any values of alice s and bob s document vectors , and thus , they are said to be secure .
also , these two methods have the same effect in selecting @xmath8 dimensions randomly .
thus , we use the second one since it is much simpler than the first one , and we call the second one ssdd - rp by differentiating it from ssdd - fs .
ssdd - rp proposed in section [ ssec : sec41 ] has a problem of exploiting only a little filtering effect in the first filtering step .
this low filtering effect is due to that rp chooses features without any consideration of characteristics of document vectors .
according to the real experiments , ssdd - rp shows a very little improvement in ssdd performance compared with ssdd - base . to solve the problem of ssdd - rp and to enlarge the filtering effect , in this paper
we consider how frequent each term is in the document or document set , i.e. , we use the term frequency(tf ) . in general
, we use the tf concept as follows : we first compute the number of occurrences(i.e . ,
frequency ) of each term throughout the whole data set and then choose the highly frequent dimensions .
we call this selection method df(document frequency ) as in @xcite . the reason why we consider tf(or df ) in ssdd - fs
is that , if we select the highly frequent @xmath8 dimensions , we can obtain relatively small upper bounds @xmath28 s by relatively large @xmath35 s of eq .
( [ eq : eq2 ] ) , and accordingly , we can exploit the filtering effect largely . as a feature selection using term frequencies ,
we first consider how frequent each term is in an individual document rather than the whole document set , that is , we first propose the feature selection of exploiting _ locality _ of each document .
more precisely , for a pair of documents @xmath22 , the locality - based selection chooses @xmath8 dimensions highly frequent in alice s current vector @xmath14 .
this selection is based on the simple intuition that , even without considering whole vectors of the document set , the current vector itself will make a big influence on the upper bound @xmath62 . in this selection , we can instead use bob s vector @xmath15 rather than alice s vector @xmath14 as the current vector , or we can also use both alice s and bob s vectors @xmath14 and @xmath15 . using @xmath15 , however , incurs the additional communication overhead , and thus , in this paper we consider a simple method of using alice s @xmath14 as the current vector .
we call this selection method _ lf_(local frequency ) since it considers individual ( i.e. , local ) documents rather than whole documents , and we denote the protocol of applying lf to ssdd - fs as ssdd - lf .
ssdd - lf exploits the locality by selecting @xmath8 dimensions for each document at every start time .
figure [ fig : fig6 ] shows how we implement ssdd - lf by modifying line ( 1 ) of ssdd - fs of figure [ fig : fig3 ] . in line ( 1 - 2 ) , alice first selects top @xmath8 frequent dimensions from her current vector @xmath14 .
she sends those indexes of the selected @xmath8 dimensions to bob in line ( 1 - 3 ) .
thus , they can share the same indexes and obtain @xmath8-dimensional feature vectors by using the same @xmath8 indexes in line ( 1 - 4 ) .
+ we now analyze the computation and communication overhead of feature selection in ssdd - lf .
as shown in figure [ fig : fig6 ] , for each vector @xmath14 , alice ( 1 ) chooses the top @xmath8 frequent dimensions from @xmath5 dimensions of @xmath14 and ( 2 ) communicates with bob to share those @xmath8 indexes .
first , alice needs the additional computation overhead of @xmath63 to select top @xmath8 frequent dimensions from the current @xmath5-dimensional vector .
second , alice and bob need the additional communication overhead to share the @xmath8 indexes .
however , this communication process can be done with line ( 3 ) of ssdd - base of figure [ fig : fig2 ] , that is , alice can send @xmath8 indexes together with the encrypted vector @xmath64 to bob .
the amount of @xmath8 indexes is much smaller than that of the @xmath5-dimensional vector , and the overhead of @xmath8 indexes can be negligible .
thus , we can say that ssdd - lf causes the computation overhead of @xmath63 , but the communication overhead can be ignored . in particular
, we compare each vector @xmath14 of alice with a large number of vectors @xmath65 of bob , and thus , the computation overhead of @xmath63 can also be ignored as a pre - processing step . another considering point in ssdd - lf
is whether its feature selection process is secure or not .
that is , there should be no privacy disclosure when alice selects @xmath8 indexes and shares them with bob .
fortunately , alice sends only indexes @xmath66 to bob rather than entry values @xmath67 of @xmath14 , and the sensitive values @xmath67 are not disclosed in the selection process .
unfortunately , however , the information that which @xmath8 dimensions are frequent in @xmath14 is revealed to bob .
if the user can not be allowable even this limited disclosure of information , s / he can not use ssdd - lf as a secure protocol . in this case
, we recommend to use the previous ssdd - rp or the next ssdd - gf or ssdd - hf as the more secure protocol .
ssdd - lf of section [ ssec : sec42 ] has a problem of considering only alice s current vector but ignoring all the other vectors of bob . due to this problem ,
ssdd - lf exploits the filtering effect for only a part of bob s vectors , but it does not for most of other vectors . to overcome this problem , in this section
we propose another feature selection that uses the whole vector of which each element represents the number of documents containing the corresponding term .
unlike lf of focusing on the current vector only , it considers whole document vectors , and it has characteristics of globality .
we call this feature selection _ gf_(global frequency ) and denote the gf - based secure protocol as ssdd - gf . actually , gf is the same as df , which has been widely used as the representative feature selection , and it works as follows .
first , let @xmath68 be a whole vector and @xmath69 be a number of documents containing the @xmath26-th term , that is , @xmath69 be the df value of the @xmath26-th term .
then , to reduce the number of dimensions from @xmath5 to @xmath8 , gf simply selects @xmath8 dimensions whose df values are larger than those of the other @xmath70 dimensions .
we can get the whole vector by scanning all the document vectors once .
the traditional df constructs the whole vector based on the assumption that all the document vectors are maintained in a single computer . in ssdd , however , document vectors are distributed in alice and bob , and they do not want to provide their own vectors to each other .
thus , to use gf in ssdd , we first need to present a secure protocol of constructing the whole vector from the document vectors distributively stored in alice and bob .
figure [ fig : fig7 ] shows * protocol * securedf that securely constructs a whole vector @xmath71 from alice s and bob s document vectors and gets @xmath8 frequent dimensions from @xmath71 . in lines 1 to 8 ,
alice and bob computes their own whole vectors independently . that is , alice computes her own whole vector @xmath72 from her own document set @xmath2 , and bob gets @xmath73 from @xmath3 . in lines 4 and 8 , they share those whole vectors @xmath72 and @xmath73 with each other . in lines 9 to 11 , they then compute the aggregated whole vector @xmath71 from those vectors .
after obtaining the whole vector @xmath71 , alice and bob can select @xmath8 frequent dimensions from @xmath71 .
we note that alice sends @xmath72 to bob in line 4 , and bob sends @xmath73 to alice in line 8 .
vectors @xmath72 and @xmath73 , however , are not exact values of document vectors , but simple statistics , and thus , we can say that securedf does not reveal any privacy of individual documents .
computation and communication complexities of securedf are merely @xmath74 and @xmath75 , respectively .
also , securedf can be seen as a pre - processing step executed only once for all document vectors of alice and bob .
thus , its complexity can be negligible compared with the complexity @xmath76 of ssdd - base .
+ we now explain ssdd - gf which exploits securedf as the feature selection .
figure [ fig : fig8 ] shows how we modify line ( 1 ) of figure [ fig : fig3 ] for converting ssdd - fs to ssdd - gf . in line ( 1 - 0 ) , we first perform securedf to obtain the whole vector @xmath71 and determine @xmath8 indexes which are most frequent in @xmath71 . for current @xmath5-dimensional vectors @xmath14 and @xmath15 , alice and bob get @xmath8-dimensional vectors @xmath38 and @xmath39 by using the determined @xmath8 indexes .
as shown in figure [ fig : fig8 ] , the current vectors and even their term frequencies are not disclosed to each other , and thus , we can say that ssdd - gf is a secure protocol of ssdd .
+ lf and gf proposed in sections [ ssec : sec42 ] and [ ssec : sec43 ] have the following characteristics in a viewpoint of the filtering effect .
first , lf considers alice s current vector @xmath14 only , and thus , the filtering effect will be large for only a part of bob s vectors whose tf patterns much differ from the current vector , but the effect are less exploited for most of the other vectors .
in other words , lf can exploit the better filtering effect than gf when alice s current vector quite differs from the whole vector in tf patterns .
second, gf considers the whole vector @xmath71 obtained by securedf without considering the current vector, and it thus can exploit the filtering effect relatively evenly over many of bob's document vectors.
that is, gf can exploit a better filtering effect than lf when alice's current vector has similar characteristics to the whole vector in tf patterns.
to take advantage of both the locality of lf and the globality of gf, we now propose a hybrid feature selection, called _hf_ (hybrid frequency).
that is, hf uses the current vector to exploit the locality of lf, and at the same time it also uses the whole vector to exploit the globality of gf.
we then present an advanced secure protocol, ssdd-hf, by applying hf to ssdd-fs.
simply speaking, hf compares the current and whole vectors and selects feature dimensions whose differences are larger than those of the other dimensions.
in more detail, we select feature dimensions having one of the following two characteristics: (1) dimensions that frequently occur in alice's current vector but seldom occur in the whole vector (i.e., whose values are relatively large in the current vector but relatively small in the whole vector); or, on the contrary, (2) dimensions that seldom occur in alice's current vector but frequently occur in the whole vector.
this is because the larger @xmath77 (the difference between values of the selected feature dimension) is, the smaller @xmath78 is, i.e., the larger @xmath79 of eq. ([eq:eq2]) is, which exploits a larger filtering effect.
however, we cannot directly compare alice's current vector @xmath14 and the whole vector @xmath71 obtained by securedf.
the reason is that @xmath14 represents ``frequencies of terms'' in a single vector while @xmath71 represents ``frequencies of documents'' containing those terms.
that is, the meaning of frequencies in @xmath14 differs from that in @xmath71, and thus, their scales are also different.
to resolve this problem, before comparing the two vectors @xmath14 and @xmath71, we first normalize them using their means (@xmath80) and standard deviations (@xmath81).
more precisely, we first normalize @xmath14 and @xmath71 to @xmath82 and @xmath83 by eq. ([eq:eq9]), and we next obtain the difference vector @xmath84. after that, we select the largest @xmath8 dimensions from @xmath85 and use them as the features of ssdd-hf.
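the normalization-and-difference procedure just described can be sketched in python as follows. this is an illustrative, in-the-clear version under our own naming (the actual protocol keeps the vectors secret): it z-normalizes the current (tf) vector and the whole (df) vector, takes the element-wise absolute difference, and selects the largest @xmath8 dimensions.

```python
import math

def znormalize(v):
    """Normalize a vector to zero mean and unit standard deviation,
    so that tf and df scales become comparable (eq. (9)-style)."""
    mu = sum(v) / len(v)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in v) / len(v)) or 1.0
    return [(x - mu) / sigma for x in v]

def hybrid_frequency_features(current, whole, k):
    """HF sketch: normalize the current (tf) and whole (df) vectors,
    take the element-wise absolute difference, and select the k
    dimensions where the two vectors differ the most."""
    diff = [abs(c - w) for c, w in zip(znormalize(current), znormalize(whole))]
    return sorted(range(len(diff)), key=lambda d: diff[d], reverse=True)[:k]

current = [9, 0, 1, 2, 0]   # dimension 0: frequent locally, rare globally
whole   = [1, 8, 2, 3, 7]   # dimension 1: rare locally, frequent globally
dims = hybrid_frequency_features(current, whole, k=2)
```

dimensions 0 and 1 are selected here because their normalized values disagree most between the two vectors, matching characteristics (1) and (2) above.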
@xmath86

figure [fig:fig9] shows how we modify line (1) of ssdd-fs in figure [fig:fig3] to implement ssdd-hf.
first , as in ssdd - gf , line ( 1 - 0 ) constructs the whole vector @xmath71 by executing securedf .
next, in lines (1-2) and (1-3), we normalize the current and whole vectors and obtain the difference vector @xmath85 from those normalized vectors.
finally, in lines (1-4) to (1-6), alice chooses @xmath8 dimensions from the difference vector @xmath85 and shares those dimensions with bob. that is, lines (1-4) to (1-6) are the same as lines (1-2) to (1-4) of ssdd-lf in figure [fig:fig6], except that ssdd-lf uses the current vector @xmath14 while ssdd-hf uses the difference vector @xmath85.
+ the overhead of feature selection in ssdd-hf can be seen as the sum of those in ssdd-lf and ssdd-gf.
that is, like ssdd-gf, it has the overhead of performing securedf to obtain the whole vector @xmath71, and at the same time, like ssdd-lf, it has the overhead of choosing the largest @xmath8 dimensions from the @xmath5-dimensional difference vector @xmath85.
these overheads, however, are negligible for the following reasons: (1) as we explained for ssdd-gf in section [ssec:sec43], securedf, whose computation and communication complexities are @xmath74 and @xmath75, can be seen as a pre-processing step executed only once for all document vectors, so its overhead is negligible in the whole ssdd process; (2) as we explained for ssdd-lf in section [ssec:sec42], the computation complexity @xmath63 of choosing @xmath8 dimensions from an @xmath5-dimensional vector can be ignored since it can also be seen as a pre-processing step.
one more notable point is that ssdd-hf is a secure protocol like ssdd-gf, since it uses securedf and the difference vector, which are secure and do not disclose any original values or any sensitive indexes of individual vectors.
in this section, we empirically evaluate the feature selection-based ssdd protocols proposed in section 4. as experimental data, we use three datasets obtained from the document sets of the uci repository@xcite.
these datasets are kos blog entries, nips full papers, and enron e-mails, which have been frequently used in text mining.
the first dataset consists of kos blog entries collected from dailykos.com, and we call it _kos_. kos consists of 3,430 documents with 6,906 different terms (dimensions), and it has 467,714 terms in total.
the second dataset contains nips full papers published in the neural information processing systems conference, and we call it _nips_. nips consists of 1,500 documents with 12,419 different terms, and it has about 1.9 million terms in total.
the third dataset contains e-mail messages of enron, and we call it _emails_. emails consists of 39,861 e-mails with 28,102 different terms, and it has about 6.4 million terms in total.
we experiment with five ssdd protocols: ssdd-base as the basic one and the four proposed ones, ssdd-rp, ssdd-lf, ssdd-gf, and ssdd-hf. in the experiments, we basically measure the elapsed time of executing ssdd for each protocol. in the first experiment, we vary the number of dimensions for a fixed tolerance, where the number of dimensions means @xmath8, i.e., the number of _selected_ features (dimensions) by the feature selection. in the second experiment, we vary the tolerance for a fixed number of dimensions. for these two experiments, we use kos and nips, which have a relatively small number of documents compared with emails.
on the other hand, the third experiment tests the scalability of each protocol, and we thus use emails, whose number of documents is much larger than those of kos and nips.
the hardware platform is an hp proliant ml110 g7 workstation equipped with an intel(r) xeon(r) quad-core cpu e31220 at 3.10ghz, 16 gb ram, and a 250 gb hdd; the software platform is centos 6.5 linux.
we implement all the protocols in the c language.
we perform ssdd on a single machine using a local loop for network communication.
the reason we use the local loop is that we intentionally ignore the network speed, since different network speeds or environments may largely distort the actual execution time of each protocol.
we measure the execution time for alice to send each document to bob and securely identify its similarity.
more precisely, we store the whole dataset at bob's side and select ten query documents for alice.
after that, we execute each ssdd protocol for those ten query documents and use the sum of their execution times as the experimental result.

figure [fig:fig10] shows the experimental results for kos.
first, in figure [fig:fig10](a), we set the tolerance to 0.80 and vary the number of dimensions over 70, 210, 350, 490, and 640, which correspond to 1%, 3%, 5%, 7%, and 9% of the kos dimensions.
as shown in the figure, the @xmath87 axis shows the number of (selected) dimensions, and the @xmath88 axis shows the actual execution time.
note that the @xmath88 axis is on a log scale.
+ figure [fig:fig10](a) shows that all proposed protocols significantly outperform the basic ssdd-base.
even ssdd-rp, which selects features randomly, beats ssdd-base by exploiting the filtering effect in the first step of the 2-step protocol.
next, ssdd-gf shows better performance than ssdd-rp since it uses df to select the features that occur frequently throughout the whole dataset.
in the case of ssdd-rp and ssdd-gf, we note that, as the number of dimensions increases, the execution time decreases.
this is because the more dimensions we use, the larger the filtering effect we can exploit.
ssdd-lf, which uses the locality of the current vector, also outperforms ssdd-rp as well as ssdd-base.
in particular, ssdd-lf is better than ssdd-gf for a small number of dimensions, but it is worse than ssdd-gf for a large number of dimensions.
this is because only a small number of dimensions have a big influence on the locality of the current vector.
finally, ssdd-hf, which takes advantage of both ssdd-lf and ssdd-gf, shows the best performance for all dimensions. in figure [fig:fig10](a), we note that the execution times of ssdd-lf and ssdd-hf slightly increase as the number of dimensions increases.
the reason is that, as the number @xmath8 of dimensions increases, the filtering effect increases relatively slowly, but the overhead of obtaining a current/difference vector and choosing @xmath8 dimensions from that vector increases relatively quickly.
second, in figure [fig:fig10](b), we set the number of dimensions to 70 (1% of the total dimensions) and vary the tolerance from 0.95 to 0.75 in decrements of 0.05. note that the closer the tolerance is to 1.0, the stronger the similarity we require.
as shown in the figure, all proposed protocols significantly improve the performance compared with ssdd-base.
in particular, ssdd-lf and ssdd-hf, which exploit the locality, show better performance than the other two proposed protocols.
we note here that, as the tolerance decreases, the execution times of all proposed protocols gradually increase.
this is because the smaller the tolerance we use, the more documents we obtain as similar ones.
that is, as the tolerance decreases, more documents pass the first step, and thus, more time is spent in the second step.
in summary of figure [fig:fig10], the proposed ssdd-lf and ssdd-hf significantly outperform ssdd-base by up to 726.6 and 9858 times, respectively.

figure [fig:fig11] shows the experimental results for nips.
as in figure [fig:fig10] for kos, we measure the execution time of ssdd by varying the number of dimensions and the tolerance.
in figure [fig:fig11](a), we set the tolerance to 0.80 and increase the number of dimensions from 120 (1%) to 600 (5%) in steps of 120 (1%), where 120 is 1% of the total 12,419 dimensions.
next, in figure [fig:fig11](b), we set the number of dimensions to 120 and decrease the tolerance from 0.95 to 0.75 by 0.05. the experimental results of figures [fig:fig11](a) and [fig:fig11](b) show a very similar trend to those of figures [fig:fig10](a) and [fig:fig10](b).
that is, all proposed protocols significantly outperform ssdd-base, and ssdd-hf shows the best performance.
in figure [fig:fig11], ssdd-hf improves the performance extremely, by up to 16620 times compared with ssdd-base.
+ figure [fig:fig12] shows the results of the scalability test using a large volume of high-dimensional data, emails.
we set the tolerance and the number of dimensions to 0.80 and 70, respectively, and we increase the number of documents (e-mails) from 40 (0.1%) to 39,861 (100%) by factors of 10. in this experiment, we exclude the results of ssdd-base, ssdd-rp, and ssdd-gf for the case of 39,861 documents due to their excessive execution times.
as shown in the figure, as in the results for kos and nips, our feature selection-based protocols outperform ssdd-base in all cases, and in particular, ssdd-lf and ssdd-hf show the best performance regardless of the number of documents.
we also note that all proposed protocols show a pseudo-linear trend in the number of documents.
(please note that the @xmath87- and @xmath88-axes are both log scales.)
that is, the protocols are pseudo-linear solutions in the number of documents, and we can say that they are excellent in scalability as well as performance.
in this paper , we addressed an efficient method of significantly reducing computation and communication overhead in secure similar document detection .
contributions of the paper can be summarized as follows .
first , we thoroughly analyzed the previous 1-step protocol and pointed out that it incurred serious performance overhead for high dimensional document vectors .
second , to alleviate the overhead , we presented the feature selection - based 2-step protocol and formally proved its correctness .
third, to improve the filtering efficiency of the 2-step protocol, we proposed four feature selections: (1) rp, which selects features randomly; (2) lf, which exploits the locality of a current vector; (3) gf, which exploits the globality of all document vectors; and (4) hf, which considers both locality and globality.
fourth , for each feature selection , we presented its formal protocol and analyzed its secureness and overhead .
fifth , through experiments on three real datasets , we showed that all proposed protocols significantly outperformed the base protocol , and in particular , the hf - based secure protocol improved performance by up to three or four orders of magnitude .
as future work, we will consider two issues: (1) the use of feature extraction (feature creation) instead of feature selection for dimensionality reduction, and (2) the use of homomorphic encryption rather than random matrices for the secure scalar product.
s. berchtold, c. bohm, and h.-p. kriegel, ``the pyramid-technique: towards breaking the curse of dimensionality,'' in _proc. of int'l conf. on management of data_, acm sigmod, seattle, washington, pp. 142-153, june 1998.

e. bertino, d. lin, and w. jiang, ``a survey of quantification of privacy preserving data mining algorithms,'' in _privacy-preserving data mining: models and algorithms_, c. c. aggarwal and p. s. yu (eds.), pp. 183-205, kluwer academic publishers, june 2008.

e. bingham and h. mannila, ``random projection in dimensionality reduction: applications to image and text data,'' in _proc. of the 7th int'l conf. on knowledge discovery and data mining_, acm sigkdd, san francisco, california, pp. 245-250, aug. 2001.

k.-p. chan, a. w.-c. fu, and c. t. yu, ``haar wavelets for efficient similarity search of time-series: with and without time warping,'' _ieee trans. on knowledge and data engineering_, pp. 686-705, jan./feb. 2003.

s. deerwester, t. dumais, w. furnas, k. landauer, and r. harshman, ``indexing by latent semantic analysis,'' _journal of the american society for information science_, vol. 41, no. 6, pp. 391-407, sept. 1990.

c. faloutsos, m. ranganathan, and y. manolopoulos, ``fast subsequence matching in time-series databases,'' in _proc. of int'l conf. on management of data_, acm sigmod, minneapolis, minnesota, pp. 419-429, may 1994.

b. goethals, s. laur, h. lipmaa, and t. mielikainen, ``on secure scalar product computation for privacy-preserving data mining,'' in _proc. of the 7th annual int'l conf. in information security & cryptology_, seoul, korea, pp. 104-120, dec. 2004.

w.-s. han, j. lee, y.-s. moon, s. hwang, and h. yu, ``a new approach for processing ranked subsequence matching based on ranked union,'' in _proc. of int'l conf. on management of data_, acm sigmod, athens, greece, pp. 457-468, june 2011.

w. jiang, m. murugesan, c. clifton, and l. si, ``similar document detection with limited information disclosure,'' in _proc. of the 24th ieee int'l conf. on data engineering_, cancun, mexico, pp. 735-743, apr. 2008.

y.-s. moon, k.-y. whang, and w.-s. han, ``generalmatch: a subsequence matching method in time-series databases based on generalized windows,'' in _proc. of int'l conf. on management of data_, acm sigmod, madison, wisconsin, pp. 382-393, june 2002.

y.-s. moon, h.-s. kim, and e. bertino, ``publishing time-series data under preservation of privacy and distance orders,'' in _proc. of the 21st int'l conf. on database and expert systems applications_, part ii, pp. 17-31, bilbao, spain, aug. 2010.

y.-s. moon, b.-s. kim, m. s. kim, and k.-y. whang, ``scaling-invariant boundary image matching using time-series matching techniques,'' _data & knowledge engineering_, pp. 1022-1042, oct. 2010.

n. shivakumar and h. garcia-molina, ``scam: a copy detection mechanism for digital documents,'' in _proc. of the 2nd int'l conf. in theory and practice of digital libraries_, austin, texas, pp. 398-409, june 1995.

b. tang, m. shepherd, and e. milios, ``comparing and combining dimension reduction techniques for efficient text clustering,'' in _proc. of int'l workshop on feature selection for data mining: interfacing machine learning and statistics_, newport beach, california, pp. 17-26, apr. 2005.

j. vaidya and c. clifton, ``privacy preserving association rule mining in vertically partitioned data,'' in _proc. of the 8th int'l conf. on knowledge discovery and data mining_, acm sigkdd, alberta, canada, pp. 639-644, july 2002.

secure similar document detection (ssdd) identifies similar documents of two parties while each party does not disclose its own _sensitive_ documents to the other party. in this paper
we propose an efficient 2-step protocol that exploits a feature selection as the lower-dimensional transformation, and we present discriminative feature selections to maximize the performance of the protocol.
for this, we first analyze how the existing 1-step protocol causes serious computation and communication overhead for high-dimensional document vectors. to alleviate the overhead, we next present the feature selection-based 2-step protocol and formally prove its correctness.
the proposed 2-step protocol works as follows: (1) in the _filtering_ step, it uses low-dimensional vectors obtained by the feature selection to filter out non-similar documents; (2) in the _post-processing_ step, it identifies similar documents only from the non-filtered documents by using the 1-step protocol. as the feature selection, we first consider the simplest one, random projection (rp), and propose its 2-step solution, ssdd-rp.
we then present two discriminative feature selections and their solutions: ssdd-lf (local frequency), which selects a few dimensions locally frequent in the current query vector, and ssdd-gf (global frequency), which selects dimensions globally frequent in the set of all document vectors.
we finally propose a hybrid one, ssdd-hf (hybrid frequency), that takes advantage of both ssdd-lf and ssdd-gf.
we empirically show that the proposed 2-step protocol outperforms the 1-step protocol by three or four orders of magnitude.

+ * keywords * : secure similar document detection, cosine similarity, feature selection, lower-dimensional transformation, term frequency, document frequency

* efficient 2-step protocol and its discriminative feature selections in secure similar document detection *

sang-pil kim@xmath0, myeong-sun gil@xmath0, yang-sae moon@xmath0, and hee-sun won@xmath1

@xmath0department of computer science, kangwon national university, 1 kangwondaehak-gil, chuncheon-si, gangwon 200-701, republic of korea

@xmath1electronics and telecommunications research institute, 218 gajeong-ro, yuseong-gu, daejeon 305-701, republic of korea

e-mail: \{spkim, gils, ysmoon}@kangwon.ac.kr, [email protected]
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Extremely Hazardous Materials
Transportation Security Act of 2005''.
SEC. 2. RULEMAKING.
(a) In General.--Not later than 180 days after the date of
enactment of this Act, the Secretary of Homeland Security, in
consultation with the heads of other appropriate Federal, State, and
local government entities, security experts, representatives of the
hazardous materials shipping industry and labor unions representing
persons who work in the hazardous materials shipping industry, and
other interested persons, shall issue, after notice and opportunity for
public comment, regulations concerning the shipping of extremely
hazardous materials.
(b) Purposes of Regulations.--The regulations shall be consistent,
to the extent the Secretary determines appropriate, with and not
duplicative of other Federal regulations and international agreements
relating to the shipping of extremely hazardous materials and shall
require--
(1) physical security measures for such shipments, such as
the use of passive secondary containment of tanker valves and
other technologies to ensure the physical integrity of
pressurized tank cars used to transport extremely hazardous
materials, additional security force personnel, and
surveillance technologies and barriers;
(2) concerned Federal, State, and local law enforcement
authorities (including, if applicable, transit, railroad, or
port authority police agencies) to be informed before an
extremely hazardous material is transported within, through, or
near an area of concern;
(3) the creation of terrorism response plans for shipments
of extremely hazardous materials;
(4) the use of currently available technologies and systems
to ensure effective and immediate communication between
transporters of extremely hazardous materials and all entities
charged with responding to acts of terrorism involving
shipments of extremely hazardous materials;
(5) comprehensive and appropriate training in the area of
extremely hazardous materials transportation security for all
individuals who transport, load, unload, or are otherwise
involved in the shipping of extremely hazardous materials or
who would respond to an accident or incident involving a
shipment of extremely hazardous material or would have to
repair transportation equipment and facilities in the event of
such an accident or incident; and
(6) for the transportation of extremely hazardous materials
through or near an area of concern, the Secretary to determine
whether or not the transportation could be made by one or more
alternate routes at lower security risk and, if the Secretary
determines the transportation could be made by an alternate
route, the use of such alternate route, except when the
origination or destination of the shipment is located within
the area of concern.
(c) Judicial Relief.--A person (other than an individual) who
transports, loads, unloads, or is otherwise involved in the shipping of
hazardous materials and violates or fails to comply with a regulation
issued by the Secretary under this section may be subject, in a civil
action brought in United States district court, for each shipment with
respect to which the violation occurs--
(1) to an order for injunctive relief; or
(2) to a civil penalty of not more than $100,000.
(d) Administrative Penalties.--
(1) Penalty orders.--The Secretary may issue an order
imposing an administrative penalty of not more than $1,000,000
for failure by a person (other than an individual) who
transports, loads, unloads, or is otherwise involved in the
shipping of hazardous materials to comply with a regulation
issued by the Secretary under this section.
(2) Notice and hearing.--Before issuing an order described
in paragraph (1), the Secretary shall provide to the person
against whom the penalty is to be assessed--
(A) written notice of the proposed order; and
(B) the opportunity to request, not later than 30
days after the date on which the person receives the
notice, a hearing on the proposed order.
(3) Procedures.--The Secretary may issue regulations
establishing procedures for administrative hearings and
appropriate review of penalties issued under this subsection,
including necessary deadlines.
SEC. 3. WHISTLEBLOWER PROTECTION.
(a) In General.--No person involved in the shipping of extremely
hazardous materials may be discharged, demoted, suspended, threatened,
harassed, or in any other manner discriminated against because of any
lawful act done by the person--
(1) to provide information, cause information to be
provided, or otherwise assist in an investigation regarding any
conduct which the person reasonably believes constitutes a
violation of any law, rule or regulation related to the
security of shipments of extremely hazardous materials, or any
other threat to the security of shipments of extremely
hazardous materials, when the information or assistance is
provided to or the investigation is conducted by--
(A) a Federal regulatory or law enforcement agency;
(B) any Member of Congress or any committee of
Congress; or
(C) a person with supervisory authority over the
person (or such other person who has the authority to
investigate, discover, or terminate misconduct);
(2) to file, cause to be filed, testify, participate in, or
otherwise assist in a proceeding or action filed or about to be
filed relating to a violation of any law, rule or regulation
related to the security of shipments of extremely hazardous
materials or any other threat to the security of shipments of
extremely hazardous materials; or
(3) to refuse to violate or assist in the violation of any
law, rule, or regulation related to the security of shipments
of extremely hazardous materials.
(b) Enforcement Action.--
(1) In general.--A person who alleges discharge or other
discrimination by any person in violation of subsection (a) may
seek relief under subsection (c), by--
(A) filing a complaint with the Secretary of Labor;
or
(B) if the Secretary has not issued a final
decision within 180 days of the filing of the complaint
and there is no showing that such delay is due to the
bad faith of the claimant, bringing an action at law or
equity for de novo review in the appropriate district
court of the United States, which shall have
jurisdiction over such an action without regard to the
amount in controversy.
(2) Procedure.--
(A) In general.-- An action under paragraph (1)(A)
shall be governed under the rules and procedures set
forth in section 42121(b) of title 49, United States
Code.
(B) Exception.--Notification made under section
42121(b)(1) of title 49, United States Code, shall be
made to the person named in the complaint and to the
person's employer.
(C) Burdens of proof.--An action brought under
paragraph (1)(B) shall be governed by the legal burdens
of proof set forth in section 42121(b) of title 49,
United States Code.
(D) Statute of limitations.--An action under
paragraph (1) shall be commenced not later than 90 days
after the date on which the violation occurs.
(c) Remedies.--
(1) In general.--A person prevailing in any action under
subsection (b)(1) shall be entitled to all relief necessary to
make the person whole.
(2) Compensatory damages.--Relief for any action under
paragraph (1) shall include--
(A) reinstatement with the same seniority status
that the person would have had, but for the
discrimination;
(B) the amount of any back pay, with interest; and
(C) compensation for any special damages sustained
as a result of the discrimination, including litigation
costs, expert witness fees, and reasonable attorney
fees.
(d) Rights Retained by Person.--Nothing in this section shall be
deemed to diminish the rights, privileges, or remedies of any person
under any Federal or State law, or under any collective bargaining
agreement.
SEC. 4. REPORT ON EXTREMELY HAZARDOUS MATERIALS TRANSPORTATION
SECURITY.
(a) In General.--Not later than 180 days after the date of
enactment of this Act, the Secretary of Homeland Security, in
consultation with the heads of other appropriate Federal agencies,
shall transmit to Congress a report on the security of, and risk of a
terrorist attack on, shipments of extremely hazardous materials.
(b) Content.--The report under subsection (a) shall include--
(1) information specifying--
(A) the Federal and State agencies that are
responsible for the regulation of the transportation of
extremely hazardous materials; and
(B) the particular authorities and responsibilities
of the heads of each such agency; and
(2) an assessment of the vulnerability of the
infrastructure associated with the transportation of extremely
hazardous materials.
(c) Form.--The report under subsection (a) shall be in unclassified
form but may contain a classified annex.
SEC. 5. DEFINITIONS.
In this Act, the following definitions apply:
(1) Extremely hazardous material.--The term ``extremely
hazardous material'' means--
(A) a material that is toxic by inhalation;
(B) a material that is extremely flammable;
(C) a material that is highly explosive; and
(D) any other material designated by the Secretary
to be extremely hazardous.
(2) Area of concern.--The term ``area of concern'' means an
area that the Secretary determines could pose a particular
interest to terrorists.

Extremely Hazardous Materials Transportation Security Act of 2005 - Directs the Secretary of Homeland Security to issue regulations concerning the shipping of extremely hazardous materials that require: (1) physical security measures; (2) Federal, State, and local law enforcement authorities to be informed before such material is transported within, through, or near an area of concern; (3) the creation of response plans for shipments of extremely hazardous materials; (4) the use of currently available technologies and systems to ensure effective communication between transporters of extremely hazardous materials and all entities charged with responding to acts of terrorism involving shipments of such materials; (5) comprehensive training for all individuals involved in the shipping of such materials; and (6) the Secretary to determine whether transportation through or near an area of concern could be made by alternate routes at a lower security risk.
Subjects a person (other than an individual) who violates such a regulation to injunctive relief or a civil penalty of up to $100,000. Authorizes the Secretary to impose administrative penalties.
Sets forth whistleblower protections for persons involved in the shipment of extremely hazardous materials.
Requires the Secretary to report to Congress on the security of, and risk of a terrorist attack on, such shipments.
Defines "extremely hazardous material" as material that is toxic by inhalation, extremely flammable, highly explosive, or otherwise designated by the Secretary. |
low back pain (lbp) is a health problem that brings about extensive lost wages and additional medical expenses, with a total cost ranging from US$7,000 to US$16,000 million per year.
it affects people in various occupations, including agricultural farmers. a high 12-month prevalence of lbp has been reported among agricultural farmers (ranging from 18.5% to 84%) in comparison to the general working population (ranging from 44.4% to 48.2%).
reported risk factors include exposure to vibration, repetitive trunk flexion and rotation, lifting or carrying more than 25 pounds with two hands or above the shoulder, sleep problems, mental distress, interpersonal stress at work, low education, low income, a history of back pain, other current musculoskeletal complaints, low flexibility of the back muscles, low physical activity levels, and poor lumbo-pelvic stability.
rubber farming , one sector of agricultural farming , is an important occupation in south - east asia .
the top three producers of natural rubber in the world are all in south - east asia , namely , indonesia , malaysia , and thailand .
although thailand has fewer rubber plantations, in terms of area, than indonesia, thailand is the world's largest rubber producer.
in general, rubber tapping is when rubber farmers use knives to cut lines in the bark of rubber trees.
rubber tapping starts when the circumference of the tree trunk reaches 50 centimeters at a height of 150 centimeters above the ground and with the line gradually moving down each time .
normally , the tree trunk is divided circumferentially into three facets with each facet being tapped for about five years before moving on to the next facet .
rubber collecting takes place when rubber farmers collect a cup filled with rubber latex that has dripped from the bark line and pour it into a big tub ( 20 liters ) .
the big tub is carried along until full, and its content is then poured into several bigger tanks placed on a cart, ready for transport.
rubber sheeting involves lifting and transferring rubber latex to a big container for processing into rubber sheets .
thus , the work of rubber farmers involves physical labor tasks such as trunk twisting , bending , and extension as well as lifting heavy buckets repetitively over a prolonged period of time .
previous cross - sectional studies showed that lbp was the most common musculoskeletal disorder affecting rubber famers .
approximately 55% of rubber farmers reported lbp at 1 month , 52.9% at 3 months , and 66.2% at 12 months .
to date , only one study has investigated risk factors for lbp in rubber famers .
meksawi et al . reported that tapping levels and postures , a high frequency of weight lifting , a low level of social support , and low levels of education and income were associated with lbp .
there is limited evidence on the relations between physical capacity and lbp in rubber farmers , although poor physical capacity , such as reduced trunk flexion , decreased trunk muscle endurance , and instability of the spine , has been linked to lbp in the general population .
limited knowledge of physical capacity factors affects prevention efforts and the development of optimal treatment programs to minimize the risk of lbp occurrence .
the purposes of this study were to examine the prevalence of lbp in rubber farmers and to identify the associations between potential risk factors and 12-month lbp in rubber farmers .
such information will inform stakeholders about the health status and related factors concerning thai rubber farmers in order to develop effective interventions or preventive measures for lbp .
a cross - sectional study was conducted during january to march 2015 in thai rubber farmers in five sub - districts of thungsong district , nakhonsrithammarat province , thailand , using cluster random sampling .
of 13 sub - districts in thungsong district , 5 were selected using random numbers . in each sub - district ,
thai rubber farmers who were employed in a rubber plantation for at least 1 year and were between 18 and 70 years old were included .
participants who had any history of major back trauma , such as a motor vehicle injury or a fall from height , or serious spinal conditions , including cancer , were excluded .
lbp was defined as pain localized between the 12th rib and the inferior gluteal folds , with or without leg pain , that lasted for at least 24 hours and had a pain score of 3 out of 10 or higher .
the pain had to be greater than or equal to 3 out of 10 on the visual analogue scale ( vas ) which was considered to be higher than the minimal clinically important change for lbp .
the duration of pain for at least 24 hours would exclude any pain caused by fatigue or discomfort that could be resolved within a few hours .
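as a minimal sketch , the case definition above can be expressed as a single predicate ( the function name and argument names are illustrative , not from the original study protocol ) :

```python
def is_lbp_case(vas_score: float, duration_hours: float,
                pain_in_low_back: bool) -> bool:
    """Apply the study's case definition for low back pain:
    pain localized between the 12th rib and the inferior gluteal
    folds, lasting at least 24 hours, rated >= 3 of 10 on the VAS."""
    return pain_in_low_back and duration_hours >= 24 and vas_score >= 3

assert is_lbp_case(4, 36, True)
assert not is_lbp_case(4, 2, True)   # resolves within hours: fatigue, not a case
assert not is_lbp_case(2, 48, True)  # below the VAS cut-off
```

the 24-hour clause is what screens out transient fatigue or discomfort , and the vas cut-off of 3 corresponds to the minimal clinically important change mentioned above .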
an explanation of the study was given to all participants and formal informed signed consents were obtained before any data were collected .
the sample size used in this study was calculated based on the prevalence of lbp in rubber tappers reported in prior research , with an estimated 10% non - response rate added .
the project was approved by the ethics review committee for research involving human research subjects , health sciences group , chulalongkorn university , thailand .
participants who indicated pain in the low back region of the specifically modified nordic questionnaire and scored their pain at greater than or equal to 3 out of 10 on the vas were categorized as having lbp .
a preliminary study found this modified questionnaire to be valid ( the content validity was 0.81 and cronbach 's alpha was 0.84 ) and reliable ( intraclass correlation coefficient ( icc ) was 0.84 ) .
risk factors for lbp in rubber farmers were examined using a questionnaire and objective measures . the questionnaire consisted of individual , occupational , and psychosocial risk factors .
individual risk factors included age , gender , bmi , educational level , underlying disease , smoking and alcohol usage , level of physical activity , and functional disability .
the items related to level of physical activity followed those of the global physical activity questionnaire ( gpaq ) which classified individuals as engaging in low , moderate and high levels of physical activity .
functional disability was assessed using the modified oswestry low back pain disability questionnaire ( thai version ) which grouped individuals as having minimal , moderate , severe , crippled , and bed - bound conditions .
two additional individual risk factors were investigated by 2 objective measures , namely , flexibility of back and leg muscles and stability of the lumbopelvic region .
flexibility of back and leg muscles was measured with a sit and reach box being placed on the floor .
participants were asked to slowly reach forward with parallel hands as far as possible without bending the knees while sitting on the floor with both legs fully extended and with the soles of the feet against a box . the furthest distance point in inches reached with the fingertips for 3 trials was recorded . with different criteria for males and females , the recorded distance was then classified as very low , low , moderate , good , and very good flexibility according to the sports authority of thailand criteria .
for instance , a distance below nine inches for a female and five inches for a male were classified as very low flexibility .
a distance of more than 21 inches for a female and 18 inches for a male were classified as very good flexibility .
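only the outer cut - offs of the classification are stated in the text , so the sketch below labels everything between them simply as intermediate ; the exact low / moderate / good boundaries would come from the sports authority of thailand criteria :

```python
def flexibility_band(distance_inches: float, sex: str) -> str:
    """Classify a sit-and-reach distance (inches) using only the
    endpoint cut-offs given in the text: below 9 in (female) /
    5 in (male) is 'very low'; above 21 in (female) / 18 in (male)
    is 'very good'. Intermediate bands are lumped together here."""
    lo, hi = (9, 21) if sex == "female" else (5, 18)
    if distance_inches < lo:
        return "very low"
    if distance_inches > hi:
        return "very good"
    return "intermediate"

assert flexibility_band(4, "male") == "very low"
assert flexibility_band(22, "female") == "very good"
```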
stability of lumbopelvic region was measured using a pressure biofeedback unit by asking the participants to perform some tasks in progressive fashion while simultaneously maintaining the pressure on the gauge .
a deviation of more than 10 mmhg indicates that the stabilization action of the stabilizer muscle has been lost .
the stability of the lumbopelvic region was measured and was classified into 6 levels ( 0 - 5 ) according to sahrmann 's core stability test criteria .
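the grading logic can be sketched as follows . the 10 mmhg criterion is from the text ; the exact task battery of sahrmann 's test is not reproduced here , so the function simply counts consecutive tasks completed before stabilization is lost :

```python
def core_stability_level(max_deviation_per_task: list) -> int:
    """Grade lumbopelvic stability on Sahrmann's 0-5 scale.
    The participant progresses through up to 5 increasingly
    difficult tasks while a pressure biofeedback unit monitors
    the target pressure; a deviation of more than 10 mmHg means
    the stabilizing action has been lost. The level is the number
    of consecutive tasks completed before that happens."""
    level = 0
    for deviation in max_deviation_per_task[:5]:
        if abs(deviation) > 10:
            break
        level += 1
    return level

assert core_stability_level([3, 6, 12, 4, 2]) == 2  # fails on the third task
assert core_stability_level([1, 2, 3, 4, 5]) == 5   # completes all tasks
```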
occupational risk factors comprised working experience , work posture , tapping level , having a secondary job , and duration of work in each task ( i.e. , tapping , collecting , and sheeting ) .
psychosocial risk factors included sleep hours and stress level which were asked in concordance with the suanprung stress test that was shown to have an overall cronbach 's alpha greater than 0.7 .
the suanprung stress test contains 20 items rated on a 5-point likert scale with item responses ranging from " 1 " ( no stress ) to " 5 " ( extremely high stress ) .
the total scores were classified into four levels : 0 to 23 as mild , 24 to 41 as moderate , 42 to 61 as high , and more than 61 as severe stress .
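the scoring rule above ( 20 items rated 1 - 5 , total mapped to four bands ) can be written directly as a small function :

```python
def stress_level(item_scores: list) -> str:
    """Score the Suanprung stress test: 20 items, each rated
    1 (no stress) to 5 (extremely high stress); classify the
    total as mild (<=23), moderate (24-41), high (42-61), or
    severe (>61)."""
    assert len(item_scores) == 20
    total = sum(item_scores)
    if total <= 23:
        return "mild"
    if total <= 41:
        return "moderate"
    if total <= 61:
        return "high"
    return "severe"

assert stress_level([1] * 20) == "mild"      # total 20
assert stress_level([2] * 20) == "moderate"  # total 40
assert stress_level([3] * 20) == "high"      # total 60
assert stress_level([5] * 20) == "severe"    # total 100
```

note that with item scores of at least 1 each , the minimum possible total is 20 , so the mild band effectively covers totals of 20 to 23 .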
participant characteristics were described using means and standard deviations or proportions . chi - square analysis was carried out to determine the association between the 12-month prevalence of lbp and individual , occupational , and psychosocial factors .
any factor with a p - value of 0.2 or less from the chi - square analysis was eligible for inclusion in the multivariate logistic regression analysis .
other variables that were logically reasonable and were previously found to be related to lbp were also included in the multivariate models ; these were gender and stress . the odds ratios ( or ) associated with particular factors were estimated with 95% confidence intervals .
all statistical analyses were performed using spss statistical software , version 17.0 ( spss inc , chicago , il , usa ) .
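the two - stage screening described above can be sketched as follows . the counts are hypothetical , and the cut - off 1.642 is the standard chi - square critical value for p = 0.2 with 1 degree of freedom ; the multivariate adjustment itself would then be fitted in spss or any logistic - regression package :

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table
    [[a, b], [c, d]] (factor present/absent x LBP yes/no)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def unadjusted_or(a, b, c, d):
    """Unadjusted odds ratio for the same 2x2 table."""
    return (a * d) / (b * c)

CHI2_CRIT_P02_DF1 = 1.642  # chi-square cut-off for p = 0.2, 1 df

def eligible_for_multivariate(a, b, c, d):
    """A factor enters the multivariate logistic model when its
    univariate p-value is 0.2 or below, i.e. its chi-square
    statistic is at or above the cut-off."""
    return chi2_2x2(a, b, c, d) >= CHI2_CRIT_P02_DF1

# hypothetical screening of one candidate factor
assert eligible_for_multivariate(60, 40, 40, 60)
assert round(unadjusted_or(60, 40, 40, 60), 2) == 2.25
```

the lenient p = 0.2 screen deliberately keeps borderline factors in play , since a variable that looks weak univariately can still matter after adjustment for the others .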
of the 450 participants , 17 rubber farmers were excluded because they did not have at least 1 year of experience in farming or because they had a history of back trauma .
the 12-month prevalence of lbp in rubber farmers was 55.7% ( n=241 ) with the point prevalence of 33% ( n=143 ) .
almost all of the participants who had lbp at the current time ( 97% ) also had a history of lbp within the preceding 12 months .
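the headline prevalence figures follow directly from the counts reported above :

```python
n = 450 - 17      # 433 farmers analysed after exclusions
lbp_12m = 241     # reported LBP within the preceding 12 months
lbp_now = 143     # reported LBP at the time of the survey

prev_12m = round(100 * lbp_12m / n, 1)
prev_point = round(100 * lbp_now / n, 1)

assert n == 433
assert prev_12m == 55.7
assert prev_point == 33.0
```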
the average ( standard deviation ) pain intensity on the visual analog scale was 4.21.7 .
however , all of the participants who had lbp at the time of the study were found to have minimal to moderate functional disability .
the average ( standard deviation ) disability score on the modified oswestry low back pain disability questionnaire ( thai version ) was 9.617.29 .
approximately two - thirds of the participants defined their farm work as involving low to moderate physical activity level .
nearly all of them ( 96.77% ) were involved in at least 2 tasks of rubber farming ( rubber tapping and rubber collecting ) .
the majority of rubber farmers had no additional job off the farm and worked solely as rubber farmers .
demographic characteristics ( n=433 )
when multivariable logistic regression was used , the results revealed that bmi ( adjusted or 1.05 ; 95% ci : 1.00 - 1.11 ) , primary school education ( adjusted or 2.45 ; 95% ci : 1.13 - 5.32 ) , exposure to pesticides ( adjusted or 1.63 ; 95% ci : 1.04 - 2.55 ) , and tapping below knee level ( adjusted or 2.64 ; 95% ci : 1.02 - 6.85 ) were associated with lbp in rubber farmers after controlling for other variables , as shown in table 2 .
prevalence and adjusted odds ratio ( oradj ) with 95% confidence intervals ( 95%ci ) of lbp within the preceding 12 months with respect to factors in the final modeling ( n=433 )
this study found that the 12-month prevalence of lbp in this group of rubber farmers was high ( 55.7% ) with the point prevalence at 33% .
the factors that showed significant associations with lbp were bmi , primary education , exposure to pesticides , and tapping below knee level . surprisingly , physical capacity , including flexibility of the back and leg muscles and stability of the lumbopelvic region , was not associated with lbp .
this study investigated the prevalence of lbp during the previous 12 months , therefore seasonal variation should not have any effect on the results .
the high 12-month prevalence of lbp in this study supports previous findings that this problem is common in rubber farmers .
the prevalence of approximately 50% is also consistent with findings reported in similar groups of participants . as almost all of the participants who had lbp at the time of the study
also had a history of lbp within the preceding 12 months , these results suggest that lbp in this group was recurrent in nature . in this cohort , only individual and occupational factors , but no psychosocial factors , were found to be associated with lbp .
these findings are inconsistent with a previous study that demonstrated that all individual , occupational , and psychosocial factors were risk factors for lbp in rubber farmers .
this inconsistency might be related to the discrepancy in the components of the psychosocial factors examined between studies .
the previous study only investigated psychosocial factors limited to farm work whereas this current study examined psychosocial factors related to both farm and non - farm work .
nevertheless , the low level of stress found among this group of participants in spite of lbp may suggest that they are able to cope with the problems well .
the finding that bmi was significantly associated with lbp in rubber farmers concurs with previous studies .
the mechanisms underlying this association remain unclear , but this relationship may be due to the increased risk of lumbar disc degeneration , particularly with a bmi greater than 25 kg / m2 .
the significant association between the educational level of rubber farmers and lbp confirms the previous study in rubber tappers that reported education at primary school level is a risk factor for lbp .
each additional year of formal education was also found to be associated with decreased risk for disability pensioning from lbp .
this finding might be due to the limited possibility of upward mobility to less physically demanding tasks . as a result ,
rubber farmers who graduated at primary school level might be at greater risk of career - long exposure to labor intensive work which is known to be risk factor for lbp .
in contrast , previous studies in other farmers reported that there were no associations between educational level and lbp .
nevertheless , it must be noted that the educational level in this study referred to formal education at school , which does not normally teach strategies for minimizing lbp . in - depth interviews with some participants revealed that they had no knowledge on how to minimize lbp on the work site .
rubber farmers who were exposed to pesticides were at increased risk of lbp by about 1.6 times .
although pesticides use might differ between rubber and tobacco farming , tobacco farmers exposed to pesticides also reported an increased risk of chronic lbp by 2.37 times .
mechanically , farmers must carry a heavy pesticide tank while spraying the substance on the farm for prolonged periods . as a result , sustained spinal loading could occur and contribute to lbp .
neurologically , pesticides could indirectly lead to lbp as they may induce acute psychological effects including anxiety , depression , irritability and restlessness .
the association between tapping below knee level and lbp was in line with the association between tapping below waist level and lbp reported in previous studies .
working at this tapping level requires a certain degree of trunk flexion which stimulates the back muscle to work continuously .
together with the repetitive trunk flexion found in rubber farming , this occupational factor could therefore be a potential risk for lbp .
the finding of mild to minimal functional disability in the majority of the participants who reported lbp , even though the pain intensity was on average moderate , was also unanticipated . however , this phenomenon might be plausible if an individual uses drugs or medications that could mask pain perception , such as analgesics , muscle relaxants , and nonsteroidal anti - inflammatory drugs .
a previous study revealed that one - third of rubber tappers used kratom ( mitragynine speciosa ) which has mild pain relieving effect .
consequently , there is a risk of underreporting the lbp prevalence . in order to improve data accuracy ,
the use of drugs and medications should be recorded and taken into account in future studies . moreover ,
the healthy worker effect , whereby individuals who experience no adverse effects from work are more likely to remain in their careers , could be a potential source of bias in this study .
it was noted that the participants in the current study had worked as rubber farmers for 21 years on average .
such a long work duration might help screen individuals who could no longer tolerate the work requirement for this profession .
to minimize this form of bias , it would be better to study newly employed workers .
the study determined broad bio - psychosocial risk factors for their contribution to lbp among rubber farmers . nevertheless , several limitations should be noted .
first , this study did not obtain any data regarding the use of drugs or medications which might alter pain perception .
second , this study did not gather data about prior history of lbp so the association between this variable and lbp could not be ascertained .
third , this study evaluated physical load at work using only a questionnaire . to clearly confirm these results
, further studies should assess physical load at work using observation or other objective examination .
fourth , as this was a cross - sectional study , causal relations between the identified factors and lbp can not be established .
fifth , this study was conducted on rubber farmers so the results should not be generalized to other groups of farmers .
lastly , in the present study psychosocial risk factors only included sleep hours and stress level measured by the suanprung stress test .
further research is needed to address preventive strategies to reduce lbp among rubber farmers .
acknowledgments : this study was funded by a grant ( id ahs - cu 57008 ) from the faculty of allied health sciences , chulalongkorn university , thailand .
conflicts of interest : no commercial party having a direct financial interest in the results of the research supporting this article has or will confer a benefit upon the authors or upon any organization with which the authors are associated .
the fabrication of planar ( 2d ) microcavities has provided experimental access to the strong coupling regime between light and matter in a spatially extended system @xcite .
the excitations of the coupled modes are polaritons , quasi - particles which are bosons with a very small effective mass , typically @xmath2 times that of an electron .
this makes it possible to study quantum effects in a solid state system at a relatively high temperatures @xcite .
the short life time of the polaritons , @xmath3 ps @xcite , means that the system has to be constantly pumped to observe bose - einstein condensation ( bec ) .
it can be achieved either resonantly @xcite or non - resonantly @xcite producing condensates with long coherence times @xcite and long range spatial coherence @xcite .
these have been shown to exhibit interesting nonlinear and many - body phenomena , such as vortex formation @xcite , solitons @xcite and superfluid - like flow @xcite . in recent experiments @xcite , microcavity polaritons
have been confined in wire - like geometries , to study the peculiar interaction effects of bosons in a 1d system and as a step towards the realisation of polariton circuits @xcite .
the extra confinement was provided in two different ways : by etching of the whole planar structure to form a wire @xcite and by excitation of surface acoustic waves ( saw ) to form a dynamic 1d lattice @xcite .
it is the spatial coherence measurements in the latter setup that are the subject of the present paper . in the experiment ,
the same sample area was studied with and without the saw , so it was possible to measure how the coherence length changed when the 1d confinement was introduced . without confinement , the entire 2d emission spot was coherent , that is , the coherence length was the same as the spot size @xcite . with confinement
it was observed that the coherence length in the direction perpendicular to the saw wavefronts was reduced to the saw wavelength , as the condensates confined in the individual minima became decoupled .
however , there was also a significant reduction in the coherence length measured parallel to the wavefronts @xcite , along the ` wires ' , which is less easily explained . in this paper
we analyse possible mechanisms for the reduction of the spatial coherence along the 1d wires .
we consider the effect @xcite of polariton - polariton interactions , disorder , and the occupation of finite momentum modes in a harmonic trap on the first order correlation function @xmath0 .
the theoretical predictions are compared with the experimentally measured @xmath0 @xcite which has a gaussian shape up to the noise floor .
this shape can only be reproduced by the non - equilibrium occupation of a set of the finite momentum modes . performing a more quantitative fit , using the occupation numbers as fitting parameters , we find that the number of the excited modes in the experiment @xcite is @xmath4 .
the rest of the paper is organised as follows . in section 2
we formulate the model of the 1d polariton system with different types of interactions . in section 3
we calculate the spatial dependence of the coherence function @xmath0 when finite momentum modes are occupied in a harmonic trap .
section 4 contains discussion of the anderson localisation due to disorder in a 1d system . in section 5
we discuss the effect of the short range polariton - polariton interaction on @xmath0 .
section 6 contains the fitting of the experimental data with the theoretical predictions from the previous sections .
we consider one dimensional polaritons formed by an equal mixture of exciton and photon at zero momentum , following the experimental setup of @xcite .
the motion of photons in @xmath5direction is restricted by a planar microcavity which is formed by a pair of bragg mirrors giving a small mass to the 2d photons @xmath6 .
excitons are bound electron - hole pairs , they are bosons @xmath7 , which are confined to the same 2d plane by a quantum well , see the scheme in fig .
the eigenmodes of a strongly interacting @xmath7 and @xmath6 are two hybrid particles , the lower and upper polaritons ( lp and up ) .
we will consider only the lp branch as bosons @xmath8 with a parabolic dispersion and mass @xmath9 , assuming that the photons and the excitons are at resonance at @xmath10 , e. g. an lp around zero momentum consists of an equal mixture of the exciton and the photon @xmath11 .
bosonic nature of these polaritons allows a macroscopic occupation of the single zero momentum state that leads to a long - range phase coherence in the system .
access to this regime is not straightforward as the short lived polaritons do not have time to relax at low momenta due to a slow emission of the low energy phonons forming a relaxation bottleneck and forbidding formation of the truly equilibrium condensate .
this difficulty can be overcome in a nonequilibrium regime under a strong pumping when the @xmath10 state is populated directly or indirectly by employing an optical parametric oscillator ( opo ) . in the opo scheme
the system is excited by the pump at a finite momentum @xmath12 and energy @xmath13 . due to the polariton - polariton interactions the @xmath14 excitations can scatter fast to a low momentum . above a threshold density of the directly pumped polaritons
the interaction becomes strong enough to populate the zero momentum signal state @xmath15 , which also forms a significant population in the idler state @xmath16 due to the conservation of the energy @xmath17 and the momentum @xmath18 by the scattering process . above the threshold the system locks onto these three states @xcite , with a significant population of the signal state at @xmath15 .
this approach is more advantageous than the direct pumping as in the opo regime the pump at a finite @xmath12 does not obstruct the emission from the condensate at @xmath15 .
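in symbols ( the original notation is hidden behind the @xmath placeholders , so the labels below are assumed ) , the pair - scattering process of the opo conserves energy and momentum as

```latex
2\,\omega_{\mathrm{p}} = \omega_{\mathrm{s}} + \omega_{\mathrm{i}},
\qquad
2\,k_{\mathrm{p}} = k_{\mathrm{s}} + k_{\mathrm{i}},
```

so that with the signal at @xmath15 ( zero momentum ) the idler is pinned at twice the pump momentum , consistent with the three - state locking described above .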
the 2d polaritons are constrained further to a set of 1d wires by an in - plane surface acoustic wave propagating along @xmath19-direction , see the scheme in fig .
the @xmath19-component of the momentum belongs to the lowest energy subband leaving the only motion along @xmath20-axis unconstrained .
the polaritons are detected though the photonic part which constantly escapes the system forming a spatial distribution of the electric field outside of the cavity that contains information about the polaritons inside the structure , @xmath21 and can be registered by a detector . here @xmath22 are the eigenmodes of the external potential in @xmath20-direction , e. g. @xmath23 if there is no extra potential along @xmath20-axis , @xmath24 is the dispersion of the electro - magnetic modes , and @xmath25 is the length of the wire .
figure : scheme of the microcavity ( dbrs ) with a quantum well inside ( qw ) , and the surface acoustic wave ( red elongated ellipses are the polariton 1d wires ) .
we consider three possible mechanisms that can affect spatial coherence of free polaritons in the signal , @xmath26 around @xmath27 .
first is a confining potential originating from a finite size of the excitation spot .
we assume that the single particle potential is harmonic,@xmath28 where the frequency @xmath29 defines the system size as @xmath30 .
second is a disorder potential that has a finite correlation length @xmath31 , @xmath32 where @xmath33 is a function that vanishes when @xmath34 , i. e. @xmath35 and @xmath36 .
and the third mechanism is the pair polariton - polariton interaction .
it has a short - range interaction potential , as the polaritons have zero electric charge , with interaction strength @xmath37 .
we neglect polariton - phonon interaction as a decoherence mechanism , as it is ineffective at low momenta .
the polariton - phonon scattering time is @xmath38 ps @xcite , which is much longer than the polariton lifetime @xmath39 ps . the latter is due to the detuning chosen in the experiment @xcite , where the polaritons at @xmath40 consist of an equal mixture of the exciton and the photon . to analyse the spatial coherence observed via the photonic part of the lp , we consider the first order correlation function , @xmath41\right\rangle } { \left\langle \hat{n}\left[e^{\dagger}\left(0\right)e\left(0\right)\right]\right\rangle } , \label{eq : g1_def}\ ] ] where @xmath42 $ ] is the normal ordering operator that eliminates the vacuum fluctuations of the quantised electro - magnetic field and @xmath43 is the average with respect to a state of the system .
the eigenstates of the free polariton hamiltonian are plain waves .
the expectation value over the zero momentum state occupied by many polaritons gives an infinite range coherence , @xmath44 . in the next three sections we analyse each of the possible mechanisms separately to see how they restrict this behaviour .
here we obtain the single particle eigenstates of polaritons in the harmonic trap .
then we evaluate @xmath0 as the expectation value with respect to a density matrix assuming a non - equilibrium distribution of the polariton occupation numbers around the zero momentum state .
the single particle hamiltonian for a polariton in a harmonic potential , @xmath45 is that of the quantum harmonic oscillator .
it is a well known and solved model . following the standard approach to the diagonalisation problem , we search for a solution in the form of a polynomial @xcite .
the eigenfunction problem for the model eq .
( [ eq : h_hamonic ] ) leads to hermite s differential equation which is solved by @xmath46 where @xmath47 are the hermite polynomials , @xmath30 is the size of the system , and @xmath48 is the number of the energy level that describes a finite momentum state of the 1d polariton .
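the eigenfunctions above are the standard oscillator states ; as a sketch ( python , with an illustrative length scale l = 1 playing the role of the system size @xmath30 ) , their orthonormality can be checked numerically :

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def psi(n, x, l=1.0):
    """n-th oscillator eigenfunction: H_n(x/l) exp(-x^2/2l^2), unit L2 norm."""
    c = np.zeros(n + 1)
    c[n] = 1.0                      # select the physicists' Hermite polynomial H_n
    x = np.asarray(x, dtype=float)
    return hermval(x / l, c) * np.exp(-x**2 / (2 * l**2)) \
        / sqrt(2.0**n * factorial(n) * l * sqrt(pi))

# numerical check of orthonormality on a wide symmetric grid
x = np.linspace(-12.0, 12.0, 6001)
dx_step = x[1] - x[0]
norm0 = float(np.sum(psi(0, x)**2) * dx_step)         # should be close to 1
ortho = float(np.sum(psi(2, x) * psi(3, x)) * dx_step)  # should vanish
```

the normalisation constant follows from the standard integral of the squared hermite polynomials against the gaussian weight .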
the complementary eigenvalue problem can be solved via a taylor expansion . as a result ,
the eigenenergies of the finite momentum states @xmath49 have a linear spectrum , @xmath50 a many - polariton state , which is created in the pumping process , is described by a density matrix .
the expectation value in the correlation function eq .
( [ eq : g1_def ] ) has to be evaluated as an ensemble average , @xmath51 where @xmath52 is a density matrix . in a continuous wave experiment
the signal is accumulated on the time scale of seconds .
the phase coherence of a polariton condensate is limited to a few hundred picoseconds @xcite , which partitions the long accumulation interval into a large ensemble of many very short measurements .
thus the total signal is averaged over many realisations with different distributions of the polariton occupation numbers and of the total number of polaritons , which also changes slightly in time due to power fluctuations in the pumping laser @xcite . as the phase coherence is lost between different realisations , the density matrix that describes such an ensemble is diagonal in the representation of the polariton occupation numbers , @xmath53 here
@xmath54 is a state with @xmath55 polaritons at the momentum @xmath56 , @xmath57 polaritons at the momentum @xmath58 , etc and @xmath59 are the diagonal matrix elements that describe a non - equilibrium distribution of the boson occupation numbers . substituting eqs .
( [ eq : rho_average ] ) in eq .
( [ eq : g1_def ] ) and using the eigenfunction eq .
( [ eq : psi_harmonic ] ) we obtain the first order coherence function as a sum over a set of finite momentum modes , @xmath60 where @xmath61 are the weight factors ; e. g. , in thermal equilibrium they are given by the bose - einstein distribution function .
note that @xmath62 are not completely equivalent to the occupation numbers of the polaritons , @xmath63 , due to the normalisation of the correlation function in eq .
( [ eq : g1_def ] ) .
when only a few low momenta modes are excited with probabilities that are approximately equal , @xmath64 , the shape of the correlation function is almost gaussian with a reduced system size @xmath65 , @xmath66 .
when many modes are excited with an arbitrary @xmath62 the shape is arbitrary as polynomials in eq .
( [ eq : psi_harmonic ] ) of a high order @xmath67 can approximate a wide range of different functions .
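to illustrate the narrowing described above , a minimal numerical sketch ( python , with an arbitrary oscillator length l = 1 standing in for the system size , and assuming the symmetric - point form g1(dx) = sum_n w_n psi_n(dx/2) psi_n(-dx/2) , our reading of the mode sum rather than the authors ' exact expression ) compares the half width at half maximum for a single occupied mode with that for six equally occupied modes :

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def psi(n, x, l=1.0):
    """Normalised oscillator eigenfunction H_n(x/l) exp(-x^2/2l^2)."""
    c = np.zeros(n + 1); c[n] = 1.0
    x = np.asarray(x, dtype=float)
    return hermval(x / l, c) * np.exp(-x**2 / (2 * l**2)) \
        / sqrt(2.0**n * factorial(n) * l * sqrt(pi))

def g1(dx, weights, l=1.0):
    """Incoherent mode sum at symmetric points +-dx/2, normalised at dx = 0.
    (Assumed form: g1(dx) = sum_n w_n psi_n(dx/2) psi_n(-dx/2).)"""
    u = np.asarray(dx, dtype=float) / 2
    num = sum(w * psi(n, u, l) * psi(n, -u, l) for n, w in enumerate(weights))
    den = sum(w * psi(n, 0.0, l)**2 for n, w in enumerate(weights))
    return num / den

def hwhm(weights):
    dx = np.linspace(0.0, 8.0, 8001)
    y = g1(dx, weights)
    return dx[np.argmax(y < 0.5)]   # first point where g1 drops below 1/2

w_single = hwhm([1.0])       # ground mode only: width set by the system size
w_many = hwhm([1.0] * 6)     # six equally occupied modes: markedly narrower
```

the width shrinks as more modes are occupied , consistent with the reduced effective system size quoted above .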
polaritons are subject to a random potential which originates from two sources .
one is disorder of the quantum well , which influences the exciton component of the polaritons ; the other is imperfections of the cavity mirrors , which influence the photon component @xcite . here
we consider a simplified model of the disorder characterised by a single correlation length eq .
( [ eq : disorder_def ] ) , and we assume that the wavelength of the surface acoustic wave is smaller than the correlation length of the disorder potential in the 2d heterostructure , to model a strictly 1d system .
if the latter is not true the polariton wire becomes a quasi-1d system which would give a longer localisation length . in this section
we also neglect interplay between disorder and polariton - polariton interactions which can lead to a many body metal - insulator transition above a finite temperature @xcite and consider the low temperature case only .
scaling analysis of the potential disorder problem @xcite shows that conductance of a 1d system is zero at low temperatures as all of the zero momentum states become localised when the system size increases to infinity .
the lowest energy eigenstates of the single particle model , @xmath68 are localised with exponential tails @xmath69 ; see , for example , the review @xcite .
therefore the first order correlation function in eq .
( [ eq : g1_def ] ) evaluated with respect to these states gives a finite coherence length with the same exponential tail , @xmath70 where , in general , the localisation length @xmath71 is not equal to the correlation length of the random potential @xmath31 .
here we do not perform a more detailed study of the features that are specific to the disorder in the polariton systems , e.g. @xcite , but only use the exponential tails of the correlation function as a characteristic feature of the localisation mechanism .
the mermin - wagner theorem @xcite states that long range order in a 1d system is absent for any finite - range exchange interaction , which results in large long - range quantum fluctuations . in this section
we discuss the manifestation of this general statement in the specific system of interacting 1d polaritons .
the exciton component of a polariton has zero charge but a finite dipole moment . neglecting the photonic non - linearity , the interaction between two polaritons has a dipole - dipole nature and is thus short range .
this interaction limits temporal coherence of the polariton condensate @xcite .
the spatial coherence , within the model of density - density interaction with a delta - function interaction potential , was analysed in detail in @xcite .
it was shown that the coherence length is finite which is manifested by the exponential tails of the coherence function , @xmath72 in accordance with the general mermin - wagner theorem @xcite that forbids an infinite range order in the 1d systems .
it was also shown in @xcite that the coherence length decreases with increasing interaction constant , @xmath73 .
for a typical gaalas microcavity the estimate of @xmath74 was given as a few hundred micrometres in an optimal regime .
note that the functional forms of the coherence function due to the polariton interactions , eq .
( [ eq : g1_interactions ] ) , and due to disorder , eq .
( [ eq : g1_disorder ] ) , coincide in the tails .
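the distinction exploited when analysing the data is simple to state : on a logarithmic scale an exponential tail is a straight line , while a gaussian tail has a constant negative curvature . a minimal sketch ( unit decay constants , purely illustrative ) :

```python
import numpy as np

# two model tails on x > 0: gaussian (finite-size / mode-occupation mechanism)
# and exponential (localisation or interaction mechanisms)
x = np.linspace(0.5, 5.0, 451)     # stay away from the maximum at x = 0
log_gauss = -x**2                  # log of exp(-x^2)
log_expon = -x                     # log of exp(-x)

# discrete curvature of the log-tail: zero for an exponential,
# constant and strictly negative for a gaussian
curv_gauss = np.diff(log_gauss, 2)
curv_expon = np.diff(log_expon, 2)
```

in a log plot the exponential tails thus appear linear , whereas the gaussian tails are super - linear , which is the diagnostic applied to the data below .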
coherence function @xmath75 measured along 1d wire on the logarithmic scale .
the thin dashed line marks the noise floor and the thick dashed line is an exponential function . ]
coherence function @xmath75 measured along 1d wire on the normal scale .
thick line is the numerical fit using eq .
( [ eq : g1_modes ] ) with @xmath62 as the fitting parameters .
thin dashed line is the plot of eq .
( [ eq : g1_modes ] ) with @xmath62 given by the full triangles in fig .
[ fig : occupation_numbers ] . ]
occupation numbers @xmath62 in eq .
( [ eq : g1_modes ] ) to fit the experimentally measured @xmath75 : full ellipses - result of a numerical fit ( thick solid line in fig .
[ fig : modes_fit ] ) , full triangle - a set of @xmath62 chosen by hand to occupy approximately @xmath76 modes ( thin dashed line in fig . [
fig : modes_fit ] ) . ] here we analyse experimental data @xcite on the coherence of polariton condensate along 1d wires . in this experiment
the 2d spot of the pumping laser defines the system size and , therefore , limits the coherence length of the polaritons .
then the microwave radiation is applied to confine the polaritons to a wire - like geometry .
it was observed that the coherence length in the direction perpendicular to the wavefront was reduced from the system size to the wavelength of the surface acoustic wave but , contrary to expectations , the coherence length in the unconstrained direction , parallel to the wavefront , was also reduced .
we start from the qualitative analysis of the shape of the measured coherence function , @xmath75 , to identify the main mechanism that reduces the coherence length along the wires .
the data on @xmath75 is presented in fig .
[ fig : g1_log ] on a logarithmic scale .
the thick dashed line marks an exponential function , which would correspond to the anderson localisation or polariton - polariton interaction mechanisms from sections 4 and 5 , and serves as a guide for the eye . investigating both tails simultaneously , starting from the noise floor marked by the thin dashed line in fig .
[ fig : g1_log ] , we find that the measured function has a super - linear character away from the maximum point @xcite .
thus we conclude that the main mechanism is the non - equilibrium occupation of a set of the finite momentum modes from section 3 .
having identified the mechanism we perform a more quantitative fitting .
we extract the 2d spot size from the data on @xmath75 which was measured without the surface acoustic wave @xcite .
a gaussian fit gives the system size @xmath77 .
then we fit the data on @xmath75 , which was measured with the surface acoustic wave , by the correlation function from eq .
( [ eq : g1_modes ] ) using @xmath62 as the fitting parameters .
the gradient descent method gives the thick full line in fig .
[ fig : modes_fit ] .
the set of @xmath62 for the thick line , which is presented by filled ellipses in fig .
[ fig : occupation_numbers ] , shows that approximately ten modes are occupied .
this fitting of a single function with many parameters is not unique .
another set of @xmath62 that is chosen ad hoc to occupy approximately the same number of modes , triangles in fig .
[ fig : occupation_numbers ] , also fits the experimental data satisfactorily , thin dashed line in fig .
[ fig : modes_fit ] .
it is not possible to extract the set of @xmath62 from the data on @xmath75 uniquely , but the characteristic number of occupied modes can be determined .
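as an illustration of such a mode - weight fit , a toy version ( python ; assuming the symmetric - point form of the mode sum , noise - free synthetic data , and linear least squares instead of the gradient descent used above ) recovers a known set of weights :

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def psi(n, x, l=1.0):
    """Normalised oscillator eigenfunction H_n(x/l) exp(-x^2/2l^2)."""
    c = np.zeros(n + 1); c[n] = 1.0
    x = np.asarray(x, dtype=float)
    return hermval(x / l, c) * np.exp(-x**2 / (2 * l**2)) \
        / sqrt(2.0**n * factorial(n) * l * sqrt(pi))

n_modes = 6
dx = np.linspace(-6.0, 6.0, 601)
# basis functions f_n(dx) = psi_n(dx/2) psi_n(-dx/2) of the assumed mode sum
basis = np.stack([psi(n, dx / 2) * psi(n, -dx / 2) for n in range(n_modes)],
                 axis=1)

w_true = np.array([1.0, 0.9, 0.8, 0.5, 0.3, 0.1])  # synthetic occupations
signal = basis @ w_true                 # noise-free synthetic (unnormalised) g1

# the unnormalised model is linear in the weights, so ordinary least squares
# suffices for this toy problem
w_fit, *_ = np.linalg.lstsq(basis, signal, rcond=None)
```

with clean synthetic data the recovery is unique ; the ambiguity discussed above arises because , within the experimental noise , different weight sets produce nearly indistinguishable @xmath75 .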
we also analyse the distribution of @xmath62 at thermal equilibrium .
bose - einstein distribution , @xmath78 where @xmath24 is the single particle spectrum of a harmonic oscillator from eq .
( [ eq : modes_spectrum ] ) and @xmath79 is the inverse temperature , at low temperatures gives the distribution of the occupation numbers as @xmath80 . it does not fit the experimental data satisfactorily .
thus we can conclude that polaritons are not in thermal equilibrium .
this reflects the strongly out - of - equilibrium nature of the constantly pumped polariton system .
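the low temperature behaviour invoked here can be sketched directly ( python , with an illustrative level spacing of two in units of the temperature , and the chemical potential pinned at the n = 0 condensate level ) : the thermal occupations of the excited modes decay geometrically , in contrast with the roughly flat set of about ten modes found in the fit .

```python
from math import exp

# bose-einstein occupations for a linear spectrum E_n = n * delta,
# excited modes n >= 1 (the n = 0 level hosts the condensate)
beta_delta = 2.0                  # level spacing over k_B T (illustrative)
occ = [1.0 / (exp(beta_delta * n) - 1.0) for n in range(1, 6)]
ratios = [occ[i + 1] / occ[i] for i in range(len(occ) - 1)]
# at low temperature the successive ratios approach exp(-beta_delta),
# i.e. a geometric fall-off rather than a flat distribution
```

any attempt to fit approximately ten equally occupied modes with such a geometric distribution fails , which is the basis of the non - equilibrium conclusion above .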
we have analysed different mechanisms that can limit the spatial coherence of a polaritonic condensate in 1d wires formed by the surface acoustic wave , and have compared their predictions with the shape of the first order coherence function @xmath0 measured in @xcite . from a qualitative analysis of the experimental data we have found that the main mechanism is the non - equilibrium occupation of a set of finite momentum modes . of the three effects that we considered , anderson localisation and the polariton - polariton interaction give exponential tails of @xmath0 , while the finite momentum mode occupation gives a gaussian shape . in the experimental data @xcite the shape is gaussian , up to the noise floor . performing a more quantitative fit , using the occupation numbers as the fitting parameters , we have found that the number of excited modes in the experiment @xcite is @xmath4 . from the analysis in the present paper
it is advised to use a static method of the polariton confinement to extend the spatial coherence in the 1d geometry .
the finite momentum modes are most probably excited due to the dynamical nature of the acoustic lattice , which interacts with the zero momentum polaritons , populated indirectly in the opo setup , and scatters them to finite momenta .
therefore , for example , etching the whole planar structure to form a wire will remove the main source of dephasing that currently limits the coherence in the spatial domain .
such structures were recently produced @xcite , and coherent propagation over long distances of tens of micrometres in these structures was reported @xcite .
the acoustic - lattice - induced regime of a 1d condensate , which was identified in section 6 , opens a new way to study distinct non - equilibrium properties of the polaritons .
a better insight can be obtained in more complex frameworks , such as @xcite , that can capture finer details and can provide a further understanding of the non - equilibrium processes in microcavities .
we thank m. s. skolnick and d. n. krizhanovskii for discussions and for their experimental data on the spatially resolved @xmath75 function which was communicated to us .
this work was supported by the epsrc programme grant ep / g001642/1 .
a. p. d. love , d. n. krizhanovskii , d. m. whittaker , r. bouchekioua , d. sanvitto , s. al rizeiqi , r. bradley , m. s. skolnick , p. r. eastham , r. andre , and le si dang , phys . rev . lett . * 101 * , 067404 ( 2008 ) .
j. kasprzak , m. richard , s. kundermann , a. baas , p. jeambrun , j. m. j. keeling , f. m. marchetti , m. h. szymanska , r. andre , j. l. staehli , v. savona , p. b. littlewood , b. deveaud , and le si dang , nature ( london ) * 443 * , 409 ( 2006 ) .
note that noticeable skewness in the measured @xmath81 in figs .
2 and 3 is a spurious artefact of the measurement .
the conclusion about the super - exponential character of the tails was drawn after a symmetrisation .
several mechanisms are discussed which could determine the spatial coherence of a polariton condensate confined to a one dimensional wire .
the mechanisms considered are polariton - polariton interactions , disorder scattering and non - equilibrium occupation of finite momentum modes . for each case
, the shape of the resulting spatial coherence function @xmath0 is analysed .
the results are compared with the experimental data on a polariton condensate in an acoustic lattice from [ e. a. cerda - mendez _ et al _ , phys . rev . lett . * 105 * , 116402 ( 2010 ) ] .
it is concluded that the shape of @xmath0 can only be explained by non - equilibrium effects , and that @xmath1 modes are occupied in the experimental system . |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Do-Not-Track Online Act of 2011''.
SEC. 2. REGULATIONS RELATING TO ``DO-NOT-TRACK'' MECHANISMS.
(a) In General.--Not later than 1 year after the date of the
enactment of this Act, the Federal Trade Commission shall promulgate--
(1) regulations that establish standards for the
implementation of a mechanism by which an individual can simply
and easily indicate whether the individual prefers to have
personal information collected by providers of online services,
including by providers of mobile applications and services; and
(2) rules that prohibit, except as provided in subsection
(b), such providers from collecting personal information on
individuals who have expressed, via a mechanism that meets the
standards promulgated under paragraph (1), a preference not to
have such information collected.
(b) Exception.--The rules promulgated under paragraph (2) of
subsection (a) shall allow for the collection and use of personal
information on an individual described in such paragraph,
notwithstanding the expressed preference of the individual via a
mechanism that meets the standards promulgated under paragraph (1) of
such subsection, to the extent--
(1) necessary to provide a service requested by the
individual, including with respect to such service, basic
functionality and effectiveness, so long as such information is
anonymized or deleted upon the provision of such service; or
(2) the individual--
(A) receives clear, conspicuous, and accurate
notice on the collection and use of such information;
and
(B) affirmatively consents to such collection and
use.
(c) Factors.--In promulgating standards and rules under subsection
(a), the Federal Trade Commission shall consider and take into account
the following:
(1) The appropriate scope of such standards and rules,
including the conduct to which such rules shall apply and the
persons required to comply with such rules.
(2) The technical feasibility and costs of--
(A) implementing mechanisms that would meet such
standards; and
(B) complying with such rules.
(3) Mechanisms that--
(A) have been developed or used before the date of
the enactment of this Act; and
(B) are for individuals to indicate simply and
easily whether the individuals prefer to have personal
information collected by providers of online services,
including by providers of mobile applications and
services.
(4) How mechanisms that meet such standards should be
publicized and offered to individuals.
(5) Whether and how information can be collected and used
on an anonymous basis so that the information--
(A) cannot be reasonably linked or identified with
a person or device, both on its own and in combination
with other information; and
(B) does not qualify as personal information
subject to the rules promulgated under subsection
(a)(2).
(6) The standards under which personal information may be
collected and used, subject to the anonymization or deletion
requirements of subsection (b)(1)--
(A) to fulfill the basic functionality and
effectiveness of an online service, including a mobile
application or service;
(B) to provide the content or services requested by
individuals who have otherwise expressed, via a
mechanism that meets the standards promulgated under
subsection (a)(1), a preference not to have personal
information collected; and
(C) for such other purposes as the Commission
determines substantially facilitates the functionality
and effectiveness of the online service, or mobile
application or service, in a manner that does not
undermine an individual's preference, expressed via
such mechanism, not to collect such information.
(d) Rulemaking.--The Federal Trade Commission shall promulgate the
standards and rules required by subsection (a) in accordance with
section 553 of title 5, United States Code.
SEC. 3. ENFORCEMENT OF ``DO-NOT-TRACK'' MECHANISMS.
(a) Enforcement by Federal Trade Commission.--
(1) Unfair or deceptive acts or practices.--A violation of
a rule promulgated under section 2(a)(2) shall be treated as an
unfair and deceptive act or practice in violation of a
regulation under section 18(a)(1)(B) of the Federal Trade
Commission Act (15 U.S.C. 57a(a)(1)(B)) regarding unfair or
deceptive acts or practices.
(2) Powers of commission.--
(A) In general.--Except as provided in subparagraph
(C), the Federal Trade Commission shall enforce this
Act in the same manner, by the same means, and with the
same jurisdiction, powers, and duties as though all
applicable terms and provisions of the Federal Trade
Commission Act (15 U.S.C. 41 et seq.) were incorporated
into and made a part of this Act.
(B) Privileges and immunities.--Except as provided
in subparagraph (C), any person who violates this Act
shall be subject to the penalties and entitled to the
privileges and immunities provided in the Federal Trade
Commission Act (15 U.S.C. 41 et seq.).
(C) Nonprofit organizations.--The Federal Trade
Commission shall enforce this Act with respect to an
organization that is not organized to carry on business
for its own profit or that of its members as if such
organization were a person over which the Commission
has authority pursuant to section 5(a)(2) of the
Federal Trade Commission Act (15 U.S.C. 45(a)(2)).
(b) Enforcement by States.--
(1) In general.--In any case in which the attorney general
of a State has reason to believe that an interest of the
residents of the State has been or is threatened or adversely
affected by the engagement of any person subject to a rule
promulgated under section 2(a)(2) in a practice that violates
the rule, the attorney general of the State may, as parens
patriae, bring a civil action on behalf of the residents of the
State in an appropriate district court of the United States--
(A) to enjoin further violation of such rule by
such person;
(B) to compel compliance with such rule;
(C) to obtain damages, restitution, or other
compensation on behalf of such residents;
(D) to obtain such other relief as the court
considers appropriate; or
(E) to obtain civil penalties in the amount
determined under paragraph (2).
(2) Civil penalties.--
(A) Calculation.--Subject to subparagraph (B), for
purposes of imposing a civil penalty under paragraph
(1)(E) with respect to a person that violates a rule
promulgated under section 2(a)(2), the amount
determined under this paragraph is the amount
calculated by multiplying the number of days that the
person is not in compliance with the rule by an amount
not greater than $16,000.
(B) Maximum total liability.--The total amount of
civil penalties that may be imposed with respect to a
person that violates a rule promulgated under section
2(a)(2) shall not exceed $15,000,000 for all civil
actions brought against such person under paragraph (1)
for such violation.
(C) Adjustment for inflation.--Beginning on the
date on which the Bureau of Labor Statistics first
publishes the Consumer Price Index after the date that
is 1 year after the date of the enactment of this Act,
and annually thereafter, the amounts specified in
subparagraphs (A) and (B) shall be increased by the
percentage increase in the Consumer Price Index
published on that date from the Consumer Price Index
published the previous year.
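The penalty arithmetic of subparagraphs (A) and (B) can be sketched as follows (illustrative only; the statute sets $16,000 as a per-day ceiling rather than a fixed amount, and the inflation adjustment of subparagraph (C) is omitted):

```python
def civil_penalty(days_noncompliant, per_day=16_000, cap=15_000_000):
    """Maximum civil penalty under sec. 3(b)(2): days of noncompliance
    multiplied by up to $16,000 per day, with total liability across all
    actions capped at $15,000,000."""
    return min(days_noncompliant * per_day, cap)
```

Under this reading, the total cap is reached after roughly 938 days of noncompliance at the maximum per-day amount.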
(3) Rights of federal trade commission.--
(A) Notice to federal trade commission.--
(i) In general.--Except as provided in
clause (iii), the attorney general of a State
shall notify the Federal Trade Commission in
writing that the attorney general intends to
bring a civil action under paragraph (1) before
initiating the civil action.
(ii) Contents.--The notification required
by clause (i) with respect to a civil action
shall include a copy of the complaint to be
filed to initiate the civil action.
(iii) Exception.--If it is not feasible for
the attorney general of a State to provide the
notification required by clause (i) before
initiating a civil action under paragraph (1),
the attorney general shall notify the Federal
Trade Commission immediately upon instituting
the civil action.
(B) Intervention by federal trade commission.--The
Federal Trade Commission may--
(i) intervene in any civil action brought
by the attorney general of a State under
paragraph (1); and
(ii) upon intervening--
(I) be heard on all matters arising
in the civil action; and
(II) file petitions for appeal of a
decision in the civil action.
(4) Investigatory powers.--Nothing in this subsection may
be construed to prevent the attorney general of a State from
exercising the powers conferred on the attorney general by the
laws of the State to conduct investigations, to administer
oaths or affirmations, or to compel the attendance of witnesses
or the production of documentary or other evidence.
(5) Preemptive action by federal trade commission.--If the
Federal Trade Commission institutes a civil action or an
administrative action with respect to a violation of a rule
promulgated under section 2(a)(2), the attorney general of a
State may not, during the pendency of such action, bring a
civil action under paragraph (1) against any defendant named in
the complaint of the Commission for the violation with respect
to which the Commission instituted such action.
(6) Venue; service of process.--
(A) Venue.--Any action brought under paragraph (1)
may be brought in--
(i) the district court of the United States
that meets applicable requirements relating to
venue under section 1391 of title 28, United
States Code; or
(ii) another court of competent
jurisdiction.
(B) Service of process.--In an action brought under
paragraph (1), process may be served in any district in
which the defendant--
(i) is an inhabitant; or
(ii) may be found.
(7) Actions by other state officials.--
(A) In general.--In addition to civil actions
brought by attorneys general under paragraph (1), any
other officer of a State who is authorized by the State
to do so may bring a civil action under paragraph (1),
subject to the same requirements and limitations that
apply under this subsection to civil actions brought by
attorneys general.
(B) Savings provision.--Nothing in this subsection
may be construed to prohibit an authorized official of
a State from initiating or continuing any proceeding in
a court of the State for a violation of any civil or
criminal law of the State.
SEC. 4. BIENNIAL REVIEW AND ASSESSMENT.
Not later than 2 years after the effective date of the regulations
initially promulgated under section 2, the Federal Trade Commission
shall--
(1) review the implementation of this Act;
(2) assess the effectiveness of such regulations, including
how such regulations define or interpret the term ``personal
information'' as such term is used in section 2;
(3) assess the effect of such regulations on online
commerce; and
(4) submit to Congress a report on the results of the
review and assessments required by this section.
Do-Not-Track Online Act of 2011 - Requires the Federal Trade Commission (FTC) to promulgate: (1) regulations that establish standards for the implementation of a mechanism by which an individual can indicate whether he or she prefers to have personal information collected by providers of online services, including by providers of mobile applications and services; and (2) rules that prohibit such providers from collecting personal information on individuals who have expressed a preference not to have such information collected.
Requires such rules to allow for the collection and use of personal information if: (1) the information is necessary to provide a service requested by the individual so long as identifying particulars are removed or the information is deleted upon the provision of such service; or (2) the individual receives clear, conspicuous, and accurate notice on, and consents to, such collection and use.
Provides for FTC and state enforcement of such rules and regulations. |
we present a 27-month - old male infant with pseudohypoaldosteronism , with two novel α-subunit epithelial sodium channel ( enac ) mutations . despite the presence of the enac in the lungs , kidneys , and exocrine glands , he continues to have only renal and exocrine involvement , stressing the differential effects of the mutation in each organ .
pseudohypoaldosteronism type 1 ( pha 1 ) presents in the first few weeks of life with life - threatening salt - wasting syndrome , hyperkalemia , and metabolic acidosis .
pha 1 includes both an autosomal dominant ( ad ) and a recessive form ( ar ) .
pha 1 is rare , with a reported incidence ranging from 1:47,000 to 1:80,000 newborns .
pha 1(ad ) is caused by mutations in the mineralocorticoid receptor encoded by the nr3c2 gene ( chr4:148999915 - 149363672 ; nm_000901 ) , with the salt - wasting syndrome involving only the renal tubules .
typically , the extent of salt wasting improves over time , with resolution of the disease [ 3 - 5 ] .
in contrast , pha 1(ar ) is caused by loss of function of the amiloride - sensitive epithelial sodium channel ( enac ) .
this is a highly conserved channel which is expressed in the kidneys , lungs , colon , salivary and sweat glands . in the kidneys ,
enac regulates sodium reabsorption and electrolyte balance , while in the lungs it regulates lung fluid .
the enac is a trimeric channel , consisting of 3 genetically unique subunits assembled as alpha ( α ) , beta ( β ) , and gamma ( γ ) ( encoded by the genes scnn1a , scnn1b , and scnn1g , respectively ) .
each subunit has two transmembrane segments , an extracellular loop , and an intracellular n- and c- terminus region .
the α-subunit has been described as the main sodium conductor , while the β- and γ-subunits are involved in channel flow regulation [ 6 - 9 ] .
in addition to severe salt wasting , affected patients may have symptoms very similar to cystic fibrosis ( chronic pulmonary symptoms , elevated sweat chloride secretion , and failure to thrive ) .
the availability of genetic testing has shown that regardless of which subunit is mutated , there appears to be variability of phenotypic presentation . here ,
we report a new case of pha 1(ar ) with 2 novel mutations and discuss its clinical and genetic characteristics and compare it to previously described cases .
a 27-month - old boy presented at 7 days of age with severe hyperkalemia , hyponatremia , and dehydration .
he was born via c - section due to premature rupture of membranes at 36 weeks , with a birth weight of 2.021 kg ( 3.5% ile ) , length of 40.6 cm ( 0.2% ile ) , and head circumference of 33 cm ( 55% ile ) .
the parents were nonconsanguineous , of caucasian descent , and the mother was gravida 4 para 1 ( the other 3 pregnancies ended in elective abortions ) . at birth ,
the baby had no respiratory difficulty , had a normal electrolyte panel ( na 141 mmol / l , k 5.1 mmol / l , hco3 22 mmol / l ) , and remained under routine care due to prematurity . at 1 week of age , he was noted to have mottling , jaundice , and difficulty feeding , which were associated with marked electrolyte disturbances ( na 123 mmol / l , k 9.0 mmol / l , hco3 17 mmol / l ) .
urine electrolyte analysis revealed na 69 mmol / l , k 2.6 mmol / l , and cl 42 mmol / l .
the patient 's sepsis work - up and metabolic newborn screening including cystic fibrosis were negative .
he was treated with an iv hyperkalemia cocktail consisting of sodium bicarbonate , calcium gluconate , insulin , and glucose .
further investigations revealed a cortisol level of 12.7 mcg / dl ( reference 2 - 11 mcg / dl ) , 17-hydroxyprogesterone 20 ng / dl ( 11 - 170 ng / dl ) , progesterone 3.13 ng / dl ( 7 - 52 ng / dl ) , plasma renin activity 93.96 ng / ml / h ( 2 - 35 ng / ml / h ) , and aldosterone 587.6 ng / dl ( < 217 ng / dl ) ( table 1 ) . a preliminary diagnosis of pseudohypoaldosteronism was then made . by day 12 of life , the patient stabilized clinically while receiving sodium supplementation at 8.5 mmol / kg / day . on day 25 of life , he then had recurrence of poor feeding and severe electrolyte imbalance ( na 126 mmol / l , k 10 mmol / l ) in addition to failure to thrive ( 2.085 kg ) .
the patient was put on the acute hyperkalemia cocktail and aggressive sodium and fluid resuscitation . due to a markedly increased urine output ( 16 cc / kg / h , with urine na 175 mmol / l and
urine k < 2 mmol / l ) , urine replacement with normal saline was set at 1:1 . during this time , the sodium intake was approximately 61 mmol / kg / day , resulting in normalization of his sodium levels .
the patient continued to tolerate oral feeds , and both hemodynamic and respiratory functions remained stable . once a new steady - state had been achieved , the patient 's daily sodium requirement remained at 42.5 mmol / kg / day ( total of 118 mmol na given orally , with 56 mmol from oral sodium chloride , 32 mmol from oral sodium citrate , and 32 mmol from oral sodium polystyrene ) .
for emergency access and ease of medication administration , a gastrostomy tube was placed . at 40 days of age , the patient developed tachypnea and lowered oxygen saturation requiring oxygen supplementation via nasal cannula .
he responded quickly to a small dose of hydrochlorothiazide . while the congestion may have been the result of the high salt and fluid intake , a systemic form of pha could not be excluded .
[ table 1 : laboratory findings in the patient . day 7 of life represents the age of presentation ; day 40 , the respiratory episode . ]
he was discharged at 3 months of age on an oral regimen of sodium chloride , sodium citrate , and sodium polystyrene ( total 17 mmol / kg / day ) .
he is currently 27 months old and has not required hospitalization despite occasional self - limited upper respiratory tract infections .
his dietary management of sodium and potassium has been complicated by the family 's preference of a vegan diet .
he is gaining weight well , although continues to be small for his age , and is attaining appropriate milestones .
due to abnormally elevated sweat chloride despite normal irt ( immunoreactive trypsinogen ) levels on state newborn screening for cystic fibrosis , cftr mutation analysis using the cftr - inplex 40-mutation panel ( hologic - third wave , madison , wi , usa ) was done .
no mutation was found in this limited panel offered as a second tier test by the state of kansas newborn screening laboratory .
in addition , the patient did not have cutaneous lesions ( such as miliaria rubra ) , nasal discharge , or gastrointestinal problems .
mutation analysis of scnn1a [ chr12:6456009 - 6484905 , nm_001038 ] , which encodes the α - enac subunit , revealed two potentially deleterious novel mutations : c.416g > a ( p.arg139lys ) and c.1360 + 1g > t .
since the mutations lie within exon / intron splice consensus sequences , they were predicted to destroy normal splicing at the exon 2 and exon 8 donor sites , respectively ( fig . 1 ) .
this was not confirmed via rt - pcr since we did not have access to the patient 's rna .
since the disease is autosomal recessive , neither parent was tested , as both were expected to be asymptomatic carriers .
analysis shows a missense mutation in exon 2 ( c.416g > a [ p.arg139lys ] , left panel ) and a mutation in exon 8 ( c.1360 + 1g > t , right panel ) , both of which are predicted to destroy splicing resulting in exon skipping .
mutated nucleotide is highlighted and its position is indicated at the bottom of the tracing which is shown in duplicate .
over the past decade , the number of patient reports on pha 1 ( ar ) has increased significantly , allowing a better understanding of the disorder . in 2005 ,
edelheit et al . reported 3 new patients , bringing the number of independent mutations in the coding regions of the enac subunits known worldwide to 22 .
abnormal phenotypes have been reported in all 3 subunits , ranging from mild to severe disease , but α - subunit involvement was most common .
most of the reported mutations were single - base deletions / insertions or splice site mutations , all of which were associated with severe phenotypes .
numerous other novel mutations have since been reported in the literature , with varying disease severity . in our patient , two novel splice site mutations in scnn1a were identified , both of which are predicted to cause exon skipping and a shorter , unstable transcript . as expected , while the patient had an abnormal sweat chloride test , cftr mutation analysis was negative , excluding cystic fibrosis as a possible explanation . in addition , his newborn screen for cystic fibrosis using irt was normal .
this illustrates the fact that patients with pha 1(ar ) are not expected to be identified via abnormal newborn screening for cystic fibrosis .
full sequencing of the cftr gene was not performed , but this was not felt to be necessary , as the gold standard for the diagnosis of cf remains a compatible clinical picture plus an abnormal sweat chloride test . the clinical presentation ( electrolyte abnormalities ) ,
negative newborn screening ( irt ) test as well negative cftr mutation panel strongly argue against cf as a diagnosis .
the disease severity has been defined in the literature as requiring frequent hospitalizations , persistent critical salt wasting and potentially life - threatening hyperkalemia , significant respiratory dysfunction and growth failure .
the respiratory dysfunction is due to the inability to absorb airway fluid properly , with resulting increased airway liquid volume , and impaired mucociliary function .
this results in a high incidence of lower respiratory tract involvement , with repeated episodes of coughing , cyanosis , dyspnea , wheezing , and fever , similar to cystic fibrosis . based on our experience and review of the literature
, we propose to define severity based on the presence of sustained pulmonary disease and electrolyte imbalance , as the outcome and long - term management can be drastically different .
our patient had a severe initial presentation ; however , it was predominantly an electrolyte issue .
while still continuously needing sodium supplementation ( chloride , citrate , and polystyrene ) , he did not need medication adjustments for almost 2 years .
the patient has had a few mild upper respiratory tract infections from which he recovered fully within a few days and without hospitalization , typical of any healthy infant .
absence of significant pulmonary involvement in a patient with severe salt wasting illustrates the complexity of enac .
for example , α - subunit knockout mice die within 40 h of birth secondary to respiratory distress .
however , transgenic manipulation of the α - subunit in these knockout mice resulted in low levels of enac that were sufficient to avoid pulmonary demise ; yet severe salt wasting persisted .
clearly , while enac is a major determinant of fluid and electrolyte balance in both lungs and kidneys , the presence of other mechanisms unique to each organ may alter the effect of a mutation . to go one step further in evaluating the broad phenotypic spectrum of pha 1(ar ) , the experience with spontaneous clinical amelioration has been described .
hanukoglu described a patient with a missense mutation who had clinical improvement after infancy , with smaller sodium chloride supplementation requirements after 9 years of age .
another report described a preterm infant with a homozygous missense enac mutation and a salt - wasting syndrome which resolved completely by 6 months of age .
interestingly , that patient 's sibling , born at term with the same mutation was completely asymptomatic .
a possible important compensatory mechanism responsible for the clinical improvement was demonstrated in a patient with mild ar - pha , where investigators found an increased expression of the thiazide - sensitive transporter ( ncc ) , leading to increased absorption of sodium in upstream nephrons .
overall , our experience , in addition to evidence assessed in the literature , indicates a dichotomy of disease severity and effect between renal and pulmonary manifestations .
while pha1 ( ar ) is typically associated with multiorgan involvement , this may not be the case in every patient , and the lack of pulmonary involvement certainly improves long - term prognosis .
reporting of additional patients with pha1 ( ar , ad ) and long - term follow up , in vitro studies of various enac subunit mutations and the identification of other genetic modifiers will likely help us better understand this potentially life - threatening disease and improve its management .
multiorgan involvement should always be considered in infants presenting with type 1 pha , and a positive sweat test supports this ar form .
key clinical message : we present a 27-month - old male infant with pseudohypoaldosteronism due to two novel α - subunit epithelial sodium channel ( enac ) mutations . despite the presence of enac in the lungs , kidneys , and exocrine glands , he continues to have only renal and exocrine involvement , stressing the differential effects of the mutation in each organ .
the problem of the collision of two shockwaves in four dimensions ( with lorentz signature ) that result from boosting two black holes to the speed of light is well known and has been extensively studied in the literature .
references @xcite provide only a subset of the work dedicated to this particular problem ( for some interesting recent developments see @xcite ) . in particular , @xcite deals with this problem perturbatively , and in a series of three papers the authors compute the metric and derive a formula for the gravitational radiation .
the collision is axisymmetric , while the bulk matter that creates the shockwaves as well as the back - reaction effects are not taken into account .
recently , the high energy physics community has also expressed special interest in the topic of shockwave collisions in the framework of general relativity @xcite through the anti - de sitter space / conformal field theory correspondence ( ads / cft ) @xcite .
the ads / cft duality allows one to formulate a process that is associated with non - abelian gauge theories at strong coupling as a purely gravitational problem . as a result ,
one may map the problem of heavy ion collisions ( in four dimensions ) onto shockwave collisions in five dimensional gravity .
recently , we have been involved in such problems @xcite in gauge theories at strong coupling whose dual five dimensional gravitational description exhibits characteristics similar to those of shockwave collisions in ( ordinary ) four dimensional gravity .
our goal in this work is to apply our earlier experience @xcite , and especially the technique we developed in @xcite , to ( ordinary gravity in ) four dimensions .
we choose to work in a different coordinate system than @xcite in order to retain the geometrical insight of the collision and in addition we take into account the matter responsible for the creation of the shockwaves and the back - reaction effects as well .
furthermore , the collision we consider here is not axisymmetric but involves a non - zero impact parameter .
we organize the paper as follows . in chapter
[ sup ] we state the problem we want to solve and construct the main set up .
our goal is to determine the evolution of the geometry assuming that we know it in some time interval ( negative times ) .
the initial geometry is given by two shockwaves which correspond to a non - zero stress - energy tensor . [ figure caption : the two shockwaves move with the speed of light and begin to interact for positive times . ] our method is to construct a perturbative approach by expanding the metric around the background given by the flat metric along with the two shockwaves .
equation ( [ s12 ] ) shows the form of the metric at all times while figure [ interaction ] offers a diagrammatical intuition of the terms of the metric we attempt to calculate in this project . in chapter [ b2b ]
we take into account the interaction of the one particle with the gravitational field created from the other and vice versa . in terms of feynman diagrams , loosely speaking , these corrections correspond to the diagrams of figure [ selfint4d ] .
the corrections to the stress - energy tensor corresponding to these diagrams , along with the corrections of the metric tensor corresponding to the diagram of figure [ interaction ] , form a consistent set of the corrections that have to be taken into account .
we verify that the modified stress - energy tensor is conserved ( to the order of the expansion at which we are working ) and we find that it is traceless .
this last condition results in some pleasing simplifications of einstein 's equations ( compare ( [ ein ] ) with ( [ ee ] ) ) .
chapter [ fe ] deals with the field equations and the specification of the gauge .
our attempt is to perform a perturbative calculation about a metric that looks almost flat but contains two shockwaves moving opposite to each other and colliding ( see ( [ s12 ] ) ) .
the shockwaves provide an effective stress - energy tensor in addition to the ( actual ) stress - energy tensor ( see ( [ rmn2gtt ] ) ) that creates the two shockwaves ; these terms correspond to products of the form @xmath5 and @xmath6 of equation ( [ deq1 ] ) respectively .
a suitable gauge choice simplifies the field equations ( [ deq1 ] ) to equations ( [ deq ] ) which are solved in the next chapter . in chapter [ seq ]
we specify the boundary conditions of the field equations ( [ deq ] ) and the corresponding green s function .
in particular , we seek causal solutions , and therefore the green 's function associated to the differential operator ( [ box ] ) is the retarded green 's function .
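since the explicit formulas are rendered as placeholders here , we recall the standard retarded green 's function of the flat four dimensional d'alembertian , which is presumably what the operator of ( [ box ] ) reduces to ( a sketch , up to sign and normalization conventions ) :

```latex
\Box\, G_R(x,x') = \delta^{(4)}(x-x') , \qquad
G_R(t,\vec r\,;t',\vec r\,') \propto
\frac{\delta\big(t-t'-|\vec r-\vec r\,'|\big)}{4\pi\,|\vec r-\vec r\,'|}\;\theta(t-t') ,
```

the support on the past light cone being what enforces the causal evolution described below .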
we show how the integrations on the light - cone and the transverse plane may be performed , deferring some of the intermediate steps to appendices [ a ] , [ b ] and [ c ] .
eventually we derive a formula for the metric tensor , equation ( [ gmn2 ] ) , to the order we are working .
this is our final result and it generally has the structure ( [ gmn ] ) .
finally , in chapter [ conc ] we discuss the region of validity of our solution and summarize our conclusions . in particular , we argue that the presence of matter and the back - reaction effects may not be ignored , as they result in an important contribution to the metric .
we also see that as the impact parameter tends to zero , the metric diverges logarithmically .
it is our belief that this is a signal that a classical approach to the problem stops being valid and that a quantum description is required .
lastly , we discuss the general form of the metric . although its evolution is constrained by causality , as expected , this evolution takes place in an intuitive way : at a given ( proper ) time , any arbitrary point on the transverse plane evolves according to whether the signal from the center of one or the other shockwave , or both , has had enough time to reach the point under consideration .
we had encountered such a behavior in an analogous set up in @xcite , although the geometry there was anti - de sitter geometry in five dimensions . in @xcite
we claimed that a similar evolution of the metric should also be observed in four dimensions ; in this work we verify our conjecture .
we begin by defining the coordinate system ( gauge ) in which we work .
we choose to work in light - cone coordinates defined by @xmath7 where @xmath8 is the time axis and @xmath9 cover @xmath10 .
the convention for the flat metric that we use is @xmath11 . we suppose that we have a black hole metric that we boost to the speed of light along a given direction ( the @xmath12 direction ) . the metric is known @xcite and is given by @xmath13 where @xmath14
according to equation ( [ ds1 ] ) we denote the transverse flat metric by @xmath15 while @xmath16 denotes the delta ( dirac ) function .
the parameter @xmath17 has dimensions of length and its physical meaning will become apparent in what follows , while @xmath18 serves as an ultraviolet cutoff whose physical meaning is discussed in the conclusions ( see section [ scon ] ) .
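for orientation ( the explicit expression above is hidden behind placeholders ) , a shockwave obtained by boosting a black hole to the speed of light is conventionally written in the aichelburg - sexl form ; a sketch , up to normalization conventions , with @xmath58 and @xmath46 standing in for the two length parameters just mentioned :

```latex
ds^2 = -2\,dx^+ dx^- + dx_\perp^2
\;-\; 2\mu\,\ln\!\frac{x_\perp^2}{b^2}\;\delta(x^-)\,(dx^-)^2 ,
```

where the logarithmic transverse profile is precisely what produces a two dimensional delta function source upon acting with the transverse laplacian .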
one may check directly whether ( [ ds1 ] ) solves einstein 's equations , which may be cast as @xmath19 , where @xmath20 is the ricci tensor , @xmath6 the stress - energy tensor and @xmath21 newton 's constant ( in four dimensions ) . direct substitution of ( [ ds1 ] ) into the formula that computes @xmath22 results in @xmath23 , where @xmath24 is a kronecker delta .
this implies that all components of @xmath22 are zero except for @xmath25 . in order to arrive at equation ( [ rmn ] )
we had to evaluate the following linear differential expression @xmath26 , where @xmath27 is the laplace operator in two dimensions , and we used the identity @xmath28 . equations ( [ ein ] ) and ( [ rmn ] ) imply that the metric tensor of equation ( [ ds1 ] ) corresponds to a stress - energy tensor .
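the two dimensional identity invoked here is presumably of the form ∇⊥² ln |x⊥| = 2π δ⁽²⁾( x⊥ ) ; a quick numerical sanity check ( an illustration , not from the paper ) uses the divergence theorem : the flux of ∇ ln r through any circle around the origin must equal 2π :

```python
import math

# numerical check of the 2d identity  ∇² ln r = 2π δ²(r):
# by the divergence theorem, the flux of ∇ln r = (x, y)/r² through any
# circle enclosing the origin equals 2π, independent of the radius.
def flux_of_grad_log(radius, n=10000):
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * (k + 0.5) / n
        x, y = radius * math.cos(theta), radius * math.sin(theta)
        r2 = x * x + y * y
        gx, gy = x / r2, y / r2                    # components of ∇ln r
        nx, ny = math.cos(theta), math.sin(theta)  # outward unit normal
        total += (gx * nx + gy * ny) * (2 * math.pi * radius / n)
    return total

for radius in (0.1, 1.0, 50.0):
    print(radius, flux_of_grad_log(radius) / (2 * math.pi))  # ratio → 1.0
```

the flux is radius independent , which is the statement that the laplacian of the logarithm vanishes away from the origin and integrates to 2π over any neighborhood of it .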
shifting the origin along negative @xmath29 for distance @xmath30 we find that the stress - energy tensor is given by @xmath31 the presence of the superscript @xmath32 on @xmath33 of ( [ t1 ] ) is to highlight that it is of first order in the parameter @xmath17 .
( here i = 1 , 2 , assuming we have two sources . generally , the object @xmath34 contains the product @xmath35 , or some linear combination of differentiations / integrations of @xmath36 and @xmath37 with respect to their arguments @xmath38 . )
this equation implies that the shockwave of ( [ ds1 ] ) is a consequence of a point particle moving along the @xmath39 direction with the speed of light , and hence it is massless .
indeed , the ratio @xmath40 has dimensions of mass , as it should .
one may check that @xmath6 of ( [ t1 ] ) is covariantly conserved .
we may gain some insight into the physical system at hand if we use a diagrammatical approach . despite the fact that general relativity
is a non - linear theory , the metric ( [ ds1 ] ) satisfies a linear differential equation .
the fact that ( [ ds1 ] ) is an expression of a perturbation of a flat metric proportional to @xmath17 suggests the diagram of figure [ vertex ] .
[ figure caption : the gravitational field , which is measured at point @xmath38 . this is a very special case where a single graviton exchange between the source and the bulk happens to be an exact solution to the non - linear einstein 's equations . ]
it represents the measurement of the gravitational field at point @xmath41 which , loosely speaking , is created by a single graviton emission from the source ( the point - like stress - energy tensor ) of equation ( [ t1 ] ) with an effective coupling proportional to @xmath42 . in other words , for this special case of a single shockwave ,
the first order solution happens to be the exact solution to all orders .
having defined all the necessary ingredients we now proceed to the main part of the setup .
we want to superimpose two such shockwaves whose sources are two point - like distributions of matter moving towards each other in space - time .
we want to collide these shockwaves ( and as a result the corresponding stress - energy tensors as well ) at a non - zero impact parameter and hence study the problem within the classical theory of gravity .
therefore , @xmath6 has , in addition to ( [ t1 ] ) , the symmetric part @xmath43 which creates a second shockwave . in terms of space - time ,
this would correspond to " colliding " two metrics in an off - center process .
figure [ offcenter ] represents the four dimensional picture , right before the collision of the two shockwaves .
following @xcite , @xcite , the metric that describes the process should look like
[ figure caption : the two shockwaves move along the @xmath12 axis , each dragging a perpendicular gravitational field which is constant along the circular lines .
they collide at the origin , producing a gravitational field in the forward light cone .
our goal is to compute the " produced " metric and in particular @xmath44 . ]
[ figure [ interaction ] caption : correction of the metric . together with the diagram of figure [ selfint4d ] ( see chapter [ b2b ] ) , it represents the first non - trivial correction to ( [ s12 ] ) .
it shows how the two metrics , each of which looks like ( [ s1 ] ) , merge .
the gravitational field is measured at the point @xmath38 . ]
@xmath45 the first three terms correspond to the flat ( minkowski ) space .
the next two are of first order in @xmath17 and are created by the two point - like particles . these move ( initially ) towards each other along @xmath12 and they have an impact parameter @xmath46 along the @xmath29 axis as figure [ bten ] depicts .
as they are the sources of the two shockwaves , they correspond to two vertex diagrams that look like the one in figure [ vertex ] .
this is a superposition of two metrics with each one looking like ( [ s1 ] ) .
however , the non - linearities of the gravitational field require higher order terms . the second order corrections are explicitly displayed in ( [ s12 ] )
and they appear once the two shockwaves cross each other , in the forward light cone .
this is precisely the meaning of the @xmath47 - functions ; they emphasize that the metric ( [ s12 ] ) solves einstein 's equations exactly in the presence of both shockwaves only for negative @xmath48 .
the additional terms of the metric appear in the forward light cone only and describe the effects of collision .
the main work of this paper is to show how these terms may be calculated to order @xmath49 , that is , to find @xmath44 .
the second order correction in @xmath17 of @xmath1 corresponds to the diagram of figure [ interaction ] .
as has already been mentioned below ( [ t1 ] ) , @xmath50 is conserved in the gravitational field of ( [ ds1 ] ) , ( [ s1 ] ) .
in fact , conservation in this case happens to be valid to all orders in @xmath17 .
this is in accordance with our intuitive picture of figure [ vertex ] : gravity behaves linearly with respect to the metric ( [ ds1 ] ) .
[ figure caption : @xmath51 changes with time .
the first source interacts via the gravitational field created by the other and vice versa .
the point @xmath41 is the space - time point where @xmath1 is measured . ]
conservation to first order is still valid when we consider simultaneously @xmath50 and @xmath52 in the presence of the gravitational field ( [ s12 ] ) .
however , this is no longer true at the second order in @xmath17 .
the reason is that the @xmath53 ( @xmath33 ) source moves in the gravitational field of the @xmath36 ( @xmath54 ) shockwave , altering its initial trajectory .
figure [ bten ] outlines what happens , while figure [ selfint4d ] offers a diagrammatical intuition regarding the self - corrections to @xmath6 .
this implies that we should correct @xmath55 in order to preserve conservation ( of the total stress - energy tensor ) .
however , since we do not know the nature ( equation of state ) of @xmath6 we make the assumption that these objects interact only via gravitational forces .
since these particles are point - like and massless , they should travel along null geodesics ; as is rigorously shown in @xcite , conservation of the ( total ) @xmath56 is then guaranteed .
this suggests that we need @xmath57 , which gives the total stress - energy tensor of both point - like particles of mass @xmath58 , each moving along the trajectory @xmath59 parameterized by @xmath60 .
the quantity @xmath61 is the determinant of the ( total ) metric tensor , the factor @xmath62 is in agreement with our convention ( which reproduces ( [ t1 ] ) - see below ) , while the dots denote differentiation with respect to the parameter @xmath63 . before calculating higher order corrections to @xmath6 ,
, we find it instructive to check whether this formula reproduces ( [ t1 ] ) in the case of one particle ( @xmath64 ) .
the trajectory of this particle , which moves ultra - relativistically along negative @xmath12 , is parameterized by @xmath65 in light cone coordinates ; choosing to parameterize the trajectory by @xmath39 , equation ( [ tr1c ] ) then implies @xmath66 . direct substitution of ( [ tr1lc ] ) into ( [ tpp ] ) yields @xmath67 , which is exactly equation ( [ t1 ] ) ( for @xmath68 ) .
this computation also clarifies the convention of @xmath62 in ( [ tpp ] ) .
the part of @xmath6 of equation ( [ t2 ] ) due to the second particle is reproduced similarly .
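for reference , the generic point particle expression that @xmath57 presumably stands for is the following ( a sketch in standard notation ; the overall factor matches our convention only up to the @xmath62 discussed above ) :

```latex
T^{\mu\nu}(x) \;=\; \frac{m}{\sqrt{-g}} \sum_{i} \int d\lambda \;
\dot x_i^{\mu}(\lambda)\, \dot x_i^{\nu}(\lambda)\;
\delta^{(4)}\big(x - x_i(\lambda)\big) ,
```

which is covariantly conserved precisely when each trajectory @xmath59 is a geodesic .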
the next step is to calculate the next ( second ) order corrections ( in @xmath17 ) of @xmath6 which translates into finding the corrections to the trajectories @xmath69 .
these may be obtained from the geodesic equations . in particular , we only need the first order corrections to @xmath69 as we already have a power of @xmath17 in front of the summation operator ( see ( [ tpp ] ) ) .
the geodesic equations we need read @xmath70 and are interpreted as the motion of particle @xmath71 in the gravitational field of the particle @xmath72 ( due to @xmath73 where @xmath74 are the christoffel symbols ) and vice versa ; this is precisely the meaning of the subscripts @xmath71 and @xmath72 .
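in standard notation , the geodesic equations that @xmath70 presumably denotes read ( a sketch ) :

```latex
\ddot x^{\mu}_{(1)}
+ \Gamma^{\mu}_{\ \alpha\beta}\big[g_{(2)}\big]\,
\dot x^{\alpha}_{(1)}\, \dot x^{\beta}_{(1)} = 0 ,
```

i.e. the acceleration of particle 1 is driven by the christoffel symbols built from the shockwave of particle 2 , and vice versa .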
we begin with computing the corrections to the particle @xmath64 , whose ( first order ) perturbed trajectory looks like @xmath75 ( the subscript labels the particle @xmath64 for simplicity ) , where we have chosen to parameterize the trajectory with @xmath39 ( i.e. @xmath76 ) .
the superscript @xmath77 denotes the order in the expansion . taking into account ( [ tpp ] ) and the fact that @xmath78 is otherwise zero ,
we deduce that both of the terms @xmath79 and @xmath80 have to be of zeroth order ; i.e. the only choice is @xmath81 .
this implies that we need to determine , to first order in @xmath17 , the @xmath82 that arise from the second particle and which read @xmath83 . a few explanations about our notation are in order : the term @xmath37 is due to the second particle and is given in equation ( [ s12 ] ) .
the subscript @xmath84 on @xmath37 denotes ordinary differentiation of the source @xmath37 with respect to the coordinate @xmath38 .
according to ( [ s12 ] ) , the source @xmath37 is of first order in @xmath17 , and as a result the same applies to the @xmath74 's of ( [ gam ] ) ; it should by now be obvious that the christoffel symbols are due to the second particle ( @xmath37 ) .
the final step is to integrate ( [ geo ] ) using ( [ gam ] ) with causal boundary conditions and substitute the result in formula ( [ tpp ] ) .
as we are interested in second order corrections in @xmath17 , we immediately conclude that at least one of @xmath86 or @xmath79 should be of order zero , i.e. @xmath87 or @xmath88 or @xmath89 , @xmath90 .
we also note that @xmath91 , and hence , according to ( [ tpp ] ) , corrections from @xmath92 do not contribute to @xmath6 at @xmath93 .
we consider two cases . in the first case , the modification of @xmath6 of equation ( [ tpp ] ) to the order we are working is in the arguments of the delta functions .
we have @xmath98 ( the abbreviations used from here on , see also appendices [ a ] and [ b ] , imply the obvious : @xmath94 stands for @xmath95 , @xmath96 stands for @xmath97 , etc . ) .
expanding the delta functions to first order in the sources we obtain @xmath99 . we may cast the first order correction terms of the last equation in a compact form by expressing them in terms of @xmath36 and @xmath37 . using the identity
@xmath100 ( see ( [ dlog ] ) and ( [ s12 ] ) ) , the @xmath93 terms of ( [ ti2 ] ) take the form @xmath101 , where we restored the subscript @xmath102 and the superscript @xmath103 in order to highlight that this is the second order correction to @xmath6 of the first particle . in the second case ,
the modification of @xmath6 of equation ( [ tpp ] ) to the order we are working is in the factor @xmath104 . combining ( [ geo ] ) and ( [ gam ] ) one may compute @xmath105 . plugging this result into ( [ tpp ] ) and employing the identity ( [ nab1 ] ) to write the transverse delta functions in terms of @xmath36 yields [ tpii ] @xmath106 . the second order corrections @xmath107 to the stress - energy tensor of the first particle are given by equations ( [ ti3 ] ) and ( [ tpii ] ) .
the corrections @xmath108 of the second particle may be found analogously and therefore the second order corrections to the total stress energy tensor read [ tmn2 ] @xmath109 the first equality in equation ( [ t+- ] ) is not completely obvious and so we prove it below by considering @xmath110 where in the fourth equality we used the fact that the stress - energy tensor of the point particles ( see ( [ t1 ] ) , ( [ t2 ] ) and ( [ nab1 ] ) ) to first order in @xmath17 may take the form @xmath111 despite working mostly with ( [ tmn2 ] ) which is a compact expression , for concreteness , we write @xmath6 of ( [ tmn2 ] ) in terms of the coordinates in order to clarify its form . defining @xmath112 and employing ( [ nab1 ] ) in ( [ tmn2 ] )
we obtain [ tmnc ] @xmath113 \label{t++c}\\ & ( t_{+1})^{(2)}= \frac{\pi \mu^2}{4 \kappa_4 ^ 2 |b| } \theta(x^-)\delta(x^+)\delta^{(2)}(\vec{r_1 } ) \label{t+-1c}\\ & ( t_{+2})^{(2 ) } = 0 \label{t+2c } \ ] ] the asymmetry between @xmath114 and @xmath115 is due to the fact that the impact parameter @xmath116 has only @xmath29 component ( see ( [ s12 ] ) ) .
the remaining non - zero components that complete ( [ tmnc ] ) may be obtained using the discrete symmetries of the problem : @xmath117 and @xmath118 may be obtained from @xmath119 and @xmath114 respectively by interchanging @xmath120 and @xmath121 .
we want to justify our claim that the @xmath122 corrections to @xmath6 correspond to null geodesics . for this ,
we consider the line element @xmath123 of the second shockwave @xmath37 ( the analogue of ( [ ds1 ] ) ) . for time - like distances and for fixed transverse position ,
we have @xmath124 , and integrating over the discontinuity due to the trajectory of the first particle ( @xmath36 ) along the second shockwave @xmath54 , we deduce that @xmath125 . the superscript @xmath77 in this equation highlights the fact that the discontinuity along the shockwave @xmath36 is of first order in @xmath17 , and it implies that the trajectory of the second particle will be modified along @xmath126 by @xmath127 . but according to the argument of the first @xmath4 - function of ( [ ti1 ] ) , equation ( [ d+ ] ) is exactly equal to the shift along @xmath126 that we have already encountered from the geodesic analysis ( for the first particle ) .
this completes our argument .
the second order corrections to ( the total ) @xmath6 have already been calculated in the previous section .
one may check by a direct computation using ( [ nab^2 ] ) that this @xmath6 is covariantly conserved .
explicitly this means that @xmath128 , where @xmath129 denotes a covariant derivative , the superscripts denote the order in @xmath17 , and we have used the identity ( [ nab^2 ] ) .
therefore we conclude that @xmath6 is conserved if and only if the impact parameter is not zero .
( a vanishing impact parameter causes problems in the metric as well : as we will see , as @xmath130
the metric tensor diverges logarithmically ; see section [ scon ] . )
this is one of our main conclusions in this paper .
it is also useful to compute the trace of @xmath6 as it enters the field equations ( see ( [ ein ] ) ) .
a short computation yields @xmath131 , which shows that the stress - energy tensor is traceless to order @xmath49 .
tracelessness is very convenient as it simplifies einstein 's equations , which become @xmath132
in this section we wish to write an explicit form of ( [ ee ] ) up to order @xmath93 . in order to determine these ( differential ) equations we take into account that the zeroth order terms satisfy ( [ ee ] ) trivially as @xmath133 while @xmath134 ( resulting from the first order terms of ( [ s12 ] ) ) is compensated by @xmath55 of equations ( [ t1 ] ) and ( [ t2 ] ) .
thus , we only need @xmath135 where @xmath122 has already been calculated in the previous chapter and is given by ( [ tmn2 ] ) .
it is crucial to state that @xmath136 receives two different types of contributions : ( a ) the contribution due to the ( pre)existing shockwaves ( that is , due to the @xmath36 and @xmath37 terms of ( [ s12 ] ) ) ; we denote this contribution by @xmath137 .
( b ) the contribution due to the ( second order ) corrections ( in @xmath17 ) of the metric ( that is , due to @xmath138 ) ; we denote this contribution by @xmath139 . recalling equation ( [ s12 ] ) , which gives the form of the metric at all times , and expanding ( [ rmn2 ] ) to @xmath93 ,
we expect that it should have the form @xmath140 , where @xmath137 ( and @xmath122 ) is known , while @xmath139 is what we will use in order to determine @xmath138 ( see ( [ deq1 ] ) ) .
equation ( [ rmn2gt ] ) may also be cast in the form @xmath141 , viewing @xmath137 as an effective contribution to the total stress - energy tensor ( see ( [ deq1 ] ) and ( [ deq ] ) ) .
dropping the superscripts @xmath103 from @xmath44 and @xmath122 and the superscripts @xmath77 from @xmath85 for simplicity , we find that the components of ( [ ee ] ) to second order in @xmath17 read [ deq1 ] @xmath142=\kappa_4 ^ 2t_{++},\label{++1}\\ ( + -)\hspace{0.15 in } \frac{1}{4}\big [ & -2g_{+-,x^2x^2}-2g_{+-,x^1x^1}+2g_{+2,x^-x^2}+2g_{+1,x^-x^1}-2g_{++,x^-x^- } \notag\\ & + 2g_{-2,x^+x^2}+2g_{-1,x^+x^1}-2g_{11,x^+x^-}-2g_{22,x^+x^-}+4g_{+-,x^+x^-}\notag\\ & -2g_{--,x^+x^+}-2t_{1,x^1}t_{2,x^1}-2t_{1,x^2}t_{2,x^2}+t_{1,x^+}t_{2,x^-}\big]=\kappa_4 ^ 2t_{+-},\label{+-1}\\ ( + 1)\hspace{0.15 in } \frac{1}{4}\big [ & -2g_{+1,x^2x^2}+2g_{+2,x^+x^-}-2g_{++,x^-x^1}+2g_{12,x^+x^2}-2g_{22,x^+x^1 } \notag\\ & + 2g_{+-,x^+x^1}+2g_{+1,x^+x^-}-2g_{-1,x^+x^+}+t_{1,x^+}t_{2,x^1}\big]=\kappa_4 ^ 2t_{+1},\label{+11}\\ (
11)\hspace{0.15 in } \frac{1}{2}\big [ & -g_{11,x^2x^2}+2g_{12,x^1x^2}-g_{22,x^1x^1}+2g_{+-,x^1x^1}-2g_{+1,x^-x^1}-2g_{-1,x^+x^1}\notag\\ & + 2g_{11,x^+x^-}+t_{1,x^1}t_{2,x^1}+t_{1,x^1x^1}t_{2}+t_{1}t_{2,x^1x^1}\big]=k_{4}^{2 } t_{11}\label{111}=0,\\ ( 22)\hspace{0.15 in } \frac{1}{2}\big [ & -g_{11,x^2x^2}+2g_{12,x^1x^2}-g_{22,x^1x^1}+2g_{22,x^+x^-}-2g_{+2,x^-x^2}-2g_{-2,x^+x^1}\notag\\ & + 2g_{+-,x^2x^2}+t_{1,x^2}t_{2,x^2}+t_{1,x^2x^2}t_{2}+t_{1}t_{2,x^2x^2}\big ] = k_{4}^{2 } t_{22}\label{221}=0,\\ ( 12)\hspace{0.15 in } \frac{1}{4}\big [ & 4g_{+-,x^1x^2}-2g_{+1,x^-x^2}-2g_{+2,x^-x^1 } -2g_{-1,x^+x^2}-2g_{-2,x^+x^1}+4g_{12,x^+x^-}\notag\\ & t_{2,x^2 } t_{1,x^1}+ t_{1,x^2}t_{2,x^1}+2t_{2}t_{1,x^1x^2 } + 2t_{1}t_{2,x^1x^2 } \big]=\kappa_4 ^ 2t_{12}=0\label{121 } \ ] ] where @xmath143 are given by ( [ s12 ] ) and correspond to the geometry for negative times while the components of @xmath6 are given by ( [ tmn2 ] ) . indeed ( [ deq1 ] ) has the expected form of equation ( [ rmn2gt ] ) .
the above set of field equations has been written without specifying a gauge . in the next section
we will see how these equations may be simplified by making a convenient gauge choice .
we follow the standard procedure in order to solve ( [ deq1 ] ) : we define a new coordinate system @xmath144 with respect to the old one @xmath145 ( see ( [ lc ] ) ) by @xmath146 where @xmath147 is an arbitrary function of the ( old coordinates ) @xmath148 and is second order in @xmath17 .
obviously , this transformation induces a change of second order in @xmath17 to @xmath44 but does not alter @xmath149 and @xmath150 .
more precisely , the second order terms of the metric transform to @xmath151 , where the semicolon denotes a covariant derivative . ( this transformation is exactly equal to the action of the lie derivative on @xmath44 along the vector field @xmath152 , i.e. @xmath153@xmath154 . )
remarkably , the field equations remain invariant under this transformation , as may be shown on general grounds @xcite . for concreteness ,
we exhibit this here for the @xmath155 component , equation ( [ 121 ] ) .
taking into account that at the order we are working , the covariant derivative may be replaced by ordinary differentiation , plugging the tensor @xmath156 into the differential part of ( [ 121 ] ) and dropping ( again ) the superscript @xmath103 from the @xmath157 s for simplicity , we obtain @xmath158 \notag\\ & = \frac{1}{4}\big[\left\ { \left(4\xi_{+,x^-,x^1,x^2}-2\xi_{+,x^1,x^-,x^2}-2\xi_{+,x^2,x^-,x^1}\right ) + ( + \leftrightarrow- ) \right \}\notag\\ & \hspace{0.25in}+\left\ { \left(4\xi_{1,x^2,x^+,x^-}-2\xi_{1,x^+,x^-,x^2}-2\xi_{1,x^-,x^+,x^2}\right)+(1\leftrightarrow2)\right\ } \big ] = 0\end{aligned}\ ] ] where we used the fact that partial derivatives commute .
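this cancellation can be verified mechanically . the sketch below ( python / sympy ; the labels xp , xm , x1 , x2 for the components of @xmath157 are ours ) substitutes a pure - gauge perturbation into the differential part of the @xmath155 equation and confirms that it vanishes because partial derivatives commute :

```python
import sympy as sp

# light-cone coordinates: p = x^+, m = x^-, a = x^1, b = x^2
p, m, a, b = sp.symbols('p m a b')

# an arbitrary gauge vector field xi_mu (the labels are ours, not the text's)
xp, xm, x1, x2 = [sp.Function(n)(p, m, a, b) for n in ('xp', 'xm', 'x1', 'x2')]

# pure-gauge metric perturbation g_{mu nu} = xi_{mu,nu} + xi_{nu,mu}
g_pm = sp.diff(xp, m) + sp.diff(xm, p)
g_p1 = sp.diff(xp, a) + sp.diff(x1, p)
g_p2 = sp.diff(xp, b) + sp.diff(x2, p)
g_m1 = sp.diff(xm, a) + sp.diff(x1, m)
g_m2 = sp.diff(xm, b) + sp.diff(x2, m)
g_12 = sp.diff(x1, b) + sp.diff(x2, a)

# differential part of the (12) field equation
expr = (4*sp.diff(g_pm, a, b) - 2*sp.diff(g_p1, m, b) - 2*sp.diff(g_p2, m, a)
        - 2*sp.diff(g_m1, p, b) - 2*sp.diff(g_m2, p, a) + 4*sp.diff(g_12, p, m))

# every term cancels because mixed partial derivatives commute
print(sp.simplify(expr))
```

the same check goes through , component by component , for the rest of ( [ deq1 ] ) .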
so far , the vector field @xmath157 has been arbitrary while the result of the transformation ( [ ncs ] ) on ( [ deq1 ] ) is just the relabeling @xmath159 .
a convenient choice of @xmath160 is the one that satisfies the de donder gauge @xmath161 where @xmath162 is the flat metric . applying this gauge to the field equations of interest , equations ( [ deq1 ] ) , and dropping the tilde symbol from @xmath163 for simplicity
, the field equations simplify to [ deq ] @xmath164 where we have used ( [ tmn2 ] ) , while @xmath165 is the scalar operator in flat space , that is , @xmath166 . in the next chapter we will see how equations ( [ deq ] ) may be solved by imposing appropriate boundary conditions .
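as a cross - check of the scalar operator @xmath165 , one may rebuild it from the flat metric in light - cone coordinates ; the sketch below assumes the line element -2 dx^+ dx^- + ( dx^1 )^2 + ( dx^2 )^2 , which may differ from the conventions of ( [ box ] ) by an overall sign :

```python
import sympy as sp

xp, xm, x1, x2 = sp.symbols('xp xm x1 x2')
coords = (xp, xm, x1, x2)

# flat metric in light-cone coordinates (assumed convention:
# ds^2 = -2 dx^+ dx^- + (dx^1)^2 + (dx^2)^2)
eta = sp.Matrix([[0, -1, 0, 0],
                 [-1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1]])
eta_inv = eta.inv()

phi = sp.Function('phi')(*coords)

# scalar d'alembertian: box(phi) = eta^{mu nu} d_mu d_nu phi
box_phi = sum(eta_inv[i, j]*sp.diff(phi, coords[i], coords[j])
              for i in range(4) for j in range(4))

expected = -2*sp.diff(phi, xp, xm) + sp.diff(phi, x1, 2) + sp.diff(phi, x2, 2)
print(sp.simplify(box_phi - expected))  # 0: the two expressions agree
```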
in this section we will show how to solve ( [ deq ] ) by seeking causal solutions .
the causal boundary conditions imply that the flat metric in the presence of the shockwaves , as can be checked , is an exact solution of einstein 's equations with the right hand side given by ( [ t1 ] ) and ( [ t2 ] ) , for negative times only . for positive times , that is for @xmath167 and @xmath168 , the second order corrections ( in @xmath17 ) to the metric are switched on , since the point @xmath169 is the collision point .
simultaneously , the initial stress - energy tensor of the ( massless ) particles that induces the shockwaves suffers a change ( see figure [ selfint4d ] and ( [ tmn2 ] ) ) that also has to be taken into account . the retarded green 's function corresponding to the differential operator ( [ box ] ) is known and in light - cone coordinates is given by @xmath170 where , according to ( [ s1 ] ) , @xmath171 and @xmath47 denotes a theta ( step ) function , while equation ( [ gf ] ) ( the retarded green 's function ) satisfies @xmath172 . the procedure we have to follow is standard : we convolute the right hand sides of ( [ deq ] ) with ( [ gf ] ) and integrate over all of space - time .
we find it convenient to introduce the following notation @xmath173 where @xmath174 is an arbitrary function of @xmath148 , while the last integral denotes integration over the transverse plane .
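the causal support enforced by the step functions of ( [ gf ] ) can be made concrete numerically . the sketch below uses the standard retarded green 's function of the 2 + 1 dimensional wave operator , theta(t - r)/(2 pi sqrt(t^2 - r^2)) ; the normalization and the source centers ( +B , 0 ) , ( -B , 0 ) are illustrative assumptions , not values taken from the text :

```python
import math

def g_ret(t, r):
    # theta(t - r)/(2*pi*sqrt(t^2 - r^2)); vanishes outside the light cone
    if t <= r:
        return 0.0
    return 1.0 / (2.0*math.pi*math.sqrt(t*t - r*r))

B = 1.0  # hypothetical impact-parameter scale: centers at (+B, 0) and (-B, 0)

def field(t, x, y):
    # superpose the causal response to two point sources switched on at t = 0
    r1 = math.hypot(x - B, y)
    r2 = math.hypot(x + B, y)
    return g_ret(t, r1) + g_ret(t, r2)

print(field(0.5, 3.0, 0.0))      # 0.0: no front has reached this point yet
print(field(1.5, 1.0, 0.0) > 0)  # True: the near front has arrived
```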
we wish to perform the @xmath48 integrations for all the possible cases that we will encounter while specifying @xmath44 from ( [ deq ] ) .
we organize these integrations into five cases and leave the details of the calculation for appendix [ a ] .
in addition to the five cases of @xmath48 integrations , which we solve in appendix [ a ] , there exists another case involving the evaluation of @xmath175 , which arises from ( [ + + ] ) . due to the complexity of the calculation ,
we evaluate this term in a separate appendix ( see appendix [ b ] , equation ( [ a2 ] ) ) .
the results of all of the integrations in the light - cone plane ( performed in appendices [ a ] and [ b ] ) are proportional to the product @xmath176 .
this implies that the second order corrections to @xmath1 appear in the forward light - cone ( see figure [ etatau ] ) , which is what we initially demanded by seeking a causal solution .
the right hand sides of ( [ deq ] ) contain expressions of the form @xmath177 differentiated with respect to @xmath38 in some fashion . according to ( [ s12 ] ) ,
these expressions are proportional to @xmath178 or their derivatives .
our previous analysis has already taken care of the @xmath48 integrations and so from now on , by @xmath143 we will mean just the transverse part of @xmath143 : @xmath179 .
having performed the @xmath48 integrations we move to the integration over the transverse plane .
the quantities we have to integrate have the structure @xmath180 or @xmath181 where @xmath182 . ( however , according to ( [ nab1 ] ) , these ( transverse ) integrations are trivial as they involve delta functions . )
the subscript @xmath183 ( c ) and the superscript @xmath184 ( @xmath185 ) denote differentiation of the source @xmath186 ( @xmath187 ) with respect to the space - time coordinate @xmath188 ( @xmath189 ) .
we may reduce the number of different integrals we have to perform by working as follows .
we first introduce the vectors @xmath190 and generalize the form of ( the transverse part of ) @xmath143 given by ( [ s12 ] ) to @xmath191 , where @xmath192 were defined by ( [ r12 ] ) .
the next step is to exchange the derivatives acting on @xmath143 , that is @xmath193 , with differentiations with respect to the @xmath30 s of ( [ b ] ) , that is with @xmath194 . ( in this notation , @xmath143 takes the form @xmath195 . )
finally , at the end of our calculations we take the limits @xmath196 . looking at equations ( [ + -])-([12 ] ) , we see that they involve the product @xmath177 differentiated with respect to the transverse coordinates ( see ( [ s12 ] ) ; the @xmath48 contributions have already been taken into account in the previous section ) . exchanging the transverse differentiations , according to our earlier discussion in this section , with derivatives with respect to the components of @xmath197 , and taking into account the transverse part of the green 's function ( [ gf ] ) , we see at once that we have to calculate the following integral @xmath198 where we have introduced the convenient factor @xmath199 .
now , the non - trivial integration is the angular one , as the radial integration becomes trivial due to the @xmath4-function .
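the claim that the @xmath4-function trivializes the radial integration can be checked numerically by smearing delta(tau - r') into a narrow gaussian ; the test source f below is arbitrary and ours :

```python
import numpy as np
from scipy import integrate

tau = 2.0
f = lambda r, phi: np.exp(-r)*(1.0 + 0.5*np.cos(phi))  # arbitrary smooth source

# narrow gaussian standing in for delta(tau - r)
eps = 0.02
delta = lambda r: np.exp(-((r - tau)/eps)**2)/(eps*np.sqrt(np.pi))

# full 2d integral: d^2 r' = r dr dphi, radial range restricted around the spike
full, _ = integrate.dblquad(lambda r, phi: delta(r)*f(r, phi)*r,
                            0.0, 2.0*np.pi,        # phi range
                            tau - 0.3, tau + 0.3)  # r range
# the delta leaves only r = tau: tau times an angular integral
reduced = tau*integrate.quad(lambda phi: f(tau, phi), 0.0, 2.0*np.pi)[0]
print(full, reduced)
```

the agreement improves quadratically as the smearing width shrinks .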
both of the integrations are performed in appendix [ c ] and the final result reads @xmath200 where the @xmath201 s may be found with the help of table [ ta1 ] and equation ( [ ja ] ) .
equation ( [ j ] ) is the last ingredient that allows us to obtain the desired solutions for equations ( [ deq ] ) .
we display the results in the next section .
having performed all of the integrations arising from the convolution of the right hand sides of ( [ deq ] ) with the green 's function ( [ gf ] ) , we are in a position to derive the final formulas for @xmath44 . ( we keep the superscript on the corrections of @xmath1 in order to highlight the order in @xmath17 at which we are working . ) the ingredients that we need have been obtained or defined in the previous sections and in appendices [ a ] , [ b ] and [ c ] . to begin with , we need the defining equations for @xmath143 and @xmath122 , given by ( [ s12 ] ) and ( [ tmn2 ] ) respectively , but with the generalized @xmath197 ( see ( [ b ] ) ) instead of ( [ lim ] ) , while identity ( [ nab1 ] ) is very useful .
we also need the value of the integral @xmath202 , defined in ( [ jin ] ) and given by ( [ j ] ) , ( [ ja ] ) and table [ ta1 ] , as well as ( [ r12 ] ) and ( [ tao ] ) , which define @xmath203 and @xmath3 , @xmath204 respectively . the final formula for @xmath44 is given below . [ gmn2 ]
@xmath205 \bigg\}\bigg\ } , \label{g++2 } \\
g_{+-}^{(2 ) } & = \lim_{\vec{b}_{1,2}\to ( \pm b,0)}\bigg \ { \frac{1}{2 } \mu^2 \theta(x^+)\theta(x^- ) \text{sech } \hspace{0.02 in } \eta \bigg\{\frac{1}{2 \tau } \log \left(k|\vec{b}_2-\vec{b}_1|\right)\delta(\tau - r_1 ) \notag\\ & + \left [ \partial^2_{b_{11}b_{21}}-\frac{1}{4 } \left(\frac{1}{\tau^2}\text{sech$^2$}\hspace{0.02in}\eta+\frac{1}{2}\tau \partial_{\tau } \left(\frac{1}{\tau}\partial_{\tau } \right ) \right ) \right]{\cal j } ( r_1,r_2,\tau ) + \big ( 1\leftrightarrow2 \big ) \bigg\ } \bigg\ } , \label{+-2 } \\ g_{+1}^{(2 ) } & = \lim_{\vec{b}_{1,2}\to ( \pm b,0)}\bigg \ { \frac{1}{\sqrt{2 } } \mu^2 \theta(x^+)\theta(x^- ) \bigg \ { \frac{b_{11}-b_{21}}{|\vec{b}_2-\vec{b}_1|^2}\frac{r_1}{r_1 ^ 2 + 2 ( x^{\pm})^2 } \theta(\tau - r_1 ) \notag\\ & \hspace{1.20in}+\frac{1}{2 } ( \partial_{b_{21 } } ) \left [ \frac{1}{1+e^ { \pm 2\eta}}\partial_{\tau } -\frac{1}{2 \tau } \text{sech}^2\hspace { 0.02in}\eta \right ] { \cal j } ( r_1,r_2,\tau ) \bigg\ } \bigg\ } , \label{g+12 } \\
g_{11}^{(2 ) } & = \lim_{\vec{b}_{1,2}\to ( \pm b,0)}\bigg \ { -\frac{1}{2 } \mu^2 \theta(x^+)\theta(x^- ) \text{sech } \hspace { 0.02in}\eta \notag\\ & \hspace{1.8in}\times \big\ { \partial_{b_{11}b_{21}}^2 + \partial_{b_{11}b_{11}}^2 + \partial_{b_{21}b_{21}}^2 \big\ } { \cal j } ( r_1,r_2,\tau ) \bigg\ } , \label{g112 } \\
g_{12}^{(2 ) } & = \lim_{\vec{b}_{1,2}\to ( \pm b,0)}\bigg \ { -\frac{1}{4 } \mu^2 \theta(x^+)\theta(x^- ) \text{sech}\hspace { 0.02in}\eta\notag\\ & \hspace{1.2 in } \big\ { \partial_{b_{22}b_{11}}^2 + \partial_{b_{12}b_{21}}^2 + 2\partial_{b_{11}b_{12}}^2 + 2\partial_{b_{21}b_{22}}^2 \big\ } { \cal j } ( r_1,r_2,\tau ) \bigg\ } .
\label{g122 } \end{aligned}\ ] ] in order to arrive at ( [ gmn2 ] ) we have convoluted ( [ gf ] ) with the right hand side of ( [ deq ] ) and employed ( [ c1 ] ) , ( [ c2 ] ) , ( [ c3 ] ) , ( [ c4 ] ) , ( [ c5 ] ) and ( [ a2 ] ) . in particular , we have applied ( [ c5 ] ) and ( [ a2 ] ) for ( [ + + ] ) , ( [ c1 ] ) and ( [ c2 ] ) for ( [ + - ] ) , ( [ c3 ] ) and ( [ c4 ] ) for ( [ + 1 ] ) and ( [ c1 ] ) for both ( [ 11 ] ) and ( [ 12 ] ) .
the reason for preferring to work with the generalized @xmath197 is that @xmath206 may be obtained from @xmath207 under @xmath208 before taking the limits as in ( [ lim ] ) , thus reducing the amount of calculation .
finally , @xmath209 are obtained from @xmath210 under the ( simultaneous ) interchanges @xmath211 and @xmath212 . these steps
complete the determination of @xmath44 .
formula ( [ gmn2 ] ) is the final result of this project and we will analyze it in the next chapter .
we have seen below ( [ s1 ] ) that the parameter @xmath17 we use in our expansion has dimensions of length . on the other hand we know that the components of the metric should be dimensionless .
hence , each power of @xmath17 is compensated by an inverse power of the coordinates times , at most , some power of a logarithm ( with argument @xmath213 , @xmath214 or @xmath215 ) . for simplicity ( although this is not necessary ) , we restrict our discussion to mid - rapidity , where @xmath216 , that is , @xmath217 ( see ( [ tao ] ) and figure [ etatau ] ) .
this means that any @xmath218-order ( in @xmath17 ) contribution to the metric , where @xmath18 is a positive integer , will generally have the form
@xmath219 ( see ( [ gmn ] ) and ( [ gmn2 ] ) ) , where @xmath220 are dimensionless real functions of @xmath38 ( built from step - functions , logarithms and real coefficients ) . we thus believe that our expansion is valid at high energies , that is for @xmath17 small compared to @xmath203 and @xmath3 . in fact , only one of @xmath203 or @xmath3 has to be large compared to @xmath17 .
so , for instance , ( [ gmn2 ] ) is a good approximation for small @xmath3 but large @xmath203 ( see figure [ re ] , region @xmath71 ) and also for small @xmath203 but large @xmath3 ( region @xmath221 ) , provided that the massless particles creating the shockwaves are not too energetic ( @xmath17 is small ) .
this is in contrast to @xcite , where @xmath17 has dimensions of length to the negative third power and hence the expansion there was valid for early proper times .

[ caption of figure [ re ] : regions @xmath221@xmath222 correspond to @xmath223 and @xmath224 respectively . the dark dots are the centers of the shockwaves and are located at an impact parameter 2b apart , while @xmath225 denote the distances of an arbitrary point @xmath2 from the center of each shockwave ( right and left respectively ) . according to causality , at any given proper time @xmath3 the propagation from the centers will reach ( at most ) the points on the peripheries . this suggests that any given point @xmath2 on the transverse plane of the produced " metric at given @xmath3 will evolve according to the region it belongs to ( see equations ( [ gmn2 ] ) , ( [ ja ] ) and ( [ jap ] ) ) , and there are three different possibilities ( and three more from their mirror images ) . ]

in this work we have found the first non - trivial causal corrections to the problem of shockwave collisions in gravity , created by boosting two black holes to the speed of light .
the collision is assumed asymmetric and occurs at low energies ( see previous section ) . in terms of feynman diagrams ,
our result , formula ( [ gmn2 ] ) , corresponds to the resummation of the diagrams of figures [ interaction ] and [ selfint4d ] .
our conclusions are summarized as follows .
1 . the corrections to @xmath1 evolve non - trivially and are constrained by causality in an intuitive way . in particular , the behavior of @xmath1 at any point on the transverse plane is determined by whether or not the propagation from the center of each individual nucleus has enough proper time to reach the point under consideration .
figure [ re ] is a snapshot taken at a given proper time @xmath3 and depicts the six kinematical regions where @xmath1 evolves differently , while it has the general form @xmath226 . the indices @xmath71 , @xmath227 , @xmath221 on @xmath228 correspond to the regions @xmath71 , @xmath227 , @xmath221 of figure [ re ] respectively .
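the causal bookkeeping behind the six regions can be phrased as a small classifier ; the centers at ( +b , 0 ) and ( -b , 0 ) and the region labels are our own conventions , chosen to mimic figure [ re ] :

```python
import math

def region(x1, x2, tau, b=1.0):
    """classify a transverse point by which expanding shock fronts have
    reached it; centers at (+b, 0) and (-b, 0) are an assumption."""
    r1 = math.hypot(x1 - b, x2)  # distance from the first shock's center
    r2 = math.hypot(x1 + b, x2)  # distance from the second shock's center
    reached1, reached2 = tau >= r1, tau >= r2
    if reached1 and reached2:
        return 'both fronts'     # late proper time: full corrections present
    if reached1:
        return 'front 1 only'
    if reached2:
        return 'front 2 only'
    return 'neither front'       # the metric is still uncorrected here

print(region(1.0, 0.0, 0.1))   # near center 1 at early tau
print(region(5.0, 5.0, 0.1))   # far from both centers at early tau
print(region(0.0, 0.0, 3.0))   # midpoint at late tau
```

the three labels other than ' both fronts ' come in mirror pairs , matching the six regions of the figure .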
( the same terms cover regions @xmath71@xmath222 , @xmath227@xmath222 and @xmath221@xmath222 , since under this interchange we have @xmath229 . )
the @xmath4-function terms arise from differentiating the @xmath47-functions of the right hand sides of ( [ gmn2 ] ) ( see also ( [ j ] ) ) .
they represent two shock fronts , centered at the center of each shockwave and expanding on the transverse plane with radius @xmath3 ( that is , with the speed of light ) .
this particular behavior of the metric was our initial motivation for dealing with this problem and our calculations confirm our earlier conjecture @xcite .
2 . the presence of matter , @xmath6 , and the back - reactions affect the metric ( see for example ( [ g++2 ] ) and ( [ g+12 ] ) ) not only on the forward light - cone but also inside it .
this implies that we can not , in principle , solve einstein 's equations in vacuum ( ignoring the point - like particles that create the shocks ) by arguing that we are away from the sources , unless we know the boundary conditions that these sources enforce on the metric inside the light - cone .
this is in analogy to classical electrodynamics : solving the laplace equation for the scalar potential away from a point charge sitting at the origin , without specifying the boundary conditions , one may obtain the trivial ( zero ) solution , which obviously is not the correct one .
3 . the presence of the impact parameter @xmath30 is a necessary requirement and not an additional complication introduced into the problem .
mathematically this is obvious from the fact that both @xmath6 and @xmath1 diverge when the impact parameter @xmath30 tends to zero . from equation ( [ con ] )
we have seen that conservation of @xmath6 is violently violated , behaving as @xmath230 .
the metric tensor also exhibits problematic behavior in the zero impact parameter limit : the formula for @xmath44 , equation ( [ fgmn ] ) , diverges logarithmically when @xmath231 , as is evident ( for instance ) from equation ( [ g++2 ] ) .
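the logarithmic blow - up is visible directly in the log ( k |b_2 - b_1| ) term of ( [ g++2 ] ) ; with the cutoff set to 1 for illustration :

```python
import math

K = 1.0  # ultraviolet cutoff, set to 1 purely for illustration
for b in (1.0, 0.1, 0.01, 0.001):
    # centers at (+b, 0) and (-b, 0), so |b2 - b1| = 2b
    print(b, math.log(K*2*b))  # decreases without bound as b -> 0
```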
a similar ultraviolet ( uv ) divergence appears in perturbation theory @xcite of gauge theories .
this suggests that a head - on collision may not be investigated using classical gravity .
instead , one has to apply a quantum theory of gravity , in the same way that one can not predict electron - positron annihilation ( in head - on collisions ) using maxwell 's equations .
one has to turn into quantum electrodynamics in order to describe the process and predict the production of two photons .
one may argue that a black hole may be formed and hence hide the violation of conservation behind the event horizon .
4 . for future projects
we propose that one could plot @xmath44 for several ( fixed ) impact parameters as a function of @xmath3 , @xmath29 and @xmath232 ( at central rapidities where @xmath233 ) in order to visualize the evolution of the metric .
a very important aspect we ignored in our analysis is the role of the ultraviolet cutoff @xmath18 ( see ( [ s1 ] ) ) which ( for the case of a single shockwave ) seems to define an ergoregion ( with radius @xmath234 ) on the transverse plane .
it would be interesting to check how our solution gets modified for impact parameters @xmath235 which would imply that the two ergoregions of the shockwaves overlap .
finally , one could compute the gravitational radiation , take the limit of @xmath231 and compare the result with the one obtained by @xcite .
in this appendix we perform the @xmath48 part of the integrations that result when the green 's function ( [ gf ] ) acts on the right hand side of ( [ deq ] ) . proceeding as in ( [ sh ] ) ,
we find that we have to deal with five different cases ( there also exists a sixth case , which is calculated in appendix [ b ] ) .
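the case analysis below is organized in the proper time and rapidity variables of figure [ etatau ] . assuming the standard convention x^+ = tau e^{eta}/sqrt(2) , x^- = tau e^{-eta}/sqrt(2) ( our reading of ( [ tao ] ) ) , the change of variables and its inverse can be sanity - checked ; note that tau > 0 forces x^+ > 0 and x^- > 0 , the forward light - cone :

```python
import math

def lc_from_tau_eta(tau, eta):
    # assumed convention: x^+ = tau*exp(eta)/sqrt(2), x^- = tau*exp(-eta)/sqrt(2)
    return tau*math.exp(eta)/math.sqrt(2.0), tau*math.exp(-eta)/math.sqrt(2.0)

def tau_eta_from_lc(xp, xm):
    # inverse map, valid in the forward light cone x^+ > 0, x^- > 0
    return math.sqrt(2.0*xp*xm), 0.5*math.log(xp/xm)

xp, xm = lc_from_tau_eta(2.0, 0.7)
tau, eta = tau_eta_from_lc(xp, xm)
print(xp > 0 and xm > 0)  # True: positive tau lives in the forward light cone
print(tau, eta)           # round trip recovers (2.0, 0.7)
```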
this case is trivial and almost all terms of ( [ deq ] ) behave in this way . defining @xmath236 , where @xmath3 is the proper time and @xmath204 the rapidity ( see figure [ etatau ] for the geometrical meaning ) , we have @xmath237 where we have shifted the integration variable by setting @xmath238 .

[ caption of figure [ etatau ] ( the @xmath3 and @xmath204 plane ) : the hyperbolas indicate curves of constant @xmath3 and increase along @xmath8 . the straight lines are lines of constant @xmath204 and increase from left to right as the arrows indicate . along @xmath48 we have @xmath239 , while along @xmath8 we have @xmath240 and @xmath233 . ]

this case is more complicated as we have to integrate the @xmath4-functions by parts .
three kinds of terms will appear : ( a ) terms that differentiate the @xmath241 terms of ( [ gf ] ) and hence produce @xmath242 terms .
but the presence of @xmath242 forces the @xmath4-function term appearing in ( [ gf ] ) to become @xmath243 , which is zero .
hence these terms do not contribute .
( b ) we have terms that differentiate the denominator and these contribute to the integrations .
( c ) finally , we have terms that either differentiate only the @xmath4-function of ( [ gf ] ) or differentiate both the @xmath4-function and the denominator .
in order to evaluate these terms we exchange the @xmath244 that act on @xmath245 with @xmath246 . shifting the transverse variable as in the previous case and performing the @xmath247 integrations
we find that the contribution of both the ( b ) and ( c ) terms is@xmath248 \int d^2\vec{r ' } \delta \left(\tau - r ' \right)f_{ii}(\vec{r}+\vec{r'})\end{aligned}\ ] ] where the differential operator @xmath249 acts on the integral while @xmath250 denotes a partial differentiation with respect to @xmath3 .
this is a simpler version of case @xmath227 and working in a similar fashion yields @xmath251 \int d^2\vec{r ' } \delta \left(\tau - r ' \right)f_{iii}(\vec{r}+\vec{r ' } ) . \ ] ] @xmath252 this is a combination of cases @xmath221 and @xmath253 with @xmath254 with @xmath255 as in ( [ r12 ] ) .
we have @xmath256 where in the first equality we ignored a term similar to case @xmath227 ( see term ( a ) ) when integrating by parts the @xmath257 while in the second equality we performed the @xmath258 integration and exchanged @xmath244 with @xmath246 .
the remaining two steps are obvious .
we wish to evaluate the expression ( [ c6 ] ) by performing the integration on both the light - cone and the transverse plane .
we begin by performing the @xmath126 and @xmath39 integrations . using that @xmath259 we find @xmath260 where , after the first equality , we assume that @xmath261 , while the @xmath48 dependence is displayed explicitly and is integrated out after the second equality .
the next step is to perform the transverse integrations .
the trick here is to integrate by parts the @xmath262 terms .
the integration by parts produces two kinds of terms : ( a ) those where the derivatives do not act on @xmath263 and ( b ) those where they do act on @xmath263 .
but the terms of case ( b ) are proportional to @xmath264 , which is zero for non - zero impact parameter @xmath265 , while for zero impact parameter it diverges violently ; we conclude that an impact parameter is necessary ( see section [ scon ] ) .
we now proceed to the remaining terms . exchanging @xmath266 with @xmath267 and using ( [ dlog ] ) , equation ( [ a1 ] )
gives @xmath268 where @xmath197 are given by ( [ b ] ) .
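the identity ( [ dlog ] ) used here is presumably the two - dimensional green 's function property of the logarithm , grad^2 log ( k r ) = 2 pi delta^2 ( r ) ( our normalization ) ; both halves of the statement can be checked with sympy :

```python
import sympy as sp

x, y, K = sp.symbols('x y K', positive=True)
r = sp.sqrt(x**2 + y**2)
phi = sp.log(K*r)

# away from the origin the 2d laplacian of log(K r) vanishes ...
lap = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
print(sp.simplify(lap))  # 0

# ... while the flux of its gradient through any circle is 2*pi,
# i.e. the laplacian equals 2*pi*delta^2(r) as a distribution
R, theta = sp.symbols('R theta', positive=True)
# radial derivative d/dr log(K r) = 1/r, times the line element R dtheta
flux = sp.integrate((sp.S(1)/R)*R, (theta, 0, 2*sp.pi))
print(flux)  # 2*pi, independent of the radius R
```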
we wish to calculate the integral ( [ jin ] ) that we encountered in section [ tp ] .
we have @xmath269 the quantities @xmath203 are given by ( [ r12 ] ) .
the trick here is to expand the logarithms in fourier space : @xmath270 with @xmath18 serving as an ultraviolet cutoff . expanding both logarithms and performing the angular integration ,
one obtains @xmath271 . performing the trivial radial integration , we find that @xmath201 is now defined by @xmath272 . next we perform the @xmath273 and @xmath274 integrations .
these integrals have been calculated in @xcite ; we summarize the procedure : in order to perform these integrations one has to expand @xmath275 in an infinite sum of products of the form @xmath276 with @xmath277 an integer and do the angular integrals ( of q and l ) first .
this factors out the radial integrations over @xmath273 and @xmath274 into two independent integrals .
then one has to perform these integrations and finally sum over @xmath277 .
the final result reads [ ja ] @xmath278 , \label{jj}\\ \xi_{>(<)}&=max(min)(r_1,\tau ) \hspace{0.2 in } \eta_{>(<)}=max(min)(r_2,\tau ) , \label{ke}\\ & \hspace{1.2 in } ( \vec{r_1}).(\vec{r_2})=\cos(\alpha ) r_1 r_2 .
\label{a } \end{aligned}\ ] ] here @xmath279 is the angle between @xmath280 and @xmath281 , @xmath282 is the dilogarithm function , and @xmath201 is real , as it should be .
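the dilogarithm is available in standard libraries , though scipy 's spence uses a shifted argument , a common pitfall worth noting :

```python
import math
from scipy.special import spence

def Li2(z):
    # scipy's spence(z) equals Li2(1 - z), so the dilogarithm is spence(1 - z)
    return spence(1.0 - z)

# Li2(1) = pi^2/6
print(Li2(1.0), math.pi**2/6)

# Euler's reflection identity, a standard dilogarithm sanity check
z = 0.3
lhs = Li2(z) + Li2(1.0 - z)
rhs = math.pi**2/6 - math.log(z)*math.log(1.0 - z)
print(lhs - rhs)  # ~0
```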
equation ( [ ja ] ) implies that @xmath201 depends on the ordering of @xmath283 , @xmath284 and @xmath3 .
there are in principle six distinct ways to order them .
however , it turns out that the cases @xmath285 and @xmath286 are independent of the relative ordering of @xmath283 and @xmath284 .
this degeneracy reduces the possible cases to four , which we organize by introducing table [ ta1 ] ( an entry of the table , for instance , means @xmath287 with @xmath288 given by ( [ ja ] ) ) ; the table in turn helps us to write a ( unified ) formula for @xmath201 : @xmath289

p. d. d'eath and p. n. payne , `` gravitational radiation in high speed black hole collisions . perturbation treatment of the axisymmetric speed of light collision , '' phys . rev . * d46 * ( 1992 ) 658 - 674 .
p. d. d'eath and p. n. payne , `` gravitational radiation in high speed black hole collisions . reduction to two independent variables and calculation of the second order news function , '' phys . rev . * d46 * ( 1992 ) 675 - 693 .
k. sfetsos , `` on gravitational shock waves in curved space - times , '' nucl . phys . b * 436 * , 721 ( 1995 ) [ arxiv : hep - th/9408169 ] .
d. m. eardley and s. b. giddings , `` classical black hole production in high - energy collisions , '' phys . rev . d * 66 * , 044011 ( 2002 ) [ arxiv : gr - qc/0201034 ] .
s. b. giddings and r. a. porto , `` the gravitational s - matrix , '' phys . rev . d * 81 * , 025002 ( 2010 ) [ arxiv:0908.0004 [ hep - th ] ] .
s. b. giddings , m. schmidt - sommerfeld and j. r. andersen , `` high energy scattering in gravity and supergravity , '' arxiv:1005.5408 [ hep - th ] .
a. taliotis , `` heavy ion collisions with transverse dynamics from evolving ads geometries , '' arxiv:1004.3500 [ hep - th ] .
j. l. albacete , y. v. kovchegov and a. taliotis , `` asymmetric collision of two shock waves in ads@xmath291 , '' jhep * 0905 * , 060 ( 2009 ) [ arxiv:0902.3046 [ hep - th ] ] .
j. bartels _ et al . _ , `` proceedings of the 38th international symposium on multiparticle dynamics ( ismd08 ) , '' arxiv:0902.0377 [ hep - ph ] .
j. l. albacete , y. v. kovchegov and a. taliotis , `` modeling heavy ion collisions in ads / cft , '' jhep * 0807 * , 100 ( 2008 ) [ arxiv:0805.2927 [ hep - th ] ] .
d. grumiller and p. romatschke , `` on the collision of two shock waves in ads5 , '' jhep * 0808 * , 027 ( 2008 ) [ arxiv:0803.3226 [ hep - th ] ] .
s. s. gubser , s. s. pufu and a. yarom , `` off - center collisions in ads@xmath291 with applications to multiplicity estimates in heavy - ion collisions , '' jhep * 0911 * , 050 ( 2009 ) [ arxiv:0902.4062 [ hep - th ] ] .
s. s. gubser , s. s. pufu and a. yarom , `` entropy production in collisions of gravitational shock waves and of heavy ions , '' phys . rev . d * 78 * , 066014 ( 2008 ) [ arxiv:0805.1551 [ hep - th ] ] .
y. v. kovchegov and s. lin , `` toward thermalization in heavy ion collisions at strong coupling , '' jhep * 1003 * , 057 ( 2010 ) [ arxiv:0911.4707 [ hep - th ] ] .
a. duenas - vidal and m. a. vazquez - mozo , `` colliding ads gravitational shock waves in various dimensions and holography , '' arxiv:1004.2609 [ hep - th ] .
s. s. gubser , i. r. klebanov and a. m. polyakov , `` gauge theory correlators from non - critical string theory , '' phys . lett . b * 428 * , 105 ( 1998 ) [ arxiv : hep - th/9802109 ] .
s. s. gubser , i. r. klebanov and a. a. tseytlin , `` string theory and classical absorption by three - branes , '' nucl . phys . b * 499 * , 217 ( 1997 ) [ arxiv : hep - th/9703040 ] .
s. de haro , s. n. solodukhin and k. skenderis , `` holographic reconstruction of spacetime and renormalization in the ads / cft correspondence , '' commun . math . phys . * 217 * , 595 ( 2001 ) [ arxiv : hep - th/0002230 ] .
a. papapetrou , `` lectures on general relativity , '' ( d. reidel publishing company , holland 1974 ) .

the problem of collisions of shockwaves in gravity is well known and has been studied extensively in the literature . recently , the interest in this area has been revived through the anti - de sitter space / conformal field theory correspondence ( ads / cft ) , with the difference that in this case the background geometry is anti - de sitter in five dimensions . in a recent project that we have completed in the context of ads / cft ,
we have gained insight into the problem of shockwaves , and our goal in this work is to apply the technique we have developed in order to take some further steps in the direction of shockwave collisions in ordinary gravity . in the current project , each of the shockwaves corresponds to a point - like stress - energy tensor that moves with the speed of light , while the collision is asymmetric and involves an impact parameter ( b ) .
our method is to expand the metric @xmath0 in the background of flat space - time in the presence of the two shockwaves and compute corrections that satisfy causal boundary conditions taking into account back - reactions of the stress - energy tensor of the two point - like particles .
therefore , using einstein 's equations , we predict the future of space - time from the fact that we know the past geometry . our solution respects causality as expected , but this causal dependence takes place in an intuitive way . in particular , @xmath1 at any given point @xmath2 on the transverse plane at fixed @xmath3 evolves according to whether the propagation from the center of one or both of the shockwaves has enough proper time ( @xmath3 ) to reach the point under consideration . simultaneously , around the center of each shockwave ,
the future metric develops a @xmath4-function profile with radius @xmath3 ; therefore this profile expands outwards from the centers ( of the shockwaves ) with the speed of light .
finally , we discuss the case of the zero impact parameter collision , which results in the violation of conservation , and we argue that this might be a signal of the formation of a black hole .
the author would like to thank prof . andrzej derdzinski and especially prof . ulrich gerlach for serving on his defense committee and also for stimulating discussions prior to and during this thesis .
the author also thanks prof . samir mathur for very informative discussions during the writing of this work .
in addition he would like to thank prof . thomas kerler and herb clemens for making the transfer from the graduate program of the department of physics to the graduate program of the department of mathematics possible , and denise witcher for guiding him through all the steps of this process .
completion of the graduate courses would not have been possible without the help and teaching enthusiasm of prof . jean - francois lafont , daniel shapiro and joseph ferrar , and particularly prof . james cogdell and alexander leibman , who answered his endless emails with clarity , exactness and constant availability .
his fellow students corry christopherson , fatih olmez and zhi qui played a crucial role during the coursework : not only did they keep encouraging him , but they also spent an enormous amount of their time patiently answering all of his questions .
attending many of the required classes would have been impossible without the help of his best friend chen zang , who babysat the author 's son , nikolas alexandrou taliotis , while the author and his wife maria alexandrou had to attend classes .
the author owes special gratitude to his physics advisor , prof . yuri kovchegov , for teaching him physics , for teaching him how to see through the complicated mathematics and nail down the physical picture , and especially for showing him how to undertake his responsibilities not only inside but also outside of academia .
special thanks to prof . tom banks , steve giddings , yuri kovchegov and krishna rajagopal for their encouragement to submit this thesis to the arxiv after it was defended .
lastly but most importantly , the author would like to thank maria alexandrou for all of her patience during these years and for carrying most of the weight of their journey in life while keeping a smile and being an example of a kind human , a reliable partner and a wonderful woman : he dedicates this work to her .
this work is sponsored in part by the u.s . department of energy under grant no . de-fg02-05er41377 and in part by the institution of governmental scholarships of cyprus ( iky ) .
born in nicosia , cyprus . b.sc . in physics , m.s . in physics . graduate research associate in the department of physics and graduate student in the department of mathematics of the ohio state university .
previous studies found interactions between the meaning of words and the screen location where the words were presented ( i.e. , spatial position ) . for instance , people were faster to decide that a stork flies if the word stork was presented at the top of the screen rather than at the bottom of the screen ( šetić and domijan , 2007 ) .
similar effects were found in a semantic relatedness judgment task ( zwaan and yaxley , 2003 ) and also in a letter identification task in which participants identified a single letter ( x or o ) presented at the top or bottom of the screen immediately following the name of an object with a typical location ( e.g. , cowboy boot , estes et al . , 2008 ) .
researchers also found interactions between word meaning and spatial position when words refer to more abstract concepts , such as valence or power ( richardson et al .
, 2003 ; meier and robinson , 2004 ; schnall and clore , 2004 ; schubert , 2005 ; meier et al . , 2007 ) .
although these abstract concepts have no inherent perceptual spatial positions , they are connected to spatial concepts through metaphorical relations such as good is up and bad is down .
these interactions between word meaning and spatial location provide important insight into the underlying mental representations of meaning . two explanations have been proposed for these congruency effects .
the first explanation is that readers understand the meaning of a category by mentally simulating associated sensory - motor information ( e.g. , barsalou , 1999 ) .
thus , a representation of the meaning of flying animal involves simulating looking up at the sky and seeing the animal fly .
because such simulations occupy sensory - motor systems , they might interfere with other sensory - motor processing .
indeed , interference was found when people simultaneously performed mental visual imagery and a visual perception task ( craver lemley and reeves , 1992 ) .
estes et al . ( 2008 ) showed a similar interference effect without explicit imagery instructions . in their task ,
participants viewed word pairs in the center of the screen that referred to objects with typical vertical locations ( cowboy hat or cowboy boot ) . immediately after presentation of the word pair , a letter ( x or o ) was presented at the top or bottom of the screen .
estes et al . found that letter identification was slower and less accurate if the letter 's position matched the typical location of the word 's referent than if the position mismatched .
they did not report the separate effects for items at the top and bottom positions .
richardson et al . ( 2003 ) presented sentences in which a verb had a horizontal ( push ) or vertical ( sink ) orientation , followed by a visual target .
the target could appear at one of four locations on the computer screen ; at the top or bottom ( horizontally centered ) , or on the left or right ( vertically centered ) .
they found that the orientation of the verb interfered with the position of the visual target .
for example , following the vertical word sink , responses to targets presented on the vertical axis ( top or bottom ) were slower than to targets presented on the horizontal axis ( left or right ) .
other studies , using somewhat different designs , found facilitation if word meaning and spatial location were congruent .
bergen et al . ( 2007 ) noted that findings of interference or facilitation might be due to differences in timing , but both effects are still explained by the same simulation account .
for example , šetić and domijan ( 2007 ) presented words referring to flying or non - flying animals at the top or bottom of the computer screen .
decisions were faster and more accurate for words in a congruent than incongruent position ( e.g. , performance was better for stork at the top than at the bottom of the screen ) .
additionally , the task itself ( deciding whether or not animals fly ) might have directed spatial attention . in order to perform the flying animal task ,
participants may have systematically directed their mental simulations , and thus their spatial attention , towards the sky .
although šetić and domijan ( 2007 ) did not find a main effect of position , other studies show such main effects ( e.g. , schubert , 2005 ) . on this account
, there may be both a general task - related benefit for words presented at the top of the screen ( through spatial attention ) as well as a word specific benefit for flying animals presented at the top of the screen ( because the mental simulation would be easier for a word in this position ) .
therefore , in the current study , we tested both task congruency ( e.g. , benefits for all words at the top ) as well as word congruency ( e.g. , additional benefits for flying animals at the top ) .
a second explanation for these congruency effects lies in the response selection process rather than the representation of meaning .
proctor and cho ( 2006 ; see also bar - anan et al . , 2007 ) proposed that in many binary decision tasks , the speed of response selection is affected by polarity correspondence .
stimulus dimensions with binary values are encoded as having a + ( plus ) polarity or a − ( minus ) polarity . in a similar vein
, response alternatives are also encoded as + or − . response selection is faster when stimulus and response polarities correspond than when they do not correspond . according to proctor and cho ,
for example , a yes response is typically represented as + and a no response is represented as − . up is represented as + and down is represented as − . right is represented as + and left is represented as − . accordingly , right key presses are coded as + and left key presses are coded as − . related to this idea , klatzky et al .
( 1973 ) argued that many conceptual dimensions ( e.g. , height , valence ) also have polarity .
furthermore , the adjectives representing the opposite ends of these dimensions consist of a default , positive , or unmarked member ( e.g. , tall , good ) that can also be used to name the dimension in its entirety and a negative , or marked member ( e.g. , short , bad ) that is only used to name one end of the dimension . for example , the question how tall is he ? is neutral as to actual size , whereas the question how short is he ? implies that he is actually short .
for example , in judgments of power , powerful may be the unmarked ( positive ) end and powerless may be the marked ( negative ) end of the power dimension .
alignment of powerful with up therefore leads to faster processing than alignment of powerful with down ( schubert , 2005 ) .
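the polarity coding described above can be made concrete with a small sketch . the feature vocabulary and the correspondence check below are illustrative assumptions for exposition ; proctor and cho ( 2006 ) state the principle verbally , not as an algorithm .

```python
# A toy sketch of the polarity coding described above (Proctor and Cho, 2006).
# The feature names and the correspondence check are illustrative assumptions,
# not part of the formal account.

POLARITY = {
    "yes": +1, "no": -1,      # task-specific judgment
    "up": +1, "down": -1,     # vertical stimulus position
    "right": +1, "left": -1,  # response hand
}

def polarities_correspond(*features):
    """True when all binary features carry the same polarity code."""
    return len({POLARITY[f] for f in features}) == 1

# The principle predicts faster response selection when codes correspond:
assert polarities_correspond("yes", "up", "right")   # all + polarity
assert polarities_correspond("no", "down", "left")   # all - polarity
assert not polarities_correspond("yes", "down")      # misaligned -> slower
```

under this coding , a right - hand yes response to a word at the top of the screen is the fully aligned case discussed below .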
polarity correspondence can also explain similar results that examined spatial congruency with concepts such as valence or number magnitude ( fischer et al .
, 2003 ; meier and robinson , 2004 ; santens and gevers , 2008 ; bae et al . , 2009 ; but
these polarity correspondences might also explain the results of šetić and domijan ( 2007 ) . in their task ,
the flying animals always required a yes response and the non - flying animals always required a no response .
therefore , in the congruent condition , the polarities of position and response ( up - yes and down - no ) were aligned , whereas they were misaligned in the incongruent condition ( up - no and down - yes ) . on this account ,
the results are due to a lack of counterbalancing between the yes / no answer and spatial position .
after all , there was no condition in which the task was reversed , such that non - flying animals required a yes - response ( e.g. , by asking is this a land animal ? ) . in sum
, stimuli in a task may have polarity values based on markedness , on task - specific judgment ( yes or no ) , and on spatial position ( up or down ) . in addition , responses also have polarity values , for example based on the spatial position of the manual responses .
thus , rather than supporting a theory of meaning representation through mental simulation , these congruency effects may instead reflect polarity alignment if the role of yes / no response assignment is not counterbalanced or not independently manipulated . to investigate whether the effects of spatial position of a target stimulus are better explained by polarity alignment or by congruence with mental representations , we used two semantic judgment tasks for which the same stimuli required opposite responses .
one task was an ocean judgment ( is it usually found in the ocean ? ) and the other task was a sky judgment ( is it usually found in the sky ? ) .
these tasks were chosen because sky and ocean have clear spatial positions but neither one is a linguistically marked or unmarked end of a dimension .
while most people will have more experience with looking at things in the sky than in the ocean , our study was run in san diego , which is situated on the pacific coast .
therefore , the participants in our study had above average experience with looking at the ocean . because perceptual simulation involves activation of previous perceptual experiences , we assumed that simulations of seeing things in the sky and ocean would most likely take the perspective of someone standing on land looking straight ahead , with the sky taking up the top half of the visual field and the ocean taking up the bottom half of the visual field .
the same set of stimuli was used in the two tasks , and consisted of names of things that are typically found in one of the two locations ( e.g. , whale , submarine , eagle , helicopter ) .
subjects responded using their two hands , and whether the yes response was given by the left or right hand was counterbalanced across participants .
the role of conceptual congruency and polarity alignment can be disentangled by a between task comparison of the interaction between word category and stimulus position .
if spatial congruency effects are due to mental simulation , then spatial attention should be directed towards the top of the screen in the sky decision task but towards the bottom of the screen in the ocean decision task .
this predicts that in general , reaction times to words at the top of the screen should be faster in the sky task whereas reaction times to words at the bottom of the screen should be faster in the ocean task . beyond this task congruency effect ,
the mental simulation account also predicts a word congruency effect . for this effect of congruency between a referent 's typical location and the position of the word
, performance should be better for sky words presented at the top of the screen as compared to sky words presented at the bottom of the screen .
similarly , performance should be better for ocean words presented at the bottom of the screen as compared to ocean words presented at the top of the screen .
critically , these effects should occur regardless of the task ( i.e. , regardless of the yes / no response ) .
if , on the other hand , congruency effects are due to polarity alignment , performance should be better for yes
responses to words presented at the top of the screen as compared to yes responses to words presented at the bottom of the screen .
furthermore , performance should be better for no responses to words presented at the bottom of the screen as compared to no responses to words presented at the top of the screen .
critically , these effects should occur regardless of the typical location ( ocean or sky ) of the word 's referent ( i.e. , regardless of the word 's meaning ) .
thus , in the ocean decision task , performance should be better for ocean words at the top and sky words at the bottom than for the opposite positions . moreover , because the right hand is coded as + polarity and the left hand as − polarity ( according to proctor and cho , 2006 ) this effect may be restricted or at least most pronounced when yes
responses are given with the right hand and no responses are given with the left hand . in summary ,
the mental simulation account predicts both task congruency effects ( task and location ) and word congruency effects ( word and location ) , whereas polarity alignment only predicts response congruency effects ( response and location ) .
a total of 102 participants took part ; they were randomly assigned to the ocean decision ( n = 52 ) or sky decision task ( n = 50 ) .
forty words referred to things usually found in the ocean ( e.g. , whale , submarine ) and 40 referred to things usually found in the sky ( e.g. , eagle , helicopter ) .
the stimuli were selected from a larger set that had been tested in a pilot study . in this pilot study
50 participants made ocean or sky decisions ( 25 participants in each task ) to a larger set of words presented individually in random order in the center of the computer screen . from this larger set , 80 words were selected for which categorization agreement between participants was greater than 75% ( m = 92% ) .
the two sets of words were comparable on word length , log word frequency , and number of items from different types of taxonomic categories ( e.g. , animals , man - made objects , natural objects , persons ) .
participants in the ocean decision task were instructed to decide whether an item could be found in the ocean .
participants in the sky decision task were instructed to decide whether an item could be found in the sky .
items were presented individually and in random order on the computer screen following the procedure used by šetić and domijan ( 2007 ) .
a trial started with a sequence of three consecutive fixation cues ( + + + ) presented for 300 ms each , which served to warn participants whether the target word would appear at the top or bottom .
the first fixation was presented in the center of the computer screen . in the top condition ,
the second fixation was presented at 40% from the top of the screen , the third at 30% from the top of the screen , followed by the target word at 20% from the top of the screen . in the bottom condition
these positions were at 40% , 30% , and 20% from the bottom of the screen respectively .
this sequence of fixation cues did not induce a sense of upwards or downwards motion because the vertical distance between fixations was too great ( i.e. , there was no apparent motion ) .
the target word was presented immediately after the final fixation cue and remained on the screen until the participant responded or 2,500 ms elapsed .
participants pressed the z or m key of the computer keyboard to indicate a yes or no response .
feedback was provided for 1,500 ms if the response was incorrect ( incorrect ) or slower than 2,500 ms ( too slow ) .
the next trial started immediately after the response or , in the case of feedback , after the feedback message .
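the trial sequence described above can be summarized as an event schedule . the following sketch is an illustration of the reported timing and positions ( expressed as fractions of screen height , measured from the top edge ) , not the original experiment script .

```python
# A sketch of the trial event sequence: three 300 ms fixation cues stepping
# from the center toward the target location (40%, 30% from the top or bottom
# edge), then the target word at 20% from that edge until a response or the
# 2,500 ms deadline. The function name and tuple layout are assumptions.

def trial_schedule(position):
    """Events for one trial as (kind, y_fraction_from_top, duration_ms)."""
    edge = (lambda f: f) if position == "top" else (lambda f: 1.0 - f)
    return [
        ("fixation", 0.5, 300),          # first cue at screen center
        ("fixation", edge(0.40), 300),   # second cue, 40% from the edge
        ("fixation", edge(0.30), 300),   # third cue, 30% from the edge
        ("target", edge(0.20), 2500),    # word until response or deadline
    ]
```

for example , trial_schedule("bottom") places the target 20% from the bottom edge , i.e. , 0.80 of the screen height from the top .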
every target word was tested twice , with the first occurrence during a first block of trials followed by a second occurrence during a second block of trials .
half of the items in a block were assigned to the top position and the other half to the bottom position such that ocean words and sky words were presented equally often at each position . in the second block , each word was presented at the opposite position .
the order of blocks was counterbalanced between participants , as was assignment of the m and z keys to the yes and no responses .
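a minimal sketch of the item counterbalancing described above . it assumes that the second block presents every word at the opposite position ; the word labels and the function name are placeholders .

```python
import random

# Sketch of the counterbalanced design: 40 ocean and 40 sky words, each tested
# once per block, with position (top/bottom) balanced within category in
# block 1 and, by assumption, reversed in block 2.

def build_blocks(ocean_words, sky_words, rng):
    block1 = []
    for category, words in (("ocean", ocean_words), ("sky", sky_words)):
        shuffled = rng.sample(words, len(words))
        half = len(shuffled) // 2  # half top, half bottom per category
        for i, word in enumerate(shuffled):
            block1.append({"word": word, "category": category,
                           "position": "top" if i < half else "bottom"})
    flip = {"top": "bottom", "bottom": "top"}
    # Every word reappears in block 2 at the opposite position.
    block2 = [{**t, "position": flip[t["position"]]} for t in block1]
    rng.shuffle(block1)
    rng.shuffle(block2)
    return block1, block2
```

with 40 words per category this yields 20 trials per category at each position in every block , as the design requires .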
correct reaction times were trimmed by removing reaction times that were more than three standard deviations from the participant 's mean for the corresponding response ( 2.04% of the correct rts ) . as block did not interact with any other variable , we collapsed the data across blocks .
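the trimming rule can be sketched as follows . the data layout ( a list of ( response , rt ) pairs for one participant ) is an assumption for illustration ; the paper applies the rule per participant and per response .

```python
from statistics import mean, stdev

# Sketch of the trimming rule described above: correct RTs more than three
# standard deviations from a participant's mean RT for the corresponding
# response are removed.

def trim_rts(trials, n_sd=3.0):
    """Keep (response, rt) pairs within n_sd SDs of that response's mean RT."""
    by_response = {}
    for response, rt in trials:
        by_response.setdefault(response, []).append(rt)
    stats = {r: (mean(v), stdev(v) if len(v) > 1 else 0.0)
             for r, v in by_response.items()}
    return [(r, rt) for r, rt in trials
            if abs(rt - stats[r][0]) <= n_sd * stats[r][1]]
```

note that with very few trials a single outlier can never exceed three sample standard deviations , so the rule only bites when each response has a reasonable number of observations .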
the mean reaction times are presented in table 1 and the error rates are presented in table 2 .
separate anovas ( task × category × instruction × position ) were performed on the rts and error rates .
the factors task ( ocean vs. sky decision ) and instruction ( yes - is - right vs. yes - is - left ) were between - subject factors , while category ( ocean vs. sky word ) and position ( up vs. down ) were within - subject factors .
mean reaction times and standard errors in the two semantic decision tasks as a function of word position , word category , and response instruction .
mean error scores and standard errors in the two semantic decision tasks as a function of word position , word category , and response instruction .
mean reaction times in the two semantic decision tasks as a function of word category and word position .
error bars represent confidence intervals for the word category × position within - subjects interaction ( loftus and masson , 1994 ) .
mean error rates in the two semantic decision tasks as a function of word category and word position .
error bars represent confidence intervals for the word category × position within - subjects interaction ( loftus and masson , 1994 ) .
next , we report all significant results of both the rt and error rate anovas .
we do so in an intermixed fashion , such that for a given effect , both the rt and error rate results are simultaneously reported ( if both were significant ) . in this manner , it can be ascertained whether speed and accuracy traded off against each other ( e.g. , rt and error rate changing in opposite directions ) , or whether there was general performance change ( e.g. , just an rt effect , just an error rate effect , or situations where rt and accuracy went in the same direction ) .
the polarity principle predicted a three - way interaction between task , category , and position . this is because the combination of task and category defines whether yes or no is the correct response to a particular target word . in other words , in the sky task , responses should be easier to sky words at the top of the screen ( yes responses ) , whereas in the ocean task , responses should be easier to ocean words at the top of the screen ( also yes
responses ) . however , there was no three - way interaction between task , category , and position either in terms of rt or error rate , f < 1 for rts and f(1,98 ) = 2.12 , p = 0.15 for error rate .
the polarity principle also predicted that this response and position effect should be more pronounced for right - hand yes responses to words at the top position and for left - hand no responses to words at the bottom position .
this interaction between position , task , category , and instruction was not significant for rts or error rate , both fs < 1 .
thus , there was no evidence that performance was better when yes responses were aligned with the top position . the only significant interaction that might be attributed to the polarity principle was between instruction , task , and category , f(1,98 ) = 4.59 , p = 0.04 , ηp² = 0.05 for rts but f(1,98 ) = 1.07 , p = 0.30 for error rate .
numerically , this interaction showed that in the sky decision task , right - hand responses were faster for sky words ( yes responses ) than for ocean words ( no responses ) , whereas left - hand responses were faster for ocean words than sky words . in the ocean decision task , however , right- and left - hand responses were both faster for ocean words than sky words . collapsing across tasks ,
participants were faster to give a no response with their left than with their right hand , while there was no difference for yes responses , but these simple effects were not statistically significant .
these effects might be consistent with a polarity account in which no and left - hand responses are both coded as polarity and responses were faster when these polarities were aligned .
however , the polarity account also predicts that responses should be faster when yes and right - hand responses were aligned ( both + polarity ) , which was not the case .
moreover this effect did not interact with spatial position and thus can not explain spatial congruency effects .
conceptual congruency based on mental simulation predicted both a task congruency effect as well as a word congruency effect . in support of this account
, there was a significant interaction between task and position , f(1,98 ) = 15.97 , p < 0.001 , ηp² = 0.14 for rts and f(1,98 ) = 3.61 , p = 0.06 , ηp² = 0.04 for error rate . in the ocean decision task , responses were faster to target words at the bottom than at the top , f(1,51 ) = 5.05 , p = 0.03 , ηp² = 0.09 , and in the sky decision task responses were faster and more accurate to target words at the top than at the bottom ,
f(1,49 ) = 11.97 , p = 0.001 , ηp² = 0.196 for rts and f(1,49 ) = 5.94 , p = 0.02 , ηp² = 0.11 for error rate .
thus , there was a highly reliable task congruency effect , although this effect did not interact significantly with word category , all ps > 0.20 .
conceptual congruency also predicted that performance would be better for sky words at the top and ocean words at the bottom in both tasks , because these are typical positions for the entities referred to by these words .
however , there was no interaction at all between word category and position , both fs < 1 .
thus , the current experiment failed to replicate the word congruency effect reported by šetić and domijan ( 2007 ) .
in addition to these results , the anova on rts showed a theoretically uninteresting main effect of category , f(1,98 ) = 4.08 , p = 0.05 , ηp² = 0.04 , and an interaction between category and task , f(1,98 ) = 73.02 , p < 0.001 , ηp² = 0.43 .
overall , responses were faster to ocean than sky words . in the ocean decision task this
was also the case , but in the sky decision task responses were faster to sky than to ocean words , indicating that yes responses were faster than no responses .
this effect provides a manipulation check because it shows that the polarity of the items was reversed by the task .
in two semantic decision tasks ( ocean decisions and sky decisions ) performance was better for words that were presented at a position that was congruent with the task . specifically , performance was better for words at the bottom than at the top of the screen in ocean decision and better for words at the top than at the bottom of the screen in sky decision .
this finding is not explained by the polarity principle , but it was expected by the perceptual simulation account if the systematic nature of the task ( e.g. , a long sequence of sky decisions ) caused participants to direct their attention to the task appropriate position of the screen , which is something they might do to properly simulate words in the task indicated location ( sky or ocean ) .
task performance is facilitated by alignment of polarities between response and stimulus dimensions ( proctor and cho , 2006 ) . in the present study
this principle predicted that the yes response , top position of the target word , and the right - hand response would be aligned because they are all coded as + polarity , and the opposites ( no response , bottom position , left - hand response ) would be aligned because they are all coded as polarity .
however , our results showed no advantage when polarity was aligned : there were neither polarity effects when only considering response and screen position nor when considering response , screen position , and response hand . this failure to find polarity effects
is consistent with previous studies finding results in one condition that might be interpreted in terms of polarity alignment , but then failing to find these results in a complementary condition ( meier and robinson , 2004 ; van dantzig , 2009 ; boot and pecher , 2010 ) . in these studies ,
effects of congruency between spatial position and concept were observed when considering a sequence of events that went from concept to position ( e.g. , from power judgment to visual target identification ) but not in going from position to concept ( e.g. , from location decision to power judgment ) .
for example , van dantzig found that power judgments of words ( e.g. , dictator ) had an effect on subsequent identification of a letter presented at the top or bottom of the screen . in a separate experiment , identification of letters ( e.g. , a p at the top of the screen ) had no effect on subsequent power judgments .
thus , power judgment affected spatial attention , but spatial attention did not affect power judgments . in both experiments , position and power had binary polarities .
if polarity alignment was the cause of the congruency effects in the concept followed by position experiment , there should have also been congruency effects for the position followed by concept experiment .
( 2008 ) noted that the polarity principle is a general principle that should be observed consistently .
one complication with previous findings is that the stimulus dimensions had polarities that were mostly fixed .
in the up - down spatial dimension , up always has + polarity , and in the power dimension , powerful always has + polarity .
therefore , it was impossible to disentangle polarity alignment and meaning congruency . in the present study
, we gave the same stimuli + or polarity by changing the decision task , and we manipulated response side orthogonally .
therefore , the absence of any effect of polarity alignment in the current experiment indicates that , at least for semantic tasks , the polarity principle does not contribute to performance .
the semantic congruency account predicted an interaction between word meaning and spatial position , but we did not find such an interaction .
previous findings ( richardson et al . , 2003 ; šetić and domijan , 2007 ; estes et al . , 2008 )
were explained by congruency between a perceptual simulation of the word 's meaning and the spatial position or orientation of the target .
richardson et al . ( 2003 ) and estes et al . ( 2008 ) found interference when the visual target 's position or orientation was congruent with the meaning of the preceding word .
in contrast , šetić and domijan ( 2007 ) obtained facilitation rather than interference for congruent stimuli .
in the current experiment , we found neither interference nor facilitation between word position and word meaning .
rather , we observed a task - wide advantage of the spatial location of the target , regardless of the specific meaning of the target word .
thus , the diversity of these findings indicates that the nature of spatial congruency effects is not fully clear yet .
some researchers ( estes et al . , 2008 ) have postulated two loci for congruency effects .
first , concepts that are associated to typical locations direct spatial attention toward their typical location .
this might interfere with processing when simulation and perception occur simultaneously and differ in perceptual details ( e.g. , simulation of a cowboy boot and identification of the letter x ) .
in contrast , the simulation may facilitate visual processing when simulation and visual target share perceptual details or when the simulation and visual target are presented sequentially ( bergen et al . , 2007 ) .
therefore , whether spatial congruency results in benefits or deficits and whether this effect is found for individual words or task - wide categories may depend on procedural details . these details may include the timing of spatial target presentation compared to the process of mental simulation , whether the simulated concept is concrete ( e.g. , cowboy boot ) or abstract ( e.g. , power ) ( see bergen et al . , 2007 ) , and what task is performed on the spatial target .
regardless of these procedural details , our results show that it is unlikely that the polarity principle can explain spatial congruency effects .
an explanation for our results that may also explain other findings is that the task directed spatial attention to the location that was congruent with the decision category . when the task was to decide whether a word referred to an entity typically found in the sky , participants directed their attention to the top of the screen , and when the task was to decide whether a word referred to an entity typically found in the ocean , participants directed their attention to the bottom of the screen .
this task - specific spatial attention can be explained by mental simulation of the task - relevant location rather than specific entities .
while performing the sky decision task , participants might mentally simulate looking at the sky without filling in perceptual details such as objects with specific shapes and colors . because such task - induced spatial attention does not involve a simulation with many perceptual details
, it will not interfere with perception of the target word . instead , increased attention at the task - congruent location facilitated processing of the target word , resulting in faster responses .
this attentional explanation is consistent with other findings in which lower level perceptual information facilitated higher level conceptual processing ( van dantzig et al .
, 2008 ) . in van dantzig et al.s study , participants verified concept - property pairs ( e.g. , banana - yellow ) presented as words .
responses were faster for pairs that were preceded by a simple perceptual stimulus from the same modality as the property ( e.g. , a flashing light ) than from a different modality ( e.g. , a burst of white noise ) . in this
study the correspondence between perception and representation was at a more global level , namely the activation of a sensory modality , rather than at the level of specific perceptual details .
in other studies investigating the effect of spatial position of words on conceptual processing ( meier and robinson , 2004 ; schubert , 2005 ; meier et al . ,
2007 ) , the task - relevant dimension was always congruent with the top position ( e.g. , valence , power , divinity ) , which provides an opportunity to assess whether there may have been an attentional effect similar to the current results .
close inspection of the results shows that , at least numerically , responses were faster to words presented at the top than at the bottom of the screen . because these studies did not use a second dimension in which the positive exemplars were congruent with the bottom position , it is impossible to say conclusively whether the advantage for stimuli at the top position was due to task - induced spatial attention or a more general advantage for stimuli at the top .
however , these previous studies are consistent with the present findings and our explanation . in conclusion , our results provide no support for the polarity principle ; they did show , however , that semantic decision tasks direct spatial attention in a more global way .
it may be that people perform a mental simulation of the task - congruent location , which directs spatial attention and facilitates processing of targets in that location .
the authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest .

we report an experiment that compared two explanations for the effect of congruency between a word 's on - screen spatial position and its meaning . on one account ,
congruency is explained by the match between position and a mental simulation of meaning .
alternatively , congruency is explained by the polarity alignment principle . to distinguish between these accounts we presented the same object names ( e.g. , shark , helicopter ) in a sky decision task or an ocean decision task , such that response polarity and typical location were disentangled .
sky decision responses were faster to words at the top of the screen compared to words at the bottom of the screen , but the reverse was found for ocean decision responses .
these results are problematic for the polarity principle , and support the claim that spatial attention is directed by mental simulation of the task - relevant conceptual dimension . |
Singer Chris Brown was arrested on Tuesday following accusations that he assaulted a woman with a deadly weapon.
The alleged victim, who has now been identified as Baylee Curran, claimed that Brown threatened her while holding a gun inside his Los Angeles home.
Newly uncovered information suggests that Curran is reportedly a former beauty queen who was stripped of her crown after nude photos of her surfaced.
Pageant officials say Curran lied to staff and claimed that the photos were not of her, according to TMZ.
News outlets reported that after Curran was crowned Miss California Regional 2016, an anonymous person sent the photos to the pageant director, who told Curran, “This is not what the pageant stands for.”
After being stripped of her title, Curran reportedly told the pageant director, “You’re the director. You can make or bend the rules. I’m keeping the crown and we can move on.”
Curran also allegedly failed to show for community functions and repeatedly chose photo shoots over pageant responsibilities, according to the TMZ report.

Chris Brown Arrested for Felony Assault with a Deadly Weapon
3:30 AM PT -- Brown has been booked and released after posting bail.
Chris Brown has been arrested for assault with a deadly weapon ... a felony.
TMZ broke the story ... Brown allegedly pulled a gun on a guest at his home early Tuesday morning after an argument ... she claims over jewelry.
Baylee Curran filed a police report and cops obtained a search warrant. As we reported, while Chris was holed up in his house, he threw a duffel bag out his window that contained 2 guns and drugs.
Brown will be taken to the police station for processing.

– Chris Brown spent Tuesday in a standoff with the LAPD at his Tarzana home that ended with the singer being arrested, the LA Times reports.
sentinel nodes ( sns ) are the first possible sites of metastasis via lymphatic drainage from a primary tumor .
the absence of metastasis in sns is thought to be correlated with the absence of metastasis in downstream lymph nodes , allowing unnecessary prophylactic lymphadenectomy to be avoided .
this concept was applied to melanoma and breast cancer , and studies showed that sn biopsy was a safe and accurate method to predict metastatic lymph nodes ( 1,2 ) .
subsequently , the sn concepts were extended to other solid tumors including gastric cancer .
to date , a number of feasibility studies for sn concepts in gastric cancer have been conducted ( 3,4 ) . because the proportion of early gastric cancer among all gastric cancer has been increasing in east asia and the incidence of lymph node metastasis was reported to be 8.0%–20.0% in these early gastric cancer patients , sn navigation surgery has been noted as a new minimally invasive approach ( 5 - 8 ) .
this surgery not only reduces the extent of lymph node dissection but also enables stomach - preserving surgery and improves the quality of life in patients with negative sn metastasis .
most previous studies for sn in gastric cancer showed a high detection rate and acceptable accuracy of sn mapping ( 3,4,9 ) .
however , there is still debate about sn concepts regarding detailed detection techniques and oncological safety , and sn navigation surgery is not yet clinically used .
this review aimed to evaluate the current status of sn navigation surgery for gastric cancer and discuss several emerging issues ( 10 ) .
sn biopsy has been performed using various methods in more than 50 institutions for more than a decade .
each study proved the feasibility of sn biopsy with a high detection rate , whereas the studies differed in their indications ( only early gastric cancer or including advanced gastric cancer ) , approach ( open vs. laparoscopic ) , and pathological evaluation methods [ including immunohistochemistry or real - time polymerase chain reaction ( rt - pcr ) ] ( 3 ) . a meta - analysis by ryu et al . showed significant inter - study heterogeneity ( p<0.001 ) among the studies and suggested that sn biopsy is not clinically applicable .
however , another meta - analysis by wang et al . reached a positive conclusion for sn biopsy with a similar detection rate and sensitivity ( 4 ) .
the authors commented that sn biopsy was considered to be technically feasible and acceptable , and they also evaluated several factors to improve the sensitivity or detection rate .
two important multicenter phase ii clinical trials were performed in japan and the results were recently published ( 11,12 ) .
these two studies used different methods of sn biopsy and consequently obtained different results . in the study by kitagawa et al . ( 11 ) , sn mapping was performed using a dual tracer ( tc - tin colloid and isosulfan blue ) endoscopic submucosal injection technique , and acceptable results with a false - negative rate of 7% were obtained .
however , the open subserosal injection technique using a single tracer [ indocyanine green ( icg ) ] was performed in the jcog0302 trial , and an unacceptably high false - negative rate of 46.4% was revealed .
this jcog0302 trial showed the limitations of utilizing a single tracer and intraoperative histological examination using only one plane ( 13 ) .
in korea , the long - term outcomes of a phase ii clinical trial on laparoscopic sn navigation surgery were recently reported ( 14 ) . in this study ,
the false - negative rate of an intraoperative pathological examination was 15.4% ( 2/13 ) compared with permanent pathology , and patients who underwent sn navigation surgery had a better quality of life than those who underwent conventional laparoscopic distal gastrectomy .
the 3-year relapse - free and overall survival rates for all patients were 96% and 98% , respectively .
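the detection and false - negative rates quoted throughout this review are simple proportions ; a minimal sketch , using the 2/13 false - negative count above and the 40/40 detection count from the mayanagi et al . study discussed later :

```python
def rate(numerator: int, denominator: int) -> float:
    """Express a proportion as a percentage rounded to one decimal place."""
    return round(100.0 * numerator / denominator, 1)

# false-negative rate: node-positive patients missed by the intraoperative
# frozen-section examination, out of all node-positive patients (2/13 above).
print(rate(2, 13))   # 15.4

# detection rate: patients with at least one sentinel node identified,
# out of all patients in whom mapping was attempted (40/40 in mayanagi et al.).
print(rate(40, 40))  # 100.0
```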
although the results of multicenter clinical trials were reported , the clinical application of sn navigation surgery as a routine practice remains controversial .
further steps should be performed to provide sufficient evidence of oncological safety compared with conventional surgery . for this purpose ,
the sentinel node oriented tailored approach ( senorita ) trial was launched in january , 2013 .
the senorita trial is an investigator - initiated , open - label , parallel - assigned , multicenter randomized controlled phase iii trial ( 15 ) .
this study aims to prove the non - inferiority of laparoscopic sentinel basin dissection with stomach - preserving surgery compared with the standard laparoscopic gastrectomy in terms of long - term recurrence and survival .
eligibility criteria included patients with a single early gastric cancer of less than 3 cm and a clinical stage of t1n0m0 according to the american joint committee on cancer ( ajcc ) 7th edition ( figure 1 ) ( 16 ) .
moreover , the lesion should be more than 2 cm apart from the esophagogastric junction or pylorus . in the laparoscopic sentinel basin dissection group , the endoscopic submucosal injection technique with dual tracer ( tc - human serum albumin and icg ) was performed , and then stomach - preserving surgery was performed when the sns were negative following the frozen section evaluation ( 17 ) .
stomach - preserving surgery includes endoscopic submucosal dissection , endoscopic full - thickness resection ( eftr ) , laparoscopic wedge resection , and laparoscopic segmental resection ( 8 ) . a total of 7 korean institutions participated in this study after an initial quality control study , and the planned sample size , calculated with a 5% non - inferiority margin , was 290 patients in each group ( 580 patients in total ) ( 18 ) .
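the per - group sample size above can be approximated with the standard formula for a non - inferiority comparison of two proportions . the sketch below is illustrative only : the 5% margin is from the trial , while the assumed 95% event - free rate , one - sided alpha of 0.05 , and 80% power are hypothetical assumptions , and the planned 290 per group presumably also includes an allowance for dropout .

```python
from math import ceil
from statistics import NormalDist

def noninferiority_n(p: float, margin: float, alpha: float, power: float) -> int:
    """Per-group sample size for a non-inferiority test of two proportions,
    assuming the same true event-free rate p in both arms."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)   # one-sided significance level
    z_beta = z.inv_cdf(power)
    return ceil(2 * p * (1 - p) * (z_alpha + z_beta) ** 2 / margin ** 2)

# hypothetical inputs: 95% assumed 3-year disease-free survival in both arms,
# 5% non-inferiority margin (as stated for the senorita trial),
# one-sided alpha 0.05, 80% power.
print(noninferiority_n(p=0.95, margin=0.05, alpha=0.05, power=0.80))  # 235
```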
the enrollment was completed in december , 2016 , and regular follow - up and monitoring is currently being conducted .
sbd , sentinel basin dissection ; lnd , lymph node dissection ; dfs , disease - free survival ; rfs , recurrence - free survival ; os , overall survival .
besides the conventional dual tracer , image - guided sn mapping techniques have already been introduced ( 19 - 23 ) .
the infrared ray system has advantages in terms of the highly sensitive detection of not only lymph nodes but also the lymphatic vessels , in addition to safety and convenience , and could be another option in sn navigation surgery ( 24 ) .
initially , nimura et al . reported that the combination of icg staining with an infrared ray system enhanced the sensitivity of sn detection ( 100% vs .
recently , a multicenter prospective study for icg plus infrared ray was performed ( 25 ) .
although the sample size was too small to obtain statistical significance ( n=44 ) , this method highlighted the new possibility of sn biopsy with a detection rate of 100% and a false - negative rate of 0% .
the icg fluorescence imaging method was recently developed and is performed using an infrared camera system with a specific light source and detector ; the light source is a light - emitting diode that emits light at a wavelength of 760 nm , while the detector is a charge - coupled device ( ccd ) camera with a cut filter that filters light with a wavelength below 820 nm ( 22 ) .
so far , the outcomes of icg fluorescence imaging were comparable with those of conventional radio - guided methods [ detection rate of more than 94% ( 94.7%–97.3% ) and a false - negative rate of less than 25% ( 14.3%–25% ) ] ( 22,23 ) .
however , this new method has the great advantage of the clear visualization of sn and was recently used in not only gastric cancer but also other solid tumors as a promising technology ( 26 ) .
future studies are necessary to better understand the spreading speed of icg particles and detection timing ( 27 ) .
during sn navigation surgery , the approach for the primary tumor is an issue as important as the sn mapping method .
theoretically , stomach - preserving gastrectomy , including eftr , wedge resection , or segmental gastrectomy , can be performed after confirmation of negative sn metastasis ( 8) .
hur et al . reported 13 cases of laparoscopy - assisted eftr with sentinel basin dissection , and 9 patients successfully underwent the procedure without conversion ( 28 ) .
however , during this eftr or wedge resection , opening of the gastric wall is inevitable because the surgeon must check the tumor location and margin . as such , these procedures could be criticized for the risk of intra - abdominal infection due to leakage of gastric fluid or tumor implantation ( 29 ) .
non - exposed endoscopic wall - inversion surgery ( news ) was developed to solve the problem of trans - luminal communication in eftr ( 30 - 32 ) . in the news procedure , markings are made on both the mucosal and the serosal sides and then laparoscopic seromuscular dissection and suture are conducted .
finally , the lesion is dissected using a conventional endoscopic submucosal dissection ( esd ) technique ( 31 ) .
however , the laparoscopic circumferential seromuscular incisions and suture along the incision sites are considered as a difficult and time - consuming procedure .
the mean operation time with the news technique was 153 min in porcine models , whereas it was more than 3 h in patients with small subepithelial tumors ( 31 ) .
moreover , one in six patients experienced conversion to eftr with subsequent laparoscopic suture closure because of poor recognition of the tumor margin . therefore , further efforts to overcome the technical problems of news are required .
recently , non - exposure endolaparoscopic full - thickness resection with a simple suturing technique was introduced in a porcine model ( figure 2 ) ( 33 ) . in this procedure ,
laparoscopic seromuscular suturing is done without seromuscular dissection after both mucosal and serosal markings ( table 1 ) .
then , eftr of the inverted stomach wall is performed with a conventional needle knife , and finally endoscopic mucosal suturing is performed with endoloops and clips .
the operation time was shorter in this procedure compared with the news procedure , and thus seems to be more practical .
now , a prospective feasibility study for this procedure is ongoing in patients with subepithelial tumors , and the next step is expected to expand this method to early gastric cancer .
this procedure could be a promising non - exposure approach for primary tumors in sn navigation surgery .
( a ) endoscopic circumferential incision of the mucosal layer ; ( b ) laparoscopic seromuscular suturing which results in inversion of the stomach wall ; ( c ) endoscopic full - thickness resection ; ( d ) endoscopic mucosal suturing by placement of endoloops and clips .
additional surgery is recommended for patients who underwent non - curative endoscopic resection for gastric cancer ( 34 ) .
previous studies reported 3%18% of lymph node metastases in patients with tumors out of indication , and standard gastrectomy with d1 + lymph node dissection is recommended for these patients ( 35 - 40 ) .
however , the majority of these patients have no lymph node metastasis and additional surgery is considered as overtreatment .
therefore , sn navigation surgery could have a critical role in reducing unnecessary treatment .
to date , few studies have evaluated the role of sn mapping after non - curative endoscopic resection .
arigami et al . examined sn mapping using a single tracer ( tc - tin colloid ) in patients who underwent non - curative endoscopic resection ( 41 ) .
a total of 16 patients were included in this study , and the detection rate and false - negative rate were 100% and 0% , respectively . in a larger study by mayanagi et al . , forty patients underwent sentinel mapping using a dual tracer ( tc - tin colloid plus blue dye ) , and similar results were demonstrated : a 100% detection rate ( 40/40 ) and a 0% false - negative rate .
these studies suggested that the sn is not significantly affected by endoscopic resection and that sentinel concepts could also be applied to lesions following endoscopic resection ( 42 ) .
sentinel concepts may be beneficial to patients who underwent non - curative endoscopic resection , because additional gastrectomy can be omitted if sentinel lymph nodes are negative .
therefore , further prospective studies and clinical trials are essential to confirm the feasibility and safety of sn navigation surgery after non - curative endoscopic resection .
although many feasibility studies and some multicenter phase ii clinical trials have been reported , there are still unclear issues regarding sn navigation surgery .
the phase iii randomized controlled trial ( senorita trial ) is ongoing , and long - term outcomes can help to elucidate these issues .
recently , image - guided technologies , such as infrared ray and fluorescence , have emerged as promising sn mapping methods , and further studies are required prior to clinical application .
the non - exposure endolaparoscopic full - thickness technique can be an alternative that avoids peritoneal contamination and tumor seeding .
moreover , attempts are being made to apply sentinel concepts to lesions following non - curative endoscopic resection . in these cases , sn navigation surgery can lead to organ - preserving surgery and play a key role in improving the quality of life of patients with early gastric cancer .

although a number of feasibility studies for sentinel node ( sn ) concepts in gastric cancer have been conducted since 2000 , there remains a debate regarding detailed detection techniques and oncological safety .
two important multicenter phase ii clinical trials were performed in japan that used different methods and reached different conclusions ; one confirmed acceptable results with a false - negative rate of 7% , and the other showed an unacceptably high false - negative rate of 46.4% .
the sentinel node oriented tailored approach ( senorita ) trial is a multicenter randomized controlled phase iii trial being performed in korea .
patient enrollment is now complete and the long - term results are currently awaited .
recently , an image - guided sn mapping technique using infrared ray / fluorescence was introduced .
this method might be a promising technology because it allows the clear visualization of sns .
with regard to the primary tumor , the non - exposed endoscopic wall - inversion surgery technique and non - exposure endolaparoscopic full - thickness resection with simple suturing technique have been reported .
these methods prevent abdominal infection and tumor seeding and can be good alternatives to conventional laparoscopic gastric wedge resection . in terms of indications , sn navigation surgery can be extended to patients who underwent non - curative endoscopic resection .
although a few studies have been performed on these patients , sentinel concepts may be beneficial to patients as they omit the need for additional gastrectomy .
sn navigation surgery can lead to actual organ - preserving surgery and plays a key role in improving the quality of life of patients with early gastric cancer in the future . |
a large number of studies have clearly shown the existence of a strong inverse correlation between plasma high - density lipoprotein cholesterol ( hdl - c ) concentrations and the incidence of coronary heart disease ( chd ) , but the significance of this association has recently been questioned .
intervention clinical trials carried out with agents efficient in raising hdl - c levels , including niacin and cholesteryl ester transfer protein ( cetp ) inhibitors , have failed to show a reduction in cardiovascular events . in addition , mendelian randomization studies have shown that increased hdl - c levels caused by common variants in hdl - related genes are not necessarily associated with reduced cardiovascular risk .
one possible explanation for this discrepancy is that the plasma hdl - c concentration does not reflect the very complex hdl system , involving different hdl particles and a number of receptors , transporters , enzyme , and transfer proteins .
moreover , cholesterol is not the active component of hdl and there is convincing evidence that at least some of the atheroprotective functions of hdl relate to specific hdl components or subclasses , which concentration in plasma may be totally unrelated to the hdl - c level .
hdl are a highly heterogeneous lipoprotein family composed by several subclasses with different density , shape , and size .
the density of the hdl particles is inversely related to their size , reflecting the relative contents of low density non - polar core lipid , and high density surface protein .
most part of plasma hdl has a globular shape , the central core is composed by non - polar lipids ( triglycerides and cholesteryl esters ) surrounded by a monolayer of polar lipids ( phospholipids and unesterified cholesterol ) and apolipoproteins .
a minor fraction of plasma hdl has a non - spherical structure and consists of a discoidal bilayer of polar lipids , in which the non - polar core is lacking ; apolipoproteins run from side to side of the disk , with polar residues facing the aqueous phase and non - polar residues facing the acyl chains of the lipid bilayer .
the protein component of hdl is formed mainly by apolipoprotein a - i ( apoa - i ) , accounting for about 70% , and apolipoprotein a - ii ( apoa - ii ) , accounting for about 20% .
two major particle subclasses have been identified on the basis of major apolipoprotein composition : particles containing only apoa - i ( lpa - i ) , and particles containing both apoa - i and apoa - ii ( lpa - i : a - ii ) .
recent shotgun proteomic analysis showed that hdl contain 48 or more proteins , among these apoa - iv , apocs , apoe , lecithin : cholesterol acyltransferase ( lcat ) , cetp , phospholipid transfer protein ( pltp ) , paraoxonase ( pon ) , and platelet - activating factor acetylhydrolase ( paf - ah ) circulate in plasma bound to hdl .
most of the proteins carried by hdl are not apolipoproteins , and represent very minor components of these particles .
it is also possible to classify hdl on the basis of density ( hdl2 , with density of 1.063 to 1.120 g / ml , and hdl3 , with density of 1.120 to 1.210 g / ml ) . according to charge , hdl can be divided into α- and pre-β-migrating particles on agarose gel , and combining charge and size these two subclasses can be divided into 12 distinct apoa - i - containing particles , referred to as pre-β ( pre-β1 and pre-β2 ) , α ( α1 , α2 , and α3 ) and pre-α ( pre-α1 , pre-α2 , and pre-α3 ) on the basis of mobility that is slower or faster than albumin , respectively , and decreasing size . due to its highly dynamic nature , apoa - i is involved in major pathways of hdl metabolism .
apoa - i and apoa - ii are synthesized mainly by the liver and , to a lesser extent , by the small intestine , and are secreted as components of triglyceride - rich lipoproteins . in circulation , pltp promotes the transfer of surface components ( phospholipids , cholesterol , and apolipoproteins ) from triglyceride - rich lipoproteins to hdl .
the regulatory role of pltp is achieved through two main functions , phospholipid transfer activity and the capability to modulate hdl size and composition in a process called hdl conversion .
hepatocytes are able to secrete apoa - i in lipid - free or lipid - poor and lipidated forms .
apoa - i is secreted as pro - apoa - i and converted to a mature form by a metalloprotease in plasma .
there are three potential sources of lipid - poor apoa - i in plasma : it may be released as lipid - poor protein after its synthesis in the liver and intestine , it may be released from triglyceride - rich lipoproteins that are undergoing lipolysis by lipoprotein lipase , and it may be generated in the circulation during the remodeling of mature , spherical hdl particles .
lipid - free apoa - i acquires phospholipids and cholesterol through interaction with the atp - binding cassette transporter a1 ( abca1 ) to form pre-β-hdl , a pathway dependent on abca1 expression . once in the circulation , pre-β-hdl are the preferential substrate of lcat ( fig . 1 ) , which converts lecithin and cholesterol into lysolecithin and cholesteryl esters , using apoa - i as cofactor .
the cholesterol esters generated by lcat are more hydrophobic than free cholesterol and thus migrate into the hydrophobic core of the lipoprotein , with the resulting conversion of small , discoidal pre-β-hdl into mature , spherical , α-migrating hdl ( α-hdl ) .
lcat thus plays a central role in intravascular hdl metabolism and in the determination of plasma hdl level .
esterification of cholesterol in plasma by lcat is also necessary for cholesterol uptake from the liver , either directly through the scavenger receptor class b member 1 ( sr - bi ) or indirectly through cetp .
the α-hdl produced by lcat ( hdl3 ) interact in the plasma with cetp , which exchanges cholesteryl esters for triglycerides between hdl and triglyceride - rich lipoproteins , generating large cholesteryl ester - poor and triglyceride - rich hdl particles ( hdl2 ) .
mature , large α-hdl particles can be converted back to pre-β-hdl through the action of pltp and the endothelial and hepatic lipases , which hydrolyze triglycerides and phospholipids on hdl ( fig . 1 ) . the plasma half - life of pre-β-hdl is short , and they are rapidly cleared through the kidney , while mature α-hdl have a slower turnover .
hdl components are catabolized in different ways ; the major sites of catabolism of the protein components are liver and kidney .
the kidney filters lipid - free apolipoproteins according to their hydrophobicity ; apoa - i and apoa - ii can be reabsorbed through cubilin receptors in the kidney proximal tubules .
when reabsorption is impaired , hydrophilic apolipoproteins ( apoa - i and apoa - iv , but no apoa - ii ) can be excreted into urine .
the glomerular filtration barrier prevents access of mature hdl particles to the proximal tubules ; however , cubilin may bind filtered lipid - poor hdl .
entire hdl particles can be removed by hdl holoparticle receptors . in the liver , holo - hdl particles accumulated in endosomal compartments can be transferred to lysosomes for degradation or , in a small proportion , can be resecreted into the circulation .
one of the most important function of hdl is to promote the removal of cholesterol from peripheral cells , including macrophages within the arterial wall , and shuttle it to the liver for excretion through the bile and feces in a process called reverse cholesterol transport ( rct ) .
it results in a net mass transport of cholesterol from the arterial wall into the bile .
this pathway is described as an anti - atherogenic process by preventing arterial cholesterol accumulation , plaque destabilization , and development of acute cardiovascular events .
cell cholesterol efflux is the first and limiting step in rct and consists in the exchange of unesterified cholesterol between cells and extracellular acceptors .
this exchange can occur by several processes : via aqueous diffusion , which occurs according to the direction of cholesterol gradient , or through three main distinct and protein - mediated pathways .
lipid - free / lipid - poor apolipoproteins , mainly apoa - i , represent the principal cholesterol acceptors via abca1 .
all plasma hdl subclasses , including mature α-hdl particles and discoidal pre-β-hdl , are efficient cholesterol acceptors via the abcg1 pathway , while sr - bi promotes cell cholesterol efflux only to mature , large α-hdl .
the cholesterol accumulated in hdl is esterified in plasma by lcat with the resulting formation of cholesteryl esters .
then , hydrophobic cholesteryl esters move to the core while unesterified cholesterol is removed from the surface of hdl , leading to the progressive enlargement of these particles . for a long time , lcat has been considered necessary for efficient rct by maintaining the unesterified cholesterol gradient from cells to hdl , but recent data suggest that macrophage cholesterol efflux and rct can occur even in the absence of functional lcat .
large amount of the cholesteryl esters formed by the lcat are exchanged with triglycerides through cetp - mediated process into apob containing lipoproteins that are finally catabolized by the liver .
alternatively , hdl - cholesteryl esters are taken - up by the liver through sr - bi .
atheroprotection mediated by hdl is not only through their major role in rct , but also through other relevant functions ; one of the most important and well studied is the ability of hdl to maintain endothelial cell homeostasis and integrity .
hdl have potent antioxidant properties , mediated by molecules carried by hdl ( pon-1 , paf - ah , and lcat ) or by apoa - i and apoa - ii , as well as anti - inflammatory , antithrombotic , cytoprotective , vasodilatory , anti - infectious activities and the capacity to enhance insulin secretion .
this wide spectrum of biological activities likely reflects the heterogeneity of hdl particles ; however , the hdl - protective activities can be lost in some pathological conditions and hdl can even acquire proatherogenic properties .
hdl are a highly heterogeneous lipoprotein family composed by several subclasses with different density , shape , and size .
the density of the hdl particles is inversely related to their size , reflecting the relative contents of low density non - polar core lipid , and high density surface protein .
most part of plasma hdl has a globular shape , the central core is composed by non - polar lipids ( triglycerides and cholesteryl esters ) surrounded by a monolayer of polar lipids ( phospholipids and unesterified cholesterol ) and apolipoproteins .
a minor fraction of plasma hdl has a non - spherical structure and they consist in discoidal bilayer of polar lipids , in which non - polar core is lacking ; apolipoproteins run from side to side of the disk , with polar residues facing the aqueous phase and non - polar residues facing the acyl chains of the lipid bilayer .
the protein component of hdl is formed mainly by apolipoprotein a - i ( apoa - i ) , for the 70% , and apolipoprotein a - ii ( apoa - ii ) , for the 20% .
two major particle subclasses have been identified on the basis of major apolipoprotein composition : particles containing only apoa - i ( lpa - i ) , and particles containing both apoa - i and apoa - ii ( lpa - i : a - ii ) .
recent shotgun proteomic analysis showed that hdl contain 48 or more proteins , among these apoa - iv , apocs , apoe , lecithin : cholesterol acyltransferase ( lcat ) , cetp , phospholipid transfer protein ( pltp ) , paraoxonase ( pon ) , and platelet - activating factor acetylhydrolase ( paf - ah ) circulate in plasma bound to hdl .
most of the proteins carried by hdl are not apolipoproteins , and represent very minor components of these particles .
it is also possible to classify hdl on the basis of density ( hdl2 , with density of 1.063 to 1.120 g / ml , and hdl3 , with density of 1.120 to 1.210 g / ml ) .
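as a toy illustration of the density - based classification just described , the cut - points can be encoded directly ; this is only a didactic sketch ( the function name and the tie - break at the shared 1.120 g / ml boundary are choices made here , not from the source ) :

```python
def classify_hdl_by_density(density_g_ml: float) -> str:
    """Assign an HDL density subclass using the cut-points quoted above.

    HDL2: 1.063-1.120 g/ml; HDL3: 1.120-1.210 g/ml. The shared boundary
    value 1.120 g/ml is assigned to HDL3 here (an arbitrary tie-break).
    """
    if 1.063 <= density_g_ml < 1.120:
        return "HDL2"
    if 1.120 <= density_g_ml <= 1.210:
        return "HDL3"
    return "outside HDL density range"


print(classify_hdl_by_density(1.10))  # HDL2
print(classify_hdl_by_density(1.15))  # HDL3
```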
according to charge , hdl can be divided into α- and pre-β-migrating particles on agarose gel ; combining charge and size , these two subclasses can be divided into 12 distinct apoa - i - containing particles , referred to as pre-β ( pre-β1 and pre-β2 ) , α ( α1 , α2 , and α3 ) and pre-α ( pre-α1 , pre-α2 , and pre-α3 ) on the basis of mobility that is slower or faster than albumin , respectively , and decreasing size .
due to its highly dynamic nature , apoa - i is involved in major pathways of hdl metabolism .
apoa - i and apoa - ii are synthesized mainly by the liver and , to a lesser extent , by the small intestine , and are secreted as components of triglyceride - rich lipoproteins . in circulation , pltp promotes the transfer of surface components ( phospholipids , cholesterol , and apolipoproteins ) from triglyceride - rich lipoproteins to hdl .
the regulatory role of pltp is achieved through two main functions , phospholipid transfer activity and the capability to modulate hdl size and composition in a process called hdl conversion .
hepatocytes are able to secrete apoa - i in lipid - free or lipid - poor and lipidated forms .
apoa - i is secreted as pro - apoa - i and converted to a mature form by a metalloprotease in plasma .
there are three potential sources of lipid - poor apoa - i in plasma : it may be released as lipid - poor protein after its synthesis in the liver and intestine , it may be released from triglyceride - rich lipoproteins undergoing lipolysis by lipoprotein lipase , and it may be generated in the circulation during the remodeling of mature , spherical hdl particles . lipid - free apoa - i acquires phospholipids and cholesterol through interaction with the atp binding cassette transporter a1 ( abca1 ) to form pre-β-hdl , a pathway dependent on abca1 expression .
pre-β-hdl are the substrate of lcat ( fig . 1 ) , which converts lecithin and cholesterol into lysolecithin and cholesteryl esters , using apoa - i as cofactor .
the cholesteryl esters generated by lcat are more hydrophobic than free cholesterol and thus migrate into the hydrophobic core of the lipoprotein , with the resulting conversion of small , discoidal pre-β-hdl into mature , spherical , α-migrating hdl ( α-hdl ) .
lcat thus plays a central role in intravascular hdl metabolism and in the determination of plasma hdl level .
esterification of cholesterol in plasma by lcat is also necessary for cholesterol uptake by the liver , either directly through the scavenger receptor class b member 1 ( sr - bi ) or indirectly through cetp .
the α-hdl produced by lcat ( hdl3 ) interact in plasma with cetp , which exchanges cholesteryl esters for triglycerides between hdl and triglyceride - rich lipoproteins , generating large cholesteryl ester - poor and triglyceride - rich hdl particles ( hdl2 ) .
mature , large α-hdl particles can be converted back to pre-β-hdl through the action of pltp and the endothelial and hepatic lipases , which hydrolyze triglycerides and phospholipids on hdl ( fig . 1 ) .
the plasma half - life of pre-β-hdl is short , as they are rapidly cleared through the kidney , while mature α-hdl have a slower turnover .
hdl components are catabolized in different ways ; the major sites of catabolism of the protein components are liver and kidney .
the kidney filters lipid - free apolipoproteins according to their hydrophobicity ; apoa - i and apoa - ii can be reabsorbed through cubilin receptors in the kidney proximal tubules .
when reabsorption is impaired , hydrophilic apolipoproteins ( apoa - i and apoa - iv , but not apoa - ii ) can be excreted into urine .
the glomerular filtration barrier prevents access of mature hdl particles to the proximal tubules ; however , cubilin may bind filtered lipid - poor hdl .
hdl particles can also be removed entirely by hdl holoparticle receptors . in the liver , holo - hdl particles accumulated in endosomal compartments can be transferred to lysosomes for degradation or , in a small proportion , resecreted into the circulation .
one of the most important functions of hdl is to promote the removal of cholesterol from peripheral cells , including macrophages within the arterial wall , and to shuttle it to the liver for excretion through the bile and feces , in a process called reverse cholesterol transport ( rct ) .
it results in a net mass transport of cholesterol from the arterial wall into the bile .
this pathway is described as an anti - atherogenic process by preventing arterial cholesterol accumulation , plaque destabilization , and development of acute cardiovascular events .
cell cholesterol efflux is the first and rate - limiting step in rct and consists of the exchange of unesterified cholesterol between cells and extracellular acceptors .
this exchange can occur via aqueous diffusion , which follows the direction of the cholesterol gradient , or through three distinct protein - mediated pathways .
lipid - free / lipid - poor apolipoproteins , mainly apoa - i , represent the principal cholesterol acceptors via abca1 .
all plasma hdl subclasses , including mature α-hdl particles and discoidal pre-β-hdl , are efficient cholesterol acceptors via the abcg1 pathway , while sr - bi promotes cell cholesterol efflux only to mature , large α-hdl .
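the pairing of efflux pathways and acceptors described above can be summarized in a small lookup table ; this is purely an illustrative sketch ( the names and data layout are a didactic device , not an established dataset ) :

```python
# Didactic lookup of the three protein-mediated cholesterol-efflux pathways
# and the HDL-derived acceptors paired with each, as described in the text.
EFFLUX_PATHWAYS = {
    "ABCA1": ("lipid-free/lipid-poor apoA-I",),
    "ABCG1": ("mature alpha-HDL", "discoidal pre-beta-HDL"),
    "SR-BI": ("mature, large alpha-HDL",),
}


def acceptors_for(transporter: str) -> tuple:
    """Return the cholesterol acceptors served by a given transporter
    (empty tuple for transporters outside this toy table)."""
    return EFFLUX_PATHWAYS.get(transporter, ())


print(acceptors_for("ABCG1"))
```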
the cholesterol accumulated in hdl is esterified in plasma by lcat , with the resulting formation of cholesteryl esters . the hydrophobic cholesteryl esters then move to the core while unesterified cholesterol is removed from the surface of hdl , leading to the progressive enlargement of these particles . for a long time , lcat has been considered necessary for efficient rct , by maintaining the unesterified cholesterol gradient from cells to hdl , but recent data suggest that even in the absence of functional lcat , macrophage cholesterol efflux and rct can occur .
a large amount of the cholesteryl esters formed by lcat is exchanged for triglycerides , through a cetp - mediated process , into apob - containing lipoproteins that are finally catabolized by the liver .
alternatively , hdl - cholesteryl esters are taken up by the liver through sr - bi .
atheroprotection mediated by hdl is not only through their major role in rct , but also through other relevant functions ; one of the most important and well studied is the ability of hdl to maintain endothelial cell homeostasis and integrity .
hdl have potent antioxidant properties , mediated by molecules carried by hdl ( pon-1 , paf - ah , and lcat ) or by apoa - i and apoa - ii , as well as anti - inflammatory , antithrombotic , cytoprotective , vasodilatory , anti - infectious activities and the capacity to enhance insulin secretion .
this wide spectrum of biological activities likely reflects the heterogeneity of hdl particles ; however , the hdl - protective activities can be lost in some pathological conditions and hdl can even acquire proatherogenic properties .
lcat is principally synthesized in the liver and , in small amounts , in other tissues such as brain and testes ; it circulates in the plasma compartment at a concentration of 5 mg / l , mainly bound to hdl but also to ldl .
lcat converts phosphatidylcholine and cholesterol into cholesteryl ester and lysophosphatidylcholine in plasma and other biological fluids . in the rct pathway , lcat plays a key role and is thought to facilitate this process by driving the formation of large , mature hdl .
furthermore , the majority of cholesteryl esters formed by lcat are reported to be removed by the liver . without lcat , plasma levels of hdl - c , apoa - i , and apoa - ii are drastically reduced , owing to the lack of formation of mature , spherical hdl and to the rapid catabolism of discoidal hdl by the kidney .
on the basis of this evidence , variations in lcat activity seem naturally implicated in the prevention or development of atherosclerosis . to elucidate the role of lcat in atherosclerosis , a large number of studies have been performed in both animal models and humans ( table 1 ) .
studies carried out in animal models led to controversial results , often dependent on the species utilized .
studies performed in mice in which lcat was overexpressed or downregulated suggest that lcat activity is not associated with atheroprotection and that lack of the enzyme is not associated with increased atherosclerosis , even if plasma hdl - c levels are very low .
the increased atherosclerosis in mice with lcat overexpression is probably due to the accumulation in plasma of dysfunctional large apoe - rich hdl , which were shown to be defective in the delivery of cholesterol to the liver through sr - bi .
when the lcat gene was overexpressed in rabbits , opposite results were obtained : aortic lesions were reduced after atherogenic diet , even if large hdl particles containing apoe were detected .
the contradictory results obtained in studies on animal models do not clarify the role of lcat in atherosclerosis , calling for further investigation .
the role of lcat in atherosclerosis was also explored in humans , both in the general population and in subjects at high cardiovascular risk . as in the animal studies , the results were contradictory .
the epic - norfolk was the first prospective study investigating the correlation between lcat plasma levels and atherosclerosis in the general population , enrolling more than 2,700 subjects .
one - third of the enrolled subjects developed coronary artery disease ( cad ) , but no association between plasma lcat levels and the risk of future cad was observed . when individuals were divided according to gender , increased lcat levels correlated with a lower risk of cad only in men , while in women the opposite was observed .
reduced lcat concentration / activity in the absence of cad was also described in the copenhagen city heart study , which enrolled more than 10,000 participants , and in the copenhagen general population study , which involved more than 50,000 subjects .
the s208t variant found in the coding region of the lcat gene was associated with reduced hdl - c and apoa - i levels , but not with an increased risk of myocardial infarction , ischemic heart disease , or ischemic cerebrovascular disease . in agreement with the results obtained in the general population , an observational study carried out in 540 subjects at high cardiovascular risk showed that low plasma lcat levels are not associated with higher carotid intima - media thickness ( imt ) , a marker of preclinical atherosclerosis .
consistent with these results , various studies demonstrated that an increased lcat concentration is associated with cad . increased lcat activity was associated with increased imt in 74 subjects with metabolic syndrome , as well as in the control subjects of the study . in another , recent study from the same group , the relationship between lcat activity , triglyceride metabolism , and ldl particle size was analyzed in 550 patients at high cardiovascular risk .
increased lcat activity was associated with the formation of small ldl particles , which are more atherogenic than large particles , although no parameters of subclinical atherosclerosis were analyzed .
on the other side , some studies affirm the opposite : decreased lcat activity is associated with cad .
early studies supporting this evidence were carried out in 1973 in subjects at high cardiovascular risk .
a few years later , in 100 subjects divided according to the degree of atherosclerotic disease , lcat activity was found to be positively correlated with the severity of coronary atherosclerosis . lower levels of lcat activity were also observed in patients with ischemic heart disease , and in a study of patients with acute myocardial infarction .
while epidemiological studies have repeatedly shown a strong inverse correlation between plasma hdl - c concentrations and the incidence of chd , the significance of this association for chd development has recently been questioned , and clinical trials with various drugs able to increase hdl - c levels did not show the expected benefits .
hdl metabolism is regulated by a large number of factors that modify plasma levels of circulating hdl . plasma hdl - c levels are remarkably susceptible to variations in these factors , which also affect hdl shape , size , density , and lipid and apolipoprotein composition , and , as a consequence , hdl function .
investigation of the factors involved in hdl metabolism thus represents a good way to understand the relationship between hdl and chd , and will likely translate into the development of innovative therapeutic approaches to chd prevention and treatment that specifically affect hdl function independently of plasma hdl - c levels .
translation from the genetic information contained in mrna to the amino acid sequence of a protein is performed on the ribosome , a large ribonucleoprotein complex composed of three rna molecules and over 50 proteins .
the ribosome is a molecular machine that catalyzes the synthesis of a polypeptide from its substrate , aminoacyl - trna .
ribosomes that translate a problematic mrna , such as one lacking a stop codon , can stall at its 3′ end and produce an incomplete , potentially deleterious protein .
trans - translation is a highly sophisticated system in bacteria that recycles ribosomes stalled on defective mrnas and adds a short tag - peptide to the c - terminus of the nascent polypeptide as a degradation signal [ 1–4 ] ( figure 1 ) .
the tagged polypeptide from truncated mrna is thus preferentially degraded by cellular proteases including clpxp , clpap , lon , ftsh , and tsp [ 1 , 5–7 ] , and the truncated mrna is released from the stalled ribosome to be degraded by rnases .
the process of trans - translation is facilitated by transfer - messenger rna ( tmrna , also known as 10sa rna or ssra rna ) , which is a unique hybrid molecule that functions as both trna and mrna ( figure 2 ) .
it comprises two functional domains : the trna domain , partially mimicking trna , and the mrna domain , which includes the coding region for the tag - peptide , surrounded by four pseudoknot structures [ 10–14 ] .
as predicted from the trna - like secondary structure , the 3′ end of tmrna is aminoacylated by alanyl - trna synthetase ( alars ) , like that of canonical trna [ 15 , 16 ] .
the function as trna is a prerequisite for the function as mrna , indicating the importance of the elaborate interplay of the two functions .
the following model of trans - translation has been proposed : ala - tmrna somehow enters the stalled ribosome , allowing translation to resume by switching from the original mrna to the tag - encoding region on tmrna .
how does tmrna enter the stalled ribosome in the absence of a codon - anticodon interaction ?
how does tmrna , 4- or 5-fold larger than trna , work in the narrow space in the ribosome ?
several factors , including ef - tu [ 17–20 ] , smpb [ 21–23 ] , and ribosomal protein s1 [ 22–24 ] , have been identified as tmrna - binding proteins .
ef - tu delivers ala - tmrna to the ribosome like aminoacyl - trna in translation . unlike s1 [ 25–27 ] , smpb serves as an essential factor for trans - translation in vivo and in vitro .
it binds to the trna - like domain ( tld ) of tmrna [ 23 , 28–30 ] and to the ribosome to perform multiple functions during trans - translation , including enhancement of the aminoacylation efficiency of tmrna [ 22 , 23 , 31 ] , protection of tmrna from degradation in the cell [ 19 , 28 ] , and recruitment of tmrna to the stalled ribosome [ 21 , 23 ] .
nmr studies have revealed that smpb consists of an antiparallel β - barrel core with three helices and flexible c - terminal tail residues that are disordered in solution [ 32 , 33 ] . here , we review recent progress in our understanding of the molecular mechanism of trans - translation facilitated by tmrna and smpb , as revealed by chemical approaches such as directed hydroxyl radical probing and chemical modification , as well as by other biochemical and structural studies .
a cell - free trans - translation system coupled with poly(u)-dependent polyphenylalanine synthesis was developed using escherichia coli crude cell extracts . later , several trans - translation systems were developed using purified factors from e. coli [ 31 , 34 , 35 ] or from thermus thermophilus .
these systems have revealed that ef - tu and smpb , in addition to the stalled ribosome and ala - tmrna , are essential and sufficient for the first few steps of trans - translation , including the binding of ala - tmrna to the ribosome , peptidyl transfer from peptidyl - trna to ala - tmrna , and decoding of the first codon on tmrna for the tag peptide . besides , these systems have also provided a basis for investigating the molecular mechanism of trans - translation by chemical approaches .
ivanova et al . performed chemical probing to analyze the interaction between smpb and a ribosome .
bases of rrna are protected from chemical modification with dimethylsulfate or kethoxal by smpb , indicating that there are two smpb - binding sites on the ribosome ; one is around the p - site of the small ribosomal subunit and the other is under the l7/l12 stalk of the large ribosomal subunit .
the capacity of two smpb molecules to bind to a ribosome is in agreement with results of other biochemical studies [ 37 , 38 ] .
a crystal structure of aquifex aeolicus smpb in complex with the tmrna fragment corresponding to the tld confirmed the results of earlier biochemical studies showing that the tld is the crucial binding region of smpb .
it also suggested that smpb orients toward the decoding center of the small ribosomal subunit and that smpb structurally mimics the anticodon arm .
this is in agreement with a cryo - em map of the accommodated - state complex of ribosome / ala - tmrna / smpb [ 39–41 ] .
a truncation of the unstructured c - terminal tail of smpb leads to a loss of trans - translation activity [ 42 , 43 ] . in spite of its functional significance , cryo - em studies have failed to identify the location of the c - terminal tail of smpb in the ribosome , due to poor resolution .
we performed directed hydroxyl radical probing with fe(ii)-babe to study the sites and modes of binding of e. coli smpb to the ribosome ( figure 3 ) .
fe(ii)-babe is a specific modifier of the cysteine residue of a protein , which generates hydroxyl radicals to cleave the rna chain .
cleavage sites on rna can be detected by primer extension , allowing mapping of amino acid residues of a binding protein on an rna - based macromolecule .
this is an excellent chemical approach to study the interaction of a protein with the ribosome [ 44–47 ] .
we prepared smpb variants each having a single cysteine residue for attaching it to an fe(ii)-babe probe . using directed hydroxyl radical probing , we succeeded in identifying the location of not only the structural domain but also the c - terminal tail of smpb on the ribosome .
it was revealed that there are two smpb - binding sites in a ribosome , which correspond to the lower halves of the a - site and p - site and that the c - terminal tail of a - site smpb is aligned along the mrna path towards the downstream tunnel , while that of p - site smpb is located almost exclusively around the region of the codon - anticodon interaction in the p - site .
this suggests that the c - terminal tail of smpb mimics mrna in the a - site and p - site and that these binding sites reflect the pre- and posttranslocation steps of trans - translation .
the probing signals appear at intervals of three residues in the latter half of the c - terminal tail , suggesting an α - helix structure , which had been predicted from the periodical occurrence of positively charged residues .
the main body of smpb mimics the lower half of trna , and the c - terminal tail of smpb mimics mrna both before and after translocation , while the upper half of trna is mimicked by the tld . upon entrance of tmrna into the stalled ribosome , the c - terminal tail of smpb may recognize the vacant a - site free of mrna to trigger trans - translation .
after peptidyl transfer to ala - tmrna occurring essentially in the same manner as that in canonical translation , translocation of peptidyl - ala - tmrna / smpb from the a - site to the p - site may occur . during this event ,
the extended c - terminal tail folds around the region of the codon - anticodon interaction in the p - site , which drives out mrna from the p - site .
ala - tmrna / smpb forms a complex with ef - tu and gtp in vitro , and this quaternary complex is likely to enter the empty a - site of the stalled ribosome .
this complex forms an initial binding complex with the stalled ribosome like the ternary complex of aminoacyl - trna , ef - tu , and gtp does with the translating ribosome . in normal translation ,
the correct codon - anticodon interaction is recognized by universally conserved 16s rrna bases , g530 , a1492 and a1493 , which form the decoding center .
when a cognate trna binds to the a - site , a1492 and a1493 flip out from the interior of helix 44 of 16s rrna , and g530 rotates from a syn to an anti conformation to monitor the geometry of the correct codon - anticodon duplex .
this induces gtp hydrolysis by ef - tu , allowing the cca terminal of trna to be accommodated into the peptidyl transferase center . in the context of trna mimicry ,
we have recently shown that interaction of the c - terminal tail of smpb with the mrna path in the ribosome occurs after hydrolysis of gtp by ef - tu . according to a chemical probing and nmr study , smpb interacts with g530 , a1492 , and a1493 .
how these bases recognize smpb to trigger the following gtp hydrolysis is yet to be studied .
it should be noted that recent crystal structures have revealed that these bases recognize the a - site ligands ( aminoacyl - trnas , if-1 , rf-1 , rf-2 and rele ) in different ways during translation [ 50 , 55 , 56 ] .
cryo - em reconstructions of the preaccommodated - state ribosome / ala - tmrna / smpb / ef - tu / gdp / kirromycin complex of t. thermophilus have shown that two smpb molecules are present in the complex , one binding to the 50s ribosomal subunit at the gtpase - associated center and the other binding to the 30s subunit near the decoding center [ 39–41 ] .
the latter smpb is not found in the accommodation complex of t. thermophilus and e. coli [ 39–41 ] .
thus , the following model has been proposed : two molecules of smpb are required for binding of ala - tmrna to the stalled ribosome , and one of them is released from the ribosome concomitantly with the release of ef - tu after hydrolysis of gtp , so that the 3′ end of tmrna is oriented toward the peptidyl - transferase center .
however , several reports have argued against the requirement of two smpb molecules for trans - translation : smpb has been reported to interact with tmrna in a 1 : 1 stoichiometry in the cell [ 57 , 58 ] , and crystal structures of smpb in complex with tld have been reported to exhibit a 1 : 1 stoichiometry of tmrna and smpb [ 29 , 59 ] .
further studies are required to assess the stoichiometry of smpb in the preaccommodation state complex .
we have recently shown that the c - terminal tail of smpb is required for the accommodation of ala - tmrna / smpb into the a - site rather than the initial binding of ala - tmrna / smpb / ef - tu / gtp to the stalled ribosome .
we have also shown that the tryptophan residue at 147 in the middle of the c - terminal tail of e. coli smpb has a crucial role in the step of accommodation .
our results further suggest that the aromatic side chain of trp147 is required for interaction with rrna upon accommodation .
it has been shown that trans - translation can occur in the middle of an mrna in vitro , although the efficiency of trans - translation is dramatically reduced with increasing length of the 3′ extension from the decoding center [ 34 , 35 ] .
this may be a result of competition between the 3′ extension of mrna and the c - terminal tail of a - site smpb for the mrna path .
the ribosome stalled in the middle of an intact mrna in a cell might be rescued by trans - translation via cleavage of mrna at the a - site , or by alternative ribosome rescue systems [ 61–63 ] .
how does the stalled ribosome select the first codon on tmrna without an sd - like sequence ?
it is reasonable to assume that some structural element on tmrna is responsible for positioning the resume codon in the decoding center just after translocation of peptidyl - ala - tmrna / smpb from the a - site to the p - site . in e. coli ,
the coding region for the tag peptide starts from position 90 of tmrna , which is 12 nucleotides downstream of pk1 . indeed , pk1 is important for efficiency of trans - translation , whereas changing the span between pk1 and the resume codon does not affect determination of the initiation point of tag - translation .
a genetic selection experiment has revealed strong base preference in the single - stranded region between pk1 and the resume codon , especially at −4 and +1 ( position 90 ) .
several point mutations in this region , encompassing −6 to −1 , decrease the efficiency of tag - translation , while some of them shift the tag - initiation point by −1 or +1 to a considerable extent [ 59 , 60 ] , indicating that the upstream sequence contains not only an enhancer of trans - translation but also the determinant of the tag - initiation point .
evidence for interaction between the upstream region and smpb has been provided by a study using chemical probing .
e. coli smpb protects the u at position −5 from chemical modification with cmct ; the structural domain of smpb , rather than the c - terminal tail , is responsible for this protection .
the protection at −5 was suppressed by a point mutation in the tld critical for smpb binding , suggesting that smpb serves to bridge two separate domains of tmrna to determine the resume codon for tag - translation .
mutations that cause −1 and +1 shifts of the start point of tag - translation also shift the site of protection at −5 from chemical modification by −1 and +1 , respectively , indicating the significance of the fixed span between the site of interaction of tmrna with smpb and the resume point of translation : translation of the tag - peptide starts from the position 5 nucleotides downstream of the site of interaction with smpb .
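the fixed - span rule described above ( the resume codon at position 90 , with tag - translation starting 5 nucleotides downstream of the smpb interaction site ) can be sketched as simple coordinate arithmetic ; the +1/−1 numbering convention with no position 0 is an assumption made here for illustration :

```python
# E. coli tmRNA coordinates from the text: the resume codon (the "+1"
# position) is at nucleotide 90, and SmpB protects the U at -5.
RESUME_CODON_START = 90


def absolute_position(relative: int) -> int:
    """Map a position given relative to the resume codon onto an absolute
    tmRNA coordinate (+1 -> 90, -1 -> 89; there is no position 0 in this
    assumed convention)."""
    if relative == 0:
        raise ValueError("no position 0 in the +1/-1 convention")
    return RESUME_CODON_START + relative - (1 if relative > 0 else 0)


# The SmpB contact at -5 maps to nucleotide 85, i.e. the tag reading frame
# starts 5 nucleotides downstream of the SmpB interaction site.
print(absolute_position(-5))  # 85
```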
such a functional interaction of the upstream region in tmrna with smpb is also supported by the results of another genetic study showing that a - to - c mutation at position 86 of e. coli tmrna that inactivates trans - translation both in vitro and in vivo is suppressed by some double or triple mutations in smpb . in agreement with these studies ,
recent cryo - em studies have suggested that the upstream region in tmrna interacts with smpb in the resume ( posttranslocation ) state [ 68 , 69 ] .
the initiation shift of tag - translation can also be induced by the addition of a 4,5- or 4,6-disubstituted class of aminoglycoside such as paromomycin or neomycin [ 70 , 71 ] , which usually causes miscoding of translation by binding to the decoding center on helix 44 of the small subunit to induce a conformational change in its surroundings .
aminoglycosides also bind at helix 69 of the large subunit , which forms the b2a bridge with helix 44 in close proximity to the decoding center in the small subunit , inhibiting translocation and ribosome recycling by restricting the helical dynamics of helix 69 .
taken together , these findings suggest the significance of interactions in the proximity of the decoding center with portions of smpb or tmrna for precise tag - translation .
it should be noted that hygromycin b , which binds only to helix 44 , does not induce initiation shift of tag - translation .
along with the functional mimicry of tld / smpb , a similar behavior of tmrna / smpb to that of canonical trna+mrna in the ribosome through several hybrid states , a / t , a / a , a / p , p / p , and p / e , has been assumed .
cryo - em studies have shown the location of the complex of tmrna with the main body of smpb in the a / t and a / a states [ 39 , 40 ] , and a directed hydroxyl radical probing has revealed the positions of smpb in the a / a and p / p states .
the existence of stable smpb - binding sites in the a - site and p - site suggests the requirement of translocation , as in canonical translation .
concomitantly with translocation , mrna and p - site trna are released from the stalled ribosome .
considering the different c - terminal tail structures of a - site smpb and p - site smpb , the c - terminal tail would somehow undergo conformational change from the extended form to the folded form .
the next translocation is thought to move tmrna / smpb to the e - site .
these ribosomal processes should involve extensive changes in the conformation of tmrna as well as in the modes of interaction of tmrna with smpb and the ribosome [ 76 , 77 ] . according to chemical probing studies , the secondary structure elements of tmrna remain intact through several steps of trans - translation , including the pre- and posttranslocation states [ 77–79 ] .
another study has suggested 1 : 1 stoichiometry of tmrna to smpb throughout the processes of translation for the tag peptide .
recently , the movement of trna during translocation has been revealed by using time - resolved cryo - em .
not only classic and hybrid states but also various novel intermediate states of trnas were revealed .
although the intermediate states during trans - translation remain unclear , results of future structural studies including chemical approaches should reveal tmrna / smpb and ribosome dynamics .
various chemical approaches in addition to cryo - em and x - ray crystallographic studies have been revealing the molecular mechanism of trans - translation .
tmrna forms a ribonucleoprotein complex with smpb , which plays an essential role in trans - translation . based on a directed hydroxyl radical probing towards smpb ,
we have proposed a novel molecular mechanism of trans - translation ( figure 4 ) . in this model ,
an elegant collaboration of a hybrid rna molecule of trna and mrna and a protein mimicking a set of trna and mrna facilitates trans - translation .
initially , a quaternary complex of ala - tmrna , smpb , ef - tu , and gtp may enter the vacant a - site of the stalled ribosome to trigger trans - translation , when a set of ala - tld of tmrna and the main body of smpb mimicking the upper and lower halves of aminoacyl - trna , respectively , recognizes the a - site free of trna .
after hydrolysis of gtp by ef - tu , the c - terminal tail of smpb mimicking mrna interacts with the decoding center and the downstream mrna path free of mrna , allowing ala - tld / smpb to be accommodated .
while several proteins including smpb have been proposed to mimic trna or its portion , smpb is the first protein that has been shown to mimic mrna .
smpb is also the first protein of which stepwise movements in the ribosome are assumed to mimic those of trna in the translating ribosome .
our model depicts an outline of the trans - translation processes in the ribosome , although the following issues should be addressed .
how do the intermolecular interactions between tmrna and ribosome , between tmrna and smpb , and between ribosome and smpb as well as the intramolecular interactions within tmrna and within smpb change during the course of the trans - translation processes ?
is ef - g required for translocation of tmrna / smpb having neither an anticodon nor the corresponding codon from the a - site to the p - site ? | since accurate translation from mrna to protein is critical to survival , cells have developed translational quality control systems .
bacterial ribosomes stalled on truncated mrna are rescued by a system involving tmrna and smpb referred to as trans - translation . here ,
we review current understanding of the mechanism of trans - translation .
based on results obtained by using directed hydroxyl radical probing , we propose a new type of molecular mimicry during trans - translation .
besides such chemical approaches , biochemical and cryo - em studies have revealed the structural and functional aspects of multiple stages of trans - translation .
these intensive studies provide a basis for studying the dynamics of tmrna / smpb in the ribosome . |
menkes disease ( omim 309400 ) , also known as kinky hair disease , is an infantile - onset x - linked recessive neurodegenerative disorder caused by diverse mutations in a copper - transport gene , atp7a ( 1 , 2 ) .
the atp7a gene plays an important role in controlling copper efflux from cells ( 3 ) .
subsequently , hypotonia , seizures , failure to thrive and death in early childhood are typical ( 4 , 5 ) .
a 3-month - old male infant had visited our pediatric clinic for lethargy , floppy muscle tone , poor oral intake and partial seizures on may 9 , 2007 .
his birth weight was 3,180 g. he was healthy at birth and a neonatal metabolic screening test was negative .
the initial eeg showed one episode of 2 hz rhythmic spike and wave activity starting from the right central area evolving to the generalized slowings lasting about 100 seconds without clinical seizures , which was consistent with electrical partial seizures ( fig .
serum lactate , tandem mass screening , serum amino acid and urine organic acids were all within the normal range .
the eeg had changed to irregular high amplitude delta slowings on the background activities and frequent spikes from the right or left frontal or occipital areas that were consistent with hypsarrhythmia ( fig .
vascular tortuosity and diffuse brain atrophy with callosal thinning were detected in an mri scan ( fig .
biochemical markers showed low serum copper ( 9.0 µg / dl , reference range : 70 - 130 µg / dl ) and ceruloplasmin ( 5.6 mg / dl , reference range : 16 - 31.5 mg / dl ) levels . from genetic analysis , a c.2743c > t ( p.gln915x ) nonsense mutation ( fig .
4 ) in exon 13 of the atp7a gene was detected , and the infant was diagnosed with menkes disease ( md ) .
the mutation was a novel one that has not been previously reported as a cause of md .
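why the c.2743c > t ( p.gln915x ) change truncates the protein can be checked against the standard genetic code . the sketch below is illustrative only ( it assumes hgvs numbering where c.1 is the a of the atg start codon , so codon 915 spans cdna positions 2743 - 2745 ; gln is encoded by caa or cag , and either way a c > t at the first base yields a stop codon ) :

```python
# minimal sketch (standard genetic code, HGVS-style numbering assumed):
# codon 915 spans cDNA positions 2743-2745, so c.2743C>T hits the first
# base of the Gln915 codon and converts CAG (Gln) into the stop codon TAG
CODE = {"CAG": "Gln", "TAG": "Stop"}

codon_number = 915
first_base = 3 * (codon_number - 1) + 1  # cDNA position of the codon's 1st base
assert first_base == 2743

ref_codon = "CAG"                 # reference codon for p.Gln915
mut_codon = "T" + ref_codon[1:]   # apply the c.2743C>T substitution
print(CODE[ref_codon], "->", CODE[mut_codon])  # Gln -> Stop
```

this is why p.gln915x produces a premature termination ( x denotes a stop ) rather than an amino - acid substitution .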
we analyzed the atp7a gene in a korean patient with classical md and identified one novel mutation .
the atp7a gene at xq13.3 contains 23 exons and encodes a copper - transporting p - type atpase of 1500 amino acids ( 3 ) . to date , about 170 different mutations affecting atp7a have been reported ( 6 , 7 ) .
approximately 25% of the atp7a mutations are gross deletions , ranging in size from a single exon to deletion of the whole gene , except for the first two exons ( 6 ) .
about 120 other different intragenic mutations of atp7a have been reported : missense ( 33% ) , nonsense ( 16% ) , splice - site mutations ( 16% ) and deletions / insertions / duplications ( 33% ) ( human gene mutation database [ hgmd ] ; www.hgmd.com ) ( 7 ) .
the biochemical result of low copper concentrations in md is reduced activity of numerous copper - dependent enzymes such as ceruloplasmin , dopamine beta - hydroxylase , peptidylglycine alpha - amidating monooxygenase , cytochrome c oxidase , ascorbate oxidase , lysyl oxidase , superoxide dismutase and tyrosinase , which leads to connective tissue abnormalities , tortuosity of blood vessels and peculiar hair ( 1 , 8) . the phenotypic features of menkes disease can be divided into at least three categories : classical md with death in early childhood , mild md with long survival and occipital horn syndrome ( 9 ) .
the majority of patients suffer from classical md , but milder forms are observed in 5%-10% of patients .
there seems to be poor genotype - phenotype correlation , and the clinical courses of md patients may differ within a family , despite identical genetic changes ( 10 ) .
based on recent studies , the development of epilepsy can be divided into three phases : 1 ) an early stage characterized by focal clonic status , usually triggered by fever ; 2 ) an intermediate stage with intractable infantile spasms , in which interictal eeg demonstrated modified hypsarrhythmia , with diffuse irregular slow waves , and spike waves ; and 3 ) a late stage with multifocal seizures , tonic spasms and myoclonus ( 11 , 12 ) .
however , neonatal diagnosis by plasma neurochemical measurement before symptoms appear and early parenteral copper - histidine supplement may modify the disease progression substantially ( 13 - 15 ) . prenatal diagnosis can be performed by biochemical analysis or dna assay using chorionic villi samples or amniocytes in the first trimester of an at - risk pregnancy ( 16 , 17 ) . in summary
, we report a case of menkes disease presenting with intractable seizures and infantile spasms due to a novel nonsense mutation ( c.2743c > t ) in the atp7a gene . | menkes disease is an infantile - onset x - linked recessive neurodegenerative disorder caused by diverse mutations in a copper - transport gene , atp7a .
affected patients are characterized by progressive hypotonia , seizures , failure to thrive and death in early childhood . here
, we report a case of menkes disease presenting with intractable seizures and infantile spasms .
a 3-month - old male infant had visited our pediatric clinic for lethargy , floppy muscle tone , poor oral intake and partial seizures .
his hair was kinky , brown colored and fragile .
partial seizures became more frequent , generalized and intractable to antiseizure medications .
an eeg showed frequent posteriorly dominant generalized spikes that were consistent with a generalized seizure . from a genetic analysis , a c.2743c > t ( p.gln915x ) mutation
was detected and diagnosed as menkes disease .
the mutation is a novel one that has not been previously reported as a cause of menkes disease . |
the `` problem of the @xmath6 spheres , '' so named by schütte and van der waerden @xcite and leech @xcite , asks whether there exists any configuration of @xmath6 non - overlapping unit spheres that all touch a central unit sphere .
it was raised in the time of newton by david gregory , and eventually resolved mathematically as impossible .
its resolution established that the `` kissing number '' of equal spheres in @xmath7-dimensional euclidean space is @xmath0 .
this paper is concerned with a related but different problem : _ how can @xmath0 spheres of equal radius @xmath2 touch a given central sphere of radius @xmath8 , in what patterns , and how are these patterns related ?
in other words , what is the topology of the corresponding configuration space of such spheres ? _ in this paper we review the remarkable history of this problem in several contexts , survey aspects of what is currently known about it , and present some new results and conjectures .
this problem has come up in physics and materials science .
many atoms and molecules are roughly spherical , and their local interactions are governed by how many of them can get close to a single atom .
the arrangements possible for @xmath6 nearby spheres , and allowable motions between them , are relevant to the nature of local interactions , to measuring the entropy of local configurations , and to phase changes in certain materials .
we are particularly motivated by a statement of frank ( 1952 ) made in the context of supercooling of fluids , given in section [ sec:27 ] .
insisting that exactly @xmath0 equal spheres touch a @xmath6-th central sphere , possibly of a different radius , gives a mathematical toy problem that can be subjected to careful analysis . as a mathematical problem ,
the @xmath0 spheres problem has both a metric geometry aspect and a topology aspect .
lászló fejes tóth made major contributions to the metric geometry of the problem , which concerns extremal questions , formulated as densest packing problems . in connection with the tammes problem , described in section [ sec:3 ] , he found the largest radius of @xmath0 spheres that can touch a central sphere of radius @xmath8 , realized by the dodecahedral configuration @xmath9 , and found other extremal configurations of touching spheres for smaller @xmath3 .
he posed the dodecahedral conjecture concerning the minimal volume voronoi cell in a unit sphere packing , and posed another conjecture characterizing all configurations that pack space with every sphere having exactly @xmath0 neighboring spheres .
both of these conjectures are now proved .
the topological side of the @xmath0 spheres problem concerns allowable motions and rearrangements of configurations of spheres , and topological constraints on them .
a major part of this paper addresses the topological side of the problem , concerning the topology of configuration spaces , and the change of topology as the radius @xmath2 is varied .
the arrangements of the @xmath0 touching spheres are encoded in the associated _ configuration space _ of @xmath0-tuples of points on the surface of a unit sphere that remain at a suitable distance from each other .
this space has nontrivial topology and geometry . in topology
the general subject of configuration spaces started in the 1960s with the consideration of topological spaces whose points denote configurations of a fixed number @xmath3 of labeled points on a manifold .
this paper considers the _ constrained _ configuration space @xmath10 $ ] of @xmath3 non - overlapping spheres of radius @xmath2 which touch a central sphere @xmath11 of radius @xmath8 , centered at the origin .
( here `` non - overlapping '' means the spheres have disjoint interiors . )
it can also be visualized as the space of @xmath3 spherical caps on the sphere , which are obtained as the radial projection of the external spheres onto the surface of the central sphere , whose _ angular diameter _
@xmath12 is a known function of @xmath2 .
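for tangent spheres this function is elementary : a sphere of radius @xmath2 touching the unit central sphere has its center at distance @xmath41 + r@xmath41 from the origin ( here written concretely as 1 + r ) , so the tangent cone from the origin has half - angle arcsin ( r / ( 1 + r ) ) . a short numerical check ( illustrative python , not from the paper ) :

```python
import math

def cap_angular_diameter(r):
    """Angular diameter (radians) of the spherical cap obtained by radially
    projecting an external tangent sphere of radius r onto the unit central
    sphere: the external center sits at distance 1 + r from the origin."""
    return 2.0 * math.asin(r / (1.0 + r))

# for unit spheres (r = 1) the caps have angular diameter pi/3, i.e. two cap
# centers can approach no closer than 60 degrees
print(math.degrees(cap_angular_diameter(1.0)))  # approximately 60 degrees
```

this recovers the familiar 60 - degree separation constraint for the unit - sphere case discussed below .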
the centers of these caps define a constrained @xmath3-configuration on @xmath11 where no pair of points can approach closer than angular separation @xmath13 . for generic ( `` non - critical '' ) values of @xmath2 in a range of values @xmath14 , this space is a compact @xmath15-dimensional manifold with boundary , not necessarily connected .
the group @xmath16 acts as global symmetries of @xmath10 $ ] by rigidly rotating the @xmath3-configuration of spheres touching the central sphere .
the _ reduced constrained configuration space _ @xmath17 = { \operatorname{conf}}(n)[r]/so(3)$ ] is obtained by identifying rotationally equivalent configurations . for generic values of @xmath2
it is a compact @xmath18-dimensional manifold with boundary ; for the case of @xmath0 spheres this is a @xmath19-dimensional manifold .
the subject of constrained configuration spaces has in part been developed for applications to fields such as robotics .
for an introduction to the robotics aspect , see generally abrams and ghrist @xcite or farber @xcite .
this paper surveys results for small @xmath3 on the metric geometry problem of determining the maximum allowable radius @xmath5 for @xmath10 $ ] ( equivalently @xmath20 $ ] ) to be nonempty ; this is a variant of the tammes problem , also treated in the literature under the name _ optimal spherical codes _ ( see section [ sec:3 ] ) .
this paper also studies the topology of configuration spaces of a fixed radius @xmath2 , and the changes in topology in such spaces as the radius @xmath2 is varied . in the latter case the configuration space changes topology at a set of _ critical radius values_. associated to these special values
are _ critical configurations _ , which are extremal in a suitable sense .
the change in topology is described by a generalization of morse theory applicable to the radius function @xmath2 , which we discuss in section [ sec:4 ] . to determine these changes
one studies the occurrence and structure of the critical configurations .
the simplest example of such topology change concerns the connectivity of the space of configurations as a function of @xmath2 , reported by the rank of the @xmath21-th homology group of the configuration space .
the @xmath0 spheres problem includes as its most important special case that of unit spheres , where the sphere radius @xmath22 .
this special case is the one relevant to sphere packing in dimension @xmath7 .
we treat the topological space @xmath23 $ ] in sections [ sec:5 ] and [ sec:6 ] , and formulate several conjectures related to it .
the radius @xmath22 is a critical radius , and two configurations @xmath24 and @xmath25 on the boundary of the space @xmath23 $ ] are critical configurations .
the topology of @xmath23 $ ] appears to be very complicated , and its cohomology groups have not been determined . in section [ sec:6 ]
we describe how it is possible to move in the space @xmath23 $ ] to deform any dodecahedral configuration @xmath9 of @xmath0 labeled spheres to any other labeled @xmath9 configuration , permuting the @xmath0 spheres arbitrarily , a result due to conway and sloane .
this suggests the ( folklore ) conjecture asserting that @xmath22 is the largest radius value for which the configuration space @xmath26 $ ] is connected , i.e. it is the largest @xmath2 for which the @xmath21-th cohomology group of @xmath26 $ ] has rank @xmath8 .
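the 60 - degree separation bound underlying the case @xmath22 can be checked numerically . the sketch below is illustrative ( not from the paper ) : it takes the cuboctahedral arrangement of contact points from the fcc ( `` cannonball '' ) packing , whose 12 kissing points are the normalized permutations of ( ±1 , ±1 , 0 ) , and verifies that the minimum pairwise angular separation is met exactly :

```python
import itertools
import math

# the 12 kissing points of the fcc packing: all coordinate permutations
# of (+-1, +-1, 0), normalized to lie on the unit central sphere
points = set()
for s0, s1 in itertools.product((1, -1), repeat=2):
    for perm in set(itertools.permutations((s0, s1, 0))):
        norm = math.sqrt(2.0)
        points.add(tuple(c / norm for c in perm))
points = sorted(points)

def angle_deg(u, v):
    """Angular separation of two unit vectors, in degrees."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

min_sep = min(angle_deg(u, v) for u, v in itertools.combinations(points, 2))
print(len(points), round(min_sep, 6))  # 12 points, minimum separation 60 degrees
```

the minimum separation is exactly 60 degrees , i.e. every sphere in this configuration touches several of its neighbors , which is why @xmath22 sits on the boundary of the configuration space .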
this paper establishes some new results .
it makes the observation ( in section [ sec:43a ] ) that the family of @xmath27-configurations of spheres achieving @xmath28 ( see figure [ fig : example5 ] ) is topologically complex .
it completely determines ( in section [ sec:47 ] ) the cohomology of @xmath29 $ ] for allowable @xmath2 .
it makes precise the notion of @xmath3-configurations being _
critical for maximizing _ the injectivity radius on @xmath30 , and provides a necessary and sufficient _ balancing condition _
( theorem [ thm : converse ] ) for criticality , prefatory to a `` morse theory '' for such min - type functions @xcite . and
it formulates several new conjectures in sections [ sec:65 ] and [ sec66 ] .
configuration spaces are of interest in physics and materials science .
jammed configurations are a granular materials criterion for a stable packing . according to torquato and stillinger
@xcite they are : `` particle configurations in which each particle is in contact with its nearest neighbors in such a way that mechanical stability of a specific type is conferred to the packing . '' packings of rigid disks and spheres have been studied extensively by simulation ( lubachevsky and stillinger @xcite , donev et al . @xcite ) .
it has been empirically discovered that randomly ordered hard spheres achieve in random close packing a density around @xmath31 percent @xcite , and pass through a jamming transition around @xmath32 percent @xcite .
the appearance of a jamming phase transition , signaled by a change in shear modulus , and the formation of a glass state , is relevant in studying the behavior of colloidal suspensions and granular materials .
the large rearrangement of structure required in making a phase transition is relevant in the phenomenon of supercooling of liquids ( see section [ sec:27 ] ) .
the nature of glass transitions has been called `` the deepest and most interesting unsolved problem in solid state theory '' ( anderson @xcite ) . for articles and reviews of these topics , see generally ediger et al .
@xcite , o'hern et al .
@xcite , and liu and nagel @xcite . for a survey of hard sphere models , including the idea of a liquid - solid phase transition in packings ,
see generally löwen @xcite .
one may make an analogy between the configuration spaces @xmath17 $ ] treated here and a sphere packing model for jamming studied in @xcite , which treats spheres having repulsive local potential at zero density and zero applied stress , and includes hard spheres for one model parameter value . in the latter model ,
the order parameter is the packing fraction of the spheres . in the configuration space model ,
a proxy value for the packing fraction is the radius parameter @xmath2 , which determines the fraction of surface area of @xmath11 covered by the @xmath3 spherical caps .
an analogue of the jamming transition value in the configuration space model is then the maximal radius @xmath33 at which the constrained configuration space @xmath17 $ ] remains connected ; this property is detected by the @xmath21-th cohomology group .
finer topological invariants of this kind are then supplied by the various critical values @xmath34 at which the ranks of the individual cohomology groups @xmath35 , { { \mathbb q}})$ ] change .
our configuration model is simplified in being @xmath36-dimensional , with constrained configurations on the surface of a @xmath36-sphere @xmath11 , which , however , has the new feature of positive curvature , giving a compact constrained configuration space . for the jamming problem itself ,
the space of ( constrained ) configurations of hard spheres in a large @xmath7-dimensional box seems a more appropriate space .
the general direction of inquiry investigating the transition of topological invariants ( like betti numbers ) of configuration spaces as the radius parameter is varying could shed new light on the nature of jamming transitions .
for further remarks , see section [ sec:7 ] .
this paper is mainly of a survey nature , and the sections of the paper have been written to be independently readable .
section [ sec:2 ] gives a brief history of results on the @xmath0 spheres problem and sphere packing .
section [ sec:3 ] surveys results on the maximal radius @xmath37 for configurations of @xmath3 equal spheres touching a central sphere of radius @xmath8 for small @xmath3 .
this problem is equivalent to the tammes problem .
section [ sec:4 ] begins with the topology of configuration spaces of @xmath3 points in @xmath38 and on the @xmath36-sphere @xmath11 , corresponding to radius @xmath39 .
it then considers spaces of configurations of equal spheres of radius @xmath2 touching a sphere of radius @xmath8 for variable @xmath40 .
it defines a notion of critical configuration in the spirit of min - type morse theory .
section [ sec:5 ] discusses the special configuration space of @xmath0 unit spheres touching a @xmath6-th central sphere , i.e. the case of radius @xmath22 .
it focuses on properties of the @xmath24 configuration , the @xmath25 configuration and the dodecahedral configuration @xmath9 .
it shows that the @xmath24 and @xmath25 configurations are critical ( in the sense of section [ sec:42 ] ) in the reduced configuration spaces @xmath23 $ ] . and
it shows that there are continuous deformations between a dodecahedral configuration to an @xmath24 configuration and to an @xmath25 configuration in the reduced configuration space @xmath23 $ ] .
section [ sec:6 ] considers the problem of permutability of the spheres of the dodecahedral configuration for @xmath22 , conjecturing that @xmath23 $ ] is connected , and that this is the largest value of @xmath2 where connectedness holds .
it also considers the @xmath41 case and formulates several conjectures about disconnectedness .
section [ sec:7 ] makes some concluding remarks .
we begin with some historical vignettes concerning configurations of @xmath0 spheres touching a central sphere , as they have come up in physics , astronomy , biology and materials science .
johannes kepler ( 1571 - 1630 ) studied packings and crystals in his 1611 pamphlet `` the six - cornered snowflake '' @xcite . in it
he asserts that the densest sphere packing of equal spheres is the @xmath24 packing , or `` cannonball packing . ''
he states that this packing has @xmath0 unit spheres touching each central sphere : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in the second mode , not only is every pellet touched by its four neighbors in the same plane , but also by four in the plane above and four below , so throughout one will be touched by twelve , and under pressure spherical pellets will become rhomboid .
this arrangement will be more compatible to the octahedron and the pyramid .
the packing will be the tightest possible , so that in no other arrangement could more pellets be stuffed into the same container . ] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ he expands on the construction as follows : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ thus , let @xmath42 be a group of three balls ; set one @xmath43 , on it as apex ; let there be also another group @xmath44 , of six balls , and another @xmath45 , of ten , and another @xmath46 , of fifteen .
regularly superpose the narrower on the wider to produce the shape of a pyramid .
now , although in this construction each one in the upper layer is seated between three in the lower , yet if you turn the figure round so that not the apex but the whole side of the pyramid is uppermost , you will find , whenever you peel off one ball from the top , four lying below it in square pattern . again as before ,
one ball will be touched by twelve others , to with , by six neighbors in the same plane , and by three above and three below .
thus in the closest pack in three dimensions , the triangular pattern can not exist without the square , and vice versa .
it is therefore obvious that the loculi of the pomegranate are squeezed into the shape of a solid rhomboid .... copula trium globorum .
ei superpone @xmath43 unum pro apice ; esto et alia copula senum globorum @xmath44 , et alia denum @xmath45 et alia quindenum @xmath47 impone semper angustiorem latiori , ut fiat figura pyramidis .
etsi igitur per hanc impositionem singuli superiores sederunt inter trinos inferiores : tamen iam versa figura , ut non apex sed integrum latus pyramidis sit loco superiori , quoties unum globulum degluberis e summis , infra stabunt quattuor ordine quadrato .
et rursum tangetur unus globus ut prius , et duodecim aliis , a sex nempe circumstantibus in eodem plano tribus supra et tribus infra .
ita in solida coaptatione arctissima non potest esse ordo triangularis sine quadrangulari , nec vicissim .
patet igitur , acinos punici mali , materiali necessitate concurrente cum rationibus incrementi acinorum , exprimi in figuram rhombici corporis ... '' [ translation by colin hardie @xcite ] ]
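Kepler's neighbour count can be verified directly. The sketch below (an illustration of mine, not from the source) lists the twelve fcc nearest neighbours of a central ball and sorts them by height along the (1,1,1) stacking direction, recovering the six-in-plane, three-above, three-below split described in the quotation.

```python
import itertools
import math

# the 12 fcc nearest neighbours of the origin: all patterns with two
# coordinates equal to +-1 and one equal to 0, scaled so that every
# centre-to-centre distance is 1 (touching unit-diameter balls)
raw = [v for v in itertools.product((-1, 0, 1), repeat=3)
       if sum(abs(c) for c in v) == 2]
neighbours = [tuple(c / math.sqrt(2) for c in v) for v in raw]

assert len(neighbours) == 12
assert all(abs(math.hypot(*p) - 1.0) < 1e-12 for p in neighbours)

# height of each neighbour along the (1,1,1) stacking direction
axis = tuple(c / math.sqrt(3) for c in (1, 1, 1))
heights = [sum(a * c for a, c in zip(axis, p)) for p in neighbours]

above = sum(h > 1e-9 for h in heights)
inplane = sum(abs(h) <= 1e-9 for h in heights)
below = sum(h < -1e-9 for h in heights)
print(above, inplane, below)   # 3 6 3
```

Tilting the same twelve vectors onto a coordinate plane instead would show the 4-4-4 square pattern Kepler mentions; only the choice of "uppermost" direction changes.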
the cannonball packing had been studied earlier by the english mathematician thomas hariot [ harriot ] ( 1560–1621 ) .
hariot was mathematics tutor to sir walter raleigh , designed some of his ships , wrote a treatise on navigation , and went on an expedition to virginia in 1585–1587 as surveyor , reporting on it in 1590 in @xcite , his only published book .
he computed a chart in 1591 on how to most efficiently stack cannonballs using the @xmath24 packing , and computed a table of the number of cannonballs in such stacks ( see shirley @xcite ) .
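Harriot's chart tabulated how many cannonballs a pyramidal stack of a given height contains. As a hedged sketch (the function names and the choice of base shapes are mine, not a reproduction of his table), the standard closed forms for triangular-base and square-base piles are:

```python
def triangular_pile(n):
    """balls in a pile of n triangular layers: 1 + 3 + 6 + ... = n(n+1)(n+2)/6."""
    return n * (n + 1) * (n + 2) // 6

def square_pile(n):
    """balls in a pile of n square layers: 1 + 4 + 9 + ... = n(n+1)(2n+1)/6."""
    return n * (n + 1) * (2 * n + 1) // 6

# cross-check the closed forms against direct layer-by-layer sums
assert all(triangular_pile(n) == sum(k * (k + 1) // 2 for k in range(1, n + 1))
           for n in range(1, 50))
assert all(square_pile(n) == sum(k * k for k in range(1, n + 1))
           for n in range(1, 50))

print([triangular_pile(n) for n in range(1, 6)])  # [1, 4, 10, 20, 35]
```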
hariot supported the atomic theory of matter , in which case macroscopic objects may be packed in arrangements of very tiny spherical objects , i.e. atoms ( * ? ? ? * chap .
he corresponded with kepler in 1606–1608 on optics , and mentioned the atomic theory in a december 1606 letter as a possible way of explaining why some light is reflected , and some refracted , at the surface of a liquid .
kepler replied in 1607 , not supporting the atomic theory .
the known correspondence of hariot with kepler does not deal directly with sphere packing .
the statement that the maximal density of a sphere packing in @xmath7-dimensional space equals @xmath48 , which is attained by the @xmath24 packing , is called the _ kepler conjecture . _
it was settled affirmatively in the period 1998–2004 by hales with ferguson @xcite .
a second generation proof , which is a formal proof checked entirely by computer , was recently completed in a project led by hales @xcite .
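The conjectured optimal density is π/√18 ≈ 0.7405, and it can be rederived from the conventional cubic cell of the fcc packing: a cube of side a holds four balls of radius a√2/4, since balls touch along the face diagonals. A minimal check (my sketch):

```python
import math

a = 1.0                      # side of the conventional fcc cubic cell
r = a * math.sqrt(2) / 4     # balls touch along a face diagonal: 4r = a*sqrt(2)
balls_per_cell = 4           # 8 corners * 1/8 + 6 face centres * 1/2
density = balls_per_cell * (4 / 3) * math.pi * r ** 3 / a ** 3

print(density)               # 0.7404804...
assert abs(density - math.pi / math.sqrt(18)) < 1e-12
```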
the discussion between isaac newton and david gregory in 1694 was related to preparing a second edition of newton s _ principia_. it concerned the question whether the `` fixed stars '' are subject to gravitational attraction .
what force is `` balancing '' their apparent fixed positions ?
gregory ( * ? ? ?
* vol iii , p. 317 ) summarized in a memorandum a conversation with newton on 4 may 1694 concerning the brightest stars as : `` to discover how many stars there are of a given magnitude , he [ newton ] considers how many spheres , nearest , second from them , third etc .
surround a sphere in a space of three dimensions , there will be @xmath6 of first magnitude , @xmath49 of second , @xmath50 of third .
@xmath36-dae , @xmath51 3 ae . '' ] newton s own star table `` a table of ye fixed starrs for ye yeare 1671 '' records @xmath6 first magnitude stars , @xmath52 of the second magnitude , @xmath53 of third magnitude ( see ( * ? ? ?
* vol ii , p. 394 ) ) .
newton drafted a new proposition to be included in a second edition of the _ principia _ , stating [ in translation ] @xcite : proposition xv .
theorem xv .
the fixed stars are at rest in the heavens and are separated by enormous distances from our sun and from each other . in a draft proof he wrote [ in translation ] @xcite : that the stars are at huge distances from our sun is clear enough from the absence of parallax ; and that they lie at no less distances from each other may be inferred from their differing apparent magnitudes . for there are @xmath6 stars of the first magnitude and roughly the same number of equal spheres can be arranged about a central sphere equal to them .
and : for if around some sphere there are arranged more spheres of about the same size , the number of spheres which surround it closely will be @xmath0 or @xmath6 ; at the second stage about @xmath54 ; at the third about @xmath55 [ roughly @xmath56 ] ; at the fourth , @xmath57 [ @xmath58 ] , ...
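The specific counts in Newton's draft are hidden behind math placeholders here, but the bracketed figures follow a surface-area scaling: the shell at k times the central distance offers roughly k² times the room of the first shell of 12 or 13. A sketch of that estimate (the helper name is mine, hypothetical, and the first-shell count of 13 is taken from the quoted text):

```python
def shell_estimate(k, first_shell=13):
    """newton-style estimate: the k-th shell scales like the sphere's
    surface area, so it holds roughly k**2 times the first shell."""
    return first_shell * k * k

print([shell_estimate(k) for k in (1, 2, 3, 4)])  # [13, 52, 117, 208]
```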
this argument is similar to one of kepler ( * ? ? ?
* liber i , pars ii , p. 138 ) ( translation in koyré @xcite ) , with roots in the claim of giordano bruno , that all stars are suns . after further work , over several drafts ,
newton abandoned the proposition ( hoskin @xcite ) .
it was not included in the second edition of the _ principia _ when it finally came out in 1713 .
gregory continued with the geometric problem underlying the spacing of stars . in an ( unpublished ) notebook he considered the packing problem of @xmath36-dimensional disks in concentric rings and , in @xmath7 dimensions , that of equal spheres , noting that @xmath6 spheres might touch a given equal sphere ( * ? ? ? *
vol iii , letter 441 , note ( 10 ) , p. 321 ) .
he continued to consider the @xmath6 sphere question in later years , making the following memorandum in 1704 @xcite : oxon .
23 nov@xmath59 1704 .
kyl said that if @xmath6 equal spheres touch an equal inmost sphere , @xmath60 must touch one that include these former @xmath61 , because there is nine times as much surface to stand on .
i told him that we must reckon by the surface passing through their centers .
we may infer that newton left the question of how many spheres might touch unresolved , and that gregory believed @xmath6 spheres might touch .
the issue of whether @xmath6 equal spheres might touch a central equal sphere was discussed in the physics literature in the period 1874–1875 , with contributions by c. bender @xcite , reinhold hoppe @xcite and siegmund günther @xcite .
hoppe noted a mathematical gap in the argument of bender .
günther offered a physical intuition , but no proof .
they all concluded that at most @xmath0 unit spheres could touch a central unit sphere . in 1994
hales @xcite noted a mathematical gap in the argument of hoppe . in another context
the crystallographer william barlow ( 1845–1934 ) noted another optimal sphere packing , the _ hexagonal close packing _ ( @xmath25 ) . in a paper `` probable nature of the internal symmetry of crystals '' @xcite he considered five symmetry types for crystal structure .
the third kind of symmetry he describes is the @xmath24 packing ( figs . 4 and 4a ) .
he then stated : a fourth kind of symmetry , which resembles the third in that each point is equidistant from the twelve nearest points , but which is of a widely different character than the three former kinds , is depicted if layers of spheres in contact arranged in the triangular pattern ( plan d ) are so placed that the sphere centers of the third layer are over those of the first , those of the fourth layer over those of the second , and so on .
the symmetry produced is hexagonal in structure and uniaxial ( figs . 5 and 5a ) . here `` plan d '' is the two - dimensional hexagonal packing , and figs .
5 and 5a depict the @xmath25 packing .
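Barlow's claim that the hexagonal stacking, like the cubic one, leaves each sphere equidistant from its twelve nearest can be checked by building both stackings of triangular layers. In the sketch below (my construction, with assumed offsets A, B, C for the three lateral positions of a unit-spacing triangular lattice), ABCAB produces the @xmath24 arrangement and ABABA the @xmath25 one:

```python
import math

S3 = math.sqrt(3)
OFFSETS = {'A': (0.0, 0.0), 'B': (0.5, S3 / 6), 'C': (1.0, S3 / 3)}
H = math.sqrt(2.0 / 3.0)   # vertical spacing at which touching layers sit

def stack(pattern, extent=4):
    """centres of unit-spacing spheres for the given layer pattern."""
    pts = []
    mid = len(pattern) // 2
    for k, layer in enumerate(pattern):
        dx, dy = OFFSETS[layer]
        for i in range(-extent, extent + 1):
            for j in range(-extent, extent + 1):
                pts.append((i + 0.5 * j + dx, (S3 / 2) * j + dy, (k - mid) * H))
    return pts

def touching_count(pattern):
    pts = stack(pattern)
    # pick an interior sphere: the middle-layer point nearest the axis
    centre = min((p for p in pts if abs(p[2]) < 1e-9),
                 key=lambda p: p[0] ** 2 + p[1] ** 2)
    return sum(abs(math.dist(p, centre) - 1.0) < 1e-9
               for p in pts if p != centre)

print(touching_count('ABCAB'), touching_count('ABABA'))  # 12 12
```

The vertical spacing √(2/3) makes each upper-layer sphere rest on exactly three below it, so all twelve contacts (six in-plane, three above, three below) come out at unit distance in both stackings.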
he suggested that the atoms in a crystal of quartz ( @xmath62 ) occur with the fourth kind of symmetry ( see figure [ fig:0 - 2 ] ) .
barlow also stated later in the paper ( * ? ? ?
* figs . 7 and 8 , p.207 )
the following about twinned crystal arrays with a connecting layer :
the peculiarities of _ crystal - grouping _ displayed in twin crystals can be shown to favour the supposition that we have in crystals symmetrical arrangement rather than symmetrical shape of atoms or small particles . thus if an octahedron be cut in half by a plane parallel to two opposite faces , and the hexagonal faces of separation , while kept in contact and their centres coincident ,
are turned one upon the other through @xmath63 , we know that we get a familiar example of a form found in some twin crystals . and a stack can be made of layers of spheres placed triangularly in contact to depict this form as readily as to depict a regular octahedron , the only modification necessary being for the layers above the centre layer to be placed as though turned bodily through @xmath63 , from the position necessary to depict an octahedron ( compare figs . 7 and 8 )
the modification , as we see , involves _ no departure from the condition that each particle is equidistant from the twelve nearest particles . _
[ figure : the @xmath24 and @xmath25 packings ]
the dutch botanist pieter merkus lambertus tammes made in 1930 a study of the equidistribution of pores on pollen grains @xcite .
he asked the question : what is the maximum number of circular caps @xmath64 of angular diameter @xmath13 that can be placed without overlap on a unit sphere ? here
@xmath13 is measured from the center of the unit sphere @xmath11 in @xmath65 .
tammes ( * ? ? ?
3 ) empirically determined that @xmath66 , while @xmath67 for @xmath68 .
let @xmath69 denote the maximal value of @xmath13 having @xmath70 .
he concluded that @xmath71 .
the problem of determining various values of @xmath64 is now called the _ tammes problem_. it is related to a dual question of determining the maximal radius @xmath72 possible for @xmath3 equal spheres all touching a central sphere of radius @xmath8 .
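The duality rests on a standard relation (presumably the content of the lemma cited in the text, which is not reproduced here): if equal spheres of radius r touch a central unit sphere, their centres lie at distance 1 + r from its centre, and two of them touch exactly when the angular separation θ of their contact directions satisfies sin(θ/2) = r/(1 + r). A sketch:

```python
import math

def radius_from_angle(theta):
    """largest radius r of equal spheres on a central unit sphere whose
    centre directions may be as close as theta radians: sin(theta/2) = r/(1+r)."""
    s = math.sin(theta / 2)
    return s / (1 - s)

def angle_from_radius(r):
    """minimal angular separation forced by touching spheres of radius r."""
    return 2 * math.asin(r / (1 + r))

# unit spheres (r = 1) force an angular separation of 60 degrees,
# the familiar constraint of the thirteen-spheres question
print(math.degrees(angle_from_radius(1.0)))   # ~60
assert abs(radius_from_angle(math.pi / 3) - 1.0) < 1e-12
```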
namely , the maximal value of @xmath73 having @xmath74 determines the maximal allowable radius @xmath72 of @xmath3 spheres touching a central unit sphere by a formula given in lemma [ lemma:31 ] below . in 1943
lászló fejes tóth @xcite conjectured that the volume of any voronoi cell of any sphere packing of @xmath75 by unit spheres is minimized by the dodecahedral configuration of @xmath0 unit spheres touching a central sphere .
the voronoi cell of the central sphere is then a regular dodecahedron circumscribed about the sphere .
the packing density of the dodecahedron is approximately @xmath76 , which is larger than the density of the known fcc packing of @xmath75 .
this conjecture became known as the _ dodecahedral conjecture _ and was settled affirmatively in 2010 ( see section [ sec:53 ] ) .
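The dodecahedral density can be reproduced numerically from the standard inradius and volume formulas for a regular dodecahedron of edge a: circumscribing it about a unit sphere and dividing the ball volume by the cell volume gives roughly 0.7547, above the fcc value π/√18 ≈ 0.7405. A sketch (mine):

```python
import math

# regular dodecahedron with edge a: inradius and volume (standard formulas)
def inradius(a):
    return (a / 2) * math.sqrt((25 + 11 * math.sqrt(5)) / 10)

def volume(a):
    return (15 + 7 * math.sqrt(5)) / 4 * a ** 3

a = 1 / inradius(1.0)                # edge length making the inradius exactly 1
cell = volume(a)                     # voronoi cell circumscribed about the unit ball
density = (4 / 3) * math.pi / cell   # ball volume over cell volume

print(density)                       # ~0.7547
assert density > math.pi / math.sqrt(18)   # exceeds the fcc packing density
```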
the problem of molecular rearrangement in the liquid - solid phase transition is relevant in materials science .
the structure of ordinary ice , the @xmath77 phase labeled ice @xmath78 , has an @xmath25 packing of its oxygen atoms , as observed in 1921 by dennison @xcite .
note that the hydrogen atoms are free to change their orientations to some extent ( pauling @xcite ) .
water exhibits a phenomenon of supercooling at standard pressure down to @xmath79 ; under special rapid cooling it can avoid freezing down to @xmath80 , and enter a glassy phase ( angell @xcite ) . in 1952
frederick charles frank @xcite argued that supercooling can occur because the common arrangements of molecules in liquids assume configurations far from what they would assume if frozen .
he wrote :
consider the question of how many different ways one can put twelve billiard balls in simultaneous contact with another one , counting as different the arrangements which can not be transformed into each other without breaking contact with the centre ball? the answer is _ three_. two which come to the mind of any crystallographer occur in the face - centred cubic and hexagonal close packed lattices .
the third comes to the mind of any good schoolboy , and it is to put one at the centre of each face of a regular dodecahedron .
that body has five - fold axes , which are abhorrent to crystal symmetry : unlike the other two packings , this one can not be continuously extended in three dimensions .
you will find that the outer twelve in this packing do not touch each other .
if we have mutually interacting deformable spheres , like atoms , they will be a little closer to the centre in this third kind of packing ; and if one assumes they are argon atoms ( interacting in pairs with attractive and repulsive potentials proportional to @xmath81 and @xmath82 ) one may calculate that the binding energy of the group of thirteen is @xmath83 greater than for the other two packings .
this is @xmath84 of the lattice energy per atom in the crystal .
i infer that this will be a very common grouping in liquids , that most of the groups of twelve atoms around one will be of this form , that freezing involves a substantial rearrangement , and not merely an extension of the same kind of order from short distances to long ones ; a rearrangement which is quite costly of energy in small localities , and which only becomes economical when extended over a considerable volume , because unlike the other packing it can be so extended without discontinuities . _
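frank's energy comparison can be reproduced numerically . the sketch below is an illustration , not frank's original computation : we normalize the pair potential ( repulsive @xmath82-type and attractive @xmath81-type terms ) as u(r) = r^-12 - 2 r^-6 with well depth 1 at r = 1 , build the thirteen - atom icosahedral and face - centered - cubic clusters , and relax a single uniform scale factor .

```python
import itertools
import math

def lj(r):
    # Lennard-Jones pair energy, normalized so the minimum is u(1) = -1:
    # repulsive term ~ r^-12, attractive term ~ r^-6
    return r ** -12 - 2.0 * r ** -6

def cluster_energy(points, scale):
    return sum(lj(scale * math.dist(p, q))
               for p, q in itertools.combinations(points, 2))

def relaxed_energy(points):
    # relax a single degree of freedom: uniform rescaling of the rigid shape
    return min(cluster_energy(points, s / 1000) for s in range(850, 1151))

phi = (1 + math.sqrt(5)) / 2
R = math.sqrt(1 + phi * phi)   # circumradius of the icosahedron (0, +-1, +-phi)

# 12 icosahedron vertices, normalized to unit distance from the centre
ico_shell = [(x / R, y / R, z / R)
             for s1 in (1, -1) for s2 in (1, -1)
             for x, y, z in [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]]
# the 12 fcc nearest neighbors, also at unit distance from the centre
fcc_shell = [(x / math.sqrt(2), y / math.sqrt(2), z / math.sqrt(2))
             for s1 in (1, -1) for s2 in (1, -1)
             for x, y, z in [(s1, s2, 0), (s1, 0, s2), (0, s1, s2)]]

center = [(0.0, 0.0, 0.0)]
e_ico = relaxed_energy(center + ico_shell)
e_fcc = relaxed_energy(center + fcc_shell)
print(round(e_ico, 2), round(e_fcc, 2))
```

run as written , the icosahedral cluster binds roughly 8% more strongly than the fcc one , consistent with frank's figure .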
we shall label the three local arrangements frank specifies as @xmath24 ( face - centered - cubic ) , @xmath25 ( hexagonal close packing ) and @xmath9 ( dodecahedral ) , for convenience .
the crystalline arrangements of @xmath24 and @xmath25 are `` extremal '' ( i.e. on the boundary of the configuration space ) , while the balls in @xmath9 configuration are free to move independently .
frank's assertion that there are exactly three possible arrangements is _ false _ if taken literally .
there are continuous deformations between any arrangement of types @xmath24 , @xmath25 and @xmath9 and any of the other types ( see section [ sec:53 ] ) .
there is however an important kernel of truth in frank's statement , which buttresses his argument concerning the existence of supercooling : each of the three arrangements above is `` remarkable '' in some sense ( see section [ sec:5 ] ) . to move from a large arrangement of spheres having many @xmath9 configurations to one frozen in the @xmath25 packing requires substantial motion of the spheres . in a paper titled `` das problem der dreizehn kugeln '' [ `` the problem of the thirteen balls '' ] kurt schütte and bartel leendert van der waerden @xcite gave a rigorous proof that one can not have @xmath6 unit spheres touching a given central sphere .
there has been much further work on this problem . in his 1956 paper
titled `` the problem of @xmath6 spheres '' john leech @xcite gave a two page proof of the impossibility of @xmath6 unit spheres touching a unit sphere .
more accurately he stated : `` in the present paper i outline an independent proof of this impossibility , certain details which are tedious rather than difficult have been omitted . ''
various authors have written to fill in such details , which balloon the length of the proof .
these include work of maehara @xcite in 2001 , who gave in 2007 a simplified proof @xcite .
other proofs of the thirteen spheres problem were given by anstreicher @xcite in 2004 and musin @xcite in 2006 . in 1969 lászló fejes tóth
@xcite discussed the problem of characterizing those sphere packings in space that have the property that every sphere in the packing touches exactly @xmath0 neighboring spheres .
the @xmath24 and @xmath25 packing both have this property , as already noted by barlow ( 1883 ) .
there are in addition uncountably many other packings , obtained by stacking plane layers of hexagonally packed spheres ( `` penny packing '' ) , where there are two choices at each level of how to pack the next level .
fejes tóth conjectured that all such packings are obtained in this way .
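the `` two choices at each level '' structure can be made concrete with a toy enumeration ( a sketch using the standard a / b / c labeling of layer offsets , not taken from the papers cited ) : label each hexagonal layer by its lateral offset , with consecutive layers required to differ ; hcp is the period - two word abab... and fcc the period - three word abcabc... .

```python
from itertools import product

def stackings(n):
    # label each hexagonal layer A, B or C by its lateral offset; the only
    # constraint is that consecutive layers must differ
    return [''.join(w) for w in product('ABC', repeat=n)
            if all(x != y for x, y in zip(w, w[1:]))]

seqs = stackings(6)
print(len(seqs))            # 3 * 2**5 = 96: two choices at each of the 5 steps
assert 'ABABAB' in seqs     # the hcp period
assert 'ABCABC' in seqs     # the fcc period
```

the count 3 * 2^(n-1) grows without bound , which is why there are uncountably many infinite stackings .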
this conjecture of fejes tóth's was settled affirmatively by hales @xcite in 2013 . in their book : _ sphere packings , lattices and groups , _
john h. conway and neil j. a. sloane considered the question : _ what rearrangements of the @xmath0 unit spheres are possible using motions that maintain contact with the central unit sphere at all times ?
_ in ( * ? ?
* chap . 1 ,
appendix : planetary perturbations ) they sketch a result asserting : the configuration space of @xmath0 unit spheres touching a central @xmath6-th sphere allows arbitrary permutations of all @xmath0 touching spheres in the configuration .
that is , if the spheres are labeled and in the @xmath9 ( dodecahedral ) configuration , it is possible , by moving them on the surface of the central sphere , to realize an arbitrary permutation of the spheres , ending again in a @xmath9 configuration .
we will describe the motions in detail to obtain such permutations in section [ sec:6 ] .
what is the maximal radius @xmath72 possible for @xmath3 equal spheres all touching a central sphere of radius @xmath8 ?
this problem is closely related to the _ tammes problem _ discussed above , which concerns instead the maximum number of circular caps @xmath64 of angular diameter @xmath13 that can be placed without overlap on a sphere .
the latter problem is also the problem of constructing good spherical codes ( see ( * ? ? ?
* chap . 1 , sec . 2.3 ) ) .
one can convert the angular measure @xmath13 into the radius of touching spheres ; for a sphere touching a central unit sphere , its associated spherical cap on the central sphere is the radial projection of its points onto the surface of the central sphere .
[ lemma:31 ] ( @xmath13 related to radius @xmath2 ) for a fixed @xmath85 , the maximal value of @xmath86 having @xmath74 determines the maximal allowable radius @xmath5 of @xmath3 spheres touching a central unit sphere , using the formula @xmath87 conversely , given @xmath88 , we obtain @xmath89 choosing @xmath90 . from the right triangle in figure
[ fig:7 - 2 ] we have @xmath91 this relation gives a bijection of the interval @xmath92 to the interval @xmath93 the tammes problem has been solved exactly for only a few values of @xmath3 , including @xmath94 and @xmath95 .
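the conversion of lemma [ lemma:31 ] can be sketched numerically ; the explicit relation sin ( @xmath13 / 2 ) = r / ( 1 + r ) for spheres of radius r touching a central unit sphere is supplied here as the classical fact behind the lemma .

```python
import math

def theta_to_r(theta):
    # two spheres of radius r touching each other and a central unit sphere
    # subtend angular separation theta with sin(theta/2) = r / (1 + r)
    s = math.sin(theta / 2)
    return s / (1 - s)

def r_to_theta(r):
    # inverse of the bijection above
    return 2 * math.asin(r / (1 + r))

# the familiar endpoint of the bijection: theta = pi/3 corresponds to
# unit spheres (r = 1), i.e. the classical kissing-number setting
print(theta_to_r(math.pi / 3))   # 1.0 (up to roundoff)
```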
the tammes problem was solved for @xmath97 and @xmath0 by lászló fejes tóth @xcite in 1943 , where extremal configurations of touching points for @xmath98 are attained by vertices of an equilateral triangle arranged around the equator , and for @xmath99 by vertices of regular polyhedra ( tetrahedron , octahedron and icosahedron ) inscribed in the unit sphere .
fejes tth proved the following inequality : for @xmath3 points on the surface of the unit sphere , at least two points can always be found with spherical distance @xmath100 note that @xmath101 is the edge - length of a spherical equilateral triangle with the expected area for an element of an @xmath3-vertex triangulation of @xmath11 .
the inequality is sharp for @xmath102 and @xmath0 for the specified configurations above . in 1949
fejes tóth @xcite gave another proof of his inequality .
his result was re - proved by habicht and van der waerden @xcite in 1951 . after converting this result to the @xmath2-parameter using lemma [ lemma:31 ] , we may re - state his result for @xmath103 as follows .
[ th72 ] ( fejes tóth ( 1943 ) ) @xmath104 the maximum radius of @xmath0 equal spheres touching a central sphere of radius @xmath8 is : @xmath105 here @xmath106 is a real root of the fourth degree equation @xmath107 .
@xmath108 an extremal configuration achieving this radius is the @xmath0 vertices of an inscribed regular icosahedron ( equivalently , face - centers of a circumscribed regular dodecahedron ) .
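the two statements above can be checked numerically . the sketch below supplies the classical closed form of fejes tóth's bound , cos d_n = ( cot ( w )^2 - 1 ) / 2 with w = ( n / ( n - 2 ) ) pi / 6 ( stated here explicitly , since the displayed formulas are abbreviated in this text ) , and converts the sharp icosahedral case n = 12 to the maximal touching radius .

```python
import math

def ft_bound(n):
    # fejes toth's bound: among any n points on the unit sphere some pair has
    # angular distance <= d_n, with cos(d_n) = (cot(w)**2 - 1)/2, w = (n/(n-2))*pi/6
    w = (n / (n - 2)) * math.pi / 6
    return math.acos((1 / math.tan(w) ** 2 - 1) / 2)

theta12 = ft_bound(12)   # sharp case: icosahedron edge angle, ~63.435 degrees

# maximal radius of 12 equal spheres touching a central unit sphere,
# converting via sin(theta/2) = r / (1 + r)
s = math.sin(theta12 / 2)
r12 = s / (1 - s)
print(round(r12, 5))     # approximately 1.10851
```

the sharp cases n = 3 , 4 , 6 , 12 recover the equatorial triangle , tetrahedron , octahedron and icosahedron angles , respectively .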
the tammes problem was solved for @xmath110 in 1950 by van der waerden , building on work of habicht and van der waerden @xcite .
it was solved for @xmath111 by schütte .
these solutions , plus those of van der waerden for @xmath112 and schütte for @xmath113 appear in schütte and van der waerden @xcite .
they give a history of these developments in @xcite .
their paper used geometric methods , introducing and studying the allowed structure of the graphs describing the touching patterns of arrangements of @xmath3 equal circles on @xmath11 .
these graphs are now called _ contact graphs _ , and schütte and van der waerden credit their introduction to habicht .
schütte also conjectured candidates for optimal configurations for @xmath114 and van der waerden conjectured candidates for @xmath115 ( see @xcite ) .
l. fejes tóth presented the work of schütte and van der waerden in his 1953 book on sphere - packing ( * ? ? ?
* chapter vi ) .
this book uses the terminology of _ maximal graph _ for the graph of a configuration achieving the maximal radius for @xmath3 . in 1959
fejes tóth @xcite noted that the set of vertices of a square antiprism gave an extremal @xmath112 configuration on the @xmath36-sphere . in his 1963 habilitationsschrift @xcite ( see the 1986 english translation @xcite ) , ludwig danzer made a geometric study of the contact graph for a configuration of @xmath3 circles on the surface of a sphere .
this graph has a vertex for each circle and an edge for each pair of touching circles .
a contact graph is called _ maximal _ if it occurs for a set of circles achieving the maximal radius @xmath5 .
it is called _ optimal _ if it has the minimum number of edges among all maximal contact graphs .
a contact graph is called _ irreducible _ if the radius can not be improved by altering a single vertex . for each small @xmath3
, danzer found a complete list of irreducible contact graphs .
he used this analysis to prove the conjectures of schütte and van der waerden @xcite above for the cases @xmath117 .
[ th73 ] ( danzer ( 1963 ) ) @xmath104 for @xmath118 there is , up to isometry , a unique @xmath13-maximizing unlabeled configuration of spheres with @xmath74 .
@xmath108 for @xmath119 , the vertices of a regular icosahedron form the unique @xmath13-maximizing configuration .
the @xmath13-maximizing configuration for @xmath120 is a regular icosahedron with one vertex removed . in ( * ?
* theorem ii ) danzer classified irreducible sets for @xmath121 .
there are additional @xmath3-irreducible graphs for @xmath122 in these cases . for @xmath123
he finds one optimal set and one irreducible set with one degree of freedom .
he also finds for @xmath112 an irreducible set with two degrees of freedom .
for @xmath124 he finds one optimal set , two irreducible sets with no degrees of freedom , and five with at least one degree of freedom .
danzer states that the irreducible sets with no degrees of freedom ( presumably ) give relative optima .
an irreducible graph having a degree of freedom fails to be relatively optimal , since deforming along its degree of freedom leads to a boundary graph with an additional edge , where the extrema is reached .
danzer's work was not published in a journal until the 1980s . in the interim ,
böröczky @xcite gave another solution for @xmath120 , and hárs @xcite for @xmath124 .
very recently the tammes problem was solved for the cases @xmath126 and @xmath127 by oleg musin and alexey tarasov @xcite .
their proofs were computer - assisted , and made use of an enumeration of all irreducible configuration contact graphs ( see @xcite )
. earlier work on configurations of up to @xmath128 points was done by böröczky and szabó @xcite .
the case @xmath95 was solved in 1961 by raphael m. robinson @xcite .
he proved a 1959 conjecture of fejes tóth @xcite , asserting that the extremal @xmath129 and that the extremal configuration of @xmath130 sphere centers are the vertices of a snub cube .
coxeter @xcite describes the snub cube .
table [ tab711 ] summarizes optimal angular parameters and radius parameters on the tammes problem for @xmath94 and @xmath95 ( see aste and weaire ( * ? ? ?
the configuration name given is associated to the vertices in the corresponding polyhedron being inscribed in a sphere , e.g. an icosahedron has @xmath119 vertices ( see melnyk et al .
* table 2 ) ) . in the case
@xmath110 the polyhedron is any member of a family of trigonal bipyramids , including the square pyramid as a degenerate case . for @xmath120
the polyhedron is a singly capped pentagonal antiprism , i.e. the icosahedron with one vertex deleted .
the cases @xmath131 and @xmath132 are described in @xcite .
figure [ fig : contact ] shows schematically the optimal contact graphs for @xmath94 .
( table [ tab711 ] : the tammes problem for small @xmath3 . ) by taking the alternating sum of each row , or more directly by evaluating @xmath133 , we can compute the euler characteristic @xmath134 of @xmath135 for @xmath136 .
we also have ( * ? ? ?
* proposition 2.3 ) : [ thm:412 ] ( feichtner - ziegler ( 2000 ) ) for @xmath136 the moduli space @xmath137 is homotopy equivalent to the complement of the affine complex braid arrangement of hyperplanes @xmath138 of rank @xmath139 , since @xmath140 its integer cohomology algebra is torsion - free .
it is generated by @xmath8-dimensional classes @xmath141 with @xmath142 with @xmath143 and has a presentation as an exterior algebra @xmath144 where the ideal @xmath145 is generated by elements @xmath146 and @xmath147 here the complexified @xmath148-arrangement of hyperplanes @xmath149 of rank @xmath139 is cut out by the hyperplanes @xmath150 its complement @xmath151 is homeomorphic to @xmath152 .
the associated affine arrangement is : @xmath153 treating @xmath154 , we set @xmath155 a more refined result determines the integral cohomology ring for the configuration spaces of spheres , which includes torsion elements .
it was determined by feichtner and ziegler , who obtained in the special case of @xmath156 the following result ( see ( * ? ? ?
* theorem 2.4 ) ) .
[ f - z ] ( feichtner - ziegler ( 2000 ) ) for @xmath157 , the integer cohomology ring + @xmath158 has only @xmath36-torsion .
it is given as @xmath159 in which @xmath145 is the ideal of relations given in theorem [ thm:412 ] .
in this result the expression @xmath160 denotes a direct summand of @xmath161 in cohomology of degree @xmath162 , e.g. there is a @xmath163 direct summand in @xmath164 .
morse theory , as treated in milnor @xcite , concerns how topology changes for the _ sublevel sets _
@xmath165 of a given , sufficiently nice , real - valued function @xmath166 on a manifold @xmath167 , as the level set parameter @xmath168 varies . at the _ critical values _ of the function , where its gradient vanishes , the topology changes .
this change can be described by adding up the contributions of individual _ critical points _ of the function that occur at the critical values .
more precisely , a _
morse function _ is a smooth enough function that has only isolated _ critical points _ , each of which is non - degenerate , and arranged so that only one critical point occurs at each critical level @xmath169 . here _
non - degenerate _
means that the function @xmath166 is twice - differentiable and its hessian matrix @xmath170 $ ] is nonsingular at the critical point .
the topology of a sublevel set @xmath171 is changed as @xmath168 ascends past a critical value , up to homotopy , by attaching a cell of dimension equal to the _ index _ of the critical point : the number of negative eigenvalues of the hessian .
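as a toy illustration of reading off the index ( an example of ours , not from the text ) , take f ( x , y , z ) = x^2 + y^2 - z^2 , whose hessian at its critical point ( the origin ) is diagonal , so its eigenvalues are just the diagonal second derivatives :

```python
def f(x, y, z):
    # a nondegenerate critical point at the origin
    return x * x + y * y - z * z

def second_derivative(g, i, p=(0.0, 0.0, 0.0), h=1e-4):
    # central finite difference for the i-th diagonal entry of the hessian
    up, dn = list(p), list(p)
    up[i] += h
    dn[i] -= h
    return (g(*up) - 2 * g(*p) + g(*dn)) / h ** 2

# the hessian here is diagonal, so its eigenvalues are the diagonal entries;
# count signs: negative eigenvalues give the index, positive the co-index
diag = [second_derivative(f, i) for i in range(3)]
index = sum(1 for d in diag if d < 0)
coindex = sum(1 for d in diag if d > 0)
print(index, coindex)   # 1 2
```

passing this critical value attaches a 1-cell to the sublevel sets , and a 2-cell to the superlevel sets discussed next .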
our interest here will be in _
superlevel sets _
@xmath172 whose topology changes as @xmath173 descends past a critical value by attaching a cell of dimension equal to the _ co - index _ of the critical point : the number of positive eigenvalues of the hessian . in the 1980 s goresky and
macpherson @xcite developed morse theory on more general topological spaces than manifolds , namely _
stratified spaces _ in the sense of whitney @xcite , and applicable to a wider class of real - valued functions .
the configuration spaces such as @xmath174 studied here are in general stratified spaces in whitney s sense , because viewed using the @xmath2-parameter they are real semi - algebraic varieties . for the case at hand of @xmath175 and the injectivity radius function @xmath176 , we have a further problem that @xmath177 is not a morse function .
its critical points are degenerate and non - isolated , and even the notion of `` critical '' needs care in defining , since @xmath176 is a min - function of a finite number of smooth functions ( see definition [ def:41 ] ) .
technically , the angular distance function from @xmath178 is not smooth at the antipodal point @xmath179 , with angular distance @xmath180 on @xmath11 ; however we can treat these functions as if they were smooth using the following trick , valid for the nontrivial cases @xmath157 where @xmath181 : simply include the constant function @xmath182 among those functions over which we take the min , and smoothly cut off the other pairwise angular distance functions @xmath183 if they exceed @xmath184 .
an appropriate version of morse theory that applies in this context , called _ min - type morse theory _ , has only recently been sketched by gershkovich and rubinstein @xcite ( see also baryshnikov et al .
related work includes carlsson et al .
@xcite and alpert @xcite .
the treatment of @xcite studies a notion of topologically critical value . in what follows
we develop an alternative max - min approach to criticality and a morse theory for the injectivity radius function @xmath176 on configurations that is in the spirit of the criticality theory for maximizing _
thickness _ or normal injectivity radius ( also known as _ reach _ ) on configurations of curves subject to a length constraint ( or in a compact domain of @xmath75 , or in @xmath185 ) studied earlier in optimal ropelength and rope - packing problems by cantarella et al .
this approach provides a notion of critical configuration , refining the notion of a critical value .
the farkas lemma ( and its infinite - dimensional generalizations in the case of the ropelength problem ) is a key tool used in these works that relates criticality to the existence of a balanced system of forces on the configuration .
a more detailed treatment is planned in @xcite . to understand criticality for the injectivity radius function @xmath176 on @xmath186
, we first need to make sense of varying a configuration @xmath187 along a tangent vector @xmath188 to @xmath189 at @xmath190 ; here @xmath191 is a tangent vector to @xmath11 at @xmath192 , for @xmath193 . for sufficiently small @xmath168 we can define a nearby configuration @xmath194 by translating and projecting each factor back to @xmath11 . in particular , the @xmath195-directional derivative @xmath196 of a smooth function @xmath166 on @xmath189 at @xmath190 is simply @xmath197 , so @xmath190 is a critical point for smooth @xmath166 provided all its @xmath195-directional derivatives vanish at @xmath190 ; this means that the increment @xmath198 , where @xmath199 is a function which tends to @xmath21 faster than linearly .
the operation taking @xmath190 to @xmath200 can be thought of as the spherical analog of translating @xmath190 by @xmath201 via vector addition in the linear case , hence the suggestive sum notation .
the map taking @xmath201 to @xmath200 approximates ( to within @xmath202 ) the exponential map at @xmath190 .
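a quick numeric check of this approximation , for a point u of @xmath11 and a unit tangent v at u ( a sketch , with concrete coordinates chosen for illustration ) :

```python
import math

u = (1.0, 0.0, 0.0)   # base point on the unit sphere (illustrative choice)
v = (0.0, 1.0, 0.0)   # unit tangent vector at u

def translate_project(t):
    # the motion used for configurations: translate by t*v, project back to S^2
    w = tuple(a + t * b for a, b in zip(u, v))
    n = math.sqrt(sum(c * c for c in w))
    return tuple(c / n for c in w)

def exp_map(t):
    # the true exponential map: the great circle through u in direction v
    return tuple(math.cos(t) * a + math.sin(t) * b for a, b in zip(u, v))

def err(t):
    return math.dist(translate_project(t), exp_map(t))

# the discrepancy stays below t^2 for small t (here it is in fact O(t^3)),
# so the two maps agree to within the stated order
for t in (0.1, 0.01):
    assert err(t) < t ** 2
```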
now we make precise `` max - min criticality '' for the injectivity radius function @xmath176 .
[ def:46 ] a configuration @xmath203 is _
critical for maximizing _
@xmath176 provided for every @xmath188 and sufficiently small @xmath168 , we have @xmath204_+ = o(t),\ ] ] where @xmath205_+=\max\{g,0\}$ ] denotes the positive part of @xmath206 .
equivalently , a configuration @xmath190 is critical if _ no _ variation @xmath195 can _ increase _ @xmath176 to first order .
otherwise , a configuration @xmath190 is _ regular _ , that is , there exists a variation @xmath195 which _ does _ increase @xmath176 to first order , and so , by the definition of @xmath176 as a min - function , this means that for all pairs @xmath207 realizing the minimal angular distance @xmath208 , their distances increase to first order under the variation @xmath195 as well .
note that the set of regular configurations is open . if each configuration in this @xmath209-level set is regular , then this level is _ topologically regular : _ that is , there is a deformation retraction from @xmath210 to @xmath211 for some @xmath212 ( see ) . [ def:47 ] for @xmath213 ,
the _ contact graph _ of @xmath190 is the graph embedded in @xmath11 with vertices given by points @xmath192 in @xmath190 and edges given by the geodesic segments @xmath214 $ ] when @xmath215 .
examples of contact graphs for extremal values of the tammes problem were given in figure [ fig : contact ] of section [ sec:3 ] .
[ def:48 ] a _ stress graph _ for @xmath213 is a contact graph with nonnegative weights @xmath216 on each geodesic edge @xmath217 $ ] .
a stress graph gives rise to a system of _ tangential forces _ associated to each geodesic edge @xmath217 $ ] of the contact graph .
these forces have magnitude @xmath216 , are tangent to @xmath11 at each point @xmath192 of @xmath190 , and are directed along the outward unit tangent vectors @xmath218 to the edge @xmath219 at its endpoints @xmath220 , respectively .
a stress graph is _ balanced _ if the vector sum of the forces in the tangent space of @xmath11 at @xmath192 is zero for all points of @xmath221 a configuration @xmath190 is _ balanced _ if its underlying contact graph has a balanced stress graph for some choice of non - negative , not - everywhere - zero weights on its edges .
[ thm : contact ] to each critical value @xmath222 for the injectivity radius @xmath176 , there exists a balanced configuration @xmath190 with @xmath223 .
the vertices of the contact graph are a subset of the points in @xmath190 and the geodesic edges of the contact graph all have length @xmath13 . as in ( * ? ? ? * corollary 3.4 and equation 2 ) , since @xmath176 is a min - function on @xmath224 , if @xmath222 is not a topologically regular value of @xmath176 , then some configuration @xmath225 is balanced . because @xmath226 , the conditions on the vertices and edge lengths are clearly met .
we now prove a converse result .
[ thm : converse ] if a configuration @xmath227 on @xmath11 is balanced , then @xmath227 is critical for maximizing the injectivity radius @xmath176 .
we will need a preliminary lemma . consider a planar graph @xmath161 embedded on the unit sphere @xmath11 via a map @xmath228 which is @xmath229 on the edges of @xmath161 .
( by slight abuse of notation , a point on its image in @xmath11 may also be denoted by @xmath178 . )
suppose each edge @xmath219 of @xmath161 is assigned a _ nonzero _
weight @xmath230 .
let @xmath231 denote the length of edge @xmath232 induced by the map @xmath178 , and let @xmath233 be the total _ weighted length _ of the embedded graph @xmath234 .
we can vary the map @xmath178 using a @xmath229 vector field @xmath235 , just as we varied a configuration : for sufficiently small @xmath168 , each point @xmath236 on the image of the graph is moved to @xmath237 .
let @xmath238 denote the first derivative at @xmath239 of weighted length for this varied graph , i.e. the _ first variation _ of @xmath240 along @xmath235 .
[ lem:412 ] the first variation @xmath238 of the weighted length @xmath240 for the embedded graph @xmath234 vanishes for every vector field @xmath235 on @xmath11 if and only if the following two conditions hold : @xmath104 each edge @xmath219 joining a pair of vertices @xmath241 of @xmath161 maps to a geodesic arc @xmath242=[{{\bf u}}^-,{{\bf u}}^+]$ ] in the embedded graph @xmath234 ; @xmath108 at any vertex @xmath178 of the embedded graph @xmath234 , the weighted sum @xmath243 , where the sum is taken over the subset of edges @xmath244 incident to @xmath178 , and where @xmath245 is the outer unit tangent vector of @xmath244 at @xmath178 .
this lemma is a direct consequence of the first variation of length formula @xmath246 ( see , for example , hicks ( * ? ? ?
* chapter 10 , theorem 7 , page 148 ) ) . here
@xmath247 is the unit tangent vector field of the edge @xmath232 , and @xmath248 is the geodesic curvature vector of @xmath232 ; with respect to any local arclength parameter on @xmath232 , the geodesic curvature vector is the projection to @xmath11 of the acceleration : @xmath249 , which is tangent to @xmath11 and normal to @xmath232 , and which vanishes iff @xmath232 is a geodesic arc .
now express @xmath250 as a sum of edge terms and vertex terms . the geodesic arc condition ( 1 ) that @xmath251 along every edge implies the edge terms in @xmath238 all
vanish for any variation @xmath235 of the map @xmath178 ; and the force balancing condition ( 2 ) implies all vertex terms vanish for any variation @xmath235 .
conversely , given any interior image point @xmath178 of an edge , take a variation @xmath235 supported in an arbitrarily small neighborhood of @xmath178 , and orthogonal to @xmath232 at @xmath178 : the vanishing of @xmath238 implies condition ( 1 ) that @xmath251 ; similarly , at any given vertex @xmath252 , consider a pair of variations @xmath253 supported in an arbitrarily small neighborhood of @xmath178 which approximate an orthogonal pair of translations of the tangent space to @xmath11 at @xmath178 : the vanishing of @xmath238 for both of these @xmath253 implies the forces balance ( 2 ) . in case @xmath254 , vanishing of the first variation of @xmath255 does not imply @xmath232 is a geodesic arc : instead , the edges with nonzero weights form a balanced geodesic subgraph of the original embedded graph @xmath234 .
lemma [ lem:412 ] suggests the following definition .
[ def:413 ] an embedded graph satisfying properties ( 1 ) and ( 2 ) is called a _ balanced geodesic graph . _
( note that there is no requirement here that the geodesic edge lengths are integer multiples of some basic length , as would be the case for a contact graph . )
lemma [ lem:412 ] shows that a balanced geodesic graph has vanishing first variation of weighted length @xmath255 , even if some of its edge weights @xmath216 are zero . by hypothesis , there are non - negative edge weights ( not all zero ) so that the resulting stress graph @xmath234 for the configuration * u * is balanced . by lemma [ lem:412 ]
the first variation @xmath238 of weighted length for @xmath234 vanishes for all variation vector fields @xmath235 on @xmath11 .
suppose ( to the contrary ) that * u * were _ not _
critical for maximizing the injectivity radius @xmath176 .
then there would be a variation @xmath256 of @xmath257 so that every geodesic edge of the stress graph has length increasing at least linearly in @xmath256 .
extend @xmath256 to an ambient @xmath229 variation vector field @xmath235 on @xmath11 .
since the edge weights are _ nonnegative _ , and not all zero , that implies the weighted length of the stress graph also increases at least linearly in @xmath235 , a contradiction .
a key property of balanced configurations is that for each @xmath136 the set of radii @xmath2 such that @xmath135 contains a balanced configuration @xmath190 of injectivity radius @xmath258 is finite .
it follows that _ the set of critical radius values for @xmath135 is finite_. this finiteness result can be proved using the structure of the spaces @xmath17 $ ] as real semi - algebraic sets , which we consider in @xcite .
we will assume this finiteness result holds in the discussions in [ sec:43a ] ; it can be directly verified for small @xmath3 . for small radii , it is convenient to state results for @xmath259 in terms of the angle parameter @xmath13 . for sufficiently small angles , the superlevel sets
@xmath260 will have the same homotopy type as the full configuration space @xmath189 . in terms of the radius function
, the conclusion of this result applies for @xmath261 , where @xmath262 is the smallest critical value for @xmath10 $ ] .
[ thm:43 ] suppose @xmath136 .
the smallest critical value for maximizing @xmath176 on @xmath135 is @xmath263 , achieved uniquely by the @xmath3-ring configuration of equally spaced points along a great circle .
moreover , for angular diameter @xmath264 the following hold .
@xmath104 the space @xmath260 is a strong deformation retract of the full configuration space @xmath265 .
@xmath108 the reduced space @xmath266 is a strong deformation retract of the full reduced configuration space @xmath135 . consequently each has , respectively , the same homotopy type and cohomology groups as the corresponding full configuration space .
this result corresponds to theorem 5.1 of @xcite .
first note that by using equal weights on each of its edges , the @xmath3-ring is balanced and hence a critical configuration by theorem [ thm : converse ] .
the balanced contact graph on @xmath11 of a @xmath267-critical @xmath3-configuration has geodesic edges with angular length @xmath268 . in order to balance
, its total angular length must be at least @xmath269 , the length of a complete great circle .
thus if @xmath270 , then the total length @xmath271 and there is no balanced @xmath3-configuration in @xmath260 and @xmath222 is not a critical value for @xmath176 . in this case
, a weighted @xmath176-subgradient - flow provides the strong deformation retraction of @xmath272 to @xmath260 .
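the great - circle length bound in this proof can be checked numerically for the @xmath3-ring : adjacent ring points sit 2*pi/n apart , and converting that angle to a touching - sphere radius again uses the assumed relation sin(theta/2) = r/(1+r) :

```python
import math

# The n-ring places n equally spaced points on a great circle, so adjacent
# centers are 2*pi/n apart. Converting this angular gap to the radius of
# touching spheres uses sin(theta/2) = r/(1+r), an assumed standard relation
# (the paper's own formula is not reproduced here).

def n_ring_critical_radius(n):
    theta = 2.0 * math.pi / n          # angular gap between adjacent ring points
    s = math.sin(theta / 2.0)
    return s / (1.0 - s)
```

for n = 4 the formula evaluates to 1 + sqrt(2) , and for n = 6 it gives radius 1 , the ring of six unit spheres around the equator .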
[ cor:48 ] for @xmath273 and @xmath274 the configuration spaces @xmath260 and @xmath275 are path - connected , but not simply - connected .
these spaces have the same homotopy type as @xmath189 ( resp .
@xmath135 ) , which is connected since @xmath276 ( resp .
@xmath277 ) .
they each are closures of open manifolds and are connected , so are path - connected .
we have @xmath278 for some @xmath279 , using the formula applied for @xmath274 , so @xmath135 is not simply - connected .
finally , @xmath189 is not simply connected via the product decomposition in theorem [ thm:411 ] .
we consider reduced configuration spaces @xmath17 $ ] having radius parameter @xmath2 sufficiently close to @xmath5 , depending on @xmath3 . using the finiteness of the set of critical values , there is an @xmath280 such that the upward `` gradient flow '' of the injectivity radius function @xmath176 ( or of the corresponding touching - sphere radius function @xmath2 ) defines a deformation retraction from @xmath17 $ ] to @xmath281 $ ] for the range @xmath282 .
the simplest topology that may occur at @xmath5 is where @xmath283 $ ] has all its connected components contractible ; the property holds for most small @xmath3 in fact , for all @xmath284 except @xmath110 .
when it holds , the cohomology of @xmath17 $ ] in this range of @xmath2 takes a very simple form , entirely in dimension @xmath21 . for @xmath110 the cohomology does _ not _ have the purity property : the reduced configuration space @xmath289 $ ] is @xmath290-dimensional for @xmath291 but becomes @xmath36-dimensional at @xmath292 .
some optimal maximum radius configurations at @xmath293 have room for an extra sphere ( giving @xmath294 ) : the sphere centers form five vertices of an octahedron , and either vertex in an antipodal pair of vertices can freely and independently move towards the unoccupied sixth vertex of the octahedron .
the resulting reduced configuration space @xmath295= { \operatorname{bconf}}(5 ; \frac{\pi}{2})$ ] is a simplicial @xmath36-complex which is not contractible ; it is pictured schematically in figure [ fig : example5 ] .
it has a single connected component having euler characteristic @xmath296 . for further discussion of this space as a critical stratified set ,
see section [ sec:46 ] .
[ figure fig : example5 : maximal stratified set for @xmath110 ] does the purity property hold for all or most large @xmath3 ?
we do not know .
one might expect that extremal configurations for high values of @xmath3 at @xmath297 will have most spheres held in a rigid structure , and that for @xmath2 near it all individual spheres will only be able to move in a tiny area around them , each contributing a connected component to the reduced configuration space . against this expectation , computer experiments packing @xmath3 equal - radius two - dimensional disks confined to a unit disk suggest the possibility for some @xmath3 that extremal configurations could have _ rattlers _ , which are loose disks that have motion permitted even at @xmath298 ( lubachevsky and graham @xcite ) . however , even with rattlers one could still have contractibility of individual connected components .
the hypothesis of extremal configurations being rigid ( and unique ) is known to hold for @xmath299 .
when the purity property holds one can ( in principle ) determine the number of connected components for the set of near - maximal configurations ; call it @xmath300 .
this value depends on the symmetries of each maximal configuration under the @xmath16 action . denoting the isomorphism types of the connected components of maximal rigid ( labeled ) configurations of @xmath3 points at @xmath297 by @xmath301 for @xmath302 , one would have @xmath303 . for @xmath304 , excluding @xmath305 , the extremal configurations for the tammes problem are known to be unique up to isometry ; call them @xmath306 . the analysis of danzer given in theorem [ th73 ] covers the cases @xmath307 . for the case @xmath119 , the unique extremal configuration @xmath9 of vertices of an icosahedron has @xmath308 , the alternating group , of order @xmath309 , whence @xmath310 . connected components of critical strata necessarily have dimension at least three , coming from the @xmath16-action . in what follows
we consider reduced critical strata that quotient out by this action . at a critical value @xmath177
there can be several disconnected reduced critical strata , and such strata can have positive dimension .
we give examples of each .
for @xmath110 a positive dimensional reduced critical stratified set occurs at the maximal radius value @xmath311 .
the set of ( reduced ) critical configurations forms a family , which is two - dimensional , containing multiple strata .
a generic contact graph at the maximal injectivity radius @xmath312 is a @xmath313-graph having @xmath36 polar vertices and @xmath7 equatorial vertices .
this contact graph , depicted in figure [ fig : contact ] , has @xmath7 faces and @xmath314 edges and is optimal .
the three angles between equatorial vertices can range between @xmath315 and @xmath180 , with the condition that their sum is @xmath269 , defining a @xmath36-simplex . as long as none of the equatorial angles is @xmath180 , criticality is achieved using weights that are non - zero on all the edges . when an equatorial angle is @xmath180 , corresponding to a corner of the @xmath36-simplex , these equatorial vertices may be regarded as a new pair of polar vertices . in this configuration , as the angles between equatorial vertices go to @xmath180 , some weights of the stress graph can go to @xmath21 and the support of the weights degenerates to a @xmath316-ring .
the limit contact graph consists of the edges of a square pyramid whose base is that @xmath316-ring .
this gives a non - optimal contact graph with @xmath27 faces and @xmath317 edges .
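both contact graphs are embedded in the @xmath36-sphere , so their vertex , edge and face counts must satisfy euler 's formula v - e + f = 2 . the sketch below checks this , reading the counts as 5 vertices , 6 edges , 3 faces for the generic graph and 5 vertices , 8 edges , 5 faces for the square pyramid ( these concrete numbers are our assumed reading of the hidden values ) :

```python
# Both contact graphs are planar graphs embedded in the 2-sphere, so
# vertices - edges + faces = 2 must hold. The concrete counts below are
# assumed readings: the generic graph has 2 polar and 3 equatorial vertices
# with each equatorial vertex joined to both poles; the degenerate limit
# graph is the edge graph of a square pyramid.

def euler_characteristic(vertices, edges, faces):
    return vertices - edges + faces

generic_graph = (5, 6, 3)     # (vertices, edges, faces), assumed counts
square_pyramid = (5, 8, 5)    # square pyramid: 5 vertices, 8 edges, 5 faces
```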
for @xmath119 there are several distinct reduced critical strata at the critical value @xmath318 , two of which correspond to the @xmath24-configuration and @xmath25-configurations , singled out in frank s discussion in section [ sec:27 ] .
these configurations are defined in section [ sec:52 ] below , and their criticality is shown in theorem [ th913bb ] .
for very small @xmath3 it is possible to completely work out all the critical points and the changes in topology .
we illustrate such an analysis on the simplest nontrivial example @xmath319 ( see figure [ fig : morse4 ] , explained below ) .
we consider the reduced superlevel sets @xmath320 $ ] . since @xmath321 is @xmath317-dimensional , away from the critical values these spaces are @xmath27-dimensional manifolds with boundary .
if we ignore the labelling of points and classify the contact graphs for four vertices , there are exactly two geometrically distinct @xmath176-critical @xmath316-configurations in @xmath322 : 1 .
the @xmath316-ring of four equally spaced points around a great circle on @xmath11 with @xmath323 which is a saddle configuration for @xmath176 .
there is a @xmath8-dimensional subspace of the tangent space to @xmath322 at the @xmath316-ring along which @xmath176 increases to second order , i.e. the co - index is @xmath324 .
the critical value for @xmath2 for the @xmath316-ring is @xmath325 .
2 . the vertices of the regular tetrahedron with @xmath326 , which is the maximizing configuration for @xmath176 on @xmath322 , i.e. the co - index @xmath327 . the critical value for @xmath2 for tet is @xmath328 . there are two intervals @xmath329 and @xmath330 $ ] on which the topology of @xmath331 $ ] remains constant . from theorem [ thm:411 ]
, it can be seen that on the interval @xmath332 , @xmath320 $ ] is homeomorphic to @xmath322 .
this has the homotopy type of @xmath38 punctured at two points , hence @xmath333 , { \mathbb z } ) = { \mathbb z } , \quad h^{1}({\operatorname{bconf}}(4)[r ] , { \mathbb z } ) = { \mathbb z}^2,\quad h^{k}({\operatorname{bconf}}(4)[r ] , { \mathbb z } ) = 0 , \ , \mbox{for}\,\ , k \ge 2.\ ] ] on the open interval @xmath334 , the manifold @xmath320 $ ] has two connected components , each diffeomorphic to a @xmath27-ball , which can be seen from the strong deformation retraction to @xmath335 $ ] consisting of the two points associated to the orientated labelings of @xmath336 configurations , @xmath337 and @xmath338 .
hence @xmath333 , { \mathbb z } ) = { \mathbb z}^2 , \quad h^{k}({\operatorname{bconf}}(4)[r ] , { \mathbb z } ) = 0 , \ , \mbox{for}\,\ , k \ge 1.\ ] ] figure [ fig : morse4 ] above is only a schematic picture , since we cannot draw a @xmath27-dimensional manifold .
it compresses four of the dimensions .
the visible points take @xmath2-values with @xmath339 .
the value @xmath340 is a circular vertical ring in the middle , and the values of @xmath2 increase as one moves to the left or right , reaching a maximum at @xmath337 and at @xmath338 . from table
[ table4 ] , we can easily compute the euler characteristic @xmath341 . the indexed sum of critical points of the function @xmath342
gives an alternative computation of the euler characteristic as @xmath343 . we count the _ labeled _ configurations in @xmath322 : since the @xmath316-ring has symmetry group @xmath344 of order @xmath317 in @xmath16 , there are @xmath345 critical points of this type with co - index @xmath8 ; and since tet has symmetry group @xmath346 of order @xmath0 in @xmath16 , there are @xmath347 critical points of this type with co - index @xmath21 ; and so we obtain @xmath348 as predicted .
in fact , the morse complex for @xmath176 captures the fact that @xmath322 itself has the homotopy type of the @xmath313-graph : there are @xmath36 vertices ( @xmath21-cells ) in the complex corresponding to the maxima ( co - index @xmath21 ) @xmath337 and @xmath349 configurations ; there are @xmath7 edges ( @xmath8-cells ) corresponding to the @xmath7 saddle ( co - index @xmath8 ) @xmath316-ring configurations .
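the labeled - critical - point count above can be reproduced in a few lines , assuming the symmetry group orders 8 ( dihedral , for the @xmath316-ring ) and 12 ( tetrahedral rotations , for tet ) as our reading of the hidden values :

```python
import math

# Indexed count of labeled critical points for n = 4. Each geometric
# critical configuration appears in 4!/|sym| labeled copies, where |sym| is
# the order of its symmetry group in SO(3); the orders 8 (dihedral, 4-ring)
# and 12 (tetrahedral rotations, tet) are assumed readings of the text.

labelings = math.factorial(4)        # 24 labelings of four spheres

ring_saddles = labelings // 8        # co-index 1 critical points
tet_maxima = labelings // 12         # co-index 0 critical points (maxima)

# maxima contribute +1 and co-index-1 saddles contribute -1
euler_char = tet_maxima - ring_saddles
```

the result chi = 2 - 3 = -1 agrees with the theta - graph count of 2 vertices and 3 edges .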
the complexity of the changes in topology of the configuration space grows rapidly with @xmath3 . for larger values of @xmath3
there are many @xmath176-critical configurations which are not maximal .
the value @xmath119 is large enough that it is extremely challenging to obtain a complete analysis of the critical configurations of the configuration space , and to analyze the variation of the topology as a function of the radius @xmath2 .
the betti numbers for @xmath119 for radius @xmath39 given in table [ table4 ] differ greatly from those at @xmath351 where the cohomology of @xmath352 $ ] is entirely in dimension @xmath21 , according to the purity property , which holds for @xmath119 by results in section [ sec:3 ] .
this topology change involves millions of ( labeled ) critical points .
its full investigation remains a task for the future .
in this section , we discuss @xmath353 $ ] and @xmath23 $ ] , the configuration spaces of @xmath0 unit spheres touching a central unit sphere @xmath11 .
these configuration spaces are remarkable and have some special properties .
the value @xmath354 is a critical value and has ( at least ) two geometrically distinct critical points , the @xmath24 and @xmath25 configurations .
we believe @xmath8 is the maximal radius @xmath2 where the spheres @xmath26 $ ] are arbitrarily permutable with motions remaining on @xmath11 ( see section [ sec:65 ] ) .
the case where all spheres are unit spheres has been extensively studied in connection with sphere packing .
the value @xmath22 is a critical value of the radius function @xmath2 , and we will see that the associated configuration spaces @xmath353 $ ] and @xmath355 $ ] are not manifolds . to better understand their topology , it is useful to consider the spaces @xmath356 $ ] and @xmath26 $ ] for @xmath2 in a neighborhood of @xmath8 .
these are stratified spaces naturally embedded in @xmath357 and filtered by @xmath2 . for noncritical values of @xmath2 , the spaces @xmath356 $ ] and @xmath26 $ ] are submanifolds with boundary . for all @xmath358
, the space @xmath356 $ ] has top dimension @xmath130 .
after factoring out the ambient @xmath16-action , the space @xmath26 $ ] has top dimension @xmath19 .
we now consider the three configurations of @xmath0 touching spheres singled out by frank ( 1952 ) @xcite . in figure
[ fig : configs ] , the three polyhedra have vertices located at the @xmath0 touching sphere centers of these configurations and centroids located at the center of the central sphere .
the edges of these polyhedra specify the contact graphs of these configurations , also pictured schematically in figure [ fig : configs ] .
the @xmath9 configuration realizes the optimal contact graph for @xmath119 given in figure [ fig : contact ] in section [ sec:33 ] . *
the @xmath9 configuration is obtained by placing @xmath0 spheres touching a central @xmath6-th sphere at the vertices of an inscribed icosahedron ; such touching points are also the centers of the faces of a circumscribed dodecahedron .
it has oriented symmetry group @xmath359 , the icosahedral group , of order @xmath309 and in @xmath23 $ ] there are @xmath360 of these . * the @xmath24 configuration is obtained by stacking three layers of the hexagonal lattice , with the third layer not lying over the first layer .
the inscribed polyhedron formed by the convex hull of the @xmath0 points of the @xmath24 configuration where the spheres touch the central sphere is a cuboctahedron .
the circumscribed dual polyhedron which has the @xmath0 points as the center of its faces is a rhombic dodecahedron .
it has oriented symmetry group @xmath361 , the octahedral group , of order @xmath130 and in @xmath23 $ ] there are @xmath362 of these . * the @xmath25 configuration is obtained by stacking three layers of the hexagonal lattice , with the third layer lying directly over the first layer .
the inscribed polyhedron formed by the convex hull of the @xmath0 points of @xmath25 where the spheres touch the central sphere is a triangular orthobicupola .
this polyhedron is the johnson solid @xmath363 .
the circumscribed dual polyhedron which has the @xmath0 points as the center of its faces is a trapezoidal rhombic dodecahedron .
it has oriented symmetry group @xmath364 , the dihedral group of order @xmath314 , and in @xmath23 $ ] there are @xmath365 of these .
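the counts of labeled copies of each configuration quoted above follow the pattern 12!/|oriented symmetry group| ; the sketch below evaluates them , assuming group orders 60 , 24 and 6 for the @xmath9 , @xmath24 and @xmath25 configurations respectively ( our assumed reading of the hidden values ) :

```python
import math

# Labeled copies of each of Frank's configurations: 12!/|oriented symmetry
# group|. The group orders (icosahedral 60 for DOD, octahedral 24 for FCC,
# dihedral 6 for HCP) are assumed readings of the text.

total = math.factorial(12)

counts = {
    "DOD": total // 60,
    "FCC": total // 24,
    "HCP": total // 6,
}
```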
the @xmath24 and @xmath25 configurations are elements of @xmath366 $ ] , while the @xmath9 configuration is an interior point of @xmath23 $ ] .
we first consider rigidity properties of these packings . in the following definition
we identify a sphere tangent to the @xmath36-sphere with the circular disk ( i.e. spherical cap ) on @xmath11 that it produces by radial projection .
( connelly @xcite ) a packing of disks on @xmath11 is _ locally jammed _ if each disk is held fixed by its neighbors .
that is , no disk in the packing can be moved if all the other disks are held fixed .
we say a configuration of disks is _ jammed _ if it can only be moved by rigid motions .
we call it _ completely unjammed _ if each disk can be moved slightly while holding all the other disks fixed . [ th913 ] the @xmath9 configuration in @xmath353 $ ] is completely unjammed . its space of ( infinitesimal ) deformations has dimension @xmath130 .
the deformation space is @xmath19 dimensional if viewed in @xmath23 $ ] .
the maximal radius for @xmath0 spheres is @xmath367 and is achieved in the @xmath9 configuration .
therefore the deformation space at @xmath9 is full dimensional .
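a quick numerical illustration of why the @xmath9 configuration is completely unjammed : with standard icosahedron coordinates ( an assumed coordinate choice , not the orientation fixed later in the text ) , the minimal angular distance between vertices is arccos(1/sqrt(5)) , about 63.43 degrees , strictly more than the 60 degrees needed by touching unit spheres , so every sphere has room to move :

```python
import math
from itertools import combinations

# Minimal angular separation of icosahedron vertices, using the standard
# coordinates (0, +-1, +-phi) together with their cyclic shifts. The
# minimum is arccos(1/sqrt(5)) ~ 63.43 degrees, strictly above the 60
# degrees at which unit spheres touching a central unit sphere collide.

phi = (1 + math.sqrt(5)) / 2

raw = []
for a in (-1.0, 1.0):
    for b in (-phi, phi):
        raw += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]

norm = math.sqrt(1 + phi * phi)
verts = [(x / norm, y / norm, z / norm) for (x, y, z) in raw]

def angle(u, v):
    d = max(-1.0, min(1.0, sum(ui * vi for ui, vi in zip(u, v))))
    return math.acos(d)

min_angle = min(angle(u, v) for u, v in combinations(verts, 2))
```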
in contrast , both the @xmath24 and @xmath25 configurations are _ locally jammed _
, i.e. they are rigid against motion of any one disk while holding all the other disks fixed ; each of their infinitesimal deformation spaces has codimension at least @xmath36 .
later , in section [ sec:53 ] , we describe a deformation of the @xmath9 packing to the @xmath24 packing .
this deformation , properly adjusted , has @xmath314 moving balls during its final phase arriving at @xmath24 .
( the @xmath314 fixed balls form an antipodal pair of `` triangles '' . )
we believe this value @xmath314 to be the smallest number of moving balls needed to unjam the @xmath24 configuration . for a manual on how to unlock @xmath24 , see section [ manual ] .
a deformation of the @xmath9 configuration to the @xmath25 configuration , also described in section [ sec:53 ] , requires @xmath368 moving balls at the instant of arrival at @xmath25 .
we believe this value @xmath368 to be the smallest number of moving balls needed to unjam the @xmath25 configuration . for a manual on how to unlock @xmath25 ,
see section [ manual ] . a possible reason for the larger number of moving balls needed to unjam
the @xmath25 configuration compared with that of the @xmath24 configuration is that the @xmath25 configuration has fewer local symmetries .
the @xmath24 and @xmath25 configurations are critical configurations for @xmath22 . according to theorem [ thm : converse ]
it suffices to show that these configurations carry balanced contact graph structures .
[ th913bb ] the @xmath24 configuration and @xmath25 configuration for @xmath22 carry balanced contact graph structures .
consequently , @xmath22 is a critical value of the radius function on @xmath369 . by theorem [ thm :
converse ] , a sufficient condition for the criticality of a configuration for maximizing injectivity radius is that its contact graph can be balanced .
that is , a set of positive weights may be assigned to the edges of the contact graph so that at each vertex the weighted vector sum , defined by the outward tangent vectors to the incident edges , vanishes . [ figure fig : stress - graph : stress graphs of the @xmath24 and @xmath25 configurations , ( a ) fcc configuration , ( b ) hcp configuration ] we now indicate weight values for the @xmath24 and @xmath25 configurations ( see figure [ fig : stress - graph ] ) .
\(1 ) at radius @xmath22 for the @xmath24 configuration , the stress graph is balanced when all the weights are equal .
this can be seen from the cubic- or @xmath361-symmetry of the contact graph .
\(2 ) at radius @xmath22 for the @xmath25 configuration , consider a weight @xmath370 on edges between triangular faces and square faces , a weight @xmath371 on edges between pairs of square faces , and a weight @xmath372 on edges between pairs of triangular faces . from the structure of the contact graph , it is possible to choose a constant @xmath373 and find a weight @xmath374 which balances the associated stress graph .
this suffices to balance the configuration , with some of the weights equal to zero .
however , it is also possible to add a uniform constant weight @xmath375 to the equatorial great circle , giving a balanced stress graph with positive weights on all edges of @xmath376 .
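the equal - weight balancing of the @xmath24 contact graph in case ( 1 ) can be verified numerically : place the twelve touching points at the vertices of a cuboctahedron ( standard coordinates , an assumed choice ) , join vertices at angular distance 60 degrees , and check that the sum of unit outward tangent vectors vanishes at every vertex :

```python
import math

# Build the 12 cuboctahedron vertices: all vectors with one coordinate 0
# and the other two equal to +-1, normalized to the unit sphere.
raw = set()
for zero_pos in range(3):
    for s1 in (-1, 1):
        for s2 in (-1, 1):
            v = [s1, s2]
            v.insert(zero_pos, 0)
            raw.add(tuple(v))

inv = 1.0 / math.sqrt(2.0)
verts = [tuple(inv * c for c in v) for v in sorted(raw)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

max_defect = 0.0
for v in verts:
    # neighbors sit at 60 degrees, i.e. dot product 1/2 on the unit sphere
    nbrs = [w for w in verts if abs(dot(v, w) - 0.5) < 1e-9]
    total = [0.0, 0.0, 0.0]
    for w in nbrs:
        # outward unit tangent vector at v along the geodesic edge toward w
        t = [wc - dot(v, w) * vc for wc, vc in zip(w, v)]
        n = math.sqrt(dot(t, t))
        total = [a + tc / n for a, tc in zip(total, t)]
    max_defect = max(max_defect, math.sqrt(dot(total, total)))
# max_defect stays at roundoff level: equal weights balance the graph.
```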
the @xmath9 configuration is not a critical configuration for @xmath22 ; instead it is a critical configuration at the maximal radius @xmath377 .
as noted in section [ sec:2 ] , fejes tóth @xcite conjectured that this configuration does have a certain extremality property for local packing by equal spheres : it gives a minimizer for a single voronoi cell of a unit sphere packing .
this statement , the dodecahedral conjecture , was proved in 2010 by hales and mclaughlin @xcite .
[ th514a ] ( hales and mclaughlin ( 2010 ) ) a @xmath9 configuration of unit spheres minimizes the volume of a voronoi cell of a unit sphere with center at the origin of @xmath75 over all sphere packing configurations of unit spheres containing that sphere .
the volume of this voronoi cell gives a local sphere packing density of approximately @xmath76 , which exceeds the sphere packing density @xmath48 in @xmath7-dimensional space .
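the local density quoted here can be recomputed from the standard inradius and volume formulas for a regular dodecahedron with inradius 1 ( the formulas are an assumption ; the text only quotes the resulting density ) :

```python
import math

# Volume of the regular dodecahedron with inradius 1 (the Voronoi cell of
# the central unit sphere in the DOD configuration), using the standard
# formulas in terms of the edge length a -- an assumed computation, since
# the text only quotes the resulting density.

sqrt5 = math.sqrt(5)

# inradius = (a / 2) * sqrt((25 + 11*sqrt(5)) / 10); solve for a with inradius 1
a = 2.0 / math.sqrt((25 + 11 * sqrt5) / 10)

# volume of a regular dodecahedron with edge length a
volume = (15 + 7 * sqrt5) / 4 * a ** 3

local_density = (4 * math.pi / 3) / volume     # unit-ball volume / cell volume
kepler_density = math.pi / math.sqrt(18)       # global sphere packing density
```

this gives a local density of approximately 0.7547 , exceeding the global packing density pi/sqrt(18) , approximately 0.7405 .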
we now show that the @xmath9 configuration can be continuously deformed inside @xmath23 $ ] to the @xmath24 configuration and to the @xmath25 configuration .
[ th55 ] @xmath104 on the space @xmath353 $ ] there is a continuous deformation of the @xmath9 configuration to the @xmath24 configuration that remains in the interior @xmath378 $ ] of @xmath353 $ ] till the final instant .
@xmath108 there is also a continuous deformation of the @xmath9 configuration to the @xmath25 configuration that remains in the interior @xmath378 $ ] of @xmath353 $ ] till the final instant .
the motions of these two deformations , measured from the touching points of the @xmath0 spheres to the central sphere , can be given by piecewise analytic functions on the @xmath36-sphere .
the proof of theorem [ th55 ] is given in sections [ sec:531 ] - [ sec:546 ] .
an unlocking manual for doing it is given in the appendix ( section [ manual ] ) .
to describe the deformations we will need coordinates . in sections [ sec:531 ] - [ sec:545 ]
we suppose a ball with radius @xmath8 centered at @xmath21 touches all the @xmath0 balls of the same given radius @xmath2 .
we initially allow all values of @xmath1 $ ] , but in the move @xmath379 described in later subsections we will necessarily restrict to @xmath380 .
we take @xmath9 to consist of @xmath0 equal balls of radius @xmath2 , touching a central unit sphere at @xmath0 vertices of an inscribed icosahedron @xmath381 .
we view this icosahedron @xmath381 as embedded in @xmath65 with cartesian coordinates so that we have : * its centroid is at the center @xmath382 of the unit sphere .
* it has two opposite faces parallel to the @xmath383-plane . in other words , for some @xmath384 , the intersections @xmath385 are triangular faces of @xmath386 . here @xmath387 , where @xmath388 is the golden ratio .
[ figure : the @xmath314-move @xmath379 , ( a ) phase 1 , ( b ) phase 2 ] the _ north balls _ are those which have their centers in the plane @xmath389 , while the _ south balls _ have their centers in the plane @xmath390 . the three north balls form a _ north triangle _ which is centrally symmetric to the _ south triangle _ formed by the three south balls , as in @xmath24 .
together with north and south triangles , we have @xmath314 remaining balls , which will be called _ equatorial _ , even though in the initial configuration they do not have their centers on the equator .
the equator lies in the plane @xmath391 . to fix their positions ,
let the _ greenwich meridian _ be defined as @xmath392 and the longitude @xmath393 be measured from it in the counterclockwise direction viewed from the north pole @xmath394 .
we require : * the center of one of the north balls is in the half - plane of the greenwich meridian , i.e. this ball touches the greenwich meridian .
let us call this ball @xmath395 * the center of one of the equatorial balls is in the half - plane of the greenwich meridian .
let us call it @xmath396 . it will necessarily be in the southern hemisphere .
this fixes the location of all @xmath0 balls . with this orientation of the icosahedron ,
the meridians of the north triangle are spaced by @xmath397 . furthermore , the meridians of the other three balls in the northern hemisphere are also spaced by @xmath184 , and the meridians combined are spaced by @xmath182 , as in @xmath24 .
the same holds for the six balls in the southern hemisphere .
we now define the `` @xmath314-move '' deformation @xmath379 , which has two variants , one leading from @xmath9 to the @xmath24 configuration , and the other leading to the @xmath25 configuration .
this move proceeds in two phases .
the first phase is the same for both variants .
it moves the @xmath314 balls that are not equatorial at constant speed along meridians towards the poles , until they form north and south triangles of three mutually touching balls .
the @xmath314 equatorial balls do not move . in the second phase
, all @xmath0 balls are moving .
in both variants , the @xmath314 equatorial balls , initially not on the equator , move towards the equator along their meridians at constant speed , to arrive on the equator at the end of the move , forming a ring of six balls on the equator .
this ring is an allowed configuration only if @xmath399 . the equatorial balls do not touch during this move until the last moment , and then all touch if @xmath22 . at the same time
, the north and south triangles will rotate about the polar axis at a variable speed , the same for all six , in such a way as to avoid the equatorial balls .
they will rotate by @xmath182 to their final position . for the @xmath24 move ,
the north triangle and south triangle rotate in the same direction , while for the @xmath25 move they rotate in opposite directions .
a key issue is to suitably specify the variable speed of rotation .
denote by @xmath400 the parallel @xmath401 where the three centers of the north triangle stop .
let @xmath402 be the parallel @xmath403 where the south triangle stops .
each of the six centers of the equatorial balls , initially not on the equator , will move at constant speed along their respective meridians towards their final positions on the equator .
parametrize this motion so that at @xmath239 , the @xmath314 balls are at their initial positions , while at @xmath404 , the @xmath314 balls are at their final positions on the equator .
the centers of the north triangle are on @xmath400 at @xmath239 and will remain on @xmath400 throughout the move . similarly ,
the centers of the south triangle are on @xmath402 at @xmath239 and will remain on @xmath402 throughout the move . the triangles simply rotate .
it now suffices to specify functions @xmath405 , which describe the _ increment _ of the longitude of the north triangle during the time @xmath406 $ ] and @xmath407 , the _ increment _ of the longitude of the south triangle during the time @xmath406 $ ] .
we will take @xmath405 to be a continuous , non - decreasing function , with @xmath408 @xmath409 . we get two different moves to @xmath24 and @xmath25 depending on which direction the south triangle rotates . the motion @xmath410 will take us to @xmath24 , and choosing the opposite rotation @xmath411 will take us to @xmath25 . the function @xmath413 , defined so that no ball from the two triangles hits any equatorial one , is certainly not unique .
here is a minimal definition of @xmath413 beginning at the second phase .
recall that our balls are open , and that : * the center of one of the three north balls , @xmath414 , is on the half - plane of the greenwich meridian . * the center of one of the equatorial balls , @xmath415 , is also on the half - plane of the greenwich meridian .
* there is an equatorial ball with the longitude @xmath416 call it @xmath417 note that the center of @xmath415 is south of the plane @xmath418 while that of @xmath419 is north of the plane @xmath391 . throughout the second phase of @xmath379 the ball @xmath415
will move north while @xmath419 moves south .
let @xmath420 and @xmath421 denote their positions , @xmath422.$ ] then for every @xmath423 define @xmath424 to be the ball with center at @xmath400 and with longitude @xmath425 . for example , @xmath426 . now let us define the function @xmath412 as follows : @xmath427 . clearly , @xmath428 for all @xmath168 small enough .
the only thing one needs to check is that @xmath429 holds for all @xmath430 $ ] .
[ lem:56 ] an increment function @xmath431 exists : @xmath432 as defined above satisfies the required condition .
we use euler coordinates on the sphere .
the latitude @xmath433 of the parallel @xmath400 is @xmath434 , where @xmath13 satisfies @xmath435 . by symmetry , it is enough to consider the movement of three balls : * @xmath414 on the parallel @xmath400 . its initial angle is @xmath436 . the latitude @xmath433 of @xmath437 is constant . * @xmath415 on the greenwich meridian @xmath438 . its initial latitude is @xmath439 and final latitude is @xmath440 . on the interval , its latitude is given by @xmath441 . * @xmath419 on the meridian @xmath442 . its initial latitude is @xmath443 and final latitude is @xmath444 . on the interval , its latitude is given by @xmath445 . the function @xmath446 is uniquely defined by : * @xmath447 * at every time @xmath448 and after initial contact , the ball @xmath449 touches the ball @xmath450 .
to complete the proof of lemma [ lem:56 ] it remains to check that the balls @xmath449 and @xmath451 of radius @xmath380 are disjoint for @xmath452 .
this fact will follow from the next lemma .
[ lem:57 ] let @xmath453 . consider an isosceles spherical triangle @xmath454 where @xmath43 has @xmath455 @xmath456 , @xmath42 has @xmath457 @xmath458 , and @xmath459 is defined by @xmath460 and the touching condition . then @xmath461 . let @xmath45 be the middle point of the arc @xmath462 . it does not depend on @xmath168 and is given by @xmath463 @xmath464 . let @xmath465 be the arc perpendicular to the arc @xmath466 at @xmath467 . then @xmath468 is simply the intersection of @xmath469 and the parallel @xmath470 . the triangle @xmath471 is a right triangle .
evidently , the legs @xmath472 and @xmath473 become shorter as @xmath168 increases .
hence the hypotenuse @xmath474 becomes shorter as well . since @xmath475 , the proof follows .
lemmas [ lem:56 ] and [ lem:57 ] complete a proof that there exists a deformation path from the @xmath9 configuration to a @xmath24 configuration and to a @xmath25 configuration , respectively .
however , the deformation path obtained does not satisfy one required condition of the theorem : remaining in the interior of the configuration space .
it exits from the interior of @xmath353 $ ] at the end of the first phase and remains on the boundary during the second phase : the three north balls are touching and the three south balls are touching .
we can modify the construction above so that no balls touch throughout the deformation until the final instant . to do this we halt the first phase just short of the three balls touching , at @xmath476 .
then in the second phase , we allow @xmath477 to increase monotonically in the north triangle at some variable speed @xmath478 as the rotation proceeds , in such a way as to avoid contact between the three north balls and the equatorial balls .
at the same time , the @xmath477 variable of the south triangle is to decrease monotonically in the reflected motion of @xmath479 .
lemma [ lem:57 ] implies that if @xmath477 approaches @xmath8 rapidly enough in the motion , then we can again avoid contact ; this is an open condition at each point @xmath168 , so by compactness of the motion interval we have a finite subcover to attain it .
\(1 ) this motion process can be continued by concatenation with an inverse @xmath379 using @xmath480 , in such a way as to arrive back at a @xmath9 configuration , differently labeled .
this is possible because there are two exit directions ( tangent vectors ) from the @xmath24 configuration and two exit directions from the @xmath25 configuration in @xmath23 $ ] .
section [ sec:551 ] studies the group of permutations of the @xmath0 labels obtainable by such deformations .
\(2 ) starting from the @xmath24 or @xmath25 configuration , there is a reference frame in which the north triangle remains fixed . the inverse of the second phase of @xmath379 describes a move which unlocks the @xmath24 configuration with @xmath314 moving balls and @xmath314 fixed balls , and which unlocks the @xmath25 configuration with @xmath368 moving balls and @xmath7 fixed balls ( see section [ manual ] ) . according to his recollection , on 25 april 1948 buckminster fuller found a `` jitterbug '' construction given by a jointed framework motion that , among other things , permits an @xmath24 configuration , given as the vertices of a cuboctahedron , to be continuously deformed into a @xmath9 configuration , given as the vertices of an icosahedron ( see @xcite ) .
in buckminster fuller s construction , the joint distances remain constant during the motion , so that they can be rigid bars , while the radii of the associated touching spheres continuously contract during the deformation . at each instant during the motion the central sphere and the @xmath0 touching spheres can all have equal radii without overlapping , and this radius varies monotonically in time .
in retrospect one may see that it is possible to rescale space during the motion via homotheties varying in time such that all spheres retain the fixed radius @xmath8 throughout the deformation . in this case the joint lengths will change continuously in the motion .
the rescaled motion no longer corresponds to a physical object with rigid bars , but it does give a continuous motion in the configuration space of @xmath0 equal spheres touching a 13-th central sphere that continuously deforms the @xmath24 configuration to the @xmath9 configuration .
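as a concrete sanity check that the rescaled motion must separate the outer spheres, one can compare circumradius-to-edge ratios of the two polyhedra. the sketch below uses the standard circumradius formulas for the cuboctahedron and the icosahedron; the numeric setup (unit balls with centers at distance 2 from the origin) is ours:

```python
import math

# Centers of 12 unit balls touching a central unit ball lie at distance 2.
R = 2.0

# For the cuboctahedron (FCC configuration) the circumradius equals the
# edge length, so neighboring centers are at distance 2 and the balls touch.
edge_fcc = R

# For an icosahedron (DOD configuration) with edge a, the circumradius is
# (a/4)*sqrt(10 + 2*sqrt(5)); solving for a with circumradius 2:
edge_dod = 4 * R / math.sqrt(10 + 2 * math.sqrt(5))

print(edge_fcc)  # 2.0
print(edge_dod)  # ~2.1029: neighboring outer balls are strictly separated
```

so at fixed radius @xmath8 the deformation from the cuboctahedral to the icosahedral arrangement opens gaps between the 12 outer spheres, consistent with the motion described above.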
the work of buckminster fuller on the `` jitterbug '' movable jointed framework is described in schwabe @xcite .
fuller described it in his book synergetics @xcite ( sec . 460.00463.00 ) . the construction is also described in edmondson @xcite ( chap . 11 ) , with a detailed analysis in verheyen @xcite .
the `` jitterbug '' motion immediately enters the interior @xmath481 $ ] after the initial instant , in contrast to the `` unlockings '' described in appendix 8 , which adhere to its boundary .
the number of connected components of the configuration space @xmath356 $ ] is related to the ability to permute labeled spheres by deformations within @xmath356 $ ] .
the possible permutability of the ( labeled ) spheres in the @xmath9 configuration in @xmath356 $ ] depends on the radius @xmath2 of the touching spheres . conway and sloane ( chap . 1 , appendix , pp . 2930 ) give a terse proof that for radius @xmath8 the labels on labeled spheres in @xmath9 configurations can be arbitrarily permuted using continuous deformations inside the space @xmath353 $ ] .
[ thm:58](permutability at radius @xmath22 ) for the radius parameter @xmath22 , each labeled @xmath9 configuration can be continuously deformed in the configuration space @xmath23 $ ] to a @xmath9 configuration at the same @xmath0 touching points with any permutation of the labeling .
( figure [ fig:5-move ] : the @xmath482-move . )

we follow the outline in conway and sloane ( chapter 1 , appendix ) . a main ingredient is an additional set of permutation moves which we call @xmath482-moves , detailed next .
beginning from the @xmath9 configuration centered at the origin , we rotate it so that two opposite balls have their centers on the @xmath477 axis .
call these balls @xmath484 and @xmath485 .
note that the centers of @xmath27 of the @xmath132 remaining balls are in the northern half - space , while the remaining @xmath27 centers are in the southern half - space .
call these balls @xmath486 and @xmath487 , respectively .

* first phase . * move the @xmath27 northern @xmath488 balls towards @xmath484 , in such a way that their centers remain on their corresponding meridians , until each of them touches @xmath484 .
note that these @xmath27 balls do not touch each other , only @xmath484 .
indeed , because their centers are located at the latitude @xmath489 , when viewed from the @xmath477-axis each of the @xmath27 balls subtends the dihedral angle @xmath490 of a regular tetrahedron ; but the longitude difference between neighboring ball centers is @xmath491 , so there remains a tiny longitude gap @xmath492 . the @xmath27 southern @xmath493 balls may be moved into the southern hemisphere in the same manner .

* second phase . * note that for @xmath22 , the @xmath314 northern balls fit into the northern half - space , while the @xmath314 southern balls fit into the southern half - space .
the union of all the @xmath314 balls in the northern hemisphere may be rotated by @xmath491 as a rigid body , keeping the remaining balls fixed .
* third phase .
* reverse the first phase .
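the longitude-gap claim in the first phase can be checked numerically; a sketch, which only verifies the stated inequality using the tetrahedral dihedral angle arccos(1/3):

```python
import math

# Each of the 5 balls clustered around N subtends, in longitude, the
# dihedral angle of a regular tetrahedron, arccos(1/3) ~ 70.53 degrees.
dihedral = math.degrees(math.acos(1.0 / 3.0))

# The 5 meridians are spaced 360/5 = 72 degrees apart.
spacing = 360.0 / 5.0

gap = spacing - dihedral
print(gap)  # ~1.47 degrees: a tiny but positive longitude gap
```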
the net result of an @xmath482-move is a cyclic permutation @xmath494 of @xmath9 of length @xmath27 , which is an even permutation .
the halfway point of a @xmath482-move , as illustrated in the right side of figure [ fig:5-move ] , is itself a `` remarkable '' critical configuration ; it is found at the center of a 4-simplex of critical configurations for @xmath22 .
the family of such configurations is in many ways similar to the maximal stratified set that occurs for @xmath110 .
the first step is to show that there is a continuous deformation of @xmath9 to itself which permutes the labels by an odd permutation . to exhibit it , we use the move @xmath495 defined before , and deform @xmath9 into an @xmath24 configuration ( note that we can do this for all @xmath496 but not for @xmath41 ) . the @xmath24 configuration has three axes of 4-fold symmetry passing through the opposite squares of four balls . by rotating @xmath497 around any such axis and then deforming our configuration back to @xmath9 via @xmath498 , we induce a permutation @xmath499 of the 12 balls , which is a product of three ( disjoint ) cyclic permutations , each of length 4 .
every such cycle is an odd permutation , hence their product @xmath499 is also odd .
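these parity claims are easy to verify mechanically; a small sketch (the label conventions here are ours, only the cycle structures come from the text):

```python
def parity(perm):
    """Return the sign (+1 even, -1 odd) of a permutation.

    perm is a list with perm[i] = image of i; each even-length cycle
    flips the sign, since a k-cycle is a product of k-1 transpositions.
    """
    seen = [False] * len(perm)
    sign = 1
    for i in range(len(perm)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:        # trace the cycle containing i
                seen[j] = True
                j = perm[j]
                length += 1
            if length % 2 == 0:
                sign = -sign
    return sign

# A 5-cycle on 12 labels (the net effect of one R-move) is even:
five_cycle = [1, 2, 3, 4, 0] + list(range(5, 12))
print(parity(five_cycle))  # 1

# Three disjoint 4-cycles (rotating the FCC configuration by 90 degrees)
# compose to an odd permutation:
three_fours = [1, 2, 3, 0, 5, 6, 7, 4, 9, 10, 11, 8]
print(parity(three_fours))  # -1
```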
the second step uses @xmath483-moves .
each such move gives a cyclic permutation of order @xmath27 .
since there are @xmath0 options for choosing @xmath484 , we get @xmath0 such @xmath27-cycles @xmath500 .
it is shown in conway and sloane ( using an elegant argument about the mathieu group @xmath501 , see @xcite ) that all such @xmath502 generate the alternating group @xmath503 , the subgroup of even permutations of @xmath504 . combined with any odd permutation @xmath499 , the full permutation group @xmath505 is generated .
the move @xmath483 can be modified in such a way that it continues to work for all values @xmath506 , for some @xmath507 slightly bigger than @xmath8 .
we first explain the modification and then propose the value of @xmath508 . the modification deals only with the second phase of @xmath482 . in order to explain it , it is enough to follow the @xmath132 longitude values of the touching points of our balls , which may be considered as points on the equator .

for @xmath22 , the northern @xmath27 balls @xmath509 correspond to the longitude values @xmath510 for @xmath511 , and we can suppose that at the initial moment these values are @xmath512 . the longitude values @xmath513 are defined similarly , corresponding to the southern balls @xmath493 , and @xmath514 . our initial move now looks as follows : @xmath515 . of course there is no need for all the @xmath516 to move with the same speed ; the only constraint is that the difference between consecutive @xmath516 should equal or exceed @xmath490 at all times . in particular , we can modify the speeds in such a way that at any time @xmath168 , we have @xmath517 for at most one value of @xmath518 .

now let the radius @xmath2 be slightly bigger than @xmath8 .
then , at the moment @xmath168 when @xmath519 , the corresponding balls @xmath520 , @xmath493 will overlap . this , however , can be remedied by making the following small deformation of our @xmath0-configuration :

* the ball @xmath488 moves up along its meridian , by the distance @xmath521 ;
* the ball n moves along the same meridian in the same direction by the distance @xmath522 ;
* the ball @xmath493 moves down along its meridian , by the distance @xmath521 ;
* the ball s moves along the same meridian in the same direction by the distance @xmath522 ;
* other balls may be rearranged in such a way that they do not intersect .

the non - overlap condition can be satisfied when @xmath523 is small enough , since there were no other collisions .
below we will show that there are @xmath27 _ bottleneck configurations _ that one encounters on the way to performing the modified @xmath483 move .
each one defines a value @xmath524 , for @xmath525 which is the maximal radius for which this configuration is allowed .
we set @xmath526 .

[ thm:62 ] for every @xmath527 , the move @xmath483 can be modified in such a way that one can reach from an initial labeled @xmath9 configuration any labeled @xmath9 configuration whose labels are an even permutation of the initial labels .
that is , the alternating group @xmath528 is generated by the compositions of different @xmath483 moves .
there will occur @xmath27 bottleneck @xmath0-configurations of the @xmath2-balls touching the unit central ball , described by certain touching patterns that correspond to the configurations appearing during the move @xmath483 at the moment when the ball @xmath488 passes due north of the ball @xmath529 .

the @xmath27 bottleneck configurations have a common pattern : @xmath316 touching balls centered on the same meridian , two in the northern half - space , and the remaining two in the southern half - space .
we denote them by @xmath484 , @xmath520 @xmath530 @xmath485 .
this set of @xmath316 balls is symmetric with respect to the plane @xmath531 . strictly speaking , as @xmath2 is slightly bigger than @xmath532 , the balls @xmath484 and @xmath485 are now centered on the meridian _ opposite _ the one containing @xmath488 and @xmath493 . the eight other balls are the remaining ones from @xmath533 . each pair \{@xmath534 , @xmath535\ } touches , as do the pairs \{@xmath534 , @xmath484\ } , as well as the pairs \{@xmath536 , @xmath537\ } .
the @xmath27 bottleneck configurations differ in how the additional pairs of balls touch .
each of the rows in the following list completes a different touching pattern that occurs as the move @xmath482 is performed :

@xmath538
@xmath539
@xmath540
@xmath541
@xmath542

observe that for any @xmath41 , and for any of the @xmath27 touching patterns , such a configuration is unique if it exists , and that it _ does _ exist for @xmath543 sufficiently small .
we define @xmath544 as the maximal values for which the above configurations exist .

( figure : the five bottleneck configurations of the modified @xmath482-move . )

we are not asserting that the value @xmath546 defined above is the true critical value above which the ( small perturbation of the ) move @xmath482 can not be performed .
indeed , we imposed some a priori constraints in making our construction of the modified @xmath482 , and did not rule out the possibility of a more `` optimal '' modification of @xmath482 .
[ defi:63 ] let @xmath547 be the maximal value of the radius @xmath2 for which there exists some modified move @xmath482 . we call it the _ upper critical radius _ . from the previous theorem we know that @xmath548 .
we expect @xmath547 to be a critical value for maximizing the radius function .
based on theorems [ th55 ] and [ thm:58 ] about deformation and permutability of labeled @xmath9 configurations , it is natural to propose the following statement .
( connectedness conjecture)[conj95 ] the configuration space @xmath353 $ ] is connected .
that is , every set of @xmath0 distinct labeled points on the @xmath36-sphere pairwise separated by spherical angle at least @xmath182 can be deformed into @xmath0 other distinct labeled points , with all points maintaining a spherical angle at least @xmath182 apart during the deformation .
this problem appears to be approachable but difficult to prove , despite the supporting evidence of permutability in theorem [ thm:58 ] .
one may approach it by cutting the space @xmath549 $ ] into many small path - connected convex pieces and gluing them together in some fashion . the computational size of the problem is daunting , since the dimension of the space is @xmath19 and it has a complicated boundary .
we also propose a stronger statement .
( strong connectedness conjecture)[conj95a ] the radius @xmath22 is the largest radius value at which the configuration space @xmath356 $ ] is connected .

in support of conjecture [ conj95a ] , the @xmath398-move appears to be possible only when @xmath550 . at one time instant it has @xmath314 spheres fitting in a ring around the equator , a condition which is allowed only for @xmath380 .
we also know that @xmath22 satisfies the necessary condition of being a critical value for the @xmath2-parameter .
we formulate one further conjecture concerning the connectivity structure of @xmath23 $ ] .
it is based on a further analysis not included here ( see @xcite ) , which indicates that @xmath23 $ ] has at each of the @xmath551 @xmath24-configurations , and at each of the @xmath552 @xmath25-configurations , a unique tangent line along which it can be approached from the interior @xmath481 $ ] .
in addition , this analysis shows that each of these is a _ local cut point _ , a point of a space which , when removed , disconnects a small open neighborhood of the point .
that is , these configurations are points at which the space @xmath26 $ ] locally disconnects as @xmath2 increases past @xmath8 .
the following conjecture asserts that these local cut points form unavoidable bottlenecks in @xmath23 $ ] in making certain rearrangements of spheres in the configuration space .
( @xmath24 and @xmath25 bottlenecks)[conj95b ] any piecewise smooth continuous curve in the reduced configuration space @xmath23 $ ] which starts at a labeled @xmath9 configuration and ends at another labeled @xmath9 configuration with the labels permuted by an odd permutation must necessarily pass through either an @xmath24 configuration or a @xmath25 configuration .
this conjecture asserts a specific way in which the @xmath24 and @xmath25 configurations may play a remarkable role in rearrangements of @xmath0-configurations , illuminating the assertion of frank ( 1952 ) in section [ sec:27 ] .

for the region @xmath553 we propose the following conjecture .
[ conj:66 ] ( two connected components ) let @xmath547 be the upper critical radius defined after theorem [ thm:62 ] .
then the space @xmath356 $ ] for @xmath554 has exactly two connected components .
two labeled configurations , @xmath9 and @xmath555 , where @xmath556 is a permutation of twelve labels , belong to different connected components of @xmath356 $ ] if and only if the permutation @xmath557 is odd .

in view of the existence of the @xmath482-move , the argument in theorem [ thm:58 ] indicates that there can be at most @xmath36 connected components containing @xmath9 configurations in this region . conjecture [ conj:66 ] asserts there are exactly these two , and no other , connected components .
we next note that the five `` bottlenecks '' in the @xmath27-move lead to the possibility of connected components not containing any @xmath9 configuration , for certain ranges of @xmath2 . during the @xmath483 move joining two @xmath9 configurations , there are @xmath27 bottlenecks , all of which one can pass through at least up to a radius @xmath558 .
there is however room for configurations of spheres of larger radius occurring between the bottlenecks .
if we increase the radius above the smallest two of the bottleneck radii , it may be possible for a sphere to get stuck in the middle of one of these regions , so that it can move neither backwards nor forwards via the @xmath482-move to a @xmath9 configuration . assuming that `` trapped '' configurations ( from blocking of the @xmath483 move ) exist containing no @xmath9 configuration , eventually as we increase @xmath2 some `` trapped '' configuration must become a critical configuration ; it would then be a local maximum in the configuration space , an isolated point in some @xmath26 $ ] .
the critical value at which this occurs would necessarily be strictly smaller than @xmath106 , using the result of danzer ( in theorem [ th73 ] ) that the extremal configuration for @xmath119 is unique .
[ conj:65 ] ( non-@xmath9 components ) there is a nonempty interval of values of @xmath41 such that the reduced configuration space @xmath26 $ ] has connected components that do not contain any copy of a @xmath9 configuration .
a positive answer to this question raises the possibility of a value of @xmath2 for which the number of connected components of @xmath26 $ ] exceeds the number of labeled @xmath9 configurations , which is @xmath559 . to obtain the latter number , let us take the @xmath9 configuration and label its @xmath0 balls .
there are @xmath560 such labelings .
two labeled configurations are equivalent ( i.e. the same in @xmath26 $ ] ) iff one can be obtained from the other by the @xmath16 rotation action .
clearly , there are @xmath561 labelings in every equivalence class .
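reading the rendered tokens as 12! labelings and equivalence classes of size 60 (the order of the rotation group of the icosahedron — our assumption for the tokens), the count works out as follows:

```python
from math import factorial

num_labelings = factorial(12)      # 479001600 labelings of the 12 balls
rot_order = 60                     # assumed order of the icosahedral rotation group
num_classes = num_labelings // rot_order
print(num_classes)  # 7983360 labeled DOD configurations up to rotation
```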
finally , for the region of @xmath2-values very close to @xmath106 , we assert that each of the spaces @xmath356 $ ] and @xmath26 $ ] has exactly @xmath562 connected components , with each component containing a @xmath9 configuration . this fact follows assuming the finiteness of the set of critical radius values @xmath2 for @xmath369 : at the point @xmath106 only the @xmath9 configurations survive , according to the uniqueness result of danzer ( theorem [ th73 ] ) , and the topology of @xmath26 $ ] does not change above the next largest critical value of @xmath2 below @xmath106 .
this paper treats configuration spaces of touching spheres for very small values of @xmath3 .
we have shown that the configuration space of @xmath0 equal spheres touching a central @xmath6-th sphere is already large enough to exhibit interesting behavior in its critical points .
concerning @xmath0-sphere configurations in the equal radius case @xmath22 we have made the following observations .
* we have clarified an assertion of frank ( 1952 ) given in section [ sec:27 ] , showing that in the space @xmath23 $ ] there are deformations interconnecting all @xmath24 , @xmath25 and @xmath9 configurations .
* we have given evidence suggesting that @xmath23 $ ] is a connected space , and conjectured that @xmath22 is the largest parameter value where @xmath26 $ ] is connected .
* we have shown that all elements of the finite set of @xmath24 and @xmath25 configurations lie on the boundary of the topological space @xmath23 $ ] and are critical points for maximizing the radius parameter .
* we have conjectured that when a deformation of 12 spheres in a @xmath9 configuration yields an odd permutation of elements , the ( finite set of ) @xmath24 and @xmath25 configurations in @xmath23 $ ] are `` unavoidable '' points .
many challenging and computationally difficult problems remain to better understand the constrained configuration space @xmath23 $ ] .

as mentioned in the introduction , configuration spaces are of interest in physics and materials science , particularly in connection with jamming in materials .
hard sphere models , which treat spheres packed inside a box , have been extensively studied in connection with jamming .
materials scientists have studied configuration spaces of small numbers of hard spheres by simulation in connection with nanomaterials .
recently , holmes - cerfon @xcite developed an algorithm that enumerates rigid sphere clusters and has determined those with up to @xmath563 spheres .
the cases of small numbers of spheres ( but larger than the @xmath3 treated here ) were studied in phillips et al . @xcite and glotzer et al . @xcite , giving estimates for extremal configurations at values of @xmath3 larger than can currently be treated mathematically .
we note that simulations of phase space can sample only a small part of it . in the simulation experiments reported in @xcite for @xmath119 equal spheres , the experimenters were unable to detect that the radii at which the @xmath482-move and the @xmath379-move permutations cease being feasible are in fact different ( as discussed in section [ sec:64 ] ) .
study of the jamming problem leads to the sub - problem of what constitutes a good notion of rigidity for such configurations .
there is a notion of `` locally jammed configuration '' in which no particle can move if its neighbors are fixed .
the tammes problem , or ( extremal ) spherical codes problem , of determining @xmath5 , is analogous to determining maximally dense jammed configurations of spheres in a box .
various notions of rigidity for spherical codes were formulated in tarnai and gáspár @xcite .
more recently , cohn et al . @xcite give a mathematical treatment of rigidity of extremal @xmath564-dimensional spherical codes .
in configuration theory models like @xmath17 $ ] of this paper , certain critical configurations at critical values of the radius parameter might serve as a proxy for jammed configurations , with the balancing condition in theorem [ thm : converse ] capturing the locally jammed condition .
perhaps only a subclass of critical configurations should be interpreted as jammed , for example those that are local maxima of the radius function .
to unlock the @xmath24 configuration , a good way is to do it with the help of a friend , hereafter called charles .
please follow these steps :

1 . ask charles to hold the @xmath7 north balls and the @xmath7 south balls firmly in their positions . these @xmath314 polar balls remain fixed during the whole process . as a result , the @xmath6-th central ball stays fixed as well .
2 . roll the remaining @xmath314 equatorial balls in a direction roughly parallel to the equator . if properly lubricated , this does not require a big effort .

3 . the equatorial balls can all be pushed either to the east or to the west , in a coordinated way .
4 . at all times you must ensure the @xmath314 rolling balls touch the central ball . this requires some practice , but it is possible and not terribly hard .
5 . observe that the @xmath314 balls roll around the central ball along the equatorial `` valley '' between the polar balls kept fixed by charles . these rolling balls can not always move equatorially , but instead move north and south slightly , in an alternating manner , as you roll them .
6 . because the @xmath314 rolling balls move north and south , some of them do not touch each other any more : free space may appear between them . also , some space can be created between them and the @xmath314 balls kept fixed by charles . this is normal .
7 . as you proceed by @xmath565 , the @xmath314 rolling balls realign in the equatorial plane , touching each other and the polar balls .
note that at this moment the configuration is locked back into @xmath24 .
each of the @xmath314 rolled balls is touching two of its equatorial neighbors , one ball to the north and one ball to the south .
unlocking the @xmath25 configuration is similar to the @xmath24 configuration , except that charles has somewhat more to do . please follow these steps :

1 . ask charles to hold firmly the three north balls and the three south balls .
the three south balls will remain fixed during the whole process .
but the north triangle has to be rotated as a whole in its plane , at some constant speed , which can be either eastward or westward ( there are two choices ) .
it will move through an angle @xmath566 .
the @xmath6-th central ball stays fixed as before .
2 . roll all the remaining @xmath314 balls in the ( roughly same ) equatorial direction as the north triangle is rotating .
this movement direction is forced on all six equatorial balls by the motion of the north triangle .
the rest of the process goes basically in the same way as for the @xmath24 configuration .
as charles proceeds to rotate the north triangle by @xmath184 , you proceed by @xmath565 , and the six middle balls align back into the equatorial plane , touching each other and the six polar balls .
note that at this moment the configuration is locked back into the @xmath25 configuration .
each of the @xmath314 rolled balls is touching two of its equatorial neighbors , one ball to the north and one ball to the south .
note that for @xmath24 , the equatorial balls underwent a cyclic permutation of length @xmath314 . for @xmath25 , the equatorial balls underwent a cyclic permutation of length @xmath314 and the @xmath7 northern balls a cyclic permutation of length @xmath7 .
these give odd permutations of @xmath24 and @xmath25 .
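the parity bookkeeping can be confirmed from the cycle types alone; a sketch (using the fact that a cycle of even length is an odd permutation):

```python
def sign_from_cycle_type(cycle_lengths):
    # Each even-length cycle contributes a factor of -1 to the sign.
    sign = 1
    for length in cycle_lengths:
        if length % 2 == 0:
            sign = -sign
    return sign

# Unlocking FCC: one 6-cycle on the equatorial balls.
print(sign_from_cycle_type([6]))     # -1: odd

# Unlocking DOD: a 6-cycle on the equatorial balls and a 3-cycle on the
# northern triangle.
print(sign_from_cycle_type([6, 3]))  # -1: odd
```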
the authors were each supported by icerm in the spring 2015 program on `` phase transitions and emergent properties . ''
r. k. was also supported by the university of pennsylvania mathematics department sabbatical visitor fund and by msri via nsf grant dms-1440140 .
w. k. was also supported by austrian science fund ( fwf ) project 5503 .
j. l. was supported by nsf grant dms-1401224 and by a clay senior fellowship at icerm .
part of the work of s. s. has been carried out in the framework of the labex archimede ( anr-11-labx-0033 ) and of the a*midex project ( anr-11-idex-0001 - 02 ) , funded by the `` investissements d'avenir '' french government programme managed by the french national research agency ( anr ) .
part of the work of s. s. has been carried out at iitp ras . the support of the russian foundation for sciences ( project no . 14 - 50 - 00150 ) is gratefully acknowledged .
the authors thank bob connelly , sharon glotzer , mark goresky , tom hales and oleg musin for helpful comments .
parts of section [ sec:41 ] are adapted from unpublished notes by r. k. and john sullivan ( msri , 1994 ) about critical configurations of `` electrons '' on the sphere .

c. bender , _ bestimmung der grössten anzahl gleich grosser kugeln , welche sich auf eine kugel von demselben radius , wie die übrigen , auflegen lassen _ , archiv der mathematik und physik * 56 * ( 1874 ) , 302306 .
l. danzer , _ finite point - sets on @xmath569 with minimum distance as large as possible _ , discrete math . * 60 * ( 1986 ) , 366 . [ english translation of danzer s habilitationsschrift , with extra references added . ]
l. fejes tóth , _ über die abschätzung des kürzesten abstandes zweier punkte eines auf einer kugelfläche liegenden punktsystems _ , jber . deutsch . math . - verein . * 53 * ( 1943 ) , 6668 .
l. fejes tóth , _ über die dichteste kugellagerung _ , math . z . * 48 * ( 1943 ) , 676684 .
t. hales , _ the strong dodecahedral conjecture and fejes tóth s conjecture on sphere packings with kissing number twelve _ , pp . 121132 in : _ discrete geometry and optimization _ ( k. bezdek , a. deza and y. ye , eds . ) , fields institute communications * 69 * , fields institute , toronto , on 2013 .
t. hales , m. adams , g. bauer , dat tat dang , t. harrison , truong le hoang , c. kaliszyk , v. magron , s. mclaughlin , thang tat nguyen , truong quang nguyen , t. nipkow , s. obua , j. pleso , j. rute , a. solovyev , hoai thi ta , trung nam tran , diep thi trieu , j. urban , ky khac vu , r. zumkeller , _ a formal proof of the kepler conjecture _ , arxiv:1501.02155 .
r. hoppe , _ bemerkung der redaktion _ , archiv der mathematik und physik ( grunert ) * 56 * ( 1874 ) , 307312 .

m. a. hoskin , _ newton , providence and the universe of stars _ , journal for the history of astronomy ( jha ) * 8 * ( 1977 ) , 77101 .
j. kepler , _ epitome astronomiae copernicae , usitata forma quaestionum & responsionum conscripta , inque vii . libros digesta , quorum tres hi priores sunt de doctrina sphaerica _ , lentijs ad danubium , excudebat johannes plancus , mdcxviii .
r. kusner , w. kusner , j. c. lagarias , and s. shlosman , _ max - min morse theory for configurations on the @xmath36-sphere _ , paper in preparation .

_ the kepler conjecture : the hales - ferguson proof , by thomas c. hales , samuel p. ferguson _ ( j. c. lagarias , ed . ) , springer - verlag : new york 2011 .
b. lubachevsky and r. l. graham , _ dense packings of @xmath571 equal disks in a circle for @xmath572 and @xmath27 _ , pp . 302311 in : ( ding - zhu du and ming li , eds . ) computing and combinatorics , first annual conference , cocoon 95 , lecture notes in comp . sci . , vol . 959 , springer : new york 1995 .
i. newton , _ the correspondence of isaac newton _ ( 9 volumes , h. w. turnbull , f. r. s. , ed . ) , cambridge university press 1961 .
l. pauling , _ the structure and entropy of ice and other crystals with some randomness of atomic arrangement _ , j. amer . chemical soc . * 57 * ( 1935 ) , 26802684 .

c. l. phillips , e. jankowski , m. marval and s. c. glotzer , _ self - assembled clusters of spheres related to spherical codes _ , phys . rev . e * 86 * ( 2012 ) , 041124 .

c. l. phillips , e. jankowski , b. j. krishnatreya , k. v. edmond , s. sacanna , d. g. grier , d. j. pine and s. c. glotzer , _ digital colloids : reconfigurable clusters as high information density elements _ , soft matter * 10 * ( 2014 ) , 74687479 .

the problem of @xmath0 spheres is to understand , as a function of @xmath1 $ ] , the configuration space of @xmath0 non - overlapping equal spheres of radius @xmath2 touching a central unit sphere .
it considers to what extent , and in what fashion , touching spheres can be moved around on the unit sphere , subject to the constraint of always touching the central sphere .
such constrained motion problems are of interest in physics and materials science , and the problem involves topology and geometry .
this paper reviews the history of work on this problem , presents some new results , and formulates some conjectures .
it also addresses results on configuration spaces of @xmath3 spheres of radius @xmath2 touching a central unit sphere , for @xmath4 .
the problem of determining the maximal radius @xmath5 is equivalent to the tammes problem , to which lászló fejes tóth made significant contributions .
North Charleston police officer Michael Slager, third from left, stands in the courtroom during his murder trial at the Charleston County court in Charleston, S.C., Friday, Dec. 2, 2016, in Charleston, S.C. Circuit Judge Clifton Newman told the jurors Friday afternoon that they should try again to reach... (Associated Press)
CHARLESTON, S.C. (AP) — The jury in the murder trial of a former South Carolina police officer charged with gunning down a black motorist will continue deliberating next week, despite at one point Friday appearing deadlocked by a juror who told the judge he could not "with good conscience approve a guilty verdict."
The panel of one black and 11 white jurors has now deliberated for more than 16 hours over three days on whether to convict former North Charleston police Officer Michael Slager in the death of 50-year-old Walter Scott. They will return to the jury room Monday.
Twice on Friday the jurors told Judge Clifton Newman they had reached a stalemate. One juror sent a letter directly to the judge saying he could not "with good conscience approve a guilty verdict." The juror added he was not about to change his mind.
But then in the courtroom, the jury foreman told the judge that he thought jurors could reach a unanimous verdict and deliberations continued. Newman did not say whether the jurors were leaning toward a conviction on murder or on voluntary manslaughter.
Slager pulled over Scott's 1990 Mercedes for a broken taillight on April 4, 2015. Scott was shot five times in the back as he fled the traffic stop. A passer-by captured the shooting on cellphone video that stunned the nation.
Slager was fired from the department and charged with murder after the video surfaced.
Jurors are considering the charge of murder, which in Slager's case could carry a sentence of from 30 years to life in prison, and a lesser charge of voluntary manslaughter, which carries a sentence of two to five years.
The city of North Charleston reached a $6.5 million civil settlement with Scott's family last year. Following the shooting, the city also asked the U.S. Justice Department to review its police department policies with an eye toward how the department can improve its relationship with residents.
Slager also faces trial next year in federal court on charges of depriving Scott of his civil rights.

"We all struggle with the death of a man and with all that has been put before us," the juror wrote. "I still cannot, without a reasonable doubt, convict the defendant. At the same time, my heart does not want to have to tell the Scott family that the man who killed their son, brother and father is innocent. But with the choices, I cannot and will not change my mind."

– After more than two days of deliberations, a single juror is refusing to convict South Carolina ex-cop Michael Slager in the fatal shooting of unarmed black motorist Walter Scott. The jury's foreman told the judge Friday that only one person on the jury of 11 whites and one African-American "has issues" preventing them from reaching a verdict, the Los Angeles Times reports. "I cannot in good conscience consider a guilty verdict," the juror wrote in a letter read out in court. Slager, who was filmed shooting Scott in the back as he ran away, was charged with murder but the jury also has the option of convicting him of manslaughter, and it's not clear which charge the 11 jurors are leaning toward, the AP reports. When the jury told Circuit Judge Clifton Newman on Friday afternoon they had been unable to reach a verdict, he ordered them to resume deliberations, telling them they had a "duty to make every reasonable effort to reach a unanimous verdict," NBC reports. At the end of the day, the jury said they wanted to return Monday for more deliberations. If they remain deadlocked, a mistrial will be declared. (Slager will face a federal civil rights trial next year.)
randomness can have much more dramatic effects at quantum phase transitions than at classical phase transitions because quenched disorder is perfectly correlated in the imaginary - time direction , which must be included in the description of quantum phase transitions .
imaginary time acts as an additional coordinate with infinite extension at absolute zero temperature .
therefore , the impurities and defects are effectively very large which leads to strong - disorder phenomena including power - law quantum griffiths singularities @xcite , infinite - randomness critical points characterized by exponential scaling @xcite , and smeared phase transitions @xcite . for example , the zero - temperature quantum phase transition in the random transverse - field ising model is governed by an infinite - randomness critical point @xcite featuring slow _ activated _ ( exponential ) rather than power - law dynamical scaling .
it is accompanied by quantum griffiths singularities .
this means that observables are expected to be singular not just at criticality but in a whole parameter region near the critical point , which is called the quantum griffiths phase .
quantum griffiths singularities are caused by rare spatial configurations of the disorder . due to statistical fluctuations ,
one can always find spatial regions ( rare regions ) which are impurity free .
the probability @xmath2 to find such a rare region is exponentially small in its volume @xmath3 , @xmath4 with @xmath5 being a constant that depends on the disorder strength .
close to a magnetic phase transition , the rare region can be locally in the magnetic phase while the bulk system is still non - magnetic . when the characteristic energy @xmath6 of such a rare region decays exponentially with its volume , @xmath7 ( as in the case of the transverse - field ising model ) , the resulting rare - region density of states has power - law form , @xmath8 , where @xmath9 is the non - universal griffiths exponent .
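the origin of this power law can be checked with a quick monte carlo sketch ( the disorder parameters p and c below are arbitrary illustrative choices , not values from the text ) : sampling rare - region volumes from the exponential distribution @xmath4 and assigning each region the exponentially small energy @xmath7 yields energies distributed as a power law whose exponent is set by the ratio of the two decay constants .

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative disorder parameters (arbitrary choices, not from the text):
p = 1.5   # rare-region probability P(V) ~ exp(-p * V)
c = 0.5   # energy scale omega(V) ~ exp(-c * V)

# sample rare-region volumes with density p * exp(-p * V)
V = rng.exponential(scale=1.0 / p, size=200_000)

# each rare region contributes a characteristic energy omega = exp(-c * V),
# so the density of states is rho(omega) ~ omega^(lambda - 1) with lambda = p / c
omega = np.exp(-c * V)

# maximum-likelihood estimate of the exponent on (0, 1]:
# for rho(omega) = lambda * omega^(lambda - 1), lambda_hat = -N / sum(log omega)
lam_hat = -len(omega) / np.log(omega).sum()
print(f"estimated griffiths exponent {lam_hat:.3f}, expected {p / c:.3f}")
```

the estimated exponent approaches p / c as the number of samples grows , illustrating how exponentially rare regions with exponentially small energies combine into a power - law density of states .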
@xmath10 takes the value zero at the quantum critical point and increases throughout the quantum griffiths phase .
the singular density of states of the rare regions leads to quantum griffiths singularities of several thermodynamic observables , including the order - parameter susceptibility , @xmath11 , specific heat , @xmath12 , entropy , @xmath13 , and the zero - temperature magnetization - field curve @xmath14 ( for reviews see , e.g. , refs .
@xcite ) . many interesting models in statistical mechanics and field theory contain some integer - valued parameter @xmath15 and can be solved in the large@xmath1 limit .
therefore , the large@xmath1 method is a very useful tool to study classical and quantum phase transitions .
an early example is the berlin - kac spherical model @xcite which is equivalent to a classical @xmath0 order parameter field theory in the large@xmath1 limit @xcite .
analogously , the quantum spherical model @xcite has been used to investigate quantum critical behavior . in both cases
, @xmath15 is the number of order parameter components .
another potential application of the large@xmath1 method is given by @xmath16 kondo models @xcite with spin - degeneracy @xmath15 . in all of these cases ,
the partition function can be evaluated in saddle point approximation in the limit @xmath17 , leading to self - consistent equations . in clean systems
, these equations can often be solved analytically .
however , in the presence of disorder , one obtains a large number of coupled self - consistent equations which can be solved only numerically . in this paper , we develop a new efficient numerical method to study the critical behavior of disordered systems with @xmath0 order - parameter symmetry in the large@xmath1 limit .
we apply this method to the superconductor - metal quantum phase transition in disordered nanowires . using a strong - disorder renormalization group , it has recently been predicted that this transition is in the same universality class as the random transverse - field ising model .
we confirm these predictions numerically .
we also find the behaviors of observables as a function of temperature and an external field .
they follow the expected quantum griffiths power laws .
we consider up to 3000 disorder realizations for system sizes @xmath18 and 1024 .
the paper is organized as follows : in sec .
[ sec : model12 ] we introduce the model : a continuum landau - ginzburg - wilson order - parameter field theory in the presence of dissipation ; and we generalize the theory to quenched disordered systems . then , we discuss the predicted critical behavior of this model and derive the large@xmath1 formulation . in sec . [ sec : three_adr ] , we review an existing numerical approach to this model . in sec . [
sec : ourmethod ] , we present our numerical method to study the quantum critical behavior . we discuss the results in sec . [ sec : results12 ] , and we compare them to the behavior predicted by the strong - disorder renormalization group . sec .
[ sec : perf ] is devoted to the computational performance of our method .
finally , we conclude in sec .
[ sec : concl12 ] by discussing and comparing our numerical method to the existing one .
we also discuss generalizations to higher dimensions and other models .
we start from the quantum landau - ginzburg - wilson free - energy functional for an @xmath19component vector order parameter @xmath20 in one space dimension . for a clean system with overdamped order parameter dynamics
the landau - ginzburg - wilson action reads
@xmath21 ^ 2 + \frac{u}{2n } \varphi^4(x,\tau ) \bigr ] \nonumber\\ & + \frac{\gamma t}{2 } \sum_{\omega_n } |\omega_n|\int dx |\tilde{\varphi}(x,\omega_n)|^2 - h \int dx \int_0^{1/t}d\tau \varphi(x,\tau ) \ , , \end{aligned}\ ] ] where @xmath22 is the bare distance from criticality .
@xmath23 and @xmath24 are the strength of dissipation and interaction , respectively .
@xmath25 is the standard quartic coefficient .
@xmath26 is a uniform external field conjugate to the order parameter .
@xmath27 is the fourier transform of the order parameter @xmath28 with respect to imaginary time , and @xmath29 is a matsubara frequency .
the above action with @xmath30 order parameter components ( equivalent to one complex order parameter ) has been used to describe @xcite the superconductor - metal transition in nanowires @xcite .
this transition is driven by pair - breaking interactions , possibly due to random magnetic moments trapped on the wire surface @xcite , which also introduce quenched disorder in the nanowire .
the action ( [ action1 ] ) can be generalized to @xmath31 space dimensions and @xmath32 order parameter components ; in this case , it describes itinerant antiferromagnetic quantum phase transitions @xcite . in the presence of quenched disorder , the functional form of eq .
( [ action1 ] ) does not change qualitatively .
however , the coupling constants become random functions of position @xmath33 .
the full effect of disorder can be realized by setting @xmath34 while considering the couplings @xmath22 and @xmath24 to be randomly distributed in space @xcite .
the quantum phase transition in zero external field can be tuned by changing the mean of the @xmath35 distribution , @xmath36 .
recently , the model ( [ action1 ] ) has been investigated by means of a strong - disorder renormalization group method @xcite .
this theory predicts that the model falls in the same universality class as the one - dimensional random transverse - field ising model which was studied extensively by fisher @xcite .
thus , the phase transition is characterized by an infinite - randomness critical point at which the dynamical scaling is exponential instead of power - law . off criticality , the behaviors of observables are characterized by strong quantum griffiths singularities .
let us focus on the griffiths phase on the disordered side of the transition , where the distance from criticality @xmath37 .
the strong - disorder renormalization group predicts the disorder averaged equal - time correlation function @xmath38 to behave as @xcite @xmath39}{(x/\xi)^{5/6 } } \,\end{aligned}\ ] ] for large distances @xmath33 .
here , @xmath40 is the correlation length which diverges as @xmath41 with @xmath42 as the critical point is approached .
the disorder averaged order parameter as a function of the external field @xmath26 in the griffiths phase has the singular form @xcite @xmath43 here , @xmath10 is the non - universal griffiths exponent which vanishes at criticality as @xmath44 with critical exponent @xmath45 .
right at criticality , the theory predicts logarithmic behavior rather than a power law @xcite , @xmath46^{\phi-1/\psi } } \,.\end{aligned}\ ] ] here , the exponent @xmath47 equals the golden mean , and @xmath48 is some microscopic energy scale .
the average order parameter susceptibility as a function of temperature @xmath49 in the disordered griffiths phase is expected to have the form @xcite @xmath50 with the same @xmath51exponent as in eq .
( [ orderpgrif ] ) .
our goal is to test the strong - disorder renormalization group predictions by means of a numerical method . as a first step , we discretize the continuum model ( [ action1 ] ) in space and fourier - transform from imaginary time @xmath52 to matsubara frequency @xmath53 .
the discretized landau - ginzburg - wilson action has the form @xmath54 \nonumber\\ & + \sum_{i=1}^{l}\bigl[\frac { t}{2}\sum_{\omega_n } |\omega_n||\tilde{\varphi}_i(\omega_n)|^2 - h \tilde{\varphi}_i(0)\bigr]\ , , \end{aligned}\ ] ] where @xmath55 is the system size .
the nearest - neighbor interactions @xmath56 and the mass terms @xmath35 ( bare local distances from criticality ) are random quantities .
the critical behavior of the model ( [ action3 ] ) can be studied in the limit of a large number of order parameter components @xmath15 . in this limit
, the above action can be reduced to a gaussian form .
this can be done in several ways , for example by decomposing the square of each component of the order parameter @xmath57 into its average @xmath58 and fluctuation @xmath59 : @xmath60 . substituting this into the quartic term of the action ( [ action3 ] ) and using the central limit theorem ,
the quartic term can be replaced by @xmath61 .
this leads to the gaussian action @xmath62 the coupling matrix is given by @xmath63 the renormalized local distance @xmath64 from criticality at site @xmath65 must be determined self - consistently from @xmath66 where @xmath67 is given by @xmath68^{-1}_{ii}+h^2\sum_{j , k=1}^{l } m^{-1}_{ij}m^{-1}_{ik}\,.\end{aligned}\ ] ] here , @xmath69 is the identity matrix . in the presence of disorder ,
the self - consistent equations ( [ rend ] ) at different sites are not identical .
we thus arrive at a large number of coupled non - linear self - consistent equations .
therefore , numerical techniques are required to solve them .
in this section , we review the numerical method proposed by del maestro _ et al . _
@xcite to study the model ( [ action2 ] ) at zero temperature and in the absence of an external field ( @xmath70 ) .
the matrix @xmath71 is spectrally decomposed in terms of its orthogonal eigenvectors @xmath72 and eigenvalues @xmath73 as @xmath74 using this decomposition , the inverse matrix in eq .
( [ avgso ] ) can be written as @xmath75^{-1}_{ij}=\sum_{k=1}^{l}\frac{v_{ik}v_{kj}}{\epsilon_k+|\omega_n|}\ , .
\end{aligned}\ ] ] at zero temperature the sum over matsubara frequencies in eq .
( [ avgso ] ) turns into an integral which can be performed analytically .
this leads to the self - consistent equations ( for @xmath70 ) , @xmath76 here , for convergence of the frequency integral , an ultraviolet cutoff @xmath77 is introduced .
numerical solutions to eq .
( [ adsel ] ) were obtained by an iteration process using a modified powell s hybrid method .
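as an illustration of this kind of iterative solve , the toy example below applies scipy's 'hybr' solver ( an implementation of a modified powell hybrid method ) to a drastically simplified version of the zero - temperature self - consistency : the functional form of the frequency integral , all prefactors , the disorder distributions and the parameters are assumptions for the sketch , not the paper's actual equations .

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(1)

# toy version of the zero-temperature self-consistency: after spectral
# decomposition of the coupling matrix M = V diag(w) V^T, the frequency
# integral produces terms ~ log(1 + Lambda/w_k).  the distributions,
# couplings and prefactors below are illustrative assumptions.
L = 16                                   # chain length (assumed)
u = 0.1                                  # interaction strength (assumed)
Lam = 10.0                               # ultraviolet cutoff (assumed)
r = rng.uniform(1.0, 2.0, size=L)        # bare distances from criticality
J = rng.uniform(0.1, 0.3, size=L - 1)    # random nearest-neighbor couplings

def residual(eps):
    # tridiagonal coupling matrix with renormalized distances on the diagonal
    M = np.diag(eps) - np.diag(J, 1) - np.diag(J, -1)
    w, v = np.linalg.eigh(M)
    w = np.maximum(w, 1e-12)             # guard against negative eigenvalues
    # site-resolved frequency integral: sum_k v_ik^2 * log(1 + Lambda/w_k)
    site_term = (v**2) @ np.log1p(Lam / w)
    return eps - r - u * site_term

# 'hybr' is scipy's implementation of a modified Powell hybrid method
sol = root(residual, x0=r + 0.5, method='hybr')
print(sol.success, np.abs(residual(sol.x)).max())
```

for such a small , well - conditioned system the solver converges quickly ; the text's point is that this strategy becomes expensive near criticality and for large chains because each residual evaluation requires a full diagonalization .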
the method works well for large distances from criticality and small system sizes , but it becomes computationally prohibitive near criticality where the correlation length @xmath40 becomes of order of the system size .
this problem can be partially overcome by implementing a clever iterative solve - join - patch procedure .
however , the system size @xmath55 is still limited because large matrices need to be fully diagonalized which requires @xmath78 operations per iteration .
therefore , for large @xmath55 the method gets very slow . as a result , the largest sizes studied in ref .
@xcite were @xmath79 .
the authors analyzed equal time correlations , energy gap statistics and dynamical susceptibilities and found them in agreement with the strong - disorder renormalization group predictions @xcite .
the method was also used in ref .
@xcite to study the conductivity .
we now present a novel numerical method to study the model ( [ action2 ] ) at non - zero temperatures .
its numerical effort scales linearly with system size @xmath55 ( per iteration ) compared with the @xmath80 scaling of the numerical method outlined in sec .
[ sec : three_adr ]
. the basic idea of our method is that , for @xmath70 , we only need the diagonal elements of the inverse matrix @xmath81 to iterate the self - consistent eq .
( [ rend ] ) .
the numerical effort for finding the diagonal elements of the inverse of a sparse matrix is much smaller than that of a full diagonalization . combining eqs .
( [ rend ] ) and ( [ avgso ] ) , the system of self - consistent equations at non - zero temperatures @xmath49 , and in the presence of an external field @xmath26 , reads @xmath82^{-1}_{ii}+t m^{-1}_{ii}+h^2\sum_{j , k=1}^{l } m^{-1}_{ij}m^{-1}_{ik } + \alpha_i \,.\end{aligned}\ ] ] here , @xmath83 with an ultra - violet cutoff frequency @xmath77 . to solve these equations ( [ self12 ] ) iteratively , we find the inverses of the tridiagonal matrices @xmath84 $ ] and @xmath85 using the fast method proposed in ref . @xcite
, whose algorithm is summarized in [ app:12 ] . in zero external field
, we only need the diagonal elements of @xmath86^{-1}$ ] and the number of operations per iteration scales linearly with system size @xmath55 , while it scales quadratically in the presence of a field because for @xmath87 , full inversion of the matrix is required .
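a minimal sketch of this iteration loop ( zero field , small chain ) may clarify the structure . the disorder distributions , coupling form , interaction strength , temperature and cutoff below are illustrative assumptions ; for readability the sketch forms the full inverse with dense linear algebra , whereas the production method extracts only the diagonal of each tridiagonal inverse in linear time .

```python
import numpy as np

rng = np.random.default_rng(7)

# schematic finite-temperature self-consistency loop at zero external field.
# disorder distributions, couplings, interaction strength and cutoff are
# illustrative assumptions.  for clarity the full inverse is formed here;
# the production method would extract only the diagonal of the tridiagonal
# inverse in O(L) operations per frequency.
L, T, u = 32, 0.1, 0.5
n_cut = 200                                   # Matsubara cutoff index (assumed)
omegas = 2.0 * np.pi * T * np.arange(-n_cut, n_cut + 1)
r = rng.normal(loc=2.0, scale=0.25, size=L)   # bare distances (assumed mean)
J = rng.uniform(0.1, 0.3, size=L - 1)         # random couplings (assumed)

def coupling_matrix(eps):
    return np.diag(eps) - np.diag(J, 1) - np.diag(J, -1)

eps = r.copy()
for it in range(500):
    M = coupling_matrix(eps)
    # T * sum over Matsubara frequencies of diag([M + |omega_n| I]^{-1});
    # the n = 0 term supplies the static M^{-1} contribution
    diag_sum = np.zeros(L)
    for wn in omegas:
        diag_sum += np.diag(np.linalg.inv(M + abs(wn) * np.eye(L)))
    new_eps = r + u * T * diag_sum
    if np.abs(new_eps - eps).max() < 1e-10:
        eps = new_eps
        break
    eps = 0.5 * eps + 0.5 * new_eps           # simple mixing
print(f"converged after {it} iterations")
```

with the simple mixing shown here the loop contracts rapidly deep in the disordered phase ; close to criticality , where locally ordered regions appear , many more iterations are needed , which is the behavior discussed in the performance section .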
once the full set of @xmath64 has been obtained , we can compute observables from the quadratic action ( [ action2 ] ) .
let us first consider observables in the absence of an external field .
the equal - time correlation function @xmath88 averaged over disorder realizations can be obtained from eq .
( [ action2 ] ) , @xmath89^{-1}}_{i , i+x}+m^{-1}_{i , i+x}\right ) } \,,\end{aligned}\ ] ] where the overbar indicates the average over disorder configurations .
similarly , in zero external field , we can calculate the order parameter susceptibility as a function of temperature .
the disorder - averaged order parameter susceptibility @xmath90 can be expressed as @xmath91 in the presence of an external field , we need to include @xmath26 in the solution of eq .
( [ self12 ] ) .
we can then compute the order - parameter @xmath92 field curve .
the disorder - averaged order parameter reads @xmath93 we note that the number of operations to calculate observables for one disorder configuration scales quadratically with the system size @xmath55 .
however , this needs to be done only once , outside the loop that iterates the self - consistent equations . at low temperatures , according to eq .
( [ self12 ] ) , we need to invert a huge number of matrices @xmath84 $ ] per iteration ( one for each matsubara frequency ) .
naively , one might therefore expect the numerical effort to scale linearly in @xmath94 .
however , these matrices are not very different .
we can therefore accelerate the method by combining them appropriately .
this is explained in [ app:11 ] .

[ figure caption ( fig : fig2 ) : order - parameter susceptibility versus temperature @xmath49 for various distances from criticality @xmath95 in the griffiths phase . all data are averaged over 3000 disorder configurations with system size @xmath18 . the solid lines represent fits to the griffiths power law ( [ ordsusc0 ] ) , @xmath96 , over the temperature range @xmath97 . ]

[ figure caption ( fig : fig4 ) : a ) the griffiths exponent versus distance from criticality @xmath98 . the solid line is a fit to the power law @xmath99 . b ) the correlation length @xmath40 obtained by analyzing correlation function data versus distance @xmath98 from criticality . the solid line is a fit to a power law , resulting in a critical point of @xmath100 and the correlation length exponent @xmath101 . ]

[ figure caption ( fig : fig3 ) : order parameter versus external field @xmath26 for various @xmath95 . the data are averaged over 3000 disorder configurations of system size @xmath18 . in the field range @xmath102 to @xmath103 , the dotted and solid lines represent fits to eq . ( [ orderpcrit ] ) and the griffiths power law ( [ orderpgrif ] ) , respectively . ]

[ figure caption ( fig : fig1 ) : disorder - averaged correlation function . all data are averaged over 3000 samples of size @xmath104 at @xmath105 . the solid lines are fits to eq . ( [ corrf0 ] ) . inset : deviations of the correlation function at a fixed value of @xmath106 due to temperature effects and the statistical error of an average over disorder configurations . the data represented by circles and stars are averaged over the same 1000 disorder configurations at @xmath107 and @xmath105 , respectively . the curves represented by triangles are averaged over a different set of 1000 disorder configurations at @xmath105 . ]
in this section , we report results of our numerical calculations of the model ( [ action2 ] ) .
we consider the interactions @xmath108 to be uniformly distributed on @xmath109 with mean @xmath110 and the bare local distances from criticality @xmath35 to be gaussian distributed with mean @xmath36 and variance 0.25 .
an advantage of our method is that it gives direct access to the temperature dependencies of observables .
for example , we calculate the zero - field order parameter susceptibility as a function of temperature for various values of the control parameter @xmath95 according to eq .
( [ susc1 ] ) . at low temperatures ,
the griffiths power law ( [ ordsusc0 ] ) describes the data very well ( see figure [ fig : fig2 ] ) .
the non - universal griffiths exponent @xmath10 can be determined from fits in the temperature range @xmath97 .
figure [ fig : fig4](a ) shows how @xmath10 varies as the distance from criticality @xmath111 changes .
the power law @xmath112 describes the data well with the critical point @xmath100 , and exponents @xmath101 and @xmath113 . here , the number in brackets indicates the uncertainty in the last digit .
these results are consistent with the predictions of refs . @xcite .
we also compute the order parameter as a function of an external field at @xmath105 for various @xmath95 ( figure [ fig : fig3 ] ) .
the off - critical data ( @xmath114 ) are described by the griffiths power law ( [ orderpgrif ] ) with an exponent @xmath10 . at the critical point , the @xmath115 curve follows the logarithmic dependence ( [ orderpcrit ] ) with exponents @xmath113 and @xmath116 .
the value for exponent @xmath117 is in agreement with the predicted one @xcite .
the values of the griffiths exponent @xmath10 match those extracted from susceptibility data ( see figure [ fig : fig4 ] ( a ) ) .
the deviation near the critical point may be due to the fact that the correlation length becomes comparable to the system size and correspondingly causes finite - size effects in the data .
in addition , in the absence of an external field @xmath26 , for system size @xmath118 we compute the disorder - averaged correlation functions ( [ corrf1 ] ) at temperature @xmath105 for various values of @xmath95 ( see figure [ fig : fig1 ] ) .
the values of correlation length @xmath40 can be extracted by fitting the data to eq .
( [ corrf0 ] ) .
we find good agreement of the data with eq .
( [ corrf0 ] ) for distances between @xmath119 and a cutoff beyond which the curves deviate from the zero - temperature behavior due to finite - temperature effects and become noisy because the correlations are dominated by very rare large clusters .
figure [ fig : fig4](b ) shows how the correlation length @xmath40 changes with distance from criticality @xmath98 .
the data can be fitted to the power law @xmath41 , as expected @xcite . by fitting , we extract the critical point @xmath100 and exponent @xmath101 .
the values of exponent @xmath120 and critical point @xmath121 are in agreement with those obtained from @xmath90 and @xmath115 .
in this section , we discuss the execution time of our method for solving the self - consistent eqs .
( [ self12 ] ) iteratively ( _ i.e. _ , the time needed to get a full set of renormalized distances from criticality @xmath64 ) . in our method ,
the time per iteration scales linearly with the system size @xmath55 in the absence of an external field because the operation count is dominated by the matrix inversion .
thus , the disorder - averaged execution time @xmath122 for a single disorder configuration , where @xmath123 is the number of iterations needed for convergence of the self - consistent eqs .
( [ self12 ] ) . the number of iterations @xmath123 depends on the disorder configuration ; it is larger for a disorder realization which has locally ordered rare regions with smaller @xmath22 . in the conventional paramagnetic phase , @xmath124 , i.e. , for larger values of @xmath95 away from criticality ,
locally ordered rare regions are almost absent , and therefore the number of iterations @xmath123 is constant .
thus , in the conventional paramagnetic phase , the execution time is expected to scale linearly with the system size , @xmath125 .
figure [ fig : fig5 ] shows that it indeed scales linearly with the system size for @xmath126 .
in contrast , in the quantum griffiths phase , where locally ordered rare regions are present , @xmath123 is expected to be large and to become larger close to criticality .
if we compare two different system sizes in the quantum griffiths phase , the larger system is expected to contain locally ordered rare regions with higher probability .
thus , in the quantum griffiths phase the number of iterations @xmath123 is expected to be a function of system size @xmath55 , which we model as @xmath127 with some non - negative exponent @xmath128 .
therefore , in the quantum griffiths phase the execution time does not scale linearly with the system size but it behaves as @xmath129 .
figure [ fig : fig5 ] shows that for @xmath130 in the quantum griffiths phase , the disorder - averaged execution time @xmath131 does not scale linearly with @xmath55 but behaves as a power law @xmath132 with @xmath133 .
because our method performs the matsubara sums numerically , the effort increases with decreasing temperature @xmath49 . as shown in [ app:11 ] , this increase is only logarithmic in @xmath94 if we approximately combine higher matsubara frequencies .

[ figure caption ( fig : fig5 ) : in zero field @xmath70 , execution time for a single disorder configuration @xmath131 versus system size @xmath55 for @xmath130 and @xmath126 . all data are averaged over 1000 disorder realizations . the solid lines represent fits to the power law . ( times measured on an intel core i5 cpu ) ]
in summary , we have developed an efficient numerical method for studying quantum phase transitions in disordered systems with @xmath0 order parameter symmetry in the large@xmath1 limit .
our algorithm solves iteratively the large@xmath1 self - consistent equations for the renormalized distances from criticality using the fast method of ref .
@xcite for the necessary matrix inversions .
we have applied our method to the superconductor - metal quantum phase transition in nanowires and studied the critical behavior of various observables near the transition .
our results are in agreement with strong - disorder renormalization group predictions @xcite that the quantum phase transition is governed by an infinite - randomness critical point accompanied by quantum griffiths singularities .
let us compare the performance of our method with that of the method proposed in ref .
@xcite and outlined in sec . [ sec : three_adr ] .
the main difference is how the sums over the matsubara frequencies in the self - consistent equations ( [ rend ] ) are handled .
the method of ref .
@xcite works at @xmath134 where the matsubara sum becomes an integral .
this integral is performed analytically which saves computation time . however , the price is a complete diagonalization of the coupling matrix @xmath71 which is very costly ( @xmath135 operations per iteration ) .
moreover , observables at @xmath136 are not directly accessible .
in contrast , our method performs the matsubara sum numerically which allows us to use the fast matrix inversion of ref .
@xcite ( which needs just @xmath137 operations per iteration ) instead of a full diagonalization .
furthermore , we can calculate observables at @xmath136 .
however , our effort increases with decreasing @xmath49 .
thus , the two methods are in some sense complementary .
the method of ref .
@xcite is favourable for small systems when true @xmath134 results are desired .
our method works better for larger systems at moderately low temperatures .
we also emphasize that all our results have been obtained by converging the self - consistent equations ( [ rend ] ) by means of a simple mixing scheme .
even better performance could be obtained by combining our matrix inversion scheme with the solve - join - patch algorithm @xcite for convergence acceleration .
our method can be generalized to higher - dimensional problems .
the self - consistent equations can be solved in the same way , using a fast method for inverting the arising sparse matrices . for two dimensional systems
, one could use the methods given in refs .
@xcite for which the cost of inversion is @xmath138 , where @xmath15 is the total number of sites .
we therefore expect the cost of our method to scale as @xmath139 or @xmath140 in the quantum griffiths and quantum paramagnetic phases , respectively . for three dimensional systems ,
sparse matrices can be inverted in @xmath141 operations @xcite ; correspondingly , the cost of our method is expected to behave as @xmath142 ( @xmath15 is the number of sites ) in the quantum griffiths phase . in the quantum paramagnetic phase
it should scale as @xmath143 .
a possible application of our method in three dimensions is the disordered itinerant antiferromagnetic quantum phase transitions @xcite .
the clean transition is described by a landau - ginzburg - wilson theory which is a generalization of the action ( [ action1 ] ) to @xmath31 space dimensions and @xmath32 order parameter components @xcite .
introducing disorder leads to random mass terms as in the case of the superconductor - metal quantum phase transition in nanowires .
this work has been supported by the nsf under grant nos .
dmr-0906566 and dmr-1205803 .
in this appendix we sketch the fast method for the inversion of a tridiagonal matrix outlined in ref . @xcite .
the cost of finding the diagonal elements of the inverse matrix is @xmath137 operations while inverting the full matrix costs @xmath144 operations .
the basic idea is that the inverse matrix of the tridiagonal matrix @xmath85 can be represented by two sets of vectors @xmath145 and @xmath146 : @xmath147 .
let the diagonal and off - diagonal elements of the matrix @xmath85 be @xmath148 and @xmath149 , respectively . by combining a ul decomposition of the linear system for @xmath150 and a ul decomposition of @xmath85
, one can determine the set of vectors @xmath151 where @xmath152 the set of vectors @xmath146 can be found by combining a ul decomposition of the linear system for @xmath25 and a ul decomposition of @xmath85 , yielding @xmath153 where @xmath154 finding both sets of vectors requires @xmath137 operations ; consequently , the number of operations to extract the diagonal elements @xmath155 of the inverse matrix scales linearly with @xmath55 , while the cost of finding the full inverse matrix @xmath147 is @xmath144 .
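the recurrences above can be sketched as follows . this is a standard continuant - based formulation , equivalent in spirit to the two - vector construction of the appendix ( the vectors correspond to ratios of the forward and backward continuants ) ; the matrix entries are random illustrative values .

```python
import numpy as np

def tridiag_inverse_diagonal(a, b):
    """diagonal of the inverse of a symmetric tridiagonal matrix in O(L).

    a: diagonal entries (length L); b: off-diagonal entries (length L-1).
    only b**2 enters the recurrences, so the sign convention of the
    couplings is irrelevant.
    """
    L = len(a)
    # forward continuants: theta[i] = determinant of the leading i x i block
    theta = np.empty(L + 1)
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, L + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] ** 2 * theta[i - 2]
    # backward continuants: phi[i] = determinant of the trailing block from row i
    phi = np.empty(L + 2)
    phi[L + 1], phi[L] = 1.0, a[L - 1]
    for i in range(L - 1, 0, -1):
        phi[i] = a[i - 1] * phi[i + 1] - b[i - 1] ** 2 * phi[i + 2]
    # (M^{-1})_{ii} = theta_{i-1} * phi_{i+1} / det(M)
    return theta[:L] * phi[2:] / theta[L]

# check against dense inversion on a random, diagonally dominant example
rng = np.random.default_rng(3)
L = 100
a = rng.uniform(2.0, 3.0, size=L)
b = rng.uniform(0.1, 0.3, size=L - 1)
M = np.diag(a) - np.diag(b, 1) - np.diag(b, -1)
d_fast = tridiag_inverse_diagonal(a, b)
d_dense = np.diag(np.linalg.inv(M))
print(np.allclose(d_fast, d_dense))
```

for strongly disordered or near - critical matrices the continuants can grow or shrink quickly , so a production implementation would normalize the recurrences ; the diagonally dominant example above avoids that complication .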
in this appendix we propose an approach to accelerate the summation over the matsubara frequencies in our method .
the idea is based on the fact that the critical behavior is dominated by low frequencies ; correspondingly , only the matrices associated with low matsubara frequencies @xmath53 make dominant contributions in eq .
( [ self12 ] ) . at
higher @xmath53 , consecutive matrices change very little .
therefore , instead of finding the diagonal elements of @xmath156^{-1}$ ] for each matsubara frequency @xmath53 , we invert the matrices corresponding to @xmath157 and calculate the sum of the first 100 terms in eq .
( [ self12 ] ) exactly .
then , we approximate the sum of the remaining terms corresponding to @xmath158 ( higher matsubara frequencies ) in the following way : we find the diagonal elements of @xmath156^{-1}$ ] corresponding to the midpoints of subintervals obtained by dividing the interval @xmath159 ( @xmath160 ) into @xmath161 subintervals of width @xmath162 . then , we approximate the appropriate sum in eq .
( [ self12 ] ) by summing over the terms calculated at the midpoints , multiplied by @xmath162 . in this case ,
the numerical effort scales logarithmically as @xmath163 , compared with the @xmath94 scaling of exact summation . to check the magnitude of the errors arising from this approximation , we have compared observables calculated exactly and with the acceleration method for a system of size @xmath18 and control parameter @xmath164 at temperature @xmath105 .
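the bookkeeping of this coarse - grained summation can be illustrated with a scalar toy integrand standing in for the matrix diagonal ; the blocking scheme ( exact first 100 terms , then blocks of doubling width evaluated at their midpoints ) follows the description above , while the integrand , temperature and cutoff are illustrative assumptions .

```python
import numpy as np

# toy check of the coarse-grained Matsubara summation: the first 100 terms
# are summed exactly, higher frequencies are grouped into blocks of doubling
# width, and each block is represented by its midpoint value times the block
# length.  the scalar integrand 1/(eps + omega_n) stands in for the matrix
# diagonal; all parameters are illustrative assumptions.
T, eps = 0.01, 1.0
n_exact, n_max = 100, 100_000
w = lambda n: 2.0 * np.pi * T * n            # Matsubara frequencies

exact = sum(1.0 / (eps + w(n)) for n in range(1, n_max + 1))

approx = sum(1.0 / (eps + w(n)) for n in range(1, n_exact + 1))
evals = n_exact
lo, width = n_exact + 1, n_exact             # blocks [101,200], [201,400], ...
while lo <= n_max:
    hi = min(lo + width - 1, n_max)
    approx += (hi - lo + 1) / (eps + w((lo + hi) // 2))
    evals += 1
    lo, width = hi + 1, 2 * width

rel_err = abs(approx - exact) / exact
print(f"relative error {rel_err:.2e} with {evals} evaluations instead of {n_max}")
```

the number of integrand evaluations grows only logarithmically with the cutoff , at the price of a small controlled error , which is the trade - off exploited by the acceleration scheme .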
we have found that the resulting errors are less than @xmath165 .

we develop an efficient numerical method to study the quantum critical behavior of disordered systems with @xmath0 order - parameter symmetry in the large@xmath1 limit .
it is based on the iterative solution of the large@xmath1 saddle - point equations combined with a fast algorithm for inverting the arising large sparse random matrices . as an example , we consider the superconductor - metal quantum phase transition in disordered nanowires .
we study the behavior of various observables near the quantum phase transition .
our results agree with recent renormalization group predictions , i.e. , the transition is governed by an infinite - randomness critical point , accompanied by quantum griffiths singularities .
our method is highly efficient because the numerical effort for each iteration scales linearly with the system size .
this allows us to study larger systems , with up to 1024 sites , than previous methods .
we also discuss generalizations to higher dimensions and other systems including the itinerant antiferromagnetic transitions in disordered metals .
quantum phase transition ; large@xmath1 limit ; infinite randomness ; quantum griffiths phase
long before the microscopic theory , a phenomenological approach to superconductivity was proposed by ginzburg and landau @xcite .
the idea was that the normal - superconducting transition is a thermodynamical second order transition .
so one can apply to it the general theory of second - order transitions defining an order parameter @xmath8 in such a way that @xmath8 is zero in the disordered state ( normal metal ) and finite in the ordered state ( superconducting metal ) .
the free energy of a superconductor is given by @xcite @xmath9 the transition from normal to superconducting state in a magnetic field is second order and near the transition point the order parameter is small , @xmath10 , and hence one can easily linearize the ginzburg - landau equation to the following form @xcite @xmath11 where @xmath8 stands for the complex superconducting order parameter and @xmath12 is the first ginzburg - landau parameter , related to the temperature - dependent coherence length , @xmath13 , by @xmath14 .
the starting point of the theoretical description of nucleation ( _ i.e. _ onset ) of superconductivity in an applied magnetic field is this linearized ginzburg - landau equation ( lgle ) .
one can easily identify that eq . ( 2 ) is identical to the schrödinger equation for a free charged particle of mass @xmath15 and charge @xmath16 in a magnetic field @xmath17 , with @xmath18 playing the role of the energy eigenvalue .
this property allows us to apply various familiar solutions and methods of usual quantum mechanics to the problem of nucleation in superconductivity .
the lowest eigenvalue of the lgle gives the highest magnetic field at which the nucleation of the superconductivity can occur .
now the big question is : what happens in nonstationary cases ? for instance , if one applies an external electric field which varies very slowly , will the order parameter be given by the same equation as in the static case , with time entering only as a parameter ? on the other hand , if the field varies very rapidly with time , will the superconductor respond to an average of the field , as happens in other systems in physics ?
this last question is the basic subject of our investigation in this work .
time dependent ginzburg landau ( tdgl ) model often gives a reasonable picture of superconducting dynamics @xcite .
unlike its static counterpart , the validity of the tdgl theory is much more limited .
it is not enough just to be close to the critical temperature .
the necessary condition is that the deviation from equilibrium is small ; the quasiparticle excitations should remain essentially in equilibrium with the heat bath .
it can normally be fulfilled for gapless superconductor @xcite .
so , we begin by writing down the time - dependent ginzburg landau equation that governs the dynamics of the superconducting order parameter @xcite : @xmath19 now choosing the electrical potential @xmath20 , @xmath21 along the z axis with convenient gauge @xmath22 and linearizing one can show @xmath23 now using @xmath24 and then following the method of cook _ et al _ @xcite one obtains @xmath25 where @xmath26 .
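the averaging behind the effective hamiltonian method of cook _ et al _ used above can be illustrated classically ; the specific shift appearing in the paper 's equations is not reproduced here . in this toy sketch ( all numbers are hypothetical example values , mass set to 1 ) a spatially uniform rapid drive on a slow oscillator produces only a small fast micromotion whose mean kinetic energy equals the constant shift @xmath19-independent term f^2 / ( 4 w^2 ) of the averaged hamiltonian :

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# classical illustration of high-frequency averaging: a uniform drive
# f*cos(w t) on a slow oscillator (w0 << w) adds, to leading order, only a
# constant energy f**2/(4 w**2) -- the mean kinetic energy of the micromotion.
# all parameter values below are hypothetical examples (mass = 1).
w0, w, f = 1.0, 50.0, 1.0

# start exactly on the periodic micromotion orbit so no slow transient appears
x0 = f / (w0**2 - w**2)
t_end = 20 * 2 * np.pi / w          # twenty drive periods
sol = solve_ivp(lambda t, y: [y[1], -w0**2 * y[0] + f * np.cos(w * t)],
                (0.0, t_end), [x0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)

t = np.linspace(0.0, t_end, 20001)
v = sol.sol(t)[1]
ke_avg = trapezoid(0.5 * v**2, t) / t_end
print(ke_avg, f**2 / (4 * w**2))    # the two numbers nearly coincide
```

in the superconducting problem the same averaging turns the rapidly oscillating electric field into a constant shift of the eigenvalue , which is why the drive simply adds to the nucleation field in the equations that follow .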
in this subsection , we calculate the critical nucleation field of superconductivity in the presence of an external magnetic field for a large sample .
we consider the large sample in the presence of a magnetic field @xmath27 along the @xmath28 axis and a convenient gauge is @xmath22 .
equation ( 5 ) is our starting point of discussion about bulk nucleation .
equation ( 5 ) is the same as the schrödinger equation for a particle of mass @xmath15 bound in a harmonic oscillator potential with force constant @xmath29 .
the resulting harmonic oscillator eigenvalues are @xmath30 in view of eq .
( 5 ) these energy eigenvalues , @xmath31 , are to be equated to @xmath32 .
thus @xmath33 here , we are concerned with the highest value of @xmath21 , _ i.e. _ @xmath0 , which is obviously given by the lowest eigenvalue ( @xmath34 , @xmath2 and @xmath35 ) .
thus @xmath36 where flux quantum @xmath37 .
the above relation @xmath38 gives us an important message .
when @xmath39 , @xmath40 and the vortex phase of a type - ii superconductor appears . on the other hand , for @xmath41 , @xmath42 and the meissner effect sets in ; the mixed phase does not appear and one obtains a type - i superconductor .
on the other hand , for @xmath43 , @xmath44 it is evident from eq . ( 9 ) that the highly oscillating electric field actually increases the nucleation field by an amount @xmath45 in the bulk nucleation of superconductivity .
so far in our treatment of the ginzburg - landau equation at the mean - field level we have not taken the surface of the sample into account . at the surface of the superconductor , some additional boundary conditions need to be imposed on the solutions .
one can quite reasonably expect that the presence of an interface between the superconductor and a non - superconducting material , such as a normal metal or an insulator must affect the nucleation of superconductivity in the material .
we consider a specimen with a single plane surface and the external magnetic field parallel to the surface , _ i.e. _ @xmath46 .
the superconducting sample is located in the half - space @xmath47 , while we take the non - superconducting material to be located in the half space @xmath48 .
the latter material is taken to be either vacuum or an insulating material .
then the superconducting boundary condition imposed on @xmath8 in finite samples is @xcite : @xmath49 it reduces to the neumann boundary condition , @xmath50 , when the magnetic vector potential , @xmath51 , can be chosen in a form with zero normal component at the boundary of the sample . in our case this becomes @xmath52 where @xmath53 .
we look for a solution of the form @xmath54 for the linearized tdgl equation ( eq . 4 ) with the constraints : @xmath55 the complication arises because the boundary condition states that the solution must be flat at a position @xmath56 while the minimum is located at @xmath57 . when the minimum of the potential is located far from the surface ( @xmath58 )
one can ignore the boundary condition , and when @xmath59 the boundary condition is satisfied by the standard solution of the usual schrödinger equation of a harmonic oscillator .
thus in both the cases we obtain @xmath60 .
one can easily understand that the surface has consequences for the solution of the lgle only at intermediate values of @xmath61 , _ i.e. _ @xmath62 . for the intermediate values of @xmath61 , we can think of solving the schrödinger equation by employing the effective hamiltonian method of cook _ et al _ @xcite , and thus the effective schrödinger - like equation becomes @xmath63 this is an eigenvalue problem where the eigenvalue itself is @xmath61 dependent , and our task is to minimize this with respect to @xmath61 subject to the boundary condition on @xmath64 at the surface . introducing @xmath65 , @xmath66 , @xmath67 and @xmath68 , one can rewrite eq .
( 14 ) as follows : @xmath69 now our task is to find the lowest possible value of @xmath70 subject to the boundary conditions @xmath71 at @xmath72 .
this can be phrased as the following variational problem of minimizing the functional @xmath73 with respect to variations in @xmath64 .
the euler - lagrange equation for this variational problem is precisely the scaled differential eq .
( 15 ) . to do the minimization , we use the following trial wave - function : @xmath74 with the help of this trial wave - function , we obtain @xmath75 now minimizing @xmath70 with respect to @xmath76
, we obtain @xmath77 which yields @xmath78 . substituting this back in @xmath70 and again minimizing with respect to b , we find @xmath79 and it gives us @xmath80 . substituting this back into @xmath70 , we obtain @xmath81 from the definition of @xmath70 one can relate @xmath82 thus @xmath83 in view of equation ( 23 ) one can realize that the physically accessible region is defined as follows : @xmath84 . now solving equation ( 23 ) for ` h ' one obtains @xmath85 from eq . ( 24 ) , we obtain for @xmath2 , @xmath86 and for @xmath43 , the surface nucleation field @xmath87 .
the main message of eq .
( 24 ) is that when the superconductivity occurs in a system , it starts to nucleate at the surface of an ideal defect - free sample and not in the interior of the sample .
if the sample has defects in the interior , superconductivity starts to nucleate in the vicinity of such defects . on the other hand the high frequency field further accentuates the nucleation at the surface rather than in the interior of the sample which is evident from equation ( 24 ) .
for the bulk sample the increasing amount due to the rapidly oscillating field is @xmath45 ( see equation 9 ) . on the other hand the rapidly oscillating field increases the surface nucleation field by an amount @xmath88 ( see equation 24 ) .
thus the rapidly oscillating electric field accentuates the surface nucleation field @xmath89 times more than the bulk nucleation field enhancement factor . from equation ( 9 ) and equation ( 24 ) one can determine the real values of this enhancement in the nucleation field .
if an experimentalist uses @xmath90 v / m , @xmath91 ghz , the enhancement of the bulk nucleation field is @xmath92 t over the undriven result . for the same values of electric field strength and frequency , the enhancement of the surface nucleation field is @xmath93 t over the undriven result .
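for the undriven case , the variational minimization carried out above can be reproduced numerically . since the paper 's trial wave - function ( eq . 17 ) is not reproduced here , this sketch uses the standard gaussian trial f(x) = exp(-b x^2) , which automatically obeys the neumann surface condition f'(0) = 0 ; half - line gaussian moments give the rayleigh quotient in closed form , in scaled units where the bulk ( interior ) eigenvalue equals 1 :

```python
import numpy as np
from scipy.optimize import minimize

# variational estimate for the scaled surface eigenvalue problem
#   -f'' + (x - x0)**2 f = eps * f   on x >= 0,  f'(0) = 0,
# in units where the bulk ground-state eigenvalue is 1.
# gaussian trial f(x) = exp(-b x**2) (a standard choice; the paper's own
# trial function is not shown here).  half-line gaussian moments give:
def eps(p):
    b, x0 = p
    return b + 1.0 / (4.0 * b) - 2.0 * x0 / np.sqrt(2.0 * np.pi * b) + x0**2

res = minimize(eps, x0=[0.5, 0.5], bounds=[(1e-3, None), (0.0, None)])
print(res.fun, 1.0 / res.fun)   # eps_min ~ 0.603, so hc3/hc2 ~ 1.66 (variational)
```

since the nucleation field is inversely proportional to the lowest eigenvalue , the variational surface - to - bulk field ratio comes out as about 1.66 , slightly below the exact value , consistent with the comparison made later in the text .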
before concluding this subsection we want to show the analogy between the problem of nucleation of superconductivity at the surface and that of a double oscillator .
equation ( 14 ) still has the form of a schrödinger equation for a particle in a harmonic well centered at @xmath61 , but the boundary condition , eqs .
( 12 ) and ( 13 ) , means that the eigenvalue depends crucially on the value of @xmath61 .
one can incorporate the boundary condition by an image method .
we consider a particle moving in the potential @xmath94 of fig .
2(b ) , which is our original harmonic well together with its reflection at the surface .
the ground - state wave function @xmath95 in @xmath94 must be symmetric about @xmath56 , as required in eq .
( 12 ) , and for @xmath47 it satisfies eq .
thus @xmath95 for @xmath47 is the solution to our problem , and the corresponding eigenvalue @xmath96 gives the critical field .
we can compare @xmath96 for various @xmath61 with the eigenvalue @xmath97 in the single harmonic well .
for @xmath98 , @xmath99 , and for @xmath59 , @xmath100 again . on the other hand for intermediate values of @xmath61
, @xmath96 is less than @xmath97 because @xmath94 is smaller than the single harmonic potential in some region .
the new surface eigenfunction must have a lower eigenvalue than the interior ones because it arises from a potential that is lower and broader than the single harmonic well about @xmath61 .
thus one can say that the problem of nucleation of superconductivity at the surface is analogous to that of double oscillator problem .
the analysis of the previous section has clearly demonstrated that the problem of nucleation of superconductivity in the bulk and at the surface are analogous to that of a single harmonic oscillator and double oscillator respectively .
a. p. van gelder showed that the problem of nucleation resembles that of finding the ground state energy of a quantum particle , and the nucleation field is inversely proportional to the ground state energy @xcite .
hence , one can easily calculate the nucleation field ratio , @xmath6 , by calculating the ground state energies of a single harmonic oscillator and a double oscillator . motivated by recent developments in superconducting high - frequency devices @xcite , we would like to exhibit the effect of the high frequency field on the nucleation of superconductivity in this section .
the effect of the high frequency field can be taken care of by simply following the effective hamiltonian method @xcite . by following cook _ et al _ @xcite , we convert the time - dependent problem into an effective time - independent one and then calculate the ground state energy by following standard stationary quantum mechanics procedures .
thus the main objective of this section is to find the nucleation field ratio , @xmath6 , from the eigenvalue solutions of the single harmonic oscillator and the double oscillator in both cases , _ i.e. _ in the presence and absence of the high frequency field . we consider a quantum particle moving in a double oscillator potential @xmath101 , driven by a high frequency monochromatic force .
thus the driven double - oscillator hamiltonian becomes @xmath102 now following cook _
et al _ @xcite we can express the effective time - independent potential @xmath103 . to determine the ground state energy of this driven double oscillator , we consider the normalized ground - state eigenfunctions of single oscillators centered at @xmath104 : @xmath105 where we have introduced dimensionless variables @xmath106 and @xmath107 with @xmath108 . now ,
we compute the expectation value of the hamiltonian @xmath109 , in the state @xmath110 .
the norm of @xmath111 is @xmath112 . to compute the expectation value of the kinetic energy operator ( @xmath113 ) we need to use the identity @xmath114 thus
, we find @xmath115 to obtain the expectation value of the potential energy operator @xmath116 , we need to compute @xmath117 and @xmath118 .
so , we obtain @xmath119 finally the ground state energy of the driven oscillator is given by @xmath120 in fig .
3 , we plot the ground - state energy versus @xmath121 for the double - oscillator in the presence as well as absence of the external high frequency field .
as pointed out by a. p. van gelder the critical nucleation field is inversely proportional to the ground state energy .
thus , we calculate the critical nucleation field ratio ( @xmath6 ) from the numerical computation of the minimum of ground state energies of the double and single oscillators ( eq .
minimum ground state energies for the double oscillator ( dosc ) in the presence and absence of the external field are computed from eq .
( 31 ) by putting @xmath122 , @xmath123 , @xmath124 and @xmath125 respectively . on the other hand the ground state energies for the single oscillator ( sosc . ) in the presence and absence of the field ( @xmath125 ) can be computed from the eq .
( 31 ) by simply putting @xmath126 . from our computation
, we obtain minima of the ground state energies for the double - oscillator with external rapidly oscillating force ( wf ) and without force ( wof ) of @xmath127 and @xmath128 , respectively . thus the ratio of the critical fields for the force - free case is given by @xmath129 we have to mention that a more careful numerical analysis based on hypergeometric functions in the non - driven system yields the exact result @xmath130 from the variational principle one obtains @xmath131 and from the ground state energy method we obtain @xmath132 . now comparing these two results with equation ( 33 ) , one can say that the results obtained from the two approximate theories ( variational principle and ground state energy ) agree very well with the exact results .
now , we employ the ground state energy method to determine the nucleation field ratio in the presence of the externally applied high frequency field : @xmath133 in view of eq .
( 34 ) and eq .
( 32 ) , we can conclude that the nucleation field ratio for the driven system is slightly larger than the non - driven system .
the basic inference that one can draw from the above discussion is that the high frequency field accentuates the surface critical nucleation field of superconductivity more than that of the bulk critical nucleation field .
in this section , we briefly summarize our derived results and then conclude this paper .
we investigate the effect of a high frequency electric field on the nucleation of superconductivity in the interior of a large sample as well as at the surface of a finite sample . employing the linearized time - dependent ginzburg - landau theory for this purpose , we have shown the analogy between a quantum single harmonic oscillator and the nucleation in the interior of a large sample .
the similarity between the nucleation of superconductivity at the surface and a quantum double oscillator is also demonstrated .
we invoke two approximate theories to derive the critical nucleation field ratio @xmath6 .
the first approximate theory is based on the variational principle and is solved analytically . on the other hand , the second approximate theory is based on the ground state energy method of a. p. van gelder @xcite .
we determine the minimum of the ground state energy numerically ; since the critical nucleation field is inversely proportional to the ground state energy , one can then easily compute the critical nucleation field ratio .
the validity of these two approximate theories is checked by comparing their results with the exact results for the non - driven system , and they agree very well with each other .
the effect of the high frequency oscillating time - periodic field is taken care of by the effective time - independent hamiltonian method of cook et al @xcite .
it is observed that for an electric field strength of @xmath134 v / m and a frequency of @xmath91 ghz , one obtains enhancements of 0.114 t and 0.189 t in the bulk and surface nucleation fields , respectively .
in conclusion , we examine in detail the effect of the high frequency field on the nucleation of superconductivity . using the variational method and the ground state energy method we have shown that the high frequency field actually accentuates the surface critical nucleation field of superconductivity more than the bulk critical nucleation field .
the enhancement of the surface critical nucleation field is 1.6592 times more than the enhancement of the bulk critical nucleation field .
one can now say that the ratio @xmath6 is not universal i.e. @xmath1 is not the universal upper limit of nucleation .
one can obtain a field higher than @xmath1 by applying a high frequency oscillating electric field .
our results will be helpful in analyzing recently developed high - frequency superconducting devices @xcite .
r. j. cook , d. g. shankland and ann l. wells , phys . rev . a _ 31 _ , 564 ( 1985 ) .
n. l. manakov , v. d. ovsiannikov , l. p. rapoport , phys . _ 141 _ , 319 ( 1986 ) .
s. i. chu , adv . _ 73 _ , 739 ( 1986 ) .
g. casati , l. molinari , prog . _ 98 _ , 287 ( 1989 ) ; g. casati , b. v. chirikov , d. l. shepelyansky , i. guaneri , phys . rep . _ 154 _ , 77 ( 1987 ) .
a. g. fainshtein , n. l. manakov , v. d. ovsiannikov , l. p. rapoport , phys . _ 210 _ , 111 ( 1992 ) .
j. i. gittleman and b. rosenblum , proc . ieee _ 52 _ , 1138 ( 1964 ) ; phys . _ 16 _ , 734 ( 1966 ) ; j. appl . _ 39 _ , 2617 ( 1968 ) .
m. w. coffey and j. r. clem , phys . _ 67 _ , 386 ( 1991 ) ; physica c _ 185 - 189 _ , 1915 ( 1991 ) ; phys . _ 45 _ , 9872 ( 1992 ) ; phys . _ 45 _ , 10527 ( 1992 ) ; j. supercond . _ 5 _ , 313 ( 1992 ) .
n. c. yeh , phys . b _ 40 _ , 5243 ( 1989 ) .
j. owliaei , s. sridhar and j. talvacchio , phys . _ 69 _ , 3366 ( 1992 ) .
m. c. marchetti and d. r. nelson , phys . b _ 41 _ , 1910 ( 1990 ) ; d. r. nelson and h. s. seung , phys . rev . b _ 39 _ , 9153 ( 1989 ) ; d. r. nelson , phys . _ 60 _ , 1973 ( 1988 ) .
m. p. a. fisher , phys . rev . _ 62 _ , 1415 ( 1989 ) .
v. bruyndoncx , j. g. rodrigo , t. puig , l. van look , v. v. moshchalkov , and r. jonckheere , phys . rev . _ 60 _ , 4285 ( 1999 ) .
v. v. moshchalkov , l. gielen , c. strunk , r. jonchheere , x. qiu , c. van haesendonck and y. bruynseraede , nature _ 373 _ , 319 ( 1995 ) .
v. l. ginzburg and l. d. landau , zh . _ 20 _ , 1064 ( 1950 ) .
d. saint - james and p. g. de gennes , phys . _ 7 _ , 306 ( 1963 ) .
l. d. landau and e. m. lifshitz , _ mechanics _ ( pergamon press , oxford , 1976 ) .
p. l. kapitza , zh . _ 21 _ , 588 ( 1951 ) ; _ collected papers of p. l. kapitza _ , edited by d. ter haar ( pergamon press , oxford , 1965 ) .
t. p. grozdanov and m. j. raković , phys . a _ 38 _ , 1739 ( 1988 ) .
ido gilary and n. moiseyev , phys . a _ 66 _ , 063415 ( 2002 ) .
s. rahav , i. gilary , and s. fishman , phys . _ 91 _ , 110404 ( 2003 ) .
s. denisov , l. morales - molina , s. flach and p. hänggi , phys . a _ 75 _ , 063424 ( 2007 ) ; s. denisov , s. flach , a. a. ovchinnikov , o. yevtushenko , and y. zolotaryuk , phys . e _ 66 _ , 041104 ( 2002 ) .
v. l. ginzburg and l. d. landau , zh . fiz . _ 20 _ , 1064 ( 1950 ) .
l. d. landau and e. m. lifshitz , _ statistical physics _ ( pergamon , oxford , third edition , 1980 ) .
n. kopnin , _ theory of nonequilibrium superconductivity _ ( oxford university press , 2001 ) ; and references therein .
l. p. gorkov and g. m. eliashberg , sov . phys . jetp _ 27 _ , 328 ( 1968 ) .
a. schmid , phys . mat . _ 5 _ , 302 ( 1966 ) .
m. cyrot , rep . _ 36 _ , 103 ( 1973 ) .
e. abrahams and t. tsuneto , phys . _ 152 _ , 416 ( 1966 ) .
c. caroli and k. maki , phys . _ 159 _ , 306 ( 1967 ) .
s. saito and y. murayama , phys . a _ 135 _ , 55 ( 1989 ) ; _ 139 _ , 85 ( 1989 ) .
h. schmidt , z. für physik _ 216 _ , 336 ( 1968 ) .
m. tinkham , _ introduction to superconductivity _ ( krieger , malabar , 1985 ) .
a. p. van gelder , phys . _ 20 _ , 1435 ( 1968 ) .
p. g. de gennes , _ superconductivity of metals and alloys _ ( benjamin , new york , 1966 ) ; d. saint - james , g. sarma , and e. j. thomas , _ type ii superconductivity _ ( pergamon press , oxford , 1973 ) .
k. fossheim and asle sudbø , _ superconductivity , physics and applications _ ( john wiley & sons ltd . , singapore , 2004 ) .
d. winkler , z. ivanov , and t. claeson , in k. fossheim ed . , _ superconducting technology : 10 case studies _ ( world scientific publishing , singapore , 1991 ) .

the effect of an externally applied high frequency oscillating electric field on the critical nucleation field of superconductivity in the bulk as well as at the surface of a superconductor is investigated in detail in this work .
starting from the linearized time - dependent ginzburg - landau ( tdgl ) theory and using the variational principle , we have shown the analogy between a quantum harmonic oscillator and the nucleation of superconductivity in the bulk , and between a quantum double oscillator and the nucleation at the surface of a finite sample . the effective hamiltonian approach of cook _ et al _ @xcite is employed to incorporate the effect of an externally applied highly oscillating electric field .
the critical nucleation field ratio is also calculated from the ground state energy method .
the results obtained from these two approximated theories agree very well with the exact results for the case of undriven system which establishes the validity of these two approximated theories .
it is observed that the highly oscillating electric field actually increases the bulk critical nucleation field ( @xmath0 ) as well as the surface critical nucleation field ( @xmath1 ) of superconductivity as compared to the case where the electric field is absent ( @xmath2 ) .
but the externally applied rapidly oscillating electric field accentuates the surface critical nucleation field more than the bulk critical nucleation field i.e. the increase of @xmath1 is 1.6592 times larger than that of @xmath0 .
during recent years a lot of research activity has been going on in both experimental and theoretical physics , aimed at understanding the dynamics of systems exposed to strong time - dependent external fields @xcite .
fundamental information regarding high - temperature superconductors can be obtained from the high frequency electrodynamic response .
information on the mixed state is extracted from this kind of study @xcite .
also , recent advances in microfabrication are creating interesting new opportunities for investigating the nucleation of superconductivity in type - ii superconductors @xcite .
if one decreases the strength of an applied magnetic field below a certain critical value , a material can become superconducting , and this critical value is known as the nucleation field of superconductivity .
landau and ginzburg @xcite have shown that the value of this critical field for a bulk material ( @xmath0 ) , equals @xmath3 ( @xmath4 is the dimensionless ginzburg landau parameter ) times the value of its thermodynamical critical value ( @xmath5 ) .
saint james and de gennes @xcite discovered the existence of a higher critical field , @xmath1 , by considering the nucleation at the surface of a semi - infinite material .
now the main question is , whether this @xmath1 is the universal upper limit of nucleation .
in other words , is the ratio @xmath6 a universal constant ? in this perspective , we investigate the high frequency nucleation field ratio @xmath6 of type - ii superconductor in the present paper .
time - dependent systems are generally more complicated than the corresponding time - independent ones . as a result , it is difficult to predict the qualitative and quantitative behavior of driven systems even in cases in which it is very easy to understand the dynamics of the corresponding time - independent ones .
but there are certain methods by which these time - dependent systems can be described by an effective time - independent hamiltonian @xcite .
this makes the qualitative as well as quantitative analysis of such driven systems more convincing . in order to treat the highly oscillating field we follow the sign convention of denisov _ et al _ @xcite and the references therein .
the ginzburg - landau theory for superconductivity represents one of the most useful tools available for the theoretical description of the nucleation of superconductivity in an applied field @xcite .
it starts with a free energy expansion , completely in line with the general landau theory for condensed matter , with particular attention paid to the gradient of the ordering quantity @xcite .
we use the linearized time - dependent ginzburg - landau ( tdgl ) theory as the starting point of our discussion of the nucleation of superconductivity @xcite . from the linearized tdgl theory we derive schrödinger - like equations , as for a single harmonic oscillator and a double oscillator , for the bulk nucleation and the surface nucleation of superconductivity , respectively .
a. p. van gelder has shown that the problem of nucleation resembles that of finding the ground state energy of a particle moving in a magnetic field , with the ground state energy inversely proportional to the nucleation field @xcite . in the present study
we want to demonstrate the link between the bulk nucleation field ( @xmath0 ) of superconductivity and the ground state energy of a single harmonic oscillator , and between the surface nucleation field ( @xmath1 ) of superconductivity and that of a double oscillator .
we employ the effective hamiltonian approach @xcite to incorporate the effect of highly oscillating field on the nucleation fields of superconductivity .
we calculate the nucleation field ratio @xmath6 through the ground state energy of a single harmonic oscillator and a double oscillator for the driven as well as non - driven cases .
with this background , we organize the rest of the paper as follows . in the next section
, we discuss the generalized linear tdgl theory of superconductivity . in this context
we explore the connection between the bulk nucleation of superconductivity and the quantum harmonic oscillator and the similarity between a double oscillator and the surface nucleation of superconductivity . in section 3
, we analyze the double oscillator in the presence of a high frequency field through the effective time - independent hamiltonian method of cook et al @xcite . by calculating the ground state energy of the driven single oscillator and the driven double oscillator
, we determine the nucleation field ratio , @xmath6 , in the presence of a high frequency electric field .
we summarize our findings and conclude in section 4 . |
children are the most valuable asset of mankind and also the most vulnerable part of the population .
birth weight , as one of the main factors for normal growth , development and even survival of newborns and infants , is one of the most important health indicators of development in every country .
based on the world health organization ( who ) definition , low birth weight ( lbw ) is defined as a birth weight lower than 2500 g. of the 120 million childbirths reported annually worldwide , 20 million are lbw .
based on the 2002 report of who , the prevalence of lbw is approximately 10% in asia and 9% in iran .
lbw is a result of preterm labor ( pl ) or intrauterine growth retardation ( iugr ) .
approximately 15.5% of all births , or more than 20 million infants worldwide , are born with lbw .
the level of lbw in developing countries ( 16.5% ) is more than double the level in developed regions ( 7% ) .
according to a report from kohgiluye boyerahmad province , iran , the rate of lbw varies from 8.5% to 9.1% across different provinces of the country .
not only is lbw a leading cause of mortality , it also results in disability , increased risk of infections , and hematological and nutritional diseases .
there are several causative factors for lbw : the nutritional status and pattern of weight gain of the mother during pregnancy , a history of obstetric complications such as abortion or another child with lbw , chronic underlying diseases in the mother , alcohol use and smoking .
other factors are prenatal care , maternal hemoglobin ( hb ) and hematocrit ( hct ) levels during pregnancy , socioeconomic situation , the mother 's activity during pregnancy , demographic factors ( age , weight , ... ) and so on .
an lbw child can face several problems , especially in developing countries ; beyond the psychological consequences , the high cost of care and treatment of such children for parents ( who are usually from low socioeconomic layers of the community ) can be catastrophic .
since lbw is the leading cause of mortality in newborns , designing a study to determine its risk factors can help health system authorities to prevent it and lower its mortality and morbidity .
as there are many risk factors involved in lbw , it is important to find the prevalent regional factors to have a broad picture for designing educational programs or policy making .
this study tries to detect lbw - related factors and their effect on children 's growth pattern up to the sixth month of life in health centers of urmia city , iran .
this was a cross - sectional study carried out on information obtained from the registered documents of 250 families under supervision and follow - up in the health centers of urmia city and its related villages in 2011 .
four health centers from four areas of urmia city were selected , and 54 completed questionnaires , chosen randomly , were obtained from each center . all related data such as age and weight of infants , mothers ' age , gestational age ( ga ) at the time of delivery , time gap between two pregnancies , past history of abortion , prenatal care history , systemic and underlying diseases , hb of mothers during pregnancy , and pattern of infant growth up to six months of age were registered in a questionnaire .
all infants with malformation and also cases of still birth were excluded from the study .
based on international definitions , lbw was a birth weight under 2500 g , very lbw ( vlbw ) under 1500 g , and preterm labor was delivery before 37 weeks of pregnancy .
all data were transferred to spss-15 software and analyzed using the t - test ( to compare ga and child number in pregnancy with birth weight ) and the chi - square test ( to compare gender and history of bleeding with birth weight ) . a p value of < 0.05 was considered statistically significant .
two hundred and fifty infants were included in this study ; 120 ( 48% ) were female and 130 ( 52% ) were male .
data show that 20.1% of infants had a birth weight under 2500 g ( lbw ) and 79.9% more than 2500 g. among females , 75.8% weighed more than 2500 g and 24.2% under 2500 g ; these figures were 83.2% and 16.3% for male infants , respectively .
mean weight ( kg ) , delivery rate and gap between the present and previous delivery for mothers ( years ) were 63.07 ( 13.09 ) , 4.31 ( 2.24 ) and 3.36 ( 1.05 ) , respectively [ table 1 ] . twenty - three infants ( 9.2% ) were delivered from mothers under the age of 18 and 7 infants ( 2.8% ) from mothers older than 35 years .
the rest of the children 's mothers ( 220 mothers ( 88% ) ) were between 18 and 35 years of age .
63.2% of mothers were experiencing their first pregnancy , 24.4% their second , 9.6% their third , and 7% their fourth or more .
the gap between two pregnancies was more than 4 years in 69.1% of mothers , 3 years in 8.51% , and 2 years or less in 22.34% of mothers [ table 2 ] . ninety percent of deliveries occurred between 38 and 42 weeks of ga ; only 10% of deliveries were under 38 weeks .
a total of 97.2% of the participants were homemakers and 2.8% were working ; 7.2% were illiterate , 29.6% had elementary schooling , 32.4% guidance schooling , 23.6% were diploma holders , and 7.2% had academic degrees .
data showed that 3.2% of mothers had bleeding during pregnancy and 9.2% had problems such as edema , systolic blood pressure over 140 mmhg and albuminuria .
hb was in the normal range in 90.4% ; only 4% had a history of abortion .
a total of 54.4% of participants had normal weight before pregnancy , 7.2% were underweight [ body mass index ( bmi ) < 18.5 ] , and 38.4% were overweight ( bmi ≥ 25 ) .
the height - to - weight index of infants was under the 3rd percentile in 12.8% , between the 3rd and 50th percentiles in 54% , and between the 50th and 97th percentiles in 29.6% . in chi - square analysis , there was a significant relationship between birth weight and gender of infants ( p = 0.02 ) . in t - test analysis , there was a significant relationship between birth weight and mother 's age ( p < 0.001 ) and weight ( p < 0.001 ) .
there was no significant correlation between birth weight and gap between two pregnancies ( p = 0.115 ) .
there was a significant relationship between birth weight and ga during delivery ( p < 0.001 ) .
there was no significant correlation between birth weight and history of abortion , mother 's work , or mother 's level of education .
based on the chi - square test , the pattern of growth in the first 6 months of life was related to birth weight ( p < 0.001 ) . in children with birth weight more than 2500 g , growth indexes were between the 50th and 97th percentiles in 38.9% , between the 3rd and 50th in 58.4% , and under the 3rd in 2.6% ; for lbw children ( birth weight < 2500 g ) , the figures were 0.5% , 53% , and 46.5% , respectively .
lbw is the leading cause of mortality in newborns and infants and , together with congenital malformations , is a major cause of morbidity . based on world health organization ( who ) reports , female children are at greater risk of lbw . in the present study , we found that the prevalence of lbw was significantly higher in female than in male children .
studies by delaram in shahrekord and rafii in arak reported similar results in iran , but roudbari in zahedan did not find a significant relationship between gender and lbw prevalence .
mothers between 18 and 35 years of age had the lowest prevalence of lbw in their children .
the highest prevalence was for mothers younger than 18 years , but the difference was not significant for those over 35 years old ( although prevalence was higher in 1349 year olds ) . low parity among our mothers older than 35 years and an appropriate gap between two pregnancies in this group may have compensated for the risk of advanced age , because in most studies , high maternal age is a risk factor for lbw . in our study , mothers in their first pregnancy had the highest percentage of lbw children ; this result is in agreement with delgad et al . thus , in our study , first pregnancy was a strong risk factor that needs more attention from health system authorities .
the most important risk factor in all studies and also in our study was preterm labor .
the results of previous studies showed that many risk factors , such as maternal age under 20 years , low maternal weight ( < 50 kg ) , and smoking during pregnancy , can cause preterm labor , and preterm labor is the most important risk factor for morbidity and mortality of children . we did not find a relationship between mothers ' level of education and lbw , but gisselman et al . reported such a relationship . this discrepancy can be due to the increasing knowledge of mothers about pregnancy in our study , following the frequent educational programs implemented for pregnant mothers in iran 's health centers . in our study ,
similar to the eghbalian and minagawa studies , no relationship between lbw and mothers ' work was found , while choudhary et al . in india found that 71.4% of mothers engaged as laborers gave birth to lbw babies as compared with others . they also showed the effect of day - time rest : 76.5% of lbw newborns belonged to mothers who took less than 1 h of day - time rest , compared with only 7.1% of newborns whose mothers rested for 90 min or more . the difference between our study and choudhary 's could be related to the study design : choudhary 's study focused more on the types of work , whereas we only asked about working or not working .
another possible cause is the considerable difference between the prevalence of lbw in india and iran .
based on who statistics , the prevalence of lbw in india is approximately 30% , whereas in our country it is approximately 8.5% .
our study also found a significant relationship between lbw and mothers ' age ( p < 0.001 ) , pre - pregnancy weight ( p < 0.001 ) , and ga ( p < 0.001 ) .
there was no relationship between lbw and mothers ' work , level of education , or history of abortion .
the statistics showed that the rate of lbw in iran is better than the mean rate in our region ; based on the who report , the mean lbw rate of western asian countries is 15.4% . the age of marriage in iran has increased during the past decade ; therefore , educational programs during marriage consultations should be designed to encourage couples ( especially those who marry at older ages ) to bear a child sooner , because of the maternal and fetal complications of advanced age .
mothers should also be educated on the optimal weight before and during pregnancy . owing to national programs implemented in health and clinical centers of the country under the supervision of the iranian ministry of health , mothers at any level of education are taught the issues that a pregnant woman must know . the family physician program is in place in most rural areas of the country , and spreading it to urban areas will play an important role in face - to - face education and in controlling risk factors of lbw .
although this study included only 250 participants , the results give a broader regional perspective on the situation of risk factors in urmia city and province and highlight the educational needs of this region . differences between cultures and socioeconomic situations make some risk factors more important than others , and we could not control for them . another limitation of this study was that some of the needed information was not registered in the documents , so we invited the mothers and completed the questionnaires with them .
our study also has found significant relationship between lbw and mothers age , pre - pregnancy mothers weight , ga , and children 's gender .
most of these causes are preventable with educational programs and strict , regular prenatal care . decreasing the incidence of lbw children can be achieved by cooperation between different parts of the health and clinical systems . | introduction : children are a more risk - prone group of the population , and low birth weight ( lbw ) is the leading cause of newborn mortality and morbidity .
lbw is defined as child 's birth weight lower than 2500 g. many maternal and fetal factors are determined as risk factors of lbw .
this study tries to detect factors related to lbw and their effect on children 's growth pattern up to the sixth month of life in health centers of urmia city , iran . materials and methods : a cross - sectional study was carried out in urmia city using registered data from mothers ' documents . all related data such as age and weight of infants , mothers ' age , gestational age ( ga ) at the time of delivery , time gap between two pregnancies , past history of abortion , prenatal care history , systemic and underlying diseases , hemoglobin of mothers during pregnancy , and pattern of infant 's growth up to the sixth month of age were registered in a questionnaire .
all registered data were transferred to spss 15 software and analyzed . results : the mean ± sd of birth weight was 3071 ± 625.66 g. there was a significant relationship between birth weight and mother 's age ( p < 0.001 ) and weight ( p < 0.001 ) . children of mothers younger than 18 years had lower birth weights .
there was a significant relationship between birth weight and ga during delivery ( p < 0.001 ) .
children of preterm labor had lower birth weights , and in twins , lbw was more prevalent ( p < 0.001 ) . conclusion : our results show that lbw is related to multiple causes and that most of them are preventable with educational programs and strict , regular prenatal care .
decreasing incidence of lbw children can be achieved by cooperation between different parts of health and clinical systems . |
LONDON (AP) — The FBI has put a spoke in the wheel of a major Russian digital disruption operation potentially aimed at causing havoc in Ukraine, evidence pieced together from researchers, Ukrainian officials and U.S. court documents indicates.
On Wednesday, network technology company Cisco Systems and antivirus company Symantec warned that a half-million internet-connected routers had been compromised in a possible effort to lay the groundwork for a cyber-sabotage operation against targets in Ukraine.
Court documents simultaneously unsealed in Pittsburgh show the FBI has seized a key website communicating with the massive army of hijacked devices, disrupting what could have been — and might still be — an ambitious cyberattack by the Russian government-aligned hacking group widely known as Fancy Bear.
"I hope it catches the actors off guard and leads to the downfall of their network," said Craig Williams, the director of outreach for Talos, the digital threat intelligence unit of Cisco that cooperated with the bureau. But he warned that the hackers could still regain control of the infected routers if they possessed their addresses and the right resources to re-establish command and control.
FBI Assistant Director Scott Smith said the agency "has taken a critical step in minimizing the impact of the malware attack. While this is an important first step, the FBI's work is not done."
Much about the hackers' motives remains open to conjecture. Cisco said the malicious software, which it and Symantec both dubbed VPNFilter after a folder it creates, was sitting on more than 500,000 routers in 54 countries but mostly in Ukraine, and had the capacity to render them unusable — a massively disruptive move if carried out at such a scale.
"It could be a significant threat to users around the world," said Williams.
The U.S. Justice Department said the malware "could be used for a variety of malicious purposes, including intelligence gathering, theft of valuable information, destructive or disruptive attacks, and the misattribution of such activities."
Ukraine's cyberpolice said in a statement that it was possible the hackers planned to strike during "large-scale events," an apparent reference either to the upcoming Champions League game between Real Madrid and Liverpool in the capital, Kiev, on Saturday or to Ukraine's upcoming Constitution Day celebrations.
Ukraine has been locked in a years-long struggle with Russia-backed separatists in the country's east and has repeatedly been hit by cyberattacks of escalating severity. Last year witnessed the eruption of the NotPetya worm, which crippled critical systems, including hospitals, across the country and dealt hundreds of millions of dollars in collateral damage around the globe. Ukraine, the United States and Britain have blamed the attack on Moscow — a charge the Kremlin has denied.
Cisco and Symantec both steered clear of attributing the VPNFilter malware to any particular actor, but an FBI affidavit explicitly attributed it to Fancy Bear, the same group that hacked into the Democratic National Committee in 2016 and has been linked to a long series of digital intrusions stretching back more than a decade. The U.S. intelligence community assesses that Fancy Bear acts on behalf of Russia's military intelligence service.
An FBI affidavit — whose existence was first reported by The Daily Beast — said the hackers used lines of code hidden in the metadata of online photo albums to communicate with their network of seeded routers. If the photo albums disappeared, the hackers turned to a fallback website — the same site whose seizure the FBI ordered Tuesday.
An email sent to the website's registered owner was returned as undeliverable.
When asked why the FBI specifically named Fancy Bear where Cisco did not, Williams noted that while attribution was extremely tricky based on malware analysis alone, "if you combine that knowledge with a traditional intelligence apparatus interesting things can come to light."
In any case, he said, "we have a high degree of confidence that the actor behind this is acting against the Ukraine's best interest."
Cisco said in a research note that the malware affected devices geared for small and home offices from manufacturers including Netgear, TP-Link and Linksys and had the potential to disable "internet access for hundreds of thousands of victims worldwide or in a focused region."
The malware's principal capabilities, the company said, included stealthy intelligence-collecting, monitoring industrial-control software and, if triggered, "bricking" or disabling routers. It also persists on the infected routers after they are rebooted.
___
Bajak reported from Boston. Chad Day in Washington contributed to this report.
___
Court documents: https://www.documentcloud.org/documents/4482618-VPNFilter-FBI-affidavit.html
Talos' blog post: https://blog.talosintelligence.com/2018/05/VPNFilter.html ||||| The FBI is advising users of consumer-grade routers and network-attached storage devices to reboot them as soon as possible to counter Russian-engineered malware that has infected hundreds of thousands of devices.
Researchers from Cisco’s Talos security team first disclosed the existence of the malware on Wednesday. The detailed report said the malware infected more than 500,000 devices made by Linksys, Mikrotik, Netgear, QNAP, and TP-Link. Known as VPNFilter, the malware allowed attackers to collect communications, launch attacks on others, and permanently destroy the devices with a single command. The report said the malware was developed by hackers working for an advanced nation, possibly Russia, and advised users of affected router models to perform a factory reset, or at a minimum to reboot.
Limited persistence
Later in the day, The Daily Beast reported that VPNFilter was indeed developed by a Russian hacking group , one known by a variety of names, including Sofacy, Fancy Bear, APT 28, and Pawn Storm. The Daily Beast also said the FBI had seized an Internet domain VPNFilter used as a backup means to deliver later stages of the malware to devices that were already infected with the initial stage 1. The seizure meant that the primary and secondary means to deliver stages 2 and 3 had been dismantled, leaving only a third fallback, which relied on attackers sending special packets to each infected device.
The redundant mechanisms for delivering the later stages address a fundamental shortcoming in VPNFilter—stages 2 and 3 can’t survive a reboot, meaning they are wiped clean as soon as a device is restarted. Instead, only stage 1 remains. Presumably, once an infected device reboots, stage 1 will cause it to reach out to the recently seized ToKnowAll.com address. The FBI’s advice to reboot small office and home office routers and NAS devices capitalizes on this limitation. In a statement published Friday, FBI officials suggested that users of all consumer-grade routers, not just those known to be vulnerable to VPNFilter, protect themselves. The officials wrote:
The FBI recommends any owner of small office and home office routers reboot the devices to temporarily disrupt the malware and aid the potential identification of infected devices. Owners are advised to consider disabling remote management settings on devices and secure with strong passwords and encryption when enabled. Network devices should be upgraded to the latest available versions of firmware.
In a statement also published Friday, Justice Department officials wrote:
Owners of SOHO and NAS devices that may be infected should reboot their devices as soon as possible, temporarily eliminating the second stage malware and causing the first stage malware on their device to call out for instructions. Although devices will remain vulnerable to reinfection with the second stage malware while connected to the Internet, these efforts maximize opportunities to identify and remediate the infection worldwide in the time available before Sofacy actors learn of the vulnerability in their command-and-control infrastructure.
The US Department of Homeland Security has also issued a statement advising that "all SOHO router owners power cycle (reboot) their devices to temporarily disrupt the malware."
As noted in the statements, rebooting serves the objectives of (1) temporarily preventing infected devices from running the stages that collect data and other advanced attacks and (2) helping FBI officials to track who was infected. Friday’s statement said the FBI is working with the non-profit Shadowserver Foundation to disseminate the IP addresses of infected devices to ISPs and foreign authorities to notify end users.
Authorities and researchers still don’t know for certain how compromised devices are initially infected. They suspect the attackers exploited known vulnerabilities and default passwords that end users had yet to patch or change. That uncertainty is likely driving the advice in the FBI statement that all router and NAS users reboot, rather than only users of the 14 models known to be affected by VPNFilter, which are:
Linksys E1200
Linksys E2500
Linksys WRVS4400N
Mikrotik RouterOS for Cloud Core Routers: Versions 1016, 1036, and 1072
Netgear DGN2200
Netgear R6400
Netgear R7000
Netgear R8000
Netgear WNR1000
Netgear WNR2000
QNAP TS251
QNAP TS439 Pro
Other QNAP NAS devices running QTS software
TP-Link R600VPN
The advice to reboot, update, change default passwords, and disable remote administration is sound and in most cases requires no more than 15 minutes. Of course, a more effective measure is to follow the advice Cisco gave Wednesday to users of affected devices and perform a factory reset, which will permanently remove all of the malware, including stage 1. This generally involves using a paper clip or thumb tack to hold down a button on the back of the device for 5 seconds. The reset will remove any configuration settings stored on the device, so users will have to restore those settings once the device initially reboots. (It's never a bad idea to disable UPnP when practical, but that protection appears to have no effect on VPNFilter.)
There's no easy way to know if a router has been infected by VPNFilter. For more advanced users, Cisco provided detailed indicators of compromise in Wednesday’s report, along with firewall rules that can be used to protect devices. Ars has much more about VPNFilter here. | – Reboot your internet router now. That's what the FBI is telling the users of some 500,000 devices believed to be infected with powerful Russian malware capable of intelligence-collecting, software monitoring, and disabling routers, according to the New York Times. Network technology company Cisco Systems and antivirus company Symantec first issued a warning on Wednesday about the routers, which the company said have been compromised in a possible effort to lay the groundwork for a cyber-sabotage operation against targets in Ukraine, per the AP. According to ArsTechnica, the so-called VPNFilter malware uses three distinct stages in order to send gathered data back to the dark actors, who've been identified as the Russian government-linked hacker group Fancy Bear. While the first stage can survive rebooting, the second and third reportedly cannot. Routers from Linksys, Mikrotik, Netgear, QNAP, and TP-Link are reportedly those vulnerable to the malware, but the FBI recommended Friday that any owner of small office and home office routers "reboot the devices to temporarily disrupt the malware and aid the potential identification of infected devices." The FBI advised router owners to consider disabling remote management settings on devices and to secure routers with strong passwords and encryption whenever possible. Network devices should also be upgraded to the latest available versions of firmware, the FBI said. The group Fancy Bear reportedly is known by many other names, including Sofacy, APT 28, and Pawn Storm, and is believed to be the party responsible for the 2016 DNC hack. |
thanks in large part to the palomar survey ( e.g. , @xcite ) , we know that most active galactic nuclei ( agn ) in the present - day universe have low luminosities , being thus called low - luminosity agns ( llagns ) .
the bulk of the llagn population ( @xmath0 ) are low - ionization nuclear emission - line regions ( liners ) , which are extremely sub - eddington systems with an average eddington ratio of @xmath1 @xcite .
the observational properties of liners and llagns in general are quite different from those of more luminous agns .
regarding the seds , llagns seem not to have the big blue bump feature ( e.g. , @xcite ; but see @xcite ) which is one of the signatures of the presence of an optically thick , geometrically thin accretion disk . regarding the emission - lines , llagns typically have weak and narrow fe k@xmath2 emission @xcite and a handful of liners display broad double - peaked h@xmath2 lines ( e.g. , @xcite ) ; these properties of the emission - line spectrum are consistent with the absence of a thin accretion disk , or a thin accretion disk whose inner radius is truncated at @xmath3 .
last but not least , with the typical fuel supply of hot diffuse gas ( via bondi accretion ) and cold dense gas ( via stellar mass loss ) available in nearby galaxies , llagns would be expected to produce much higher luminosities than observed on the assumption of standard thin disks with a @xmath4 radiative efficiency @xcite .
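the bondi fuel - supply argument above can be made concrete with a back - of - the - envelope evaluation of the spherical accretion rate , mdot = 4 pi lam ( g m )^2 rho / c_s^3 , with lam an order - unity factor . the black hole mass , gas density and temperature used below are purely illustrative and are not values taken from this work .

```python
import math

# bondi (spherical) accretion rate: mdot = 4 pi lam (G M)^2 rho / c_s^3,
# with lam ~ 1/4 for an adiabatic gamma = 5/3 gas.
G = 6.674e-8        # gravitational constant [cgs]
K_B = 1.381e-16     # boltzmann constant [erg/K]
M_P = 1.673e-24     # proton mass [g]
MSUN = 1.989e33     # solar mass [g]
YR = 3.156e7        # seconds per year

def bondi_rate(m_bh_msun, n_cm3, t_kelvin, mu=0.62, gamma=5.0 / 3.0, lam=0.25):
    """bondi rate in solar masses per year; all inputs are illustrative."""
    c_s = math.sqrt(gamma * K_B * t_kelvin / (mu * M_P))  # sound speed [cm/s]
    rho = n_cm3 * mu * M_P                                # gas density [g/cm^3]
    mdot = 4.0 * math.pi * lam * (G * m_bh_msun * MSUN) ** 2 * rho / c_s ** 3
    return mdot * YR / MSUN

# purely illustrative values: a 1e8 msun black hole embedded in hot gas
# with n ~ 0.1 cm^-3 and T ~ 1e7 K
print(f"mdot_bondi ~ {bondi_rate(1e8, 0.1, 1e7):.1e} msun/yr")
```

note the strong mass dependence ( mdot proportional to m^2 ) , which is why the hot - gas supply alone already overpredicts the observed luminosities if a 10% radiative efficiency is assumed .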
taken together , this set of observational properties favors the scenario in which the accretion flow in llagns / liners is advection - dominated or radiatively inefficient .
advection - dominated accretion flows ( adafs ; for a recent review see @xcite ) are very hot , geometrically thick , optically thin flows which are typified by low radiative efficiencies ( @xmath5 ) and occur at low accretion rates ( @xmath6 ) .
supermassive black holes are thought to spend @xmath7 of their lifes in the adaf state @xcite , the best studied case being sgr a * ( e.g. , @xcite ) .
adafs are relevant to the understanding of agn feedback since they are quite efficient at producing powerful outflows and jets , as suggested by theoretical studies ( including analytical theory and numerical simulations ; e.g. , @xcite ) , as well as different observational studies of llagns ( e.g. , @xcite ) .
in fact , the so - called `` radio mode '' of agn feedback invoked in semi - analytic and hydrodynamic simulations of galaxy formation ( e.g. , @xcite ) would correspond to the adaf accretion state actively producing jets as explicitly incorporated in some works @xcite .
an alternative and perhaps more appropriate expression for the `` radio '' feedback mode would be the `` liner / adaf '' mode .
it is clear that advances in the understanding of the physical nature of liners and llagns are required in order to understand the nature of black hole accretion and feedback in the local universe .
the goal of this work is therefore to probe the physics of accretion and ejection in the liner population , by modeling their nuclear multiwavelength seds which provide constraints to physical models for the emission of the accretion flow and the jet .
our data set consists of 24 seds of liners which include radio ( vla ) , near - ir optical uv ( hst ) and x - ray ( chandra ) data with high spatial resolution . these seds were selected from a sample of 35 seds compiled by @xcite using two selection criteria : ( i ) there should be estimates of the black hole mass for the corresponding galaxies and ( ii ) there should be good x - ray estimates of the photon index and x - ray luminosity . based on these criteria , we can separate the seds in two groups : group a , comprising 10 liners with the most complete sampling of the seds , and group b , for which there is lack of data in some parts of the sed .
for illustration , we list the liners in group a : ngc 1097 , m81 , ngc 3998 , ngc 4143 , ngc 4278 , m84 , m87 , ngc 4579 , ngc 4594 and ngc 4736 .
in order to model the liner seds and constrain the properties of their central engines , we adopt the physical scenario which is favored to explain the observational properties of llagns ( @xcite ; see figure 13 of @xcite for a cartoon ) . in this model , the accretion / ejection flow consists of three components : ( i ) the inner parts of the flow are _ advection - dominated _ and geometrically thick ; ( ii ) the outer parts of the accretion flow are in the form of a _ standard thin disk truncated at a certain transition radius _ ; ( iii ) near the innermost parts of the adaf a _ relativistic jet _ is launched .
more details can be found in @xcite and nemmen et al . 2010 ( _ in preparation _ ) .
the radiative processes operating in the adaf are synchrotron emission , bremsstrahlung and inverse compton scattering .
the truncated thin disk spectrum is simply thermal .
the jet contributes with synchrotron emission .
some of the main parameters of the models will be discussed in the section below .
we describe the results of the spectral fits for only two liners of our data set for the sake of brevity : ngc 4374 ( m84 ) and ngc 4594 ( sombrero ) .
these two examples are nevertheless illustrative of our general results . in the sed plots below , the arrows in the ir correspond to upper limits due to significant contamination of the nuclear emission by the host galaxy .
the error bars in the near - ir to uv denote the uncertainty due to the range of possible extinction corrections .
when fitting the seds , we explored the parameter space of the accretion / jet models within the range of values plausibly allowed by theory .
we avoided to fix the values of some dynamical and microphysical parameters for which there is a substantial uncertainty on theoretical grounds .
one example is the parameter @xmath8 , which controls the amount of energy dissipated via turbulence that is directly deposited on the electrons , and for which there is considerable uncertainty , with plausible values in the range @xmath9 ( e.g. , @xcite ) . in all models we considered that only some fraction of the mass accretion rate available at the outer boundary of the accretion flow ends up being accreted , due to mass - loss via winds produced in the adaf @xcite .
the sed of m84 is plotted in figure [ m84 ] , together with two different spectral fits .
the left panel of this figure shows a model in which the adaf dominates the emission from the near - ir upwards , particularly the x - ray emission .
the thin disk dominates the flux output around @xmath10 m and the jet is responsible for the bulk of the core radio emission . for this model , the mass accretion rate supplied at the outer radius @xmath11 of the adaf is @xmath12 , compatible with the bondi accretion rate estimated by @xcite ; furthermore , @xmath13 and @xmath14 . the mass - loss rate in the jet was estimated as @xmath15 .
the right panel of figure [ m84 ] shows a model for the sed of m84 in which the jet dominates the radio and x - ray emission .
notice that the shape of the x - ray spectrum from the adaf is not consistent with the data . for this model , @xmath16 ( again compatible with the bondi accretion rate ) , @xmath17 , @xmath18 and @xmath19 .
therefore , we demonstrated that there are two possible types of models which can accommodate the observed sed of m84 . in the first type , the emission from the adaf dominates the observed x - rays ; in the second type , the jet emission dominates the x - rays . by construction , a third type of model is also possible in which the jet and the adaf contribute with similar intensities to the high energy emission .
these results apply also to the other liners in our sample . to illustrate this , we also show spectral fits to the nuclear sed of the sombrero galaxy in fig . [ sombrero ] .
the left panel shows an `` adaf - dominated x - rays '' model with @xmath20 , @xmath21 , @xmath14 and @xmath22 . by varying the microphysical parameters of the jet model , we are also able to obtain a `` jet - dominated x - rays '' model which also successfully explains the entire sed ( right panel ) .
figure [ median ] shows the qualitative average sed of liners based on the one computed by @xcite . from our modelling of the seds of liners , we are able to unveil the physical nature of the liner continuum emission in each waveband , as outlined in the upper part of fig . [ median ] .
the uncertainty about the origin of the x - rays in liners has been debated in the context of llagns and also sgr a * @xcite . in our case , the root of the uncertainty in the origin of the x - rays lies in the uncertainties regarding the microphysics of the hot plasma in the adaf and the jet ( mainly the uncertainty on the value of @xmath8 and the effect of shocks in the jet ) , which have a major impact on the fitting of the observed x - ray spectrum .
we suggest that monitoring campaigns of the variability of the radio and x - ray emission in liners , and the comparison of such observations with the predictions of jet / adaf models , would help to pin down the nature of the x - ray emission in llagns . from our modelling of the liner seds with the coupled jet - adaf model , we are able to constrain important parameters that characterize their central engines :

* the typical accretion rate available at the outskirts of the accretion flow is @xmath23 .
* typical jet mass - loss rates are in the range @xmath24 .
* the values of @xmath25 above , together with the typical lorentz factors , result in jet powers @xmath26 .
* given the bolometric luminosities @xmath27 , we have @xmath28 .

taking the values above , we can roughly estimate the fraction of accreted mass that is channeled into the jets in liners , @xmath29 . similarly , we can estimate the efficiency of jet production as @xmath30 , in rough agreement with other estimates for llagns @xcite .
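the jet production efficiency quoted above , @xmath30 , is simply the jet kinetic power divided by the accretion rest - mass energy rate . the sketch below evaluates it for purely illustrative numbers ; the paper 's actual values sit behind the @xmath placeholders and are not reproduced here .

```python
C = 2.998e10     # speed of light [cm/s]
MSUN = 1.989e33  # solar mass [g]
YR = 3.156e7     # seconds per year

def jet_efficiency(p_jet_erg_s, mdot_msun_yr):
    """eta_jet = P_jet / (mdot c^2), with mdot converted to g/s."""
    mdot_g_s = mdot_msun_yr * MSUN / YR
    return p_jet_erg_s / (mdot_g_s * C ** 2)

# illustrative only: a 1e42 erg/s jet fed by 1e-3 msun/yr of accretion
eta = jet_efficiency(1e42, 1e-3)
print(f"eta_jet ~ {eta:.3f}")
```

an efficiency of a few percent , as in this illustrative case , would mean the kinetic output of the jet dwarfs the radiated power of such extremely sub - eddington sources .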
these values provide useful indicators of the relevant feeding and feedback properties of liners , and by extension of the whole llagn population ; this is particularly relevant in the light of this symposium .
finally , our work allows us to draw a link between the supermassive black holes in liners and the quiescent beast in our galaxy , sgr a*. by modelling the sed of sgr a * with current adaf models , @xcite estimated @xmath31 .
this value is two orders of magnitude below the typical accretion rate of liners that we estimated .
therefore , we could say that if sgr a * accreted at a rate 100 times its present rate , it would `` light up '' and presumably become a liner .
we were able to successfully model the seds of 24 liners in the context of a coupled adaf - jet scenario .
while the radio emission is dominated by the relativistic jet , the adaf dominates in the band 1 mm 100 @xmath32 m and the x - ray radiation can be dominated by either the adaf , the jet or a combination of both .
we find that strong jets are implied by our modelling , for which the kinetic power considerably exceeds the radiated power ( @xmath28 ) .
furthermore , we obtained estimates of the fundamental parameters of the central engines of llagns which can be useful in studies of the feeding / feedback properties of agns , such as the mass accretion rates , jet powers and mass - loss rates in the jet .
finally , we would like to point out that our sed models provide a library of templates for the compact emission of liners , which can be useful for studies of e.g. the nuclear emission of stellar populations , dust and pah features .
a detailed description of the results above and the sed models will be available in nemmen et al . ( 2010 ) , _ in preparation _ .

allen , s. w. , dunn , r. j. h. , fabian , a. c. , taylor , g. b. , & reynolds , c. s. 2006 , _ mnras _ , 372 , 21
bower , r. g. , et al . 2006 , _ mnras _ , 370 , 645
elvis , m. , et al . 1994 , _ apjs _ , 95 , 1
eracleous , m. , hwang , j. a. , & flohic , h. m. l. g. 2010 , _ apjs _ , in press ( arxiv:1001.2924 )
falcke , h. , körding , e. , & markoff , s. 2004 , _ a&a _ , 414 , 895
heinz , s. , merloni , a. , & schwab , j. 2007 , _ apj _ , 658 , 9
ho , l. c. , filippenko , a. v. , & sargent , w. l. w. 1995 , _ apjs _ , 98 , 477
ho , l. c. , filippenko , a. v. , & sargent , w. l. w. 1997 , _ apj _ , 487 , 568
ho , l. c. 2002 , _ apj _ , 564 , 120
ho , l. c. 2008 , _ araa _ , 46 , 475
ho , l. c. 2009 , _ apj _ , 699 , 626
hopkins , p. f. , narayan , r. , & hernquist , l. 2006 , _ apj _ , 643 , 641
maoz , d. 2007 , _ mnras _ , 377 , 1696
merloni , a. , heinz , s. , & di matteo , t. 2003 , _ mnras _ , 345 , 1057
narayan , r. , & mcclintock , j. e. 2008 , _ new astronomy reviews _ , 51 , 733
nemmen , r. s. , et al . 2006 , _ apj _ , 643 , 652
nemmen , r. s. , bower , r. g. , babul , a. , & storchi - bergmann , t. 2007 , _ mnras _ , 377 , 1652
okamoto , t. , nemmen , r. s. , & bower , r. g. 2008 , _ mnras _ , 385 , 161
pellegrini , s. 2005 , _ apj _ , 624 , 155
storchi - bergmann , t. , et al . 2003 , _ apj _ , 598 , 956
sharma , p. , quataert , e. , hammett , g. w. , & stone , j. m. 2007 , _ apj _ , 667 , 714
sijacki , d. , springel , v. , di matteo , t. , & hernquist , l. 2007 , _ mnras _ , 380 , 877
terashima , y. , iyomoto , n. , ho , l. c. , & ptak , a. f. 2002 , _ apjs _ , 139 , 1
yuan , f. , quataert , e. , & narayan , r. 2003 , _ apj _ , 598 , 301
yuan , f. 2007 , in asp conf . ser . 373 , the central engine of active galactic nuclei , ed . l. c. ho & j.-m. wang ( san francisco : asp ) , 95
yuan , f. , yu , z. , & ho , l. c. 2009 , _ apj _ , 703 , 1034

low - luminosity active galactic nuclei ( llagns ) represent the bulk of the agn population in the present - day universe and they trace low - level accreting supermassive black holes .
the observational properties of llagns suggest that their central engines are intrinsically different from those of more luminous agns .
it has been suggested that accretion in llagns occurs via an advection - dominated accretion flow ( adaf ) associated with strong jets . in order to probe the accretion physics in llagns as a class , we model the multiwavelength spectral energy distributions ( seds ) of 24 liners ( taken from a recent compilation by eracleous et al . ) with a coupled accretion - jet model .
the accretion flow is modeled as an inner adaf outside of which there is a truncated standard thin disk .
these seds include radio , near - ir to near - uv hst data , and chandra x - ray data .
we find that the radio emission is severely underpredicted by adaf models but can be explained by the relativistic jet .
the origin of the x - ray radiation in most sources can be explained by three distinct scenarios : the x - rays can be dominated by emission from the adaf , the jet , or both components contributing at similar levels . from the model fits , we estimate important parameters of the central engines of liners : the mass accretion rate , relevant for studies of the feeding of agns , and the mass - loss rate and kinetic power of the jet , relevant for studies of the kinetic feedback from jets .
scrambling describes a property of the dynamics of closed quantum systems , in which initially localized information spreads out over the whole system , thereby becoming inaccessible locally .
the concept of scrambling originates from the study of black holes in quantum gravity @xcite .
if information escapes from a black hole , the thermal nature of the hawking radiation @xcite indicates that the state of any matter and information falling into the black hole has been scrambled and so gets lost from the perspective of an external observer .
in particular , the `` fast scrambling conjecture '' @xcite states that the fastest scramblers take time logarithmic in the system size to scramble information , and that black holes are the fastest scramblers . scrambling and similar notions play important roles in other areas of physics as well .
for example , scrambling is closely related to many - body localization and thermalization ( see @xcite for a recent review ) : quantum systems that exhibit localization clearly do not scramble or thermalize , since information of local initial conditions fails to spread , and so remains accessible to local measurements .
by contrast , a many - body system that undergoes scrambling moves to states that appear random with respect to local measurement : here , the notion of scrambling can be seen as a form of thermalization at infinite temperature .
quantum chaos is also a close relative of scrambling . under chaotic dynamics , initially local operators
grow to overlap with the whole system ( the butterfly effect ) .
that is , chaotic quantum systems are scramblers @xcite .
in particular , the behaviors of the so - called out - of - time - order ( oto ) correlators can probe the growth of local perturbations @xcite .
their role as diagnostics of chaos has led to the active application of oto correlators to the study of scrambling @xcite and many - body localization @xcite .
this work is mainly motivated by two key features of scrambling .
first , scrambling of quantum information and the growth of entanglement go hand in hand : information initially present in local perturbations ends up encoded in global entanglement so that it becomes irretrievable by simple measurements .
entanglement captures the nonclassical essence of scrambling , and leads to information - theoretic measures of scrambling such as the entanglement entropy .
second , scrambling can be achieved by a sufficiently random dynamics .
the main idea of the foundational paper of this field @xcite is to use random dynamics to simulate scrambling behaviors of black holes .
that is , scrambling is also related to the generation of randomness .
the goal of this paper is to connect entanglement entropy , given by generalized entropies of different orders , with degrees of randomness , characterized by designs .
we establish a strong connection between rnyi entanglement entropies and the degree of randomness induced by designs of the same order , in both the random unitary and random state settings .
( we note that a recent paper @xcite establishes a related connection between @xmath1-point oto correlators and @xmath0-designs via frame potentials .
however , the average oto correlators that @xcite mainly studies may not directly correspond to entropies in our setting . )
generalized entropies of order @xmath0 raise the density matrix to the @xmath0-th power .
the higher the order of the generalized entropy , the more sensitive that entropy is to nonuniformity in the spectrum of the density matrix .
an @xmath0-design is an ensemble of states or unitaries whose first @xmath0 moments are indistinguishable from completely random states or unitaries ( drawn uniformly from the haar measure ) .
the higher the order of the design , the better it emulates the full randomness of the haar distribution .
our main result is that @xmath0-designs induce almost maximal rnyi-@xmath0 entanglement entropies .
our analysis strengthens known results relating entanglement entropy and randomness , such as page s theorem @xcite for random states and hosur - qi - roberts - yoshida @xcite for random unitaries .
these results allow us to elucidate the tradeoffs involved in creating and characterizing higher degrees of randomness . on the one hand , a completely random unitary channel ( drawn uniformly from the haar measure )
certainly scrambles information , but it requires exponentially many local gates and random bits to generate @xcite .
in fact , 2-designs , which can be efficiently implemented @xcite , can already achieve information scrambling .
so , for example , simple quantum circuits can create 2-designs with almost maximal von neumann entanglement entropy . on the other hand , generalized entanglement entropies of higher orders
are more sensitive to nonuniformity ( such as sharp peaks ) in the spectrum of the reduced density matrix , i.e. the entanglement spectrum @xcite , which can be invisible to the ordinary ( von neumann ) entanglement entropy .
pseudorandom unitaries may typically exhibit almost maximal von neumann entanglement entropy , but do not necessarily maximize high - order rnyi entanglement entropies .
our results reveal a fine - grained hierarchy of complexities between information and haar scrambling .
this hierarchy is defined relative to the moments of the haar measure , and can be probed by generalized entanglement entropies . to summarize , we aim to derive tight relations between entropy and randomness .
to do so , we calculate the generalized entanglement entropies averaged over state and unitary designs .
now we introduce our approaches and results more specifically . to study the scrambling properties of unitary channels , we map them to a dual state via the choi isomorphism , and study the entanglement properties of this dual state . as in @xcite , we partition the input register of the choi state into two parts , @xmath2 and @xmath3 , and the output register into @xmath4 and @xmath5 .
our results rely on the calculation of average @xmath6 , the defining element of order-@xmath0 entanglement entropies between @xmath7 and @xmath8 of the choi state .
we employ tools from weingarten calculus to explicitly compute the haar integrals of @xmath6 for all @xmath0 in both the asymptotic and nonasymptotic regimes , which are by definition equal to the average over unitary @xmath0-designs .
we are able to use these results to lower bound the regular rnyi entanglement entropies , as they are convex in @xmath6 .
the key conclusion is that the rnyi-@xmath0 entanglement entropies averaged over unitary @xmath0-designs are almost maximal . in other words ,
a random unitary sampled from a unitary @xmath0-design typically exhibits nearly maximal rnyi-@xmath0 entanglement entropies .
these results indicate that the rnyi-@xmath0 entanglement entropies can diagnose the randomness complexity of unitary @xmath0-designs .
we also study the @xmath9 limit of rnyi , the min entanglement entropy , in particular .
we show that unitary designs of an order that scales logarithmically in the dimension of the unitary achieve almost maximal average min entanglement entropy , which implies that they are already indistinguishable from haar random by the entanglement spectrum alone .
higher moments make no difference in the entanglement spectrum .
then we consider the more straightforward and well - known problem of entanglement in random states .
previous results in this setting , e.g. page s theorem , are also not tight in a similar sense .
we obtain analogous results as in the random unitary setting .
most importantly , we show that ( projective ) @xmath0-designs exhibit almost maximal rnyi-@xmath0 entanglement entropies , which can be regarded as a tight version of page s theorem . as in the case of unitary designs , state designs of logarithmic order already maximize min entanglement entropy .
moreover , we are able to show that there exists a projective 2-design such that all higher rnyi entanglement entropies are bounded away from maximum .
the existence of such a 2-design establishes a clear separation between the entropic complexity of 2-designs and those of higher orders .
our results reveal intrinsic connections between the order of generalized entropies and the moment of randomness .
we also include several results related to e.g. rnyi entropies , designs , and weingarten calculus , which may be of independent interest .
the paper is organized as follows . in sec .
[ sec : prelim ] , we formally define the central concepts of this paper : the generalized quantum entropies and designs .
sections [ sec : randu ] and [ sec : rands ] contain analysis of the choi model of random unitary and the random state settings respectively . finally we provide some concluding remarks about our results and future directions in sec .
[ sec : dis ] .
peripheral results and technical details are included in the appendix .
see , e.g. , @xcite for a comprehensive introduction of standard and soft notations of asymptotics ( e.g. big - o and soft big - o ) that will be used throughout this paper .
the theme of this paper is to establish connections between generalized quantum entropies and designs , which we shall formally introduce in this section . some parametrized generalizations of the shannon and von neumann entropy , most importantly the rnyi and tsallis entropies ,
are found to be useful in both classical and quantum regimes .
we will find that rnyi entropies are more suitable for characterizing scrambling than tsallis entropies .
here we focus on entropies defined on a quantum state @xmath10 living in a finite - dimensional hilbert space .
a unified definition of generalized quantum entropies is given in @xcite : the quantum unified @xmath11-entropy of a state @xmath12 is defined as @xmath13.\ ] ] the two parameters @xmath0 and @xmath14 are respectively referred to as the order and the family of an entropy in this paper .
the @xmath15 element plays a key role in this paper .
entropies specified by a certain order @xmath0 are collectively called @xmath0 entropies .
the @xmath16 limit gives the von neumann entropy . by fixing @xmath14 ,
one obtains a family of entropies parametrized by order @xmath0 .
we define the following function to be the characteristic function of an entropy : @xmath17 which is obtained by treating @xmath15 as the argument @xmath18 .
the convexity of this characteristic function is important to many of our results .
the most representative families are rnyi ( the limiting case @xmath19 ) and tsallis ( @xmath20 ) .
however , @xmath21 entropies ( e.g. tsallis ) have some undesirable features for our purposes .
most importantly , the maximal value depends on @xmath0 , so it does not make much sense to directly compare @xmath21 entropies of different orders @xmath0 , which is key to our results . moreover , they are not even additive on maximally mixed states , so one runs into trouble when deriving e.g. the tripartite information from entanglement entropies .
in addition , notice that for @xmath22 ( excluding tsallis ) , the characteristic function is concave in the relevant @xmath23 regime , which prevents us from using jensen s inequality to establish several crucial bounds .
these problems will be explained in more detail later .
in contrast , rnyi entropies do not have these problems . in this work
, we mainly focus on this well - behaved family : the quantum rnyi-@xmath24 entropy of a state @xmath12 is defined as @xmath25 for @xmath26 , @xmath27 is singular and defined by taking a limit .
@xmath28 is the max / hartley entropy ; @xmath29 is just the von neumann entropy .
the @xmath30 limit , the min entropy , plays a special role in our study : the quantum min entropy of a state @xmath12 is defined as @xmath31 where @xmath32 denotes the operator norm of @xmath10 , @xmath33 is the largest eigenvalue of @xmath10
. other rnyi entropies are well - defined by eq .
( [ renyidef ] ) .
the @xmath34 case @xmath35 , often called the rnyi-2 or collision entropy , is also a widely used and highly relevant quantity . in the context of scrambling
, a key result of @xcite is that the rnyi-2 entanglement entropy is directly related to the 4-point oto correlators , thus can probe chaos . ref .
@xcite also discusses scrambling in a wess - zumino - witten model using this quantity .
also notice that @xmath36 is directly related to the quantum purity @xmath37 ( recall that less pure subsystems dictate entanglement ) , and is so frequently employed in the study of entanglement @xcite .
it can be directly verified that rnyi entropies are additive and have the same maximal value @xmath38 for @xmath38 qubits .
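these two properties ( additivity and the order - independent maximum of @xmath38 bits for @xmath38 qubits ) can be checked directly on density - matrix spectra ; the following sketch is illustrative only ( our own code , base-2 logarithms , entropies in bits ) :

```python
import numpy as np

def renyi(spectrum, alpha):
    """Rényi-alpha entropy (base 2) of a density-matrix spectrum; alpha=1 gives von Neumann."""
    p = np.asarray([x for x in spectrum if x > 1e-15], dtype=float)
    if alpha == 1:
        return float(-np.sum(p * np.log2(p)))
    return float(np.log2(np.sum(p ** alpha)) / (1 - alpha))

# additivity: the spectrum of rho (x) sigma is the outer product of the spectra
rho = np.array([0.7, 0.2, 0.1])
sigma = np.array([0.5, 0.5])
product_spec = np.outer(rho, sigma).ravel()
for a in (0.5, 1, 2, 3):
    assert abs(renyi(product_spec, a) - (renyi(rho, a) + renyi(sigma, a))) < 1e-10

# maximal value is n bits for n qubits, for every order alpha
n = 4
maximally_mixed = np.full(2 ** n, 2.0 ** -n)
for a in (0.5, 1, 2, 5):
    assert abs(renyi(maximally_mixed, a) - n) < 1e-10
```

both checks pass because the trace term factorizes on product spectra and the uniform spectrum gives @xmath38 regardless of the order .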
an important feature of rnyi entropies for analyzing the degree of scrambling is that their values at different orders can in principle be well gapped : consequently , the behavior of states and unitaries at different orders of rnyi entropy can be used to distinguish between different degrees of scrambling . here
we give a simple example .
consider a density operator in a @xmath39-dimensional hilbert space that has one large eigenvalue @xmath40 , while the rest of the spectrum is uniform / degenerate .
that is , the spectrum reads @xmath41 then the min entropy only cares about the largest eigenvalue by definition : @xmath42 which is @xmath43 ( linear in the number of qubits ) away from maximum .
however , rnyi-2 is insensitive to this single peak : @xmath44 which is almost maximal ( with a residual constant ) .
this establishes a clear separation between the low and high ends of rnyi entropies .
in fact , @xmath45 produces @xmath43 gaps between all finite orders : @xmath46 for @xmath47 .
so the slope decreases with @xmath0 .
indeed , it equals 1 for @xmath34 , and approaches @xmath48 in the @xmath9 limit .
the intuition is simply that promoting the power of eigenvalues essentially amplifies the nonuniformity of the spectrum .
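this separation is easy to verify numerically . the sketch below assumes the large eigenvalue equals 1/sqrt(d) ( an assumption on our part , chosen to reproduce the n/2 gap described above ) and works in bits :

```python
import numpy as np

n = 20                       # number of qubits
d = 2 ** n
lam = d ** -0.5              # assumed single large eigenvalue, 1/sqrt(d)
rest = (1 - lam) / (d - 1)   # uniform remainder of the spectrum

# min entropy sees only the peak: n/2 bits, i.e. linear-in-n below the maximum n
s_min = -np.log2(lam)
assert abs(s_min - n / 2) < 1e-9

# renyi-2 is insensitive to the single peak: almost maximal (constant residual)
tr_rho2 = lam ** 2 + (d - 1) * rest ** 2   # ~ 2/d
s_2 = -np.log2(tr_rho2)
assert 0 < n - s_2 < 1.1                   # within about one bit of maximum
```

with this spectrum , tr ( rho^2 ) is approximately 2/d , so the rnyi-2 entropy sits roughly one bit below the maximum while the min entropy is n/2 bits below it .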
similar separations will be constructed in the random state setting . in the appendix ,
we derive more properties of rnyi entropies , including inequalities relating different orders of rnyi entropies ( appendix [ app : renyiineq ] ) , and a weaker form of subadditivity ( appendix [ app : renyisub ] ) . fig .
[ fig : entropy ] summarizes the important generalized entropies in the relevant regime @xmath49 . [ figure [ fig : entropy ] : @xmath11-entropies for @xmath49 ; italicized names refer to the whole line . ] in general , the notion of design is introduced to characterize distributions that can reproduce the behaviors of certain moments of the uniform distribution , and so can be considered as good approximations to complete randomness .
the idea applies to both quantum states and unitary channels , as we shall formally introduce now .
complex projective designs are distributions of vectors on the complex unit sphere that are good approximations to the uniform distribution ( pseudorandom ) , in the sense that they match the uniform distribution up to certain moments @xcite .
they are of interest in many research areas , such as approximation theory , experimental designs , signal processing , and quantum information .
there are many equivalent definitions of exact designs ( see @xcite ) . here
we mention a few that are directly relevant to the current study .
the canonical definition is based on polynomials of vector entries .
define @xmath51 as the space of polynomials homogeneous of degree @xmath50 both in the coordinates of vectors in @xmath52 and in their complex conjugates .
an ensemble @xmath53 of pure state vectors in dimension @xmath39 is a ( complex projective ) _ @xmath50-design _ if @xmath54 where the integral is taken with respect to the ( normalized ) uniform measure on the complex unit sphere in @xmath52
. the second definition , based on the frame operator , will be useful in the error analysis .
let @xmath55 be the @xmath50-partite symmetric subspace of @xmath56 with corresponding projector @xmath57}$ ] .
the dimension of @xmath55 reads @xmath58}=\binom{d+t-1}{t}.\ ] ] the @xmath50-th frame operator of @xmath53 is defined as @xmath59}\operatorname{\mathbb{e}}_\nu ( { { { { |\psi\rangle}}\!{{\langle\psi|}}}})^{\otimes t},\ ] ] and the @xmath50-th frame potential is @xmath60 the ensemble @xmath53 is a @xmath50-design if and only if @xmath61}$ ] or , equivalently , if @xmath62}$ ] @xcite .
in analogy to complex projective designs , unitary designs are distributions on the unitary group that are good approximations to haar - random unitaries , in the sense that they match the haar measure up to certain moments @xcite .
they also play key roles in many research areas , such as randomized benchmarking , data hiding , and decoupling .
as in the case of state designs there are also many equivalent definitions of exact unitary designs ( see @xcite ) .
similarly , we formally define unitary designs by polynomials and frame operators / potentials .
let @xmath63 be the space of polynomials homogeneous of degree @xmath50 both in the matrix elements of @xmath64 and in their complex conjugates .
an ensemble @xmath53 of unitary operators in dimension @xmath39 is a _ unitary @xmath50-design _ if @xmath65 where the integral is taken over the normalized haar measure on @xmath66 .
the @xmath50-th frame operator of @xmath53 is defined as @xmath67,\ ] ] and the @xmath50-th frame potential is @xmath68 the ensemble @xmath53 is a unitary @xmath50-design if and only if @xmath69 , where @xmath70 is the @xmath50th frame operator of the unitary group @xmath71 with haar measure @xcite .
in addition , @xmath72 and the lower bound is saturated if and only if @xmath53 is a unitary @xmath50-design @xcite .
when @xmath73 , which is the case we are mostly interested in , @xmath74
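a minimal exact example of the unitary frame - potential criterion ( our own sketch , using the standard fact that the haar value , and hence the lower bound , is t ! for d >= t ) : the four pauli matrices form a unitary 1 - design but fail to be a 2 - design .

```python
import numpy as np
from itertools import product
from math import factorial

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

def frame_potential(ens, t):
    """F_t = average of |tr(U V^dagger)|^(2t) over ordered pairs of the ensemble."""
    return float(np.mean([abs(np.trace(U @ V.conj().T)) ** (2 * t)
                          for U, V in product(ens, repeat=2)]))

# haar value (= lower bound, attained iff the ensemble is a t-design) is t! for d >= t
assert abs(frame_potential(paulis, 1) - factorial(1)) < 1e-12  # exact 1-design
assert frame_potential(paulis, 2) > factorial(2) + 1e-9        # F_2 = 4 > 2! = 2
```

since tr ( p q^dagger ) = 2 for p = q and 0 otherwise , F_1 = 1 and F_2 = 4 follow by direct counting , confirming the saturation criterion at t = 1 only .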
we first study the scrambling properties of random unitary channels . as suggested by @xcite
, we employ the choi isomorphism to map a unitary channel to a dual state , and study scrambling by the entanglement properties of this state . in this section ,
we first briefly introduce the choi state model , and then present several calculations of averaged generalized entanglement entropies .
the results lead to an entropic notion of scrambling complexities , which we shall discuss in depth . ref .
@xcite proposed that one can use the negativity of the tripartite information associated with the choi state of a unitary channel to probe scrambling .
the negativity of the tripartite information is a measure of global entanglement that quantifies the degree to which local information in the input to the channel becomes non - local in the output .
here we introduce the definitions and motivations of this formalism to set the stage .
the choi isomorphism ( more generally , the channel - state duality ) is widely used in quantum information theory to study quantum channels as states .
it says that a unitary operator @xmath75 acting on a @xmath39-dimensional hilbert space @xmath76 is dual to the pure state @xmath77 which is called the choi state of @xmath75 .
now consider arbitrary bipartitions of the input register into @xmath2 and @xmath3 , and the output register into @xmath4 and @xmath5 .
let @xmath78 be the dimensions of subregions @xmath79 respectively ( @xmath80 ) .
one expects that , in a scrambling system , any measurement on local regions of the output can not reveal much information about local perturbations applied to the input . in other words ,
the mutual information between local regions of the input and output @xmath81 and @xmath82 should be small .
this suggests that the negative tripartite information @xmath83 can diagnose scrambling , since it essentially measures the amount of information of @xmath2 hidden nonlocally over the whole output register . here
@xmath84 is the mutual information , which measures the total correlation between @xmath2 and @xmath4 .
since the input and output are maximally mixed due to unitarity , the four subregions are all maximally mixed .
for example , here @xmath81 is reduced to @xmath85 , so we only need to analyze the entanglement entropy @xmath86 . note that @xmath87 can be reduced to the conditional mutual information @xmath88 @xcite , which is a quantity of great interest in quantum information theory .
the haar - averaged ( completely random ) values of the terms in the von neumann @xmath89 was computed in @xcite , as a baseline for scrambling .
however , it is clear that a pseudorandom ensemble ( such as a 2-design ) can already reach these roof values @xcite , which indicates that there is a hierarchy of fine - grained complexities of scrambling , corresponding to different degrees of randomness .
this section aims at formalizing this observation in the same choi state model .
we rewrite the information - theoretic quantities in terms of generalized quantum entropies ( such as rnyi and tsallis ) of different orders , to study the connections between these entropies and designs . by using individual indices for different subregions ,
we rewrite the choi state in eq .
( [ choi ] ) as @xmath90 where @xmath91 are respectively indices for @xmath79 .
the corresponding density operator is then @xmath92 by tracing out @xmath8 , we obtain the reduced density operator of @xmath7 : @xmath93 the entropy of @xmath94 measures the entanglement between @xmath7 and @xmath8 . in order to compute the generalized @xmath0 entanglement entropies , we need to raise @xmath94 to the power @xmath0 : @xmath95 thus @xmath96 this result can also take more concise operator forms : @xmath97 where @xmath98 where @xmath99 denotes partial transpose on even parties .
notice that @xmath100 so @xmath101 is unitary . other density operators can be derived in a similar way . again
note that the input and output are maximally entangled due to unitarity , so all four subregions are maximally mixed .
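the construction above can be sketched numerically for a two - qubit unitary with equal bipartitions ( a rough illustration , with haar sampling via the standard qr trick ; the 2 - bit roof on the rnyi-2 entropy is just the dimensional bound log2 of d_a d_c ) :

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d, rng):
    """Sample from the Haar measure via QR decomposition with phase correction."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d = 4
U = haar_unitary(d, rng)

# choi state |U> = (1/sqrt(d)) sum_i |i>_in (U|i>)_out, stored as a tensor
# psi[a, b, c, e]: a, b = input qubits (regions A, B); c, e = output qubits (C, D)
psi = (U.T / np.sqrt(d)).reshape(2, 2, 2, 2)

# unitarity forces every single subregion to be maximally mixed
rho_A = np.einsum('abcd,ebcd->ae', psi, psi.conj())
assert np.allclose(rho_A, np.eye(2) / 2)

# entanglement between AC and BD probes scrambling; compute renyi-2 in bits
rho_AC = np.einsum('abcd,ebfd->acef', psi, psi.conj()).reshape(4, 4)
s2 = -np.log2(np.real(np.trace(rho_AC @ rho_AC)))
assert 0 <= s2 <= 2 + 1e-9   # bounded by log2(d_A d_C) = 2 bits
```

for a typical haar sample , s2 sits close to but below the 2 - bit roof , in line with the constant residual entropy discussed later .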
we first employ tools from random matrix theory and in particular weingarten calculus to study haar integrals of the trace term in generalized entropies .
it is known that the haar - averaged value of each monomial of degree @xmath0 can be written in the following form @xcite : @xmath102 where @xmath103 is the symmetric group of @xmath0 symbols , and @xmath104 are called weingarten functions of @xmath105 .
here @xmath106 means @xmath45 is a partition of @xmath0 , @xmath107 is the corresponding character of @xmath103 , and @xmath108 is the corresponding schur function / polynomial .
notice that @xmath109 is simply the dimension of the irrep of @xmath105 associated with @xmath45 .
the weingarten function can be derived by various tools in representation theory , such as schur - weyl duality @xcite and jucys - murphy elements @xcite . therefore , we obtain the following general result : @xmath110 where @xmath111 is the number of disjoint cycles associated with @xmath112 , and @xmath113 is the 1-shift ( canonical full cycle ) . one can easily recover the @xmath114 results given in @xcite from eq .
( [ general ] ) as follows .
the weingarten functions for @xmath115 are @xmath116 there are 4 terms corresponding to 2 different weingarten functions ( see table [ tab:2 ] ) . plugging them into eq .
( [ general ] ) yields @xmath117 which confirms eq .
( 66 ) of @xcite .
a series of results of @xcite such as an @xmath118 gap between haar and maximal rnyi-2 entanglement entropy are obtained based on this formula . more generally , we have @xmath119 where @xmath120 is the product of canonical full cycles on each of the @xmath14 blocks with @xmath0 symbols .
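the t = 2 weingarten values can be checked by monte carlo ( a hedged numerical sketch , d = 2 , assuming the standard values wg(identity) = 1/(d^2 - 1) and wg(swap) = -1/(d(d^2 - 1)) ) :

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d, rng):
    """Sample from the Haar measure via QR decomposition with phase correction."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d, n = 2, 100_000
m1 = 0.0        # E |U11|^2 |U22|^2         -> Wg(identity) = 1/(d^2 - 1)
m2 = 0.0 + 0j   # E U11 U22 conj(U12 U21)   -> Wg(swap) = -1/(d(d^2 - 1))
for _ in range(n):
    U = haar_unitary(d, rng)
    m1 += abs(U[0, 0]) ** 2 * abs(U[1, 1]) ** 2
    m2 += U[0, 0] * U[1, 1] * np.conj(U[0, 1] * U[1, 0])
m1, m2 = m1 / n, m2 / n

assert abs(m1 - 1 / (d ** 2 - 1)) < 0.01            # expect 1/3
assert abs(m2 - (-1 / (d * (d ** 2 - 1)))) < 0.01   # expect -1/6
```

for d = 2 the first moment reduces to E x^2 = 1/3 with x = |U11|^2 uniform on [0 , 1] , which is a useful sanity check on the sampler itself .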
we now analyze the asymptotic behaviors of generalized entropies in the @xmath121 limit to provide a big picture .
later we shall introduce some explicit bounds that hold for general @xmath39 . to simplify the analysis
, we further require that @xmath122 here . that is , all partitions are equal .
this does not affect the main idea .
_ trace . _ we first introduce a series of useful combinatorics lemmas , which play critical roles in the behavior of generalized entanglement entropies ( in particular rnyi ) .
these results are known in the context of random matrix theory .
we refer to appendix a of @xcite ( cf . references therein ) for a good summary of related results . for intuition
, we still include our independent proof by induction for the key result lemma [ sumcycle ] in appendix [ app : lem ] .
[ sumcycle ] @xmath123 for all @xmath124 .
this result can be obtained by combining lemmas a.1 and a.4 of @xcite .
see appendix [ app : lem ] for our proof .
[ g ] let @xmath125 be the number of @xmath124 that saturates the inequality in lemma [ sumcycle ] .
then @xmath126 , i.e. , the @xmath0-th catalan number .
this result follows from lemmas a.4 and a.5 of @xcite .
such permutations lie on the geodesic from identity to @xmath127 .
it guarantees that the gap between haar and maximal rnyi entropies is independent of the system size @xmath38 , as will become clear shortly .
we note that catalan numbers frequently occur in counting problems .
the first few catalan numbers are @xmath128 .
some useful bounds on the catalan numbers are derived in appendix [ app : cat ] .
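both the bound in lemma [ sumcycle ] and the catalan count in lemma [ g ] can be verified by brute force for small orders ; the sketch below ( our own code ) enumerates all permutations and checks cycles(sigma) + cycles(sigma composed with the full cycle) against t + 1 :

```python
from itertools import permutations
from math import comb

def n_cycles(perm):
    """Number of disjoint cycles of a permutation given as a tuple p with p[i] = sigma(i)."""
    seen, count = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return count

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

for t in range(2, 7):
    shift = tuple((i + 1) % t for i in range(t))  # canonical full cycle
    saturating = 0
    for sigma in permutations(range(t)):
        total = n_cycles(sigma) + n_cycles(compose(sigma, shift))
        assert total <= t + 1                       # lemma [sumcycle]
        if total == t + 1:
            saturating += 1
    assert saturating == comb(2 * t, t) // (t + 1)  # t-th catalan number
```

the saturating permutations are exactly those on the geodesic from the identity to the full cycle , i.e. the non - crossing ones , which is why the catalan numbers appear .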
[ cyclecor ] @xmath129 for all @xmath130 .
the number of @xmath130 that saturates the inequality @xmath131 .
we also need the large @xmath39 asymptotic behaviors of the weingarten function : [ asym ] given @xmath124 with cycle decomposition @xmath132 .
let @xmath133 be the minimal number of factors needed to write @xmath112 as a product of transpositions .
the möbius function of @xmath112 is defined by @xmath134 where @xmath135 is the @xmath38-th catalan number ( defined in lemma [ g ] ) . in some literature , @xmath136 is written as @xmath137 , where @xmath138 actually denotes the length of the cycle . to avoid confusion , we stick to the number - of - transpositions notation .
then , in the large @xmath39 limit , the weingarten function has the asymptotic behavior @xmath139 [ wgcor ] we mainly need to distinguish the following two cases : * @xmath140 : @xmath141 and @xmath142 , thus @xmath143 ; * @xmath144 : @xmath145 , thus @xmath146 .
some bounds on the möbius function are derived in appendix [ app : moeb ] .
now we are equipped to derive the asymptotic behaviors of @xmath147 : [ thm : asym ] for equal partitions ( @xmath122 ) , in the large @xmath39 limit , @xmath148 starting from eq .
( [ general ] ) : @xmath149 here , the second line follows from the equal bipartition assumption , the third line follows from lemma [ asym ] and corollary [ wgcor ] , and the fourth line follows from lemmas [ sumcycle ] , [ g ] and some simple scaling analysis .
similarly , the asymptotic behavior of @xmath150 is @xmath151 by corollary [ cyclecor ] .
_ @xmath21 entropies . _ the calculations of @xmath21 entropies ( e.g. tsallis ) are straightforward , since the term @xmath152 appears linearly in the definition . by theorem [ thm : asym ] , for positive integers
@xmath153 : @xmath154 notice that the maximum value of @xmath155 for a @xmath39-dimensional state is ( when it is the maximally mixed state @xmath156 ) @xmath157 so we see a gap between the haar - averaged and maximum value of @xmath158 : @xmath159 which is vanishingly small in @xmath39 .
as briefly mentioned , @xmath21 entropies are not ideal for our study for several reasons , which we elaborate here : 1 .
the definitions of mutual information and tripartite information in terms of @xmath21 entropies might not make much sense , since they are not even additive on product states .
recall that all partitions are in the maximally mixed state @xmath160 .
however , the mutual information @xmath161 is not directly given by @xmath162 .
define @xmath163 then @xmath164 which is dominated by the irrelevant @xmath165 ( @xmath166 is vanishingly small ) .
2 . the definition of entropic scrambling complexities and several other arguments rely on comparing generalized entropies of different orders @xmath0 .
but we can see from eq .
( [ eq : smax ] ) that the maximally mixed values for @xmath21 entropies vary with @xmath0 , thus it also does not make much sense to do so .
3 . recall that the characteristic functions for @xmath167 entropies are concave ( tsallis is linear ) , in contrast to rnyi .
although theorem [ thm : asym ] enables us to directly calculate the haar - averaged @xmath167 entanglement entropies , we can not use jensen s inequality to lower bound their design - averaged values .
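the non - additivity in item 1 is easy to exhibit numerically ; a small sketch ( spectra only , q = 2 , our own illustration ) :

```python
import numpy as np

def tsallis(spectrum, q):
    """Tsallis entropy T_q = (1 - sum_i p_i^q) / (q - 1)."""
    p = np.asarray(spectrum, dtype=float)
    return float((1 - np.sum(p ** q)) / (q - 1))

q = 2
single = [0.5, 0.5]     # maximally mixed qubit: T_2 = 1/2
joint = [0.25] * 4      # maximally mixed two-qubit state: T_2 = 3/4

# not additive even on maximally mixed product states: 3/4 != 1/2 + 1/2
assert abs(tsallis(joint, q) - (tsallis(single, q) + tsallis(single, q))) > 0.2

# and the maximal value depends on the dimension (and on q), unlike renyi
assert abs(tsallis(single, 2) - 0.5) < 1e-12
assert abs(tsallis(joint, 2) - 0.75) < 1e-12
```

the maximally mixed value 1 - d^(1 - q) saturates as d grows instead of scaling with the number of qubits , which is exactly why comparing tsallis entropies across orders is problematic .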
therefore , we are not going to devote much attention to @xmath21 entropies in the rest of this work . _ rnyi entropy . _ now we analyze the more interesting case of rnyi , the @xmath19 limit .
compared to @xmath21 entropies , the calculations of rnyi are trickier because of the logarithm , which nevertheless leads to some nice features such as additivity and constant roof value .
we are able to obtain the following result : for equal partitions , in the large @xmath39 limit , @xmath168 by definition , @xmath169 where @xmath170 is the characteristic function for rényi . since @xmath171 when @xmath23 , @xmath172 is convex .
so @xmath173 by jensen s inequality .
we note that this jensen s lower bound due to convexity ( @xmath174 ) will be repeatedly used to establish bounds for rényi entropies . then according to eq .
( [ expsumcycle ] ) , @xmath175 notice that lemma [ sumcycle ] already guarantees that the leading correction term ( the second term ) is independent of @xmath39 asymptotically .
further notice that @xmath176 is asymptotically linear in @xmath0 , and satisfies @xmath177 so the second term does not grow with @xmath0 as well . in summary , in the large @xmath178 limit , @xmath179
so the gap between the haar - averaged and maximum value of @xmath180 ( the `` residual entropy '' ) is @xmath181 that is , the haar - averaged rényi entanglement entropies are separated from the maximum by at most a constant .
the result implies that a random unitary sampled from haar typically has almost maximal rnyi entanglement entropies for all partitions : by markov s inequality , for any @xmath182 , @xmath183 \leq \frac{o(1)}{\delta}.\ ] ] that is , the probability for @xmath184 to be bounded away from the maximum is vanishingly small .
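this typicality is easy to illustrate numerically in the closely related random - state picture ( sampling haar - random bipartite pure states rather than choi states ; a rough sketch , with dimensions chosen small for speed ) :

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # local dimension of each half of a bipartite pure state

def renyi2_half(d, rng):
    # Haar-random pure state on C^d (x) C^d via a normalized Gaussian matrix
    psi = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    psi /= np.linalg.norm(psi)
    rho_a = psi @ psi.conj().T           # reduced state of the first half
    purity = np.trace(rho_a @ rho_a).real
    return -np.log2(purity)              # Renyi-2 entanglement entropy

samples = [renyi2_half(d, rng) for _ in range(200)]
mean_s2 = np.mean(samples)
print(mean_s2, np.log2(d))
```

the sample average sits roughly @xmath-free log2( cat_2 ) = 1 bit below the maximum log2( d ) , consistent with the constant residual entropy above , and individual samples concentrate tightly around the average .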
now we consider the rényi mutual information and tripartite information based on the entanglement entropy results .
first , we can directly obtain @xmath185 which is equal to @xmath186 by additivity .
the result holds similarly for @xmath187 .
that is , the rényi mutual information between any two local regions of the input and output is vanishingly small compared to the system size . on the other hand , for any partition size , notice that @xmath188 second line : the whole choi state is pure so @xmath189 ; third line : the three partitions involved are maximally mixed ; fourth line : @xmath190 . under the equal partition assumption , @xmath191
this is consistent with the fact that all information of @xmath2 is kept in the whole output @xmath192 because of unitarity . as a result : @xmath193 by plugging in all relevant terms .
so the rényi tripartite information of haar scrambling is indeed close to maximal .
however , we note that the rényi-@xmath0 entropy is not subadditive except for @xmath194 , so @xmath195 is not necessarily nonnegative .
a weaker form of subadditivity of rényi entropies is given in appendix [ app : renyisub ] .
here we prove some explicit bounds on the haar - averaged trace , rényi entropies , and in particular the min entropy , in the nonasymptotic regime .
detailed definitions and lemmas can be found in the appendices .
these results provide further quantitative evidence in finite dimensions that all entanglement entropies are typically almost maximal , and also help us derive further results such as nontrivial moments .
[ [ trace - and - rnyi ] ] trace and rényi + + + + + + + + + + + + + + + [ lem : sumbound ] suppose @xmath196 , and @xmath197 .
then @xmath198 where @xmath199 $ ] .
define @xmath200 as the number of permutations in @xmath103 with genus @xmath201 , that is , @xmath202 note that @xmath203 is the catalan number @xmath204 by lemma [ g ] .
then @xmath205 according to lemma [ lem : ngpermutationt ] in appendix [ app : genus ] .
as a consequence of this inequality and the assumption @xmath197 , @xmath206\leq { \mathrm{cat}}_\alpha d_ad_b^{\alpha}\left[1 + \frac{2}{3}\sum_{\delta=1}^{\infty } q^{\delta}\right ] = { \mathrm{cat}}_\alpha d_ad_b^{\alpha}\left[1 + \frac{2q}{3(1-q)}\right ] \nonumber\\ = & h(q){\mathrm{cat}}_\alpha d_ad_b^{\alpha}\leq \frac{4^\alpha h(q)}{\sqrt{\pi}\alpha^{3/2}}d_ad_b^\alpha .
\end{aligned}\ ] ] here the last inequality follows from lemma [ lem : catalanbound ] , which sets an upper bound on the catalan numbers .
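lemma [ lem : catalanbound ] itself lives in the appendix ; consistent with the prefactor @xmath-free 4^α / ( √π α^(3/2) ) appearing in the displayed inequality , we take the bound to be cat_α ≤ 4^α / ( √π α^(3/2) ) , which is easy to check numerically :

```python
from math import comb, pi, sqrt

def catalan(n):
    """n-th Catalan number, binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def catalan_upper(n):
    # bound consistent with the prefactor above: Cat_n <= 4^n / (sqrt(pi) n^{3/2})
    return 4 ** n / (sqrt(pi) * n ** 1.5)

for n in range(1, 21):
    assert catalan(n) <= catalan_upper(n)
print([catalan(n) for n in range(1, 6)])  # [1, 2, 5, 14, 42]
```

since cat_n = 4^n / ( √π n^(3/2) ) ( 1 - o(1/n) ) asymptotically , the bound is tight up to the subleading correction .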
suppose @xmath207 .
then @xmath208}{\alpha-1 } , \label{finited2}\end{aligned}\ ] ] where @xmath209 .
@xmath210/2 } { \mathrm{wg}}(d,\sigma\gamma^{-1})=\sum_{\zeta\in s_\alpha}\left [ \sum _ { \gamma\in s_\alpha}d^{[\xi(\zeta\gamma\tau)+\xi(\zeta\gamma)+\xi(\gamma\tau)+\xi(\gamma)]/2 } { \mathrm{wg}}(d,\zeta)\right]\nonumber\\ \leq & \sum_{\zeta\in a_\alpha}\left [ \sum _ { \gamma\in s_\alpha}d^{[\xi(\zeta\gamma\tau)+\xi(\zeta\gamma)+\xi(\gamma\tau)+\xi(\gamma)]/2 } { \mathrm{wg}}(d,\zeta)\right]\leq \sum_{\zeta \in a_\alpha } \sum _ { \gamma\in s_\alpha } d^{\xi(\gamma\tau)+\xi(\gamma ) } { \mathrm{wg}}(d,\zeta)\nonumber\\ \leq & { \mathrm{cat}}_\alpha d^{\alpha+1}\left(1+\frac{2q}{3(1-q)}\right ) \sum_{\zeta \in a_\alpha } { \mathrm{wg}}(d,\zeta)\leq \frac{a_\alpha{\mathrm{cat}}_\alpha d}{8}\left(1+\frac{2q}{3(1-q)}\right)\left(7+\cosh\frac{2\alpha(\alpha-1)}{d}\right),\label{cosh}\end{aligned}\ ] ] where @xmath211 is the set of even permutations , i.e. the alternating group .
the first inequality follows from the fact that @xmath212 is negative when @xmath213 is an odd permutation ; the second inequality follows from the cauchy - schwarz inequality , note that @xmath214 ; the third inequality follows from lemma [ lem : sumbound ] , and the last inequality follows from lemma [ lem : wgsumbound ] . by plugging eq .
( [ cosh ] ) into eq .
( [ general ] ) , we immediately obtain the trace result eq .
( [ finited1 ] ) .
the rényi result eq .
( [ finited2 ] ) then follows from jensen s inequality .
we see that the leading terms indeed match the asymptotic results .
the overall observation is similar : the rényi entanglement entropies are very likely to be almost maximal , for large enough @xmath39 . for intuition , we compute @xmath215 for @xmath216 based on explicit formulas for weingarten functions , which also turn out to be useful later . when @xmath115 , @xmath116 when @xmath217 , @xmath218 therefore , @xmath219 [ [ min - entropy ] ] min entropy + + + + + + + + + + + the results so far only directly apply to positive integers @xmath0 .
the min entanglement entropy , which corresponds to the special limit @xmath9 , plays crucial roles in the discussions of scrambling complexities .
now we examine the min entanglement entropy in particular .
[ thm : aveminentropy ] @xmath220 where @xmath221 .
then we numerically obtain that @xmath223 now suppose @xmath224 .
let @xmath225 .
then @xmath226 so that @xmath227 consequently , @xmath228 \frac{4^\alpha d^{1-\alpha}}{\alpha^{3/2 } } \leq \frac { 4^\alpha d^{1-\alpha}}{\alpha^{3/2}},\ ] ] and thus @xmath229 the proof of eq .
is completed by observing that @xmath230 when @xmath231 and @xmath232 when @xmath233 .
eq . then follows from the convexity of @xmath234 .
we note that slightly lower @xmath235 can in principle be obtained by computing to higher orders in eq .
( [ 7 ] ) , which is nevertheless not important for the main idea .
as @xmath39 gets large , @xmath235 approaches the limit 4 , and @xmath236 approaches the limit 2 . as an implication of lemma [ lem : entropygap ] , theorem
[ thm : aveminentropy ] with @xmath39 replaced by @xmath237 also holds when the four subregions have different dimensions , as long as @xmath238 and @xmath239 .
the same remark also applies to theorem [ thm : aveminentropylog ] below .
now we state a key observation : the haar integral of @xmath6 , the defining term of @xmath0 entropies , only uses the first @xmath0 moments of the full haar measure .
equivalently , pseudorandom unitary @xmath0-designs are already indistinguishable from completely random by @xmath6 .
more explicitly , let @xmath240 be an @xmath0-design ensemble .
then we have @xmath241\ ] ] by definition .
therefore , all haar integrals of @xmath6 from the last part ( eqs .
( [ general2 ] ) and ( [ expsumcycle ] ) ) directly carry over to @xmath24-designs .
this observation implies that @xmath0 entropies can generically diagnose whether a scrambler is locally indistinguishable from random dynamics as powerful as @xmath0-designs .
the haar - averaged tsallis-@xmath0 entropies ( @xmath20 ) are exactly saturated by @xmath0-designs since tsallis entropy is linear in @xmath15 .
however , as mentioned , we cannot make analogous arguments for @xmath167 : the exact saturation requires @xmath242 and the haar integral to commute asymptotically , which is not known to hold ; and the above jensen s bound does not apply since @xmath243 becomes concave .
in contrast , the rényi entropies can be lower bounded because of convexity . due to the importance of rényi entropies
, we state the results as a theorem : @xmath244 \geq f^{(\alpha)}_r\left(\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}\right ) = \frac{1}{1-\alpha}\log\left(\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}\right).\ ] ] in particular , in the large @xmath39 limit , @xmath244 \geq \log d - o(1).\ ] ] @xmath244 = \mathbb{e}_{\nu_\alpha}\left[f_r^{(\alpha)}\left(\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}\right)\right]\geq f^{(\alpha)}_r(\mathbb{e}_{\nu_\alpha}[\mathrm{tr}\{\rho_{ac}^\alpha\}])=f^{(\alpha)}_r\left(\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}\right),\ ] ] where the inequality follows from jensen , and the last equality follows from the fact that @xmath240 is an @xmath0-design . by plugging in eq .
( [ expsumcycle11 ] ) , we directly obtain the asymptotic result . the theorem enables us to use the haar integrals of traces to lower bound the design - averaged rényi entanglement entropies in all dimensions .
asymptotically , the @xmath118 upper bound on the residual rényi-@xmath0 entropy still holds .
so we can conclude that tsallis and rényi-@xmath0 entanglement entropies are very likely to be almost maximal when sampling from unitary @xmath0-designs , as well as from haar .
the min entanglement entropy of designs will be explicitly analyzed later .
for rigor , we need to analyze the robustness / sensitivity of the entropic properties of designs .
indeed , one also expects ensembles that well approximate @xmath0-designs to have close - to - haar @xmath0 entropies . the error analysis will be useful for e.g. relating scrambling complexities to circuit depth .
here we relate typical approximation measures of designs to rényi entropies .
first , the canonical definition of designs by polynomials leads to the following direct measure of distance to designs : ensemble @xmath53 is an @xmath245-m - approximate unitary @xmath50-design ( `` m '' represents monomial ) if @xmath246\right| \leq \epsilon , ~~\forall q^{(k)},k\leq t.\ ] ] where @xmath247 is a monomial of degree @xmath248 both in the entries of @xmath75 and in their complex conjugates . note that the bound is on each monomial with unit constant factor , otherwise the difference can be arbitrarily amplified by including more terms or changing the constant .
let @xmath249 be an @xmath245-m - approximate unitary @xmath0-design
. then @xmath250&\leq\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}+d^\alpha\epsilon,\\ \mathbb{e}_{\omega_\alpha}\left[s_r^{(\alpha)}\left(\rho_{ac}\right)\right ] & \geq\frac{1}{1-\alpha}\log\left(\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}+d^{\alpha}\epsilon\right)\end{aligned}\ ] ] in the large @xmath39 limit , @xmath251 \geq \log{d } - o(1 ) - \frac{1}{(\alpha-1){\mathrm{cat}}_\alpha\ln 2}d^{2\alpha-1}\epsilon\left(1+o\left(d^{-1}\right)\right).\ ] ] @xmath252-\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}\leq\left|\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}-\mathbb{e}_{\omega_\alpha}[\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}]\right|\leq \frac{1}{d^\alpha}d^{2\alpha}\epsilon = d^{\alpha}\epsilon\ ] ] by triangle inequality , since @xmath253 is the sum of @xmath254 monomials according to eq . ( [ tr ] ) .
then @xmath255 \geq f_r^{(\alpha)}\left(\mathbb{e}_{\omega_\alpha}\left[\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}\right]\right ) \nonumber\\= & \frac{1}{1-\alpha}\log\mathbb{e}_{\omega_\alpha}[\mathrm{tr}\{\rho_{ac}^\alpha\}]\geq\frac{1}{1-\alpha}\log\left(\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}+d^{\alpha}\epsilon\right),\end{aligned}\ ] ] where the first inequality follows from jensen , and the second inequality follows from eq .
( [ eq : trerror ] ) and that @xmath234 is monotone decreasing .
we can then use the @xmath256 results to analyze the perturbation .
most importantly , in the large @xmath39 limit , @xmath257 = -\frac{1}{1-\alpha}\log\left(1+\frac{d^{\alpha}\epsilon}{\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}}\right ) \nonumber \\
\leq&\frac{1}{\alpha-1}\log\left(1+\frac{1}{{\mathrm{cat}}_\alpha}d^{2\alpha-1}\epsilon\left(1+o\left(d^{-1}\right)\right)\right ) \leq\frac{1}{(\alpha-1){\mathrm{cat}}_\alpha\ln 2}d^{2\alpha-1}\epsilon\left(1+o\left(d^{-1}\right)\right ) , \label{eq : renyierror}\end{aligned}\ ] ] where the first inequality follows from eq .
( [ expsumcycle11 ] ) and the following analysis , and the second inequality follows from a logarithm inequality @xmath258 when @xmath259 .
so we conclude that the error in @xmath184 at most scales as @xmath260 .
recall the other definition of exact designs by frame operators .
the deviation of an ensemble from a unitary @xmath50-design can also be quantified by a suitable norm of the deviation operator @xmath261 the operator norm and trace norm of @xmath262 are two common figures of merit .
the latter choice is more convenient for the current study : ensemble @xmath53 is a @xmath45-fo - approximate unitary @xmath50-design ( fo represents frame operator ) if @xmath263 we note that this definition is very similar to the quantum tensor product expander ( tpe ) @xcite .
tpes conventionally use the operator norm , and the deviation operators relate to each other by partial transposes ( like operators @xmath264 in eqs .
( [ x],[y ] ) ) . here
we can directly use the operator form of local density operators derived earlier to do an error analysis of fo - approximate designs .
let @xmath265 be a @xmath45-fo - approximate unitary @xmath0-design .
we define @xmath266 , and explicit write out @xmath267 : @xmath268 - \int{\rm d}u(u\otimes u^\dagger)^{\otimes\alpha},\\ \delta_\alpha(\omega_\alpha ) & = \mathbb{e}_{\omega_\alpha}[u^{\otimes\alpha}\otimes { u^\dagger}^{\otimes\alpha } ] - \int{\rm d}uu^{\otimes\alpha}\otimes { u^\dagger}^{\otimes\alpha}.\end{aligned}\ ] ] let @xmath249 be a @xmath45-fo - approximate unitary @xmath0-design
. then @xmath250&\leq\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}+\frac{1}{d^{\alpha}}\lambda,\\ \mathbb{e}_{\omega_\alpha}[s_r^{(\alpha)}\left(\rho_{ac}\right ) ] & \geq \frac{1}{1-\alpha}\log\left(\int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\}+\frac{1}{d^{\alpha}}\lambda\right).\end{aligned}\ ] ] @xmath269 - \int{\rm d}u\mathrm{tr}\left\{\rho_{ac}^\alpha\right\ } = \frac{1}{d^{\alpha}}\mathrm{tr}\{\tilde\delta_\alpha(\omega_\alpha ) y_\alpha\ } \leq \frac{1}{d^{\alpha}}{\left\lvert\tilde\delta_\alpha(\omega_\alpha)\right\rvert}_1{\left\lverty_\alpha\right\rvert}\nonumber \\= & \frac{1}{d^{\alpha}}{\left\lvert\tilde\delta_\alpha(\omega_\alpha)\right\rvert}_1 = \frac{1}{d^{\alpha}}{\left\lvert\delta_\alpha(\omega_\alpha)\right\rvert}_1\leq\frac{1}{d^{\alpha}}\lambda , \ ] ] where the first inequality follows from hölder s inequality , and the second line follows from the unitarity of @xmath270 .
we note that the essential difference between the different definitions of approximate designs is just in the norm used to measure the distance @xcite . letting @xmath271 recovers equivalent definitions of exact designs .
as motivated in the introduction , we expect that there is a hierarchy of scrambling complexities that lie in between information scramblers and haar - random unitaries , with different levels of the hierarchy indexed by the order of design needed to mimic the scrambler .
then our results of design - averaged rényi entanglement entropies imply that we can use the generic maximality of rényi-@xmath0 entanglement entropy as i ) a necessary indicator of the resemblance to an @xmath0-design , and ii ) a diagnostic of the scrambling complexity of an @xmath0-design , or @xmath0-scrambling .
for example , if a supposedly random unitary dynamics does not produce nearly maximal rényi-@xmath0 entanglement entropy in all valid partitions , as @xmath0-designs must do , then it is simply not close to any @xmath0-design .
the maximality of rényi entanglement entropies may not be relevant for probing designs at the global level , but it can probe the typical behaviors of entanglement between local regions which mimic designs . recall that rényi entropy is monotone nonincreasing in the order , and all orders share the same roof value .
so @xmath0-scrambling necessarily implies @xmath272-scrambling , for @xmath273 .
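the monotonicity used here can be verified directly on any explicit spectrum ; a minimal sketch with an arbitrary example spectrum :

```python
import numpy as np

def renyi(p, alpha):
    """Renyi entropy of a probability vector p, in bits."""
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log2(p))   # von Neumann limit
    if np.isinf(alpha):
        return -np.log2(np.max(p))       # min entropy limit
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

spectrum = [0.5, 0.3, 0.2]               # an arbitrary example spectrum
orders = [0.5, 1.0, 2.0, 3.0, np.inf]
values = [renyi(spectrum, a) for a in orders]
print(values)  # nonincreasing in alpha
```

since every order shares the same roof value log2( d ) ( attained on the uniform spectrum ) , near maximality at order @xmath0 forces near maximality at every lower order .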
here we consider @xmath275 more carefully , since the corresponding rényi entropies are defined only by taking a limit , and they respectively correspond to the weakest and strongest entropic scrambling complexities .
recall that @xmath16 gives the von neumann entropy , which probes information / page scrambling .
first notice that 1-designs do not necessarily scramble quantum information .
for example , the ensemble of tensor product of pauli operators acting on each qubit @xmath276 forms a 1-design @xcite .
however , this pauli ensemble clearly does not scramble since it can not create interaction / entanglement among qubits ( so local operators do not grow ) .
so any entanglement entropy will be zero .
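both claims , that the pauli ensemble twirls any state to maximally mixed ( the 1-design property ) and that it creates no entanglement from a product state , are easy to verify numerically ; a minimal sketch for qubits :

```python
import numpy as np

# single-qubit Paulis
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

# 1-design property: the Pauli twirl of any single-qubit state is maximally mixed
rng = np.random.default_rng(1)
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = m @ m.conj().T
rho /= np.trace(rho).real
twirled = sum(p @ rho @ p.conj().T for p in paulis) / 4

# but a tensor product of Paulis never entangles a product state
psi = np.kron(np.array([1, 0], dtype=complex), np.array([1, 0], dtype=complex))
u = np.kron(X, Z)                          # e.g. X (x) Z
out = (u @ psi).reshape(2, 2)
rho_a = np.einsum('ij,kj->ik', out, out.conj())  # reduced state of qubit 1
purity = np.trace(rho_a @ rho_a).real      # purity 1 -> zero entanglement entropy
print(np.round(twirled, 6), purity)
```

the reduced state of the output remains pure , so every entanglement entropy of every order vanishes , even though the ensemble reproduces the haar first moment exactly .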
on the other hand , 2-designs are sufficient to maximize rényi-2 entropies , which lower bound the corresponding von neumann entropies .
it is shown in @xcite that there actually exists an arbitrarily large gap between them .
so one can also say that information scrambling is strictly weaker than 2-scrambling .
therefore , the minimal information scrambling corresponds to a scrambling complexity in between 1- and 2-designs .
another special case is @xmath9 , which leads to the min entropy @xmath277 .
almost maximal min entanglement entropy directly indicates that the spectrum of the reduced density operators ( the entanglement spectrum ) is almost uniform .
as the example of @xmath278 ( in section [ entropies ] ) shows , the min entropy is extremely sensitive to even one single peak in the entanglement spectrum , since it only cares about the largest eigenvalue .
so it is in a sense the `` harshest '' entropy measure .
the min entanglement entropy is then the strongest entropic diagnostic of scrambling : if it is always almost maximal , then the entanglement spectrum cannot accommodate any peak , and we simply cannot distinguish the scrambler from completely random by looking at the spectrum of any reduced density matrix alone .
we call this situation max scrambling .
we shall arrive at formal arguments that designs are distinguishable from haar only up to a finite order ( in both the random unitary and random state settings ) in finite dimensions , by studying the min entanglement entropy .
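the harshness of the min entropy is easy to see on a toy spectrum : a single eigenvalue peak pins the min entropy at one bit no matter how large the dimension , while the von neumann entropy stays large . a minimal sketch :

```python
import numpy as np

d = 256
# entanglement spectrum with a single dominant peak; the rest is uniform
spectrum = np.array([0.5] + [0.5 / (d - 1)] * (d - 1))

s_min = -np.log2(spectrum.max())              # min entropy: 1 bit, regardless of d
s_vn = -np.sum(spectrum * np.log2(spectrum))  # von Neumann entropy: still large

print(s_min, s_vn, np.log2(d))
```

here the von neumann entropy is close to ( log2 d ) / 2 + 1/2 while the min entropy stays at exactly one bit , so only the min entropy certifies the absence of such a peak .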
given the definition of max scrambling by the min entropy , one may wonder if the full haar measure is required to achieve this strongest form of entropic scrambling . here
we answer this question in the negative : for a given dimension , only a finite number of moments ( which scales logarithmically in the dimension ) is needed ; we call these the nontrivial moments . from the proof of theorem [ thm : aveminentropy
] , we can directly see that the same result holds if the average is taken over an @xmath0-design with @xmath279 .
the conclusion is clear from the proof when @xmath224 .
when @xmath280 , @xmath281 , so the conclusion also follows from the proof .
the conclusion is obvious when @xmath282 .
it remains to consider the case @xmath283 , which means @xmath284 .
therefore , eq . applies , so that @xmath285 therefore , eqs . and
still hold if the average is taken over an @xmath0-design with @xmath279 .
we can further show that , in fact , a @xmath286-design is sufficient to achieve nearly maximal min entropy : [ thm : aveminentropylog ] let @xmath240 be a unitary @xmath0-design , where @xmath287 and @xmath288 ; then @xmath289 in particular , if @xmath290 , then @xmath291 if @xmath292 , then one can show that eq
. holds as in the proof of theorem [ thm : aveminentropy ] even without additional restrictions .
therefore , @xmath293\right)^{1/\alpha}\leq 4 \bigl(\frac{d}{\alpha^{3/2}}\bigr)^{1/\alpha}\leq 4 d^{1/\alpha}\leq 4 d^{a/\log d}=2^{2+a},\ ] ] which confirms eq . and implies eq . .
now suppose @xmath294 . if @xmath295 , then an @xmath0-design is also a @xmath296-design , so and hold according to theorem [ thm : aveminentropy ] .
otherwise , @xmath297 , and the two equations follow from and with @xmath298 .
the same conclusion also holds when @xmath299 .
a further question then arises as to whether the entropic scrambling complexities form a strict hierarchy , i.e. , whether different complexities are gapped . of course , by the nontrivial moments result just presented , we already know that @xmath300 complexities are not well separated .
a straightforward definition of a separation between @xmath0- and @xmath272-scrambling ( @xmath301 ) is the following : there exist scramblers such that the associated rényi-@xmath272 entropies are always near maximal , but some rényi-@xmath0 entropies can be bounded away from maximal .
such separations are in principle possible according to the properties of rényi entropies .
we tried several approaches to establish general separations in the choi model , with limited success .
in particular , we attempted to generalize the partially scrambling unitary model @xcite , and attempted to extend the gap results in the random state setting ( next section ) to random unitaries .
the partially scrambling unitary model is used in @xcite to prove a large separation between von neumann and rényi-2 tripartite information in the choi state setting . by contrast , as we analyze in appendix [ app : partial ] , this model is not likely to provide similar separations among generalized entropies .
the analysis nevertheless reveals a rather interesting sensitivity - robustness tradeoff between rényi and @xmath21 entropies .
however , we are able to establish gaps using projective designs in the random state setting ( see next section ) , but the results cannot be directly generalized to unitary designs at the moment . the reasons will be explained in more detail later .
we leave the gap problem in the choi model open . via certain models of random dynamics , we can relate entropic scrambling complexities to traditional complexities such as time and circuit depth .
for example , it is shown in @xcite that @xmath302)$ ] haar - random local gates are sufficient to form an @xmath245-approximate @xmath0-design of @xmath38 qubits . by the error analysis result , one can easily see that the minimum number of gates / circuit depth needed to maximize rényi-@xmath0 entropies scales polynomially in @xmath0 and @xmath38 : let @xmath303 so that @xmath304 , then the number of gates scales as @xmath305 , but meanwhile the error term in eq .
( [ eq : renyierror ] ) is vanishingly small , which indicates that such a circuit is a good @xmath0-scrambler .
that is , the scrambling complexity and the random circuit complexity ( minimum number of gates ) can be polynomially related .
we note that the @xmath305 scaling can be improved to @xmath306 for @xmath307 by a recent result @xcite . from a more physical point of view , one may be interested in the minimum time it takes for a physical scrambler , say , a hamiltonian evolution of a quantum many - body system , to achieve certain scrambling complexities , which generalizes the fast scrambling conjecture .
@xcite introduces the notion of design hamiltonian , and conjectures that the shortest time for local time - independent hamiltonians to achieve approximate @xmath0-designs scales roughly as @xmath308 with unknown dependence in @xmath245 .
note that the error dependence will be crucial in translating it to the language of scrambling complexities : an @xmath309 dependence is sufficient to dominate @xmath310 .
suppose we conjecture that the minimum @xmath0-scrambling time scales roughly as @xmath308 as well .
then based on the nontrivial moment result : the minimum time for a physical system to max - scramble scales as @xmath311 .
the max scrambling conjecture contrasts with the minimum page scrambling time of @xmath312 .
the previous section focused on choi states , which encode the intrinsic properties of the corresponding unitary channels .
we showed that random unitaries of different degrees are associated with near - maximal generalized entanglement entropies of the corresponding orders . here
we consider a more straightforward problem , the entanglement entropy of random states , to strengthen the connections between generalized entropies and designs . in this setting , we obtain analogous main results : designs maximize the corresponding rényi entanglement entropies , and there are logarithmically many nontrivial moments .
we are also able to obtain solid results on the gap problem .
we shall follow similar steps as in the random unitary setting but be more concise .
the original problem and related results have long played central roles in the study of e.g. black holes and quantum information theory .
the famous page s theorem roughly says that the average entropy of small subsystems of a bipartite state is nearly maximal ; or in other words , a randomly sampled state is very likely to be maximally entangled .
similar observations were even earlier made by lubkin @xcite and lloyd and pagels @xcite .
in particular , @xcite derived the distribution of the local eigenvalues of a random state , which may imply this result . also see e.g. @xcite for further studies of this phenomenon .
consider a bipartite system with hilbert space @xmath313 , where @xmath314 have dimensions @xmath315 , respectively , assuming @xmath197 .
we use @xmath316 to denote the average over states drawn uniformly from the unit sphere in @xmath317 . the page conjecture @xcite , proved in @xcite ,
states that the average entanglement entropy of each reduced state is given by @xmath318 the gap between the average entropy and the maximum @xmath319 is bounded by the dimension - independent constant @xmath320 . in this section
we shall strengthen this result by proving that the gap between the average rényi @xmath0-entropy of each reduced state and the maximum @xmath319 is also bounded by a constant that is independent of the dimensions @xmath315 and the parameter @xmath0 .
suppose @xmath321 is drawn uniformly from the unit sphere in @xmath317 . to establish a lower bound for the average rényi @xmath0-entropy
, we shall derive an analytical formula for the average of the @xmath0-moment @xmath322 , where @xmath323 is the reduced state of @xmath321 for system @xmath2 .
expand @xmath324 in the standard product basis @xmath325 , where @xmath326 label the basis elements for @xmath327 , and @xmath328 label the basis elements for @xmath329
. then @xmath330 the general result of the haar - averaged trace is : @xmath331}}\sum_{\sigma\in s_\alpha}d_a^{\xi(\sigma\tau)}d_b^{\xi(\sigma)},\ ] ] where @xmath332}=\binom{d_ad_b+\alpha-1}{\alpha}=\frac{d_ad_b(d_ad_b+1)\cdots ( d_ad_b+\alpha-1)}{\alpha ! } \ ] ] is the dimension of the symmetric subspace of @xmath333 . by eq .
( [ eq : rhoa ] ) , @xmath334,\ ] ] where @xmath335 therefore , @xmath336}}{\operatorname{tr}}\{p_{[\alpha]}q_\alpha\},\ ] ] where @xmath337}$ ] is the projector onto the symmetric subspace of @xmath333 , and @xmath338}$ ] is its dimension .
recall that the symmetric group @xmath103 acts on @xmath333 by permuting the tensor factors , and @xmath337}$ ] can be expressed as follows @xmath339}=\frac{1}{\alpha ! } \sum_{\sigma\in s_\alpha } u_\sigma,\ ] ] where @xmath340 is the unitary operator associated with the permutation @xmath112 .
simple analysis shows that @xmath341 consequently , @xmath342}}\sum_{\sigma\in s_\alpha}d_a^{\xi(\sigma\tau)}d_b^{\xi(\sigma)}.\ ] ] we note that similar results have been derived and rederived several times @xcite .
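the permutation sum can be evaluated directly for small @xmath0 ; a minimal sketch , assuming ξ(σ) counts the cycles of σ , τ is the canonical full α-cycle , and the normalization α! d_[α] equals the rising factorial d_a d_b ( d_a d_b + 1 ) ⋯ ( d_a d_b + α - 1 ) :

```python
from itertools import permutations
from math import prod

def num_cycles(perm):
    """Number of cycles of a permutation given as a tuple sigma(i) = perm[i]."""
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def avg_trace_moment(da, db, alpha):
    """Haar average of tr(rho_A^alpha) via the permutation sum."""
    tau = tuple((i + 1) % alpha for i in range(alpha))  # full alpha-cycle
    # summing over all sigma makes the composition order of sigma and tau irrelevant
    total = sum(
        da ** num_cycles(tuple(sigma[tau[i]] for i in range(alpha)))
        * db ** num_cycles(sigma)
        for sigma in permutations(range(alpha))
    )
    # alpha! * d_[alpha] = d (d+1) ... (d+alpha-1) with d = da * db
    return total / prod(da * db + k for k in range(alpha))

print(avg_trace_moment(2, 2, 2), avg_trace_moment(2, 3, 2))
```

for @xmath34 this reproduces lubkin s formula ( d_a + d_b ) / ( d_a d_b + 1 ) discussed below .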
compared to known approaches , our approach seems simpler ; in addition , it admits easy generalization to states drawn from ( approximate ) complex projective designs , which is not obvious for other approaches we are aware of . to get an intuitive understanding of eq . , it is worth taking a closer look at several concrete examples .
when @xmath34 , we reproduce a formula derived by lubkin @xcite : @xmath343 from this equation we can derive a nearly - tight lower bound for the average rényi 2-entropy , @xmath344 when @xmath345 , the averages of the first few moments are given by @xmath346 which imply that @xmath347 note that the gap of each rényi entropy from the maximum is tied with the corresponding catalan number .
this is not a coincidence .
when @xmath345 is large ( @xmath348 ) , the formula for @xmath349 can be simplified : @xmath350 @xmath351}}\sum_{\sigma\in s_\alpha}d_a^{\xi(\sigma\tau)+\xi(\sigma)}=\frac{{\mathrm{cat}}_\alpha d_a^{\alpha+1 } + o(d_a^{\alpha-1})}{d_a^{2\alpha}+o(d_a^{2\alpha-2})}={\mathrm{cat}}_\alpha d_a^{-\alpha+1}+o\bigl(d_a^{-(\alpha+1)}\bigr).\ ] ] consequently , @xmath352 this equation suggests that the gap between the average rényi @xmath0-entropy and the maximum @xmath319 is bounded by a constant when @xmath345 is sufficiently large .
it turns out that this conclusion actually holds without restrictions on @xmath315 : [ lem : alphamoment ] let @xmath353 $ ] .
then @xmath354 according to lemma [ lem : sumbound ] , @xmath355}}\sum_{\sigma\in s_\alpha}d_a^{\xi(\sigma\tau)}d_b^{\xi(\sigma)}\leq \frac{h(q){\mathrm{cat}}_\alpha d_ad_b^{\alpha}}{d_a^\alpha d_b^\alpha } \leq h(q){\mathrm{cat}}_\alpha d_a^{1-\alpha } \leq \frac{4^\alpha h(q)}{\sqrt{\pi}\alpha^{3/2}}d_a^{1-\alpha},\ ] ] which in turn implies that @xmath356 [ thm : are ] @xmath357 recall that rényi @xmath0-entropy is nonincreasing with @xmath0 , so to establish the theorem , it suffices to prove the lower bound for the min entropy , which corresponds to the limit @xmath358 .
then @xmath359 here the second inequality follows from lemma [ lem : avenormpower ] below .
[ lem : avenormpower ] @xmath360 the conclusion is obvious when @xmath361 .
when @xmath362 , note that @xmath363 is nondecreasing with @xmath0 for @xmath364 , so it suffices to prove the lemma in the case @xmath365 .
then @xmath366 and @xmath367 .
according to lemma [ lem : alphamoment ] , @xmath3684^\alpha d_a^{-\alpha}\nonumber\\ & \leq \frac{3-q}{3(1-q)\sqrt{32\pi q}}4^\alpha d_a^{-\alpha } \leq 4^\alpha d_a^{-\alpha},\end{aligned}\ ] ] which implies the lemma . here
the last inequality follows from the observation that @xmath369<1 $ ] for @xmath370 .
this fact can be verified immediately if we notice that the derivative @xmath371 has a unique zero at @xmath372 in the interval @xmath373 and that @xmath374 is monotonically decreasing for @xmath375 and monotonically increasing for @xmath376 . when @xmath377 , theorem [ thm : are ] can be improved as follows : [ thm : are2 ] for all @xmath378 , @xmath379 where @xmath380 if @xmath317 is real and @xmath381 if @xmath317 is complex .
this theorem follows from lemma [ lem : averootnorm ] below .
we believe that the constant @xmath382 in theorem [ thm : are2 ] and lemma [ lem : averootnorm ] can be set to one in both real and complex cases .
we note that hayden and winter had a similar result @xcite , but they were not as explicit about the constant or the dimensions for which their result applies .
the following lemma is proved in appendix [ app : lem ] .
[ lem : averootnorm ] @xmath383 where @xmath380 if @xmath317 is real and @xmath381 if @xmath317 is complex .
recall page's theorem , which states that haar - averaged von neumann entanglement entropies of small subsystems are nearly maximal .
this theorem is not tight from the perspective of either entropy or randomness : the haar - averaged higher rényi entanglement entropies are generically close to maximal as well , and complete randomness is overkill for maximizing the entanglement entropies .
our results imply that page's theorem can be strengthened from both sides .
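as a quick numerical illustration of page's theorem ( our own sketch , not part of the derivation ; dimensions are chosen for illustration only ) , one can sample haar - random bipartite pure states from normalized complex gaussian matrices and check that the rényi-2 entanglement entropy of a small subsystem is nearly maximal :

```python
import numpy as np

# Sketch: Haar-random bipartite pure states via normalized Gaussian matrices.
# The gap of the Renyi-2 entanglement entropy from log2(d_a) should be small
# when d_b >> d_a.
rng = np.random.default_rng(0)
d_a, d_b, trials = 4, 64, 200

gaps = []
for _ in range(trials):
    g = rng.normal(size=(d_a, d_b)) + 1j * rng.normal(size=(d_a, d_b))
    g /= np.linalg.norm(g)                       # normalized pure state
    rho_a = g @ g.conj().T                       # reduced state of subsystem A
    purity = np.real(np.trace(rho_a @ rho_a))    # tr(rho_A^2)
    gaps.append(np.log2(d_a) + np.log2(purity))  # log2(d_a) - S_2(rho_A)

print(f"average gap from maximal entropy: {np.mean(gaps):.3f} bits")
```

since the haar - averaged purity is ( d_a + d_b ) / ( d_a d_b + 1 ) , the average gap is of order d_a / d_b bits for these dimensions .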
as in the random unitary setting , since @xmath384 only uses @xmath0 moments of the haar measure , all bounds on @xmath384 and @xmath385 from the last part still hold if the average is over @xmath0-designs .
so we have a tight page's theorem for each order @xmath0 : let @xmath240 be an @xmath0-design .
then @xmath386 in particular , when @xmath345 is large , @xmath387 here we show that the average rényi @xmath0-entanglement entropy of quantum states drawn from an approximate @xmath0-design with sufficient accuracy is also close to the maximum . a concrete example is provided based on typical clifford orbits . given an ensemble @xmath53 of quantum states , define @xmath388}\operatorname{\mathbb{e}}_\nu ( { { { { |\psi\rangle}}\!{{\langle\psi|}}}})^{\otimes t}-p_{[\alpha]}.\ ] ] the ensemble @xmath53 is an @xmath245-approximate @xmath0-design if @xmath389 [ lem : appdesignentropy ] let @xmath265 be an @xmath245-approximate @xmath0-design with @xmath390 .
then @xmath391}},\\ \operatorname{\mathbb{e}}_{\omega_\alpha } s_r^{(\alpha)}(\rho_a)&\geq \frac{1}{1-\alpha}\log\left ( \operatorname{\mathbb{e}}{\operatorname{tr}}\{\rho_a^\alpha\}+\frac{\epsilon}{d_{[\alpha]}}\right).\end{aligned}\ ] ] according to the same argument that leads to eq . , @xmath392}}{\operatorname{tr}}\left\{(p_{[\alpha]}+\delta_\alpha(\nu))q_\alpha\right\},\nonumber\\ & = \operatorname{\mathbb{e}}{\operatorname{tr}}\{\rho_a^\alpha\}+\frac{1}{d_{[\alpha]}}{\operatorname{tr}}\left\{\delta_\alpha(\nu)q_\alpha\right\}\leq \operatorname{\mathbb{e}}{\operatorname{tr}}\{\rho_a^\alpha\}+\frac{1}{d_{[\alpha]}}\|\delta_\alpha(\nu)\|_1 \|q_\alpha\|\nonumber\\ & \leq \operatorname{\mathbb{e}}{\operatorname{tr}}\{\rho_a^\alpha\}+\frac{\epsilon}{d_{[\alpha]}},\end{aligned}\ ] ] where the last inequality follows from the assumption @xmath393 and the fact that @xmath394 , since @xmath395 is unitary .
we see that @xmath396 as long as @xmath397}=o(d_a^{1-\alpha})$ ] .
this leads to an entropic notion of designs : an ensemble of states @xmath53 has @xmath0-design complexity if the average rényi-@xmath0 entanglement entropies are nearly maximal . for convenience , we call these page complexities for the moment . as an application of lemma [ lem : appdesignentropy ] , let us consider the average rényi entanglement entropy of clifford orbits for a multiqubit system . for simplicity
we assume @xmath398 , so that @xmath399 recall that the clifford group is a unitary 3-design @xcite , so any orbit of the clifford group forms a 3-design .
consequently , the average rényi @xmath0-entanglement entropy for @xmath400 of any clifford orbit is close to the maximum , @xmath401 for any @xmath402 , where @xmath403 denotes the clifford orbit generated from @xmath402 .
however , the clifford group is not a 4-design , and clifford orbits are in general not 4-designs @xcite .
if @xmath402 is a stabilizer state , then @xmath404 according to @xcite . in this case the bounds for the fourth moment and rényi 4-entropy provided by lemma [ lem : appdesignentropy ] are not very informative ; note that @xmath405 and @xmath406}\approx ( d_ad_b)^4/24=d_a^8/24 $ ] . for a typical clifford orbit , by contrast , @xmath407 is much smaller @xcite .
now lemma [ lem : appdesignentropy ] implies that @xmath408}}\approx 14 d_a^{-3 } + 24d_a^{-6}\approx \operatorname{\mathbb{e}}{\operatorname{tr}}\{\rho_a^4\}.\end{aligned}\ ] ] therefore , eq . also holds for typical clifford orbits when @xmath409 .
again , the min entanglement entropy corresponds to the strongest entropic design complexity : if the average min entanglement entropies are always close to maximal , then we simply cannot distinguish the ensemble from a completely random one by the entanglement spectrum .
the following theorem says that designs of order @xmath410 have an almost maximally uniform entanglement spectrum : [ thm : moment ] suppose @xmath324 is drawn from an @xmath0-design in a bipartite hilbert space @xmath313 of dimension @xmath411 , where @xmath412 with @xmath413 .
let @xmath323 be the reduced state of subsystem a. then @xmath414 in particular , @xmath415 and @xmath416 if @xmath417 . according to lemma [ lem : alphamoment ] , @xmath418 where the first inequality follows from the fact that @xmath419 and @xmath420 given that @xmath421 by assumption .
consequently @xmath422^{1/\alpha}\leq d_a^{1/\alpha}\frac{4}{d_a}\leq d_a^{a/ \log d_a}\frac{4}{d_a}= \frac { 2^{2+a}}{d_a},\\ \operatorname{\mathbb{e}}s_{\min}(\rho_a)&\geq -\log \operatorname{\mathbb{e}}\|\rho_a\|\geq -\log \frac{2^{2+a}}{d_a}\geq \log d_a-2-a.\end{aligned}\ ] ] in the case @xmath298 and @xmath417 , the inequality @xmath423 holds automatically ; therefore , @xmath415 and @xmath416 . in this random state setting
, we are able to establish a clear gap between the second and @xmath0-th entropic design complexities for all @xmath424 . in particular , we shall construct a family of 2-designs , the gap of whose average rényi-@xmath0 entanglement entropies from the maximum is unbounded for @xmath425 .
our construction is based on the orbits of a special subgroup of the unitary group on @xmath313 . as mentioned before ,
any orbit of a unitary @xmath426-design is a complex projective @xmath426-design .
interestingly , our construction of projective 2-designs does not require unitary @xmath426-designs .
in this way , we also provide a novel recipe for constructing projective 2-designs , which is particularly useful when the dimension is not a prime power .
consider the group @xmath427 , where @xmath428 are the unitary groups on @xmath314 , respectively .
the action of this group is irreducible , but the group does not form a 2-design .
simple analysis shows that @xmath429 has four irreducible components on @xmath430 , with dimensions @xmath431 , respectively .
the symmetric subspace of @xmath430 contains two irreducible components with dimensions @xmath432 and @xmath433 . by a similar continuity argument
as employed in @xcite , there must exist an orbit of @xmath429 that forms a 2-design .
let @xmath434 be a fiducial vector of a 2-design with reduced state @xmath323 for subsystem a. then @xmath435 is necessarily equal to the average over the uniform ensemble , that is , @xmath436 it turns out that this condition is also sufficient . to see this , note that the condition must be invariant under local unitary transformations and thus only depends on a symmetric polynomial of the eigenvalues of @xmath323 of degree 2 , which is necessarily a function of @xmath435 given the normalization condition @xmath437 .
it is worth pointing out that the same conclusion also holds if @xmath428 are replaced by groups that form unitary 2-designs on @xmath314 , respectively .
next we study rényi entanglement entropies of 2-designs constructed from orbits of @xmath429 . note that eq . holds if @xmath323 has the following spectrum @xmath438 if @xmath439 , then @xmath440 therefore , @xmath441 , and the gap of all rényi entropies from the maximum is bounded .
we are mostly interested in the case in which the ratio @xmath442 is bounded by a constant , say @xmath443 .
then @xmath444 consequently , @xmath445 as @xmath446 increases , the gap of @xmath447 from the maximum is unbounded whenever @xmath47 .
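to make this kind of separation concrete , consider the following numerical sketch ( our own illustration , with a hypothetical spiked spectrum and d_a = d_b = d ) : a spectrum can match the haar - average purity , so that the rényi-2 entropy is nearly maximal , while its largest eigenvalue forces a min - entropy gap that grows with the dimension :

```python
import numpy as np

def renyi(p, alpha):
    p = np.asarray(p, float)
    p = p[p > 0]
    if np.isinf(alpha):
        return -np.log2(p.max())          # min entropy
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

for d in [16, 64, 256, 1024]:
    t = 2 * d / (d ** 2 + 1)              # Haar-average purity for d_A = d_B = d
    # Spiked spectrum: one eigenvalue lam, the remaining d-1 equal,
    # chosen so that the purity equals t exactly (quadratic in lam).
    a, b, c = d / (d - 1), -2 / (d - 1), 1 / (d - 1) - t
    lam = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    spec = np.full(d, (1 - lam) / (d - 1))
    spec[0] = lam
    gap2 = np.log2(d) - renyi(spec, 2)          # stays below about 1 bit
    gap_min = np.log2(d) - renyi(spec, np.inf)  # grows roughly like log2(d)/2
    print(d, round(gap2, 3), round(gap_min, 3))
```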
we note that such a construction cannot be directly generalized to establish gaps in the choi setting .
as mentioned , any orbit of a unitary @xmath50-design is a complex projective @xmath50-design , but to construct a projective @xmath50-design , a unitary @xmath50-design is not required . here
the complex projective 2-design is constructed using a group that is a tensor product .
however , such a group can never be a unitary 2-design .
also , in the choi setting , four parties are involved , and it is not easy to ensure unitarity using the idea for constructing projective designs .
new approaches are necessary for such a construction .
this paper explores scrambling by establishing connections between entanglement entropies and degree of randomness .
we find that there is a hierarchy of degrees of randomness that lie between the scrambling of information ( page scrambling ) and complete randomness ( haar randomness ) .
our results directly relate the order of generalized entropies and designs : we show that @xmath0-designs yield almost maximal rényi @xmath0-entropies .
moreover , we show via the min entanglement entropy that designs of order logarithmic in the dimension already exhibit an almost maximally uniform entanglement spectrum , and are indistinguishable from haar by spectral properties alone . there are several open problems , especially in the choi setting for unitary designs .
for example , we are not yet able to give a construction that opens a gap between scrambling at order @xmath0 and scrambling at higher order , when @xmath448 in the choi setting .
that is , the gaps between the @xmath449 scrambling complexities have not been proven .
although we exhibit such gaps for projective 2-designs in the random state setting , similar techniques cannot be directly generalized to unitaries , as explained earlier .
moreover , due to the lack of subadditivity , the negative tripartite information defined in terms of rényi entropies is not necessarily nonnegative .
it is worth considering related problems , such as when it can be negative , and , further , the meaning of such derived quantities .
this work focuses mostly on the intrinsic mathematical properties of unitary channels and states , instead of on physical dynamics .
the relation of our results to dynamics such as generalized fast scrambling remains speculative .
it would be interesting to further explore the dynamics of e.g. black holes or quantum many - body systems , and aspects of fast scrambling , under our framework .
we refer to @xcite for a set of interesting results along this direction .
we note that a recent paper @xcite concerns a similar gap between maximum entropy and maximum complexity of quantum dynamics . here
the complexity roughly means the computational / gate complexity , which is rather difficult to rigorously analyze .
it would also be interesting to further study the connections to their framework .
we thank dawei ding , alan guth , aram harrow , guang hao low , yoshifumi nakata , kevin thompson , and quntao zhuang for discussions related to this work .
special thanks go to aram harrow for bringing several helpful references to our attention .
zwl and sl are supported by afosr and aro .
eyz is supported by the national science foundation under grant contract number ccf-1525130 .
hz is supported by the excellence initiative of the german federal and state governments ( zuk 81 ) and the dfg .
research at mit ctp is supported by doe .
first , we present a series of inequalities relating rényi entropies of different orders . it is well known that the rényi entropy is monotonically nonincreasing with the parameter @xmath0 , that is , @xmath450 whenever @xmath451 . on the other hand , @xmath452
can also be used to construct a lower bound for @xmath453 when @xmath454 as shown below , @xmath455 in particular , this equation yields a lower bound for the min entropy @xmath456 when @xmath457 , we have @xmath458 so the difference between @xmath452 and @xmath459 is less than 1 . when @xmath460 , we have @xmath461 , so the difference between @xmath462 and @xmath452 is upper bounded by @xmath463 .
next we derive another lower bound for @xmath453 in terms of @xmath452 and the min entropy in the case @xmath454 .
the following equation @xmath464 implies that @xmath465.\ ] ] in particular , any rényi @xmath466-entropy with @xmath467 is lower bounded by a convex combination of the rényi @xmath426-entropy and the min entropy , @xmath468.\ ] ]
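these order relations are straightforward to verify numerically ; the following sketch ( our own illustration , using base-2 logarithms ) evaluates rényi entropies of a fixed spectrum and checks monotonicity in @xmath0 :

```python
import numpy as np

def renyi(p, alpha):
    """Renyi alpha-entropy (base 2) of a probability vector p."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log2(p))    # Shannon / von Neumann limit
    if np.isinf(alpha):
        return -np.log2(p.max())          # min entropy
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

spec = np.array([0.4, 0.3, 0.2, 0.1])     # a hypothetical spectrum
orders = [0.5, 1, 2, 3, 10, np.inf]
vals = [renyi(spec, a) for a in orders]
# Nonincreasing in alpha; every order is lower bounded by the min entropy.
assert all(x >= y - 1e-12 for x, y in zip(vals, vals[1:]))
assert all(v >= vals[-1] - 1e-12 for v in vals)
```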
it is known that the rényi @xmath0-entropy is not subadditive except for the special case @xmath194 . the following lemma yields a weaker form of subadditivity : [ lem : entropygap ]
let @xmath469 be any bipartite state on the product hilbert space @xmath470 with dimension @xmath411 .
let @xmath471 be the two reduced states .
then @xmath472 the first inequality in lemma [ lem : entropygap ] means that the spectrum of @xmath469 majorizes that of @xmath473 .
the second and third inequalities are immediate consequences of the first one , and they are equivalent to each other .
the second one can be seen as a weaker form of subadditivity , while the third one means that the gap of the rényi entropy of a joint state from the maximum is no smaller than the corresponding gap for each reduced state .
let @xmath474 for @xmath475 be an orthonormal basis for @xmath329 and @xmath476 be the corresponding projectors .
let @xmath477 where @xmath478 are subnormalized states that sum up to @xmath323 .
define @xmath479 where the addition in the indices is modulo @xmath480 ; note that @xmath481 .
then all @xmath482 have the same spectrum , which is majorized by @xmath469 , that is , @xmath483 .
consequently , @xmath484 since the rényi @xmath0-entropy is schur concave for @xmath485 , it follows that @xmath486 which confirms the second inequality in lemma [ lem : entropygap ] and implies the third inequality .
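the third inequality of lemma [ lem : entropygap ] can be sanity - checked numerically ( our own sketch , with an arbitrary full - rank mixed state and illustrative dimensions ) :

```python
import numpy as np

def renyi(p, alpha):
    p = p[p > 1e-12]
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

rng = np.random.default_rng(1)
d_a, d_b = 3, 4
# A random full-rank mixed state on the joint system A x B.
g = rng.normal(size=(d_a * d_b, d_a * d_b))
rho = g @ g.T
rho /= np.trace(rho)
# Partial trace over B.
rho_a = np.trace(rho.reshape(d_a, d_b, d_a, d_b), axis1=1, axis2=3)

for alpha in [0.5, 2, 5]:
    gap_joint = np.log2(d_a * d_b) - renyi(np.linalg.eigvalsh(rho), alpha)
    gap_a = np.log2(d_a) - renyi(np.linalg.eigvalsh(rho_a), alpha)
    # The joint state's gap from maximum dominates the reduced state's gap.
    assert gap_joint >= gap_a - 1e-9
```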
we include here an intuitive proof of lemma [ sumcycle ] by induction .
the intuition is that any element @xmath487 can be viewed as a local deformation of some element @xmath488 , under which @xmath489 can change by at most 1 .
we formalize the argument below .
suppose the statement is true for @xmath490 .
now for some @xmath491 , look at element @xmath492 .
there are two possibilities : 1 .
@xmath492 appears in a cycle of length 1 : @xmath493= k+1 $ ] .
so @xmath494 , where @xmath495 .
write @xmath496 , where @xmath497
. then @xmath498 , with @xmath499 .
compare the action of @xmath500 and @xmath501 on various elements .
the only differences are : @xmath502=\sigma_-\tau_-[k ] $ ] , @xmath503=k+1 $ ] , @xmath504=\sigma_-\tau_-[k ] $ ] .
it simply increases the length of a cycle in @xmath500 by 1 , and does nothing to other cycles .
so @xmath505 . from the induction hypothesis , @xmath506
, so @xmath507 .
@xmath508 appears in a cycle of length @xmath509 : @xmath510=k+1 $ ] , @xmath493=b $ ] for some elements @xmath511 .
define @xmath512 by @xmath513=\sigma[i ] $ ] for @xmath514 and @xmath515=b $ ] .
now compare the action of @xmath516 and @xmath517 on various elements .
depending on the value of @xmath518 , there are two cases : 1 .
the differences are : @xmath520=b$ ] and @xmath521=\sigma[1]$ ] , but @xmath522=k+1 $ ] , @xmath523=b$ ] and @xmath524=\sigma[1]$ ] .
they act identically on all other elements .
there are two possible effects : 1 . in @xmath516 , @xmath525 and @xmath526\ }
$ ] belong to the same cycle .
then @xmath527 breaks this cycle into two disjoint ones involving @xmath528\ } $ ] and @xmath529 respectively .
so @xmath530 ; 2 . in @xmath516 ,
@xmath525 and @xmath526\ } $ ] belong to two disjoint cycles
. then @xmath527 glues these two cycles together into one .
so @xmath531 .
2 . @xmath532
. then @xmath533 and @xmath527 act identically on @xmath534 and in addition @xmath504=k+1 $ ] .
so @xmath530 .
in conclusion , @xmath535 can only increase by 1 or decrease by 1 as compared to @xmath536 , so @xmath537 in either case .
lastly , consider @xmath538 . the only element of @xmath539 is @xmath540 , and @xmath541 ; thus the statement trivially holds .
this completes our proof .
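the combinatorial fact underlying the induction , namely that multiplying a permutation by a transposition changes the number of cycles by exactly one ( joining two cycles or splitting one ) , can be verified by brute force ( our own sketch ) :

```python
import itertools
import random

def num_cycles(perm):
    """Number of cycles of a permutation given in one-line notation."""
    seen, count = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return count

random.seed(0)
n = 7
p = list(range(n))
random.shuffle(p)
for a, b in itertools.combinations(range(n), 2):
    q = p[:]
    q[a], q[b] = q[b], q[a]   # compose p with the transposition (a b)
    assert abs(num_cycles(q) - num_cycles(p)) == 1
```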
it is well known that the catalan number @xmath542 $ ] is approximated by @xmath543 when @xmath248 is large . to make this statement more precise , here we provide both lower and upper bounds for @xmath544 .
[ lem : catalanbound ] the catalan number @xmath544 satisfies @xmath545 where @xmath248 is not necessarily an integer .
the basis of our proof is the following stirling approximation formula @xmath546 as an implication , @xmath547 here the second inequality follows from the inequality @xmath548 note that the left hand side is monotonically decreasing with @xmath248 and approaches @xmath549 in the limit @xmath550 . on the other hand
, @xmath551 here the last inequality follows from the inequality @xmath552 to confirm this claim , we shall prove the equivalent inequality @xmath553 < 1.\ ] the first and second derivatives of @xmath554 read @xmath555 since @xmath556 is positive , @xmath557 is monotonically decreasing , which implies that @xmath558 given that @xmath559 .
consequently , @xmath554 is monotonically increasing , which confirms our claim @xmath560 given that @xmath561 .
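numerically , the quality of the leading - order approximation is easy to see ( an illustration of the standard asymptotics , not of the exact constants in the lemma ) :

```python
import math

def catalan(n):
    # Cat_n = C(2n, n) / (n + 1)
    return math.comb(2 * n, n) // (n + 1)

for n in [5, 20, 100, 400]:
    approx = 4 ** n / (math.sqrt(math.pi) * n ** 1.5)
    # The ratio tends to 1 from below as n grows.
    print(n, catalan(n) / approx)
```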
the following two corollaries are easy consequences of lemma [ lem : catalanbound ] , though it is straightforward to prove them directly .
[ cor : catalanmono ] @xmath562 for any positive integer @xmath248 .
the corollary holds for @xmath563 by direct calculation .
when @xmath564 , lemma [ lem : catalanbound ] implies that @xmath565 which confirms the corollary .
[ cor : catalansupmul ] @xmath566 for arbitrary positive integers @xmath567 .
the corollary holds when @xmath568 or @xmath538 according to corollary [ cor : catalanmono ] , given that @xmath569 .
when @xmath570 , lemma [ lem : catalanbound ] implies that @xmath571
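both corollaries also admit a direct brute - force check ( illustrative ) :

```python
import math

def cat(n):
    return math.comb(2 * n, n) // (n + 1)

# Monotonicity: Cat_n <= Cat_{n+1}.
assert all(cat(n) <= cat(n + 1) for n in range(30))
# Supermultiplicativity: Cat_j * Cat_k <= Cat_{j+k} for positive j, k.
assert all(cat(j) * cat(k) <= cat(j + k)
           for j in range(1, 20) for k in range(1, 20))
```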
recall the definition of the möbius function , @xmath572 [ lem : mobiusbound ] @xmath573 the lower bound is saturated iff @xmath112 is the identity or a product of disjoint transpositions .
the upper bound @xmath574 is saturated iff @xmath112 is a cycle of length @xmath575 .
the lemma holds when @xmath112 is the identity .
otherwise , suppose @xmath112 has disjoint cycle decomposition @xmath576 , where @xmath577 for @xmath578 are nontrivial cycles .
then @xmath579 given that @xmath580 for all @xmath581 .
the inequality is saturated iff @xmath582 for all @xmath581 , that is , @xmath112 is a product of disjoint transpositions . on the other hand , @xmath583 where the two inequalities follow from corollary [ cor : catalansupmul ] and lemma [ lem : catalanbound ] , respectively .
the first inequality is saturated when @xmath538 , but is strict whenever @xmath584 .
so the upper bound @xmath574 is saturated iff @xmath112 is a cycle of length @xmath575 .
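assuming the standard product form of the weingarten möbius function , moeb(σ) = ∏ over cycles of (-1)^(len-1) cat(len-1) , and writing |σ| for n minus the number of cycles , the two bounds can be verified exhaustively for small n ( our own sketch ) :

```python
import itertools
import math

def cat(n):
    return math.comb(2 * n, n) // (n + 1)

def cycle_lengths(perm):
    seen, lens = set(), []
    for i in range(len(perm)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lens.append(length)
    return lens

n = 6
for perm in itertools.permutations(range(n)):
    lens = cycle_lengths(list(perm))
    # Moebius function: product over cycles of (-1)^(len-1) * Cat_(len-1).
    moeb = math.prod((-1) ** (l - 1) * cat(l - 1) for l in lens)
    dist = n - len(lens)           # minimal number of transpositions
    assert 1 <= abs(moeb) <= cat(dist)
    # The lower bound is attained iff every cycle has length at most 2.
    assert (abs(moeb) == 1) == all(l <= 2 for l in lens)
```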
the following theorem is reproduced from @xcite . [ thm : wgboundcm ] when @xmath585 , any @xmath586 satisfies @xmath587 the following lemma is an immediate consequence of theorem [ thm : wgboundcm ] and lemma [ lem : mobiusbound ] .
[ lem : wgbound ] when @xmath585 , any @xmath586 satisfies @xmath588 where @xmath589 is defined in theorem [ thm : wgboundcm ] .
[ lem : wgsumbound ] suppose @xmath585 ; then @xmath590.\ ] ] according to lemma [ lem : wgbound ] , @xmath591\nonumber\\ & = \frac{7a_k}{8 } + \frac{a_k}{16}\bigl(\frac{4}{d}\bigr)^{k}\left[\prod_{j=0}^{k-1}\bigl(\frac{d}{4}+j\bigr ) + \prod_{j=0}^{k-1}\bigl(\frac{d}{4}-j\bigr ) \right]\nonumber\\ & = \frac{7a_k}{8 } + \frac{a_k}{16}\left[\prod_{j=0}^{k-1}\bigl(1+\frac{4j}{d}\bigr ) + \prod_{j=0}^{k-1}\bigl(1-\frac{4j}{d}\bigr ) \right]\nonumber\\ & \leq\frac{7a_k}{8 } + \frac{a_k}{16}\left[\prod_{j=0}^{k-1}{\mathrm{e}}^{4j / d } + \prod_{j=0}^{k-1}{\mathrm{e}}^{-4j / d}\bigr ) \right ] = \frac{7a_k}{8 } + \frac{a_k}{16}\left[{\mathrm{e}}^{\sum_{j=0}^{k-1}4j / d } + { \mathrm{e}}^{-\sum_{j=0}^{k-1}4j / d}\bigr ) \right]\nonumber\\ & = \frac{7a_k}{8 } + \frac{a_k}{16}\left[{\mathrm{e}}^{2k(k-1)/d } + { \mathrm{e}}^{-2k(k-1)/d}\right]=\frac{a_k}{8}\left[7+\cosh\frac{2k(k-1)}{d}\right ] .
\end{aligned}\ ] ]
in this appendix , we provide an easy - to - use upper bound for the number of permutations with a given genus ( lemma [ lem : ngpermutationt ] below ) , which plays a crucial role in understanding rényi entanglement entropies of haar random states as well as states drawn from designs .
the basis of our endeavor is the following theorem due to goupil and schaeffer @xcite .
[ thm : ngpermutationgs ] the number of permutations in the symmetric group @xmath592 with genus @xmath593 is given by @xmath594 where @xmath595 , @xmath596 , @xmath597 for @xmath598 , and @xmath599 here the summation runs over all partitions @xmath600 of @xmath593 , the expression @xmath601 means that @xmath600 has @xmath602 parts equal to @xmath581 , and @xmath603 denotes the number of parts of @xmath600 .
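for small n the genus counts can be checked by brute force , using the standard genus relation c(σ) + c(σ^{-1}γ) = n + 1 - 2g with γ the full cycle ( our own sketch ) ; in particular , the genus-0 permutations are counted by the catalan number :

```python
import itertools
import math

def num_cycles(perm):
    seen, count = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return count

n = 6
gamma = [(i + 1) % n for i in range(n)]   # the full cycle (0 1 ... n-1)
counts = {}
for perm in itertools.permutations(range(n)):
    inv = [0] * n
    for i, v in enumerate(perm):
        inv[v] = i
    sigma_inv_gamma = [inv[gamma[i]] for i in range(n)]
    # c(sigma) + c(sigma^{-1} gamma) = n + 1 - 2 * genus
    two_g = n + 1 - num_cycles(list(perm)) - num_cycles(sigma_inv_gamma)
    counts[two_g // 2] = counts.get(two_g // 2, 0) + 1

print(counts)
# Genus-0 permutations are counted by the Catalan number Cat_6 = 132.
assert counts[0] == math.comb(12, 6) // 7 == 132
```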
in addition , we need two auxiliary lemmas . [ lem : partitions ] @xmath604 for all @xmath605 . by definition ,
the lemma holds when @xmath606 , or @xmath598 and @xmath607 .
now suppose @xmath608 ; then @xmath609\nonumber\\ & \leq \sum_{\substack{\gamma\vdash g,\;\ell(\gamma)=\ell \\ \gamma=1^{c_1}2^{c_2 } g^{c_g}}}\left [ \frac{1}{\prod_{j=1}^gc_j!j^{c_j } } \prod_{j=1}^g \bigl(\frac{1}{2}\bigr)^{c_j}\right]=\sum_{\substack{\gamma\vdash g,\;\ell(\gamma)=\ell \\ \gamma=1^{c_1}2^{c_2 }
g^{c_g}}}\left [ \frac{1}{\prod_{j=1}^gc_j!j^{c_j } } 2^{-\sum_{j=1}^g c_j}\right]\nonumber\\ & = \sum_{\substack{\gamma\vdash g,\;\ell(\gamma)=\ell \\ \gamma=1^{c_1}2^{c_2 } g^{c_g } } } \frac{1}{\prod_{j=1}^gc_j!j^{c_j } } 2^{-\ell(\gamma ) } = 2^{-\ell}\sum_{\substack{\gamma\vdash g,\;\ell(\gamma)=\ell \\ \gamma=1^{c_1}2^{c_2 } g^{c_g } } } \frac{1}{\prod_{j=1}^gc_j!j^{c_j } } \leq 2^{-\ell}.\end{aligned}\ ] ] here the last inequality can be derived as follows .
note that @xmath610 is the order of the centralizer in @xmath611 of each element in the conjugacy class labeled by the partition @xmath600 .
therefore , @xmath612 is the number of elements in this conjugacy class , so that @xmath613 which amounts to the identity @xmath614 as an immediate consequence , @xmath615 [ lem : binomratio ] suppose @xmath616 are nonnegative integers satisfying @xmath617 , @xmath618 , and @xmath619 .
then @xmath620 straightforward calculation shows that @xmath621 so without loss of generality , we may assume that @xmath622 .
then @xmath623 [ n(n-1)\cdots ( n+j - k+1)]}\nonumber\\ & = \frac{2^kn(n-\frac{1}{2})\cdots ( n-\frac{k}{2}+\frac{1}{2})}{[n(n-1 ) \cdots n - j+1 ] [ n(n-1)\cdots ( n+j - k+1 ) ] } = 2^kf,\end{aligned}\ ] ] where @xmath624 the square of @xmath625 can be bounded from below as follows , @xmath626 therefore @xmath627 , from which the lemma follows .
[ lem : ngpermutationt ] @xmath628 recall that @xmath629 $ ] .
the values of @xmath630 can be computed explicitly according to theorem [ thm : ngpermutationgs ] , with the result @xmath631 the coefficients @xmath632 necessary for deriving this result are given by @xmath633 as a consequence , @xmath634 therefore , lemma [ lem : ngpermutationt ] holds when @xmath635 .
now suppose @xmath636 , so that @xmath637 .
according to theorem [ thm : ngpermutationgs ] , we have @xmath638 here the first inequality follows from lemma [ lem : binomratio ] , and the last one from lemma [ lem : partitions ] and the fact that @xmath597 for @xmath639 .
the fraction at the end of the above equation is no larger than 1 given that @xmath636 .
therefore , @xmath640}{\frac{n}{4}-1}\frac{[\left(\frac{n}{4}\right)^{g_2 + 1}-1]}{\frac{n}{4}-1 } \nonumber\\ & \leq \frac{(n+1)_{2g}}{2^{4 g } } \frac{1}{(\frac{n}{4}-1)^2 } \sum_{g_1+g_2=g } \left(\frac{n}{4}\right)^{g+2}=\frac{(n+1)_{2g}}{2^{4 g } } \frac{(g+1)\left(\frac{n}{4}\right)^{g+2}}{(\frac{n}{4}-1)^2 } \nonumber\\ & = \frac{(g+1)n^{g+2}(n+1)_{2g}}{2^{6g}(n-4)^2 } \leq \frac{(g+1)n^{3g-3 } ( n+1)(n-1)(n-2)(n-3)(n-4)}{2^{6g}(n-4)^2}\nonumber\\ & \leq \frac{(g+1)n^{3g}}{2^{6g}}.\end{aligned}\ ] ] this result confirms the first inequality in lemma [ lem : ngpermutationt ] in the remaining case @xmath641 , which in turn implies the second inequality in the lemma .
here we analyze the partially scrambling unitary model proposed in @xcite , which can lead to a large separation between von neumann and rényi-2 entanglement entropies and tripartite information in the choi state setting . more explicitly , let @xmath642 be a unitary that perfectly scrambles on almost the whole space , except for a small subspace .
then , on the one hand , @xmath642 still has nearly maximal @xmath87 due to continuity ; while on the other hand , @xmath643 can be gapped from maximum by @xmath644 .
however , we find that this model is not likely to provide clear separations between rényi entropies of order @xmath449 .
the generalized partially scrambling unitary is defined as follows .
given @xmath0 , define @xmath645 where @xmath646 is @xmath0-scrambling , and @xmath647 controls the size of this @xmath0-scrambling subspace ( labeled by subscript @xmath648 ) .
then the choi state of @xmath642 is @xmath649 the question is whether there exists some @xmath5 that can lead to separations between higher rényi entropies associated with this choi state , say @xmath0 and @xmath272 , @xmath650 . to establish such separations , we need to show a large ( @xmath644 ) gap between rényi-@xmath272 entropies and the maximum for some small @xmath5 , and to upper bound the difference between rényi-@xmath0 entropies and the maximum by continuity .
the gap side can work out by directly generalizing the corresponding calculation in @xcite : let @xmath651 .
then @xmath652 as long as @xmath466 is a positive constant
. however , we find that the continuity bound for unified entropies can only give trivial results on the continuity side : let @xmath10 and @xmath653 be density operators in hilbert space of dimension @xmath39 . denote @xmath654 . for @xmath23 and @xmath655 : @xmath656,\ ] ] where @xmath657 for @xmath658 , and @xmath659 for @xmath660 .
@xmath661 denotes the @xmath0 binary entropy .
it can be seen that this generalized fannes bound for rényi entropies grows with the dimension @xmath39 for @xmath23 , which indicates that even a tiny non - scrambling subspace may perturb the rényi entropies drastically .
indeed , some simple scaling analysis confirms that this bound is trivial even for rényi-2 .
notice that @xmath662 .
then it must hold that @xmath663 so that @xmath664 .
this gives @xmath665 , which has no overlap with the @xmath666 solution on the gap side when @xmath448 .
equivalently , by plugging in @xmath666 we find that the desired separation can exist when @xmath667 . in summary , in order to have a nontrivial bound on rényi entropies , @xmath5 needs to be @xmath668 , which is meaningless .
this is hardly surprising : one expects that rényi entropies are very sensitive , especially in the near - maximum regime , due to the logarithm .
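this sensitivity is visible already in a toy example ( ours , not from the model above ) : perturbing a maximally mixed spectrum by a small weight on a single eigenvalue barely moves the von neumann entropy , but the gaps of the rényi-2 and min entropies from the maximum grow with the dimension :

```python
import numpy as np

def renyi(p, alpha):
    p = p[p > 0]
    if alpha == 1:
        return -np.sum(p * np.log2(p))
    if np.isinf(alpha):
        return -np.log2(p.max())
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

eps = 1e-2
for k in [8, 12, 16]:
    d = 2 ** k
    p = np.full(d, (1 - eps) / d)
    p[0] += eps                      # tiny weight on one eigenvalue
    gap1 = k - renyi(p, 1)           # von Neumann gap: stays tiny
    gap2 = k - renyi(p, 2)           # Renyi-2 gap: grows once eps^2 * d ~ 1
    gapm = k - renyi(p, np.inf)      # min-entropy gap: ~ log2(eps * d)
    print(d, round(gap1, 3), round(gap2, 3), round(gapm, 3))
```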
in fact , we are able to obtain a large gap on the @xmath272 side basically because of such exponential sensitivity .
suppose we consider @xmath21 entropies instead .
then the continuity bound is strong since @xmath669 , but it becomes hard to find a gap on the other side .
there is a fundamental tradeoff between sensitivity and robustness in these unified entropies . in conclusion
, we believe that partially scrambling unitaries are not likely to produce clear separations between generalized entropies in the choi model .
to prove lemma [ lem : averootnorm ] , we need to introduce several auxiliary concepts and lemmas .
an @xmath670 matrix @xmath429 is a ( standard ) gaussian random matrix if the entries of @xmath429 are i.i.d .
standard gaussian random variables ( with mean 0 and variance 1 ) .
it is a complex gaussian random matrix if its real part and imaginary part are independent gaussian random matrices .
usually this lemma is stated without the intermediate term , as it appears in @xcite . however , the first inequality is essential to achieve our goal .
fortunately , this inequality is already implied by the proof in @xcite .
note that @xmath672 is the average norm of a vector composed of @xmath673 i.i.d . standard gaussian random variables , while @xmath674 is the root mean square norm .
this observation implies the second inequality in the lemma , which is nearly tight when @xmath675 are large .
it is well known that @xmath680 considered as a unit vector in @xmath313 is distributed uniformly . in addition , the spectrum of @xmath680 is independent of the frobenius norm @xmath681 .
therefore , @xmath682^a \operatorname{\mathbb{e}}\left\|\frac{g}{\|g\|_2}\right\|^{2a}= \operatorname{\mathbb{e}}[{\operatorname{tr}}\{gg^\dag\}]^a \operatorname{\mathbb{e}}\|\rho_a\|^a=\frac{2^a\gamma(k+a)}{\gamma(k)}\operatorname{\mathbb{e}}\|\rho_a\|^a,\ ] ] from which the lemma follows .
here the last equality in the above equation follows from the fact that @xmath683 obeys the @xmath684-distribution with @xmath685 degrees of freedom and pdf @xmath686 , which satisfies @xmath687 .
according to lemmas [ lem : gaussianuniform ] and [ lem : gaussianan ] , in the real case , we have @xmath688 where @xmath689 , and the last inequality follows from the fact that @xmath690 is monotonically increasing with @xmath673 for @xmath691 .
this conclusion is intuitive if we observe that @xmath690 is equal to the ratio of the mean length over the root mean square length of a standard gaussian random vector with @xmath673 components . to derive an analytical proof
, we can compute the log - derivative of @xmath690 with respect to @xmath673 ; note that the definition of @xmath690 can be extended to positive real numbers . straightforward calculation shows that @xmath692\geq \frac{1}{4}\left[\psi^{(0)}\bigl(\frac{m+2}{2}\bigr)-\psi^{(0)}\bigl(\frac{m}{2}\bigr)-\frac{2}{m}\right]=0.\ ] ] here @xmath693 denotes the digamma function ( instead of a ket ) ; the inequality follows from the concavity of @xmath693 , and the last equality follows from the identity @xmath694 .
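this monotonicity is also easy to confirm numerically from the closed form of the mean length of a standard gaussian vector ( a check we add ; the formula below is the standard chi - distribution mean ) :

```python
import math

def mean_over_rms(m):
    """Mean length over root-mean-square length of a standard Gaussian
    vector with m components: sqrt(2/m) * Gamma((m+1)/2) / Gamma(m/2)."""
    return math.sqrt(2.0 / m) * math.gamma((m + 1) / 2) / math.gamma(m / 2)

vals = [mean_over_rms(m) for m in range(1, 200)]
assert all(x < y for x, y in zip(vals, vals[1:]))   # monotonically increasing
assert 0 < vals[0] < vals[-1] < 1                   # approaches 1 from below
```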
scrambling is a process by which the state of a quantum system is effectively randomized . scrambling exhibits different complexities depending on the degree of randomness it produces .
for example , the complete randomization of a pure quantum state ( haar scrambling ) implies the inability to retrieve information of the initial state by measuring only parts of the system ( page / information scrambling ) , but the converse is not necessarily the case . here
, we formally relate scrambling complexities to the degree of randomness , by studying the behaviors of generalized entanglement entropies in particular rnyi entropies and their relationship to designs , ensembles of states or unitaries that match the completely random states or unitaries ( drawn from the haar measure ) up to certain moments .
the main result is that the rnyi-@xmath0 entanglement entropies , averaged over @xmath0-designs , are almost maximal .
the result generalizes page s theorem for the von neumann entropies of small subsystems of random states . for designs of low orders
, the average rnyi entanglement entropies can be non - maximal : we exhibit a projective 2-design such that all higher order rnyi entanglement entropies are bounded away from the maximum .
however , we show that the rnyi entanglement entropies of all orders are almost maximal for state or unitary designs of order logarithmic in the dimension of the system .
that is , such designs are indistinguishable from haar - random by the entanglement spectrum .
our results establish a formal correspondence between generalized entropies and designs of the same order . |
as is well known , any trajectory of a hamiltonian system whose motion is bounded in phase space recurs infinitely many times to any neighborhood of its initial position , for both regular ( discrete spectrum ) and chaotic ( continuous spectrum ) motion . these poincaré recurrences ( pr ) do not imply a quasiperiodic motion , which is still a widespread delusion ( see , e.g. , @xcite ) .
the difference between regular and chaotic motions lies in the statistics of recurrences which is usually described by the integral distribution @xmath0 that is by the probability for a recurrence time to be larger than @xmath1 . in a regular motion
such a survival probability @xmath0 has a strict upper bound in @xmath1 while for a chaotic motion @xmath1 can be arbitrarily long . in both cases
pr characterize some fluctuations including arbitrarily large ones in chaotic motion .
the pr statistics proved to be a very powerful and reliable method in the studies of chaotic dynamics due to its statistical stability . to my knowledge ,
such a method was first used ( implicitly ) in ref.@xcite for the study of a narrow chaotic layer along the separatrix of a nonlinear resonance . the result ( @xmath2 ) @xmath3 was a surprise , as it contradicted the boundedness of the motion in the chaotic layer .
indeed , the total sojourn time @xmath4 of a trajectory , which is proportional to the measure of the chaotic component of the motion , diverges as @xmath5 .
later @xcite , this apparent contradiction was resolved simply by increasing @xmath1 , which showed that the exponent of the power - law decay also increased , from the initial @xmath6 to @xmath7 .
it is instructive to mention that the origin of the short computation time in ref.@xcite was the apparently reasonable decision to avoid any rounding - off errors by an enormous increase of the computation accuracy . as a result , the computation speed , and hence the available motion time , dropped by several orders of magnitude .
generally , for exponentially unstable ( chaotic ) motion such an approach is prohibited whatever the computer power .
fortunately , it is also unnecessary for calculating statistical characteristics of the motion like @xmath0 since most of the latter are robust .
true , the corresponding anosov theorem @xcite was ( and so far can be ) proved only for the very simple anosov systems .
moreover , such a theorem is even wrong for discontinuous ( discrete ) perturbations like rounding off ones ( see , e.g. , refs.@xcite ) .
nevertheless , all the numerical experience confirms a sort of robustness of the statistical behavior of chaotic systems , at least with some minimal precautions ( see , e.g. , refs.@xcite for discussion ) .
notice that without such an empirical robustness the numerical experiments with always _ approximate _ models would lose any physical meaning ! a power law decay @xmath8 , whatever the exponent @xmath9 , found in @xcite for a bounded motion , was at variance with the exponential decay believed to be a generic case . in ref.@xcite
the former was interpreted as a characteristic of a qualitatively new structure of the motion near the chaos border in phase space .
later , it was termed the critical structure , which was described by a renormalization group @xcite ( see also review @xcite and references therein ) . since then
, the exponential decay has been considered as a property of ergodic chaotic motion without any chaos borders . however , in recent numerical experiments @xcite with an asteroid motion a fairly long transient exponential decay was found .
moreover , it persists in the separatrix map also used , just the same map which seemed to have been well studied in many previous works @xcite ( see also @xcite and references therein ) . the main purpose of this paper is to reconsider various regimes of pr , and to formulate the conditions for their realization using two relatively simple models : separatrix and standard maps . only bounded motion will be considered , with or without chaos borders .
first , the classical problem of pr in an ergodic system will be discussed in some detail in section 2 . then , in section 3 , the analysis of various pr regimes in the separatrix map will be presented , aimed at resolving the apparent contradiction mentioned above . in section 4 , pr in the standard map in the accelerator ( microtron ) regime will be described .
the latter model presents a unique possibility for quantitative study of the global critical structure .
particularly , a new part of this structure has been found whose size was surprisingly large . finally , in section 5
, the main results of the present study are summarized .
in addition , the first preliminary empirical evidence is presented for a new regime of poincaré recurrences , including a transition between two different exponential statistics .
consider , first , an elementary example of 1d homogeneous diffusion in momentum @xmath10 .
it can be described by a gaussian distribution function @xmath11 where @xmath12 is the diffusion rate .
the derivative @xmath13 with boundary condition @xmath14 , which obeys the same diffusion equation @xmath15 , then describes pr to @xmath16 . the distribution of recurrence times ( 1 ) is simply related to an auxiliary function @xmath17 by @xmath18 here @xmath19 is a normalizing factor , and the parameter @xmath20 provides the necessary truncation of the preceding diverging expression at small @xmath1 .
it characterizes the dynamical time scale of the diffusion ( cf .
, e.g. , free path in molecular diffusion ) .
if the motion in @xmath10 is actually bounded ( see below ) , eq.(4 ) describes initial free diffusion .
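for orientation , the generic formulas behind this free - diffusion stage can be sketched as follows ( the symbols are assumed here , since eqs . ( 1)-(4 ) themselves are not reproduced in this extraction ) :

```latex
% gaussian spreading of 1d homogeneous diffusion with rate D (generic sketch):
f(p,t) = \frac{1}{\sqrt{2\pi D t}}\,
         \exp\!\left(-\frac{p^{2}}{2 D t}\right),
\qquad \langle p^{2}\rangle = D\,t ,
% and the resulting survival probability of recurrences to p = 0,
P(\tau) \approx \sqrt{\frac{\tau_{d}}{\tau}}, \qquad \tau \gg \tau_{d},
% truncated at the dynamical time scale \tau_d of the diffusion.
```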
now , consider in more detail another simple model , the kicked rotator , described by the so called standard map : @xmath21 on a torus ( @xmath22 ) .
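as an illustration , a minimal sketch of the recurrence - time tally used throughout this section , assuming the usual chirikov form of the standard map , p' = p + k sin x ( mod 2\pi l ) , x' = x + p' ( mod 2\pi ) , and taking the exit line at p = 0 ; the parameter values are purely illustrative , not the ones hidden behind the placeholders :

```python
import math

def standard_map_recurrences(K=5.0, L=10, n_iter=200_000):
    """Iterate the Chirikov standard map p' = p + K*sin(x) (mod 2*pi*L),
    x' = x + p' (mod 2*pi), and tally Poincare recurrence times to the
    exit line p = 0 (intervals between successive sign changes of p)."""
    C = 2 * math.pi * L              # circumference of the torus in momentum
    x, p = 2.0, 0.3                  # arbitrary chaotic initial condition
    times, last, sign = [], 0, 1
    for n in range(1, n_iter + 1):
        p = (p + K * math.sin(x)) % C
        if p > C / 2:                # fold momentum into (-C/2, C/2]
            p -= C
        x = (x + p) % (2 * math.pi)
        s = 1 if p > 0 else -1
        if s != sign:                # crossed the exit line p = 0
            times.append(n - last)
            last, sign = n, s
    return times

def survival(times, t):
    """Integral distribution P(t): fraction of recurrences longer than t."""
    return sum(1 for r in times if r > t) / len(times)
```

the integral distribution @xmath0 is then just the fraction of recurrence times exceeding @xmath1 .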
we seek a solution @xmath23 of diffusion equation ( 3 ) with the boundary condition @xmath24 which provides a loss of probability due to pr to @xmath16 ( and to @xmath25 ) .
the orthogonal and normalized eigenfunctions of the diffusion equation for this problem have the form ( @xmath26 is integer ) @xmath27 with the corresponding eigenvalues @xmath28 which describe the decay rate of the eigenmodes ( 7 ) . in eq.(8 ) the diffusion rate is @xmath29 with the dynamical correlation function @xcite @xmath30 where @xmath31 is the bessel function . the set of eigenfunctions ( 7 ) and eigenvalues ( 8) provides a general solution of the diffusion equation with boundary condition ( 6 ) for an arbitrary initial distribution @xmath32
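the structure of eqs . ( 7 ) , ( 8) and ( 12 ) is the standard absorbing - boundary eigenmode expansion ; in generic notation ( a sketch with assumed symbols : circumference c = 2\pi l and diffusion equation \partial_t f = ( d/2)\,\partial_p^2 f ) it reads :

```latex
\phi_m(p) = \sqrt{\frac{2}{C}}\,\sin\!\left(\frac{\pi m p}{C}\right),
\qquad
\gamma_m = \frac{D}{2}\left(\frac{\pi m}{C}\right)^{2},
\qquad m = 1, 2, \dots ,
\qquad
f(p,t) = \sum_{m} c_m\, \phi_m(p)\, e^{-\gamma_m t} .
```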
. the peculiarity of the pr statistics @xmath0 lies in a very particular initial condition .
specifically , for a single trajectory in numerical experiments the recurrence time @xmath1 is determined by the two successive crossings of the _ exit line _ which is , in the model under consideration , @xmath33 .
hence , the initial distribution is concentrated right here : @xmath34 .
the condition for a trajectory with initial @xmath35 to cross the exit line reads : @xmath36 .
whence , the probability of crossing is proportional to @xmath37 , and the normalized initial distribution can be taken in the form : @xmath38 an example of @xmath39 is shown in the insert to fig.1 .
it is convenient to choose @xmath40 on one side of the exit line , which is possible due to the symmetry of the eigenfunctions ( 7 ) .
the difficulty with such an initial condition lies in its narrow width , which is always comparable with the dynamical scale ( both are @xmath41 , a single kick ) . this violates the diffusion approximation for the exact integro - differential kinetic equation . a simple remedy is well known , for example , from the theory of neutron diffusion , where the dynamical scale is the transport free path @xmath42 ( see , e.g. , refs.@xcite and @xcite , p.689 ) . a simple correction improving the diffusion approximation amounts to a relatively small shift of the boundary condition ( 6 ) from @xmath43 to @xmath44 where @xmath45 is the dynamical scale in our problem , and @xmath46 is an unknown numerical factor to be determined below from the numerical experiments .
this implies an increase of the global scale : @xmath47 while the initial distribution remains unchanged as it is obtained directly from the dynamics ( 5 ) .
notice the corresponding change in eigenfunctions ( 7 ) .
the general solution of the diffusion problem is given by @xmath48 where the expansion coefficients @xmath49 are determined by the initial condition ( 11 ) : @xmath50 @xmath51 @xmath52 here @xmath53 is the struve function , @xmath54 ( see below ) , and @xmath55 is a small diffusion parameter .
the latter approximate expression in ( 13 ) holds true for @xmath56 .
now , the pr statistics is described by @xmath57 @xmath58 with @xmath59 because only odd modes contribute to the integral .
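the resulting behavior ( a power - law stage from the sum over many modes , a single exponential asymptotically ) can be sketched numerically ; the amplitudes of the odd modes taken proportional to 1/m are an illustrative assumption , not the actual coefficients ( 13 ) :

```python
import math

def survival_modes(t, D=1.0, C=100.0, n_modes=2001):
    """Sketch of the eigenmode sum P(t) = sum_m A_m * exp(-gamma_m * t)
    over odd modes m, with gamma_m = (D/2)*(pi*m/C)**2 and the
    illustrative (assumed) amplitudes A_m proportional to 1/m."""
    s = 0.0
    for m in range(1, n_modes + 1, 2):   # only odd modes contribute
        gamma = 0.5 * D * (math.pi * m / C) ** 2
        s += math.exp(-gamma * t) / m
    return s
```

at times of the order of the global diffusion time only the slowest mode survives , so successive ratios of P(t) approach exp(-\gamma_1\,\Delta t) .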
asymptotically , as @xmath5 , pr decay exponentially ( poisson statistics ) @xmath60 with the characteristic time @xmath61 which is determined by the first ( slowest ) mode @xmath62 , and which is of the order of the global diffusion time .
the factor @xmath63 in eq.(15 ) characterizes the share of asymptotic exponential decay which is small in the diffusive regime due to @xmath64 .
the main , initial , decay is a power law . again , due to the small @xmath65 , the sum in eq.(14 ) can be approximately replaced by the integral over @xmath66 to obtain : @xmath67 where @xmath68 function @xmath69 is given by eq.(10 ) , and the approximate expression for @xmath49 in eq.(13 ) is used .
the latter is not applicable for @xmath70 , so that the final expression in eq.(17 ) is an approximate truncation of the preceding diverging relation ( cf .
eq.(4 ) ) .
the power - law / exponential crossover time @xmath71 is obtained from the comparison of eqs . ( 15 ) and ( 17 ) , and is given approximately by the relation : @xmath72 again , in the diffusive regime ( @xmath73 ) the intermediate power - law decay may be very long until the exponential asymptotics is reached .
an example of pr in ergodic case is shown in fig.1 .
we use the standard map ( 5 ) on a torus of sufficiently large circumference @xmath74 to provide a diffusive relaxation ( @xmath64 , for the opposite limit of ballistic relaxation @xmath75 see section 4 below ) .
however strange it may seem , the conditions for ergodicity even in such an apparently simple model are still unknown !
however , numerical experiments ( see , e.g. , ref.@xcite ) indicate that , at least for a particular value of the parameter @xmath76 , the share of the regular domains , if any , is negligible ( @xmath77 ) besides the two small islets ( per map 's period , see section 4 below ) . fortunately , their effect on pr is also negligible because they are related to the accelerator mode , in which the momentum @xmath10 quickly moves around the torus , so that a trajectory immediately crosses the exit line @xmath33 ( cf . section 4 ) . in fig.1 empirical data for a particular value of @xmath78
are shown which corresponds to @xmath79 periods of map ( 5 ) in @xmath10 .
all the data were obtained from the run of a single trajectory over @xmath80 iterations . transition from a power law ( straight dashed line ) to an exponential ( dashed curve ) is clearly seen . for a quantitative comparison with the theory above ( section 2.1 ) we fix the dynamical parameter @xmath81 where the value @xmath82 is used which has been obtained from a special numerical experiment .
it considerably differs from the value @xmath83 according to approximate relation ( 10 ) just because of accelerator islets mentioned above . since our model is a map , the minimal empirical recurrence time is @xmath84 instead of @xmath85 in a continuous theory ( for example , in numerical data @xmath86 ) .
the corresponding corrections are negligible except for the initial dependence at @xmath87 ( see below ) .
numerical data in fig.1 were fitted to eq.(15 ) in the interval @xmath88 iterations , and the empirical values of the characteristic time @xmath89 , and of the factor @xmath90 were obtained .
the corresponding values of the correction parameter are @xmath91 , and @xmath92 .
the difference in these two values of @xmath93 characterizes the accuracy of the correction , which is rather poor because of the very narrow initial distribution ( see eq.(11 ) and the discussion around it ) . without correction ( @xmath94 ) the theoretical values would be @xmath95 and @xmath96 , which are both substantially underestimated . for a more systematic study , similar numerical data were computed for a number of @xmath97 values specified by the integer @xmath98 .
the results are shown in fig.2 .
the dependence @xmath99 is well described by the uncorrected relation ( 16 ) for large @xmath100 , as expected . in the intermediate region ( @xmath101 ) the agreement is further improved by the correction , which provides a smooth transition to the ballistic limit ( see eqs.(28 ) and ( 29 ) in section 4 ) . in other words ,
the correction is not very important for the asymptotic decay rate because it is determined by the first eigenfunction which is only slightly disturbed , for large @xmath100 , by the shift of the boundary .
this is no longer the case for the amplitude @xmath63 which strongly depends just on the distorted region near the boundary @xmath16 . as a result
the correction is most important for large @xmath100 .
the dependence @xmath102 in the intermediate region remains unclear . for @xmath103 both relations , eqs.(15 ) and ( 16 ) , are in a reasonable agreement with the numerical data for the same average value of the correction parameter @xmath104 .
coming back to fig.1 , we see that the initial power - law decay is well described by a simple relation ( 17 ) with @xmath105 , which is shown by the dashed straight line , and which would correspond to @xmath106 .
now we consider the opposite limit of an essentially nonergodic system with a large chaos border and the critical structure . as an example we take the separatrix map , which was studied in many papers ( see , e.g. , refs.@xcite ) , and for which a new regime of pr has recently been observed @xcite .
the latter was the main motivation for the present studies .
we take the separatrix map in the form @xcite : @xmath107 here the motion is always strictly confined to the so called chaotic layer : @xmath108 . previously , the most studied case corresponded to big parameter @xmath109 . in this limit @xmath110 , so that the width of the layer ( @xmath111 ) is much larger than the dynamical scale of the diffusion ( a single kick ) which , for map ( 20 ) , is unity ( cf .
eq.(5 ) ) . besides the critical structure along the two borders ,
the average diffusion rate within the layer is nearly constant ( see eq.(10 ) ) : @xmath112 hence , the initial decay of pr is a simple power law ( 1 ) which was observed , indeed , from the beginning @xcite ( section 1 ) . the crossover time to a different law is given by a simple diffusion estimate : @xmath113 unlike the ergodic case , the asymptotics of pr in the presence of a chaos border is also a power law but with a different exponent @xmath114 .
this is explained by a very specific critical structure near the border , where the diffusion rate rapidly drops . as a result , no trajectory can ever reach the exact border , even though it approaches the border arbitrarily closely from time to time ( see refs . @xcite for details ) .
an example of this well known behavior is shown in fig.3 ( upper solid curve ) . a transition between the two different power laws ( dashed straight lines ) at @xmath115
is clearly seen in agreement with estimate ( 22 ) .
there is no sign of any exponential decay .
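for reference , a commonly used rescaled form of the separatrix map ( an assumption here , since the concrete form of map ( 20 ) is hidden behind a placeholder ) can be iterated as follows :

```python
import math

def separatrix_map(lam=5.0, n_iter=10_000, x0=1.0, y0=0.5):
    """Iterate a rescaled separatrix map: y' = y + sin(x),
    x' = x + lam*log|y'| (mod 2*pi).  This concrete form is an
    assumption; the trajectory stays inside the chaotic layer."""
    x, y = x0, y0
    ys = []
    for _ in range(n_iter):
        y = y + math.sin(x)
        x = (x + lam * math.log(abs(y))) % (2 * math.pi)
        ys.append(y)
    return ys
```

in this form the trajectory remains confined to the chaotic layer , and recurrences to y = 0 correspond to crossings of the unperturbed separatrix .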
now , how does it appear in a similar model @xcite ?
the first observation is that in application to celestial mechanics ( dynamics of asteroids ) the parameter @xmath116 of map ( 20 ) is typically rather small : @xmath117 @xcite .
this drastically changes the structure of the layer .
first of all , the layer width is reduced down to the size of a single kick .
an example is shown in fig.4 .
hence , the diffusion approximation becomes inapplicable .
instead , the so called ballistic relaxation comes into play which is much quicker . in other words , a slow diffusive motion from the exit line to a critical structure
is replaced now by rapid jumps of a trajectory over the whole layer with some probability to get into the critical structure .
since those jumps are very irregular in a chaotic layer the pr are expected to decay exponentially .
this is the case indeed as an example in fig.3 demonstrates ( lower solid curve , @xmath118 ) .
the exponential decay can only be intermediate , as the trajectory is eventually captured into the critical structure , and the decay turns into a power law .
generally , the initial part of the power law is an approximate relation in that its exponent is not universal and even varies with @xmath1 . in the latter example @xmath119 , which is rather different from @xmath114 for the upper curve in fig.3 .
another interesting and important question is how long is the intermediate exponential ?
for the lower curve in fig.3 it is rather long : @xmath120 which corresponds to the pr crossover as low as @xmath121 ! however , under different conditions with the same @xmath118 the exponential is much shorter : @xmath122 , and @xmath123 .
the difference is in the exit line , as shown in fig.4 . in the latter case the exit line is the usual one : @xmath16 .
the critical structure is determined by the two big islands comparable in size with that of the whole layer .
this entails a rapid capture of a trajectory into the critical structure , and a fast transition to a final power law ( with the local exponent @xmath124 ) .
the lower curve in fig.3 corresponds to the same @xmath118 but to a different exit line : @xmath125 it is chosen in such a way to cut through both stability islands and , thus , to suppress any sticking to their critical structure .
then , the final power law is determined by the critical structure at the layer borders which is apparently very narrow and can not be discerned by eye in fig.4 .
nevertheless , it does exist as the asymptotic power law of pr in fig.3 proves .
moreover , the latter even allows us to estimate the size of the critical structure : its relative area ( with respect to that of the layer ) is @xmath126 , or the width @xmath127 ( see section 4 ) .
this exponential transient is well fitted by a relation similar to ( 15 ) ( up to @xmath128 ) with @xmath129 , and @xmath130 . both values are in surprisingly good agreement with the uncorrected theory ( @xmath131 , see fig.4 ) which gives @xmath132 , and @xmath133 .
apparently , this is because the diffusion parameter @xmath134 is still not large enough .
now , we can summarize the conditions for the transient exponential in pr for a nonergodic motion : ( i ) fast , ballistic , relaxation , and ( ii ) a small measure of the regular domains . besides , it turns out that the exponential pr allow for , at least , some estimates of that measure .
it is convenient to continue a more quantitative study of this interesting relation with the standard map again . this is because the latter has an infinite series of special values of the parameter @xmath135 for which there are well studied islands of regular motion with a simple scaling and a rapidly decreasing area .
the main advantage of this microtron model is in that it is very simple , especially for numerical experiments , and well studied already .
here we are interested primarily in the domains of regular motion which exist for an infinite series of special values of the parameter @xmath135 where @xmath136 is any integer . within these domains ( islands ) @xmath137 grows indefinitely , proportionally to time , which is the so called microtron acceleration .
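the mechanism can be checked directly with the standard map in the chirikov form ( an assumption , as the concrete special values @xmath135 are hidden behind placeholders ) : a fixed point with sin x_0 = 2\pi / k and cos x_0 < 0 is linearly stable for moderate k , and its momentum grows by exactly 2\pi per iteration :

```python
import math

def microtron_orbit(K=6.5, n=100):
    """Accelerator-mode fixed point of the standard map
    p' = p + K*sin(x), x' = x + p' (mod 2*pi): at sin(x0) = 2*pi/K on
    the stable branch (cos(x0) < 0) the phase repeats while the
    momentum grows by exactly 2*pi per iteration."""
    x0 = math.pi - math.asin(2 * math.pi / K)   # stable branch
    trace = 2 + K * math.cos(x0)                # linear stability: |trace| < 2
    x, p = x0, 0.0
    for _ in range(n):
        p = p + K * math.sin(x)
        x = (x + p) % (2 * math.pi)
    return p, trace
```

starting exactly at the fixed point with k = 6.5 , the phase repeats while p(n ) = 2\pi n .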
it has been well studied since the celebrated 1944 paper by veksler ( see , e.g. , @xcite and references therein ) . however , in the present paper , as well as in ref.@xcite , the main object of study is not the regular acceleration itself but rather the chaotic motion outside the microtron islands , which is generally affected by the critical structure at the island borders .
a picture of this scale invariant border is shown in fig.5a in the dimensionless variables @xmath138 where @xmath139 is a parameter of map ( 5 ) , and @xmath140 . the latter inequalities determine the stability region around a fixed point @xmath141 . in fig.5a and below @xmath142 ( the center of stability ) .
for each integer @xmath100 there are two islets per phase space bin @xmath143 one of which is presented in fig.5a .
the picture shows a single trajectory of @xmath144 iterations . during this time interval the trajectory sticks to the critical structure very close to the exact chaos border , which results , under particular conditions ( see below ) , in an asymptotic power - law decay of pr ( cf . fig.3 above ) . the relative area of the island ( with respect to that of the phase space bin )
is given also by a dimensionless relation @xcite : @xmath145 where the latter value corresponds to @xmath142 .
this area rapidly decreases as the island number @xmath100 grows . yet , for any @xmath146 it determines the asymptotic pr decay , as we shall see below . in fig.5b another ,
much smaller , microtron island is shown for comparison . in this case
an outside , and much longer , trajectory was used which can not ever cross the chaos border and enter the island .
its area is given by the same estimate ( 26 ) with @xmath147 .
the main difficulty with the microtron model for our purposes here is the rapid growth in @xmath137 within and around the chaos border .
this destroys any long sticking of a trajectory whatever the exit line for pr ( cf .
section 3 , fig.4 ) . to overcome this difficulty we used the following method .
first , we have chosen the exit line in such a way not to cut any island .
it was done simply by fixing the parameter @xmath148 in map ( 5 ) without any change in the configuration of the map 's torus .
second , we compensated the acceleration by adding the term @xmath149 to the first equation of map ( 5 ) .
this helps , of course , for one island of each pair only .
now , we need to provide the ballistic regime of relaxation , that is , a sufficiently large parameter @xmath150 ( section 2.1 ) . it is convenient to take @xmath151 , so that the parameter @xmath152 is nearly independent of @xmath100 except for a few small values of the latter . neglecting any dynamical correlations of the motion ( particularly , those caused by the presence of small microtron islets , including the compensation of acceleration ) , it is straightforward to calculate the probability @xmath153 , per map iteration , for a trajectory to stay within the torus without crossing the exit line . as is easily verified , it is given by the relation : @xmath154 @xmath155 where @xmath156 these general relations were used in section 2.2 ( fig.2 ) to draw the ballistic approximation . the latter expression in eq.(28 ) corresponds to the value @xmath157 used in the numerical experiments . without the additional shift @xmath158 discussed above , the average time of the exponential decay would be @xmath159 . for @xmath157 the shift increases @xmath153 and @xmath160 up to @xmath161 @xmath162 . now we can turn to the numerical experiments with this microtron model .
the main results of numerical experiments are presented in fig.6 , and in the table below . in fig.6 the points show numerical data computed from a single trajectory ( for each @xmath100 ) up to @xmath163 iterations ( for the largest @xmath164 ) .
the straight solid line is the fitted intermediate exponential with the decay time @xmath165 in a good agreement with the expected theoretical value @xmath166 in eq.(31 ) .
this justifies neglecting dynamical correlations assumed in the above theory in ballistic regime .
the exponential / power law crossover time systematically increases with @xmath100 , that is , with the decrease of the microtron island area ( see table ) .
the power law tails of pr were fitted by the expression @xmath167 . remarkably , all the values of the exponent were found to be close : @xmath168 .
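such a power - law tail fit is a straight line on a log - log scale ; a minimal sketch ( the synthetic data below are illustrative , not the measured tails ) :

```python
import math

def fit_power_tail(ts, Ps):
    """Least-squares straight-line fit of a power-law tail
    P(t) ~ C * t**(-alpha) on a log-log scale; returns (alpha, C)."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(P) for P in Ps]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(u * u for u in xs)
    sxy = sum(u * v for u, v in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return -slope, math.exp(intercept)
```

applied to exact power - law data the fit recovers the input exponent and amplitude .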
the relation of this expression to the size of the critical structure is based on the following hypothesis : dependence ( 32 ) , fitted to the tail of pr , can be extrapolated back to @xmath169 .
if true , it allows us to interpret the parameter @xmath170 as the relative area of the whole ( global ) critical structure around the corresponding microtron island of area @xmath171 .
one could expect that both areas are comparable : @xmath172 .
surprisingly , this is not the case ( see table , third column ; the data in the fourth column will be discussed below ) .
their ratio @xmath173 is not only very large but also slowly increases with @xmath100 , according to the following approximate empirical relation : @xmath174 the origin of this small correction to the simple scaling @xmath175 const remains unclear . in any event , the size of the whole critical structure seems to be much larger than expected .
this main outer part of the structure looks ergodic , and forms a sort of halo around the usually narrow inner part with a typical admixture of chaotic and regular components of motion .
the former resembles the ergodic critical structure around a parabolic fixed point , that is , the limiting case of an island of zero size , studied in ref.@xcite . in a sense ,
such a halo is some hidden critical structure , without internal chaos borders but with apparently strong correlations in the motion which keep a trajectory within this relatively small domain .
now , the principal question to be answered reads : is the observed halo a real physical structure or the result of a wrong interpretation of the empirical data using the above extrapolation hypothesis ? to clarify this question , a new series of numerical experiments was undertaken .
to this end , the exit times from the halo , instead of recurrences , were measured .
such a method was recently successfully used in the studies of the critical structure in ref.@xcite . in the problem under consideration here the measurement of exit times
was organized as follows . a number ( typically 100 ) of trajectories with the initial conditions homogeneously distributed over the circle around a microtron island ( see fig.5a ) were run until they leave the interval ( @xmath176 ) .
the dependence of the average exit time @xmath177 for a series of the circles with increasing radius @xmath178 as a function of the area within a circle @xmath179 ( in scaled variables ( 24 ) ) was thus computed .
the minimal circle of radius @xmath180 touches the island , and comprises the area @xmath181 while the island 's area in these units is @xmath182 , the minimal ratio being @xmath183 .
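a minimal sketch of such an exit - time measurement , using the chirikov - form standard map with compensated acceleration ; the radius , exit threshold and iteration cap are illustrative assumptions :

```python
import math

def exit_times(K=6.5, r=0.5, n_traj=100, p_exit=2.0, cap=10_000):
    """Start trajectories homogeneously on a circle of radius r around the
    stable accelerator fixed point (sin(x0) = 2*pi/K, cos(x0) < 0) of the
    compensated standard map p' = p + K*sin(x) - 2*pi, x' = x + p', and
    record the time for |p| to exceed p_exit (capped at `cap` iterations)."""
    x0 = math.pi - math.asin(2 * math.pi / K)
    times = []
    for k in range(n_traj):
        phi = 2 * math.pi * k / n_traj
        x, p = x0 + r * math.cos(phi), r * math.sin(phi)
        t = cap
        for n in range(1, cap + 1):
            p = p + K * math.sin(x) - 2 * math.pi
            x = (x + p) % (2 * math.pi)
            if abs(p) > p_exit:
                t = n
                break
        times.append(t)
    return times
```

trajectories started inside the island never exit ( they hit the cap ) , while chaotic ones leave quickly unless they stick to the critical structure .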
the main results of this measurement are shown in fig.7 for 8 different values of parameter @xmath100 up to @xmath184 with the island area as small as @xmath185 !
this is completely out of reach for the pr method ( cf .
ref.@xcite ) .
the difference lies in the rather short exit time from the halo we are interested in , as compared to the long recurrence times on the tail , where the latter is eventually separated from the exponential ( fig.6 ) . the main result revealed in fig.7 is a transition between two different scalings .
one , for relatively large @xmath177 , is the standard critical scaling shown by the dashed line which is the fitting of numerical data to the relation @xmath186 with @xmath187 .
as expected , this part of the data does not depend on @xmath100 .
moreover , scaling ( 34 ) is in a good agreement with the pr tail in fig.6 .
the relation between the two is well known @xcite .
generally , the power - law pr statistics is described by ( cf . eq.(32 ) ) : @xmath188 where @xmath189 is the average pr time , and the latter expression is obtained from the ergodicity within the chaotic component of the motion . using the approximate relation @xmath190 @xcite we obtain @xmath191 where @xmath192 ( see above ) . for integer map time @xmath193 where @xmath194 is the riemann zeta function . whence , for @xmath195 relation ( 36 ) gives ( see eq.(34 ) ) @xmath196 which is in reasonable agreement with the numerical data ( third column in the table ) .
however , unlike the data in fig.6 , where the actual power - law scaling is not seen under the much larger exponential transient , the data in fig.7 clearly demonstrate that the critical scaling does not reach the limit @xmath169 assumed above
. moreover , the crossover time @xmath71 increases , and hence the size of the global critical structure ( @xmath197 ) decreases , as @xmath100 grows ( fig.7 , insert ) .
the increase of @xmath71 must have an upper bound , because otherwise the critical structure near the chaos border would also be destroyed , in contradiction with the detailed studies of it in anomalous diffusion @xcite .
indeed , the empirical dependence @xmath71 in fig.7 ( insert ) can be fitted reasonably well by the expression @xmath198 . the upper limit in @xmath71 corresponds , according to eq.(34 ) , to the lower limit in the ratio @xmath199 .
combining eqs .
( 34 ) and ( 39 ) we obtain approximately @xmath200 . the empirical values of this dependence are given in the table ( fourth column ) .
they indicate much smaller , yet still a fairly large , size of the critical structure as compared to the limiting estimate for pr ( third column ) .
the former seems to be more reliable and realistic .
a different , new and unknown , scaling in fig.7 for @xmath201 requires further studies .
what is of importance here is the termination of the critical scaling at a finite @xmath202 .
this determines the outer border of the critical structure .
the original motivation of these studies was the unusual exponential transient observed in pr in the presence of chaos border @xcite .
however , in the course of investigating the mechanism and conditions of this phenomenon a more interesting observation has come out .
it suggests the existence of a new , unknown to my knowledge , part of the critical structure surrounding , like a halo , the well known inner part close to the chaos border . in spite of some contradictory empirical evidence , the halo apparently occupies most of the global critical structure . in any event , in the microtron model considered in this paper the area of the halo is much larger than that of the regular island inside it , even according to the minimal estimates ( see table and fig.7 ) . as is well known , the scaling of the peripheral part of the critical structure is generally nonuniversal , at least quantitatively , in the sense of the corresponding power law exponents , for example @xcite . however , it might nevertheless be typical qualitatively , as it appears in our model . in this respect , it would be interesting to look at different examples of the global critical structure .
one possibility is to use the same model with a fixed parameter @xmath151 ( section 4 ) but for different values of the stability parameter @xmath203 in eq.(25 ) .
first preliminary numerical experiments have been done for 9 values of @xmath203 within the whole stability interval ( @xmath204 ) including the quasiergodic case @xmath76 used in section 2 for other purposes . in all cases but the latter the pr behavior was similar to that in the main series of numerical experiments ( fig.6 ) , at least qualitatively .
however , just for @xmath76 a sudden surprise has emerged , which is presented in fig.8 . in spite of a very long run ( @xmath205 iterations ) no clear sign of the expected power law decay is seen .
a small deviation from the final exponential at the end of the dependence is a typical feature due to poor statistics ( cf . , e.g. , fig.6 ) .
the first exponential is close to the expected one with the fitted decay time @xmath165 as compared to the theoretical @xmath206 ( see section 4 ) . for the second exponential the empirical decay time @xmath207 is about 10 times longer .
this means that a trajectory is kept within ( sticks to ? ) a certain domain , but not in the way it does in the usual critical structure .
moreover , the relative area @xmath208 of this peculiar domain , estimated similarly to @xmath170 in eq.(32 ) , is small and is comparable with that of the island inside ( fig.5b ) : @xmath209 .
this island does have a chaos border , yet contrary to the usual behavior , it does not produce any appreciable power law decay of pr .
another preliminary remark is that a more careful inspection of fig.5b seems to suggest a different , more regular than usual , structure of the chaos border for @xmath76 ( cf .
certainly , this anomaly deserves further investigation .
* acknowledgements .
* i am grateful to i.i .
shevchenko for many interesting discussions and important remarks .
this work was partially supported by the russian foundation for fundamental research , grant 97 - 01 - 00865 .
chief ed . a.m. prokhorov , vol . 1 , ed . d.m . alekseev , p.345 , soviet encyclopedia , moscow , 1988 .
s. channon and j. lebowitz , numerical experiments in stochasticity and heteroclinic oscillation , ann . academy sci . * 357 * , 108 ( 1980 ) .
b.v . chirikov and d.l . shepelyansky , statistics of poincaré recurrences , and the structure of the stochastic layer of a nonlinear resonance , preprint budker inp 81 - 69 , novosibirsk , 1981 ( in russian ) ; proc . ix int . conf . on nonlinear oscillations ( kiev 1981 ) , naukova dumka * 2 * , 420 ( 1984 ) [ english translation : princeton univ . report pppl - trans - 133 ( 1983 ) ] ; c. karney , physica d * 8 * , 360 ( 1983 ) .
d.v . anosov , dokl . akad . nauk sssr * 145 * , 707 ( 1962 ) .
b.v . chirikov , pseudochaos in statistical physics , preprint budker inp 95 - 99 , novosibirsk , 1995 ; chao - dyn/9705004 ; proc . intern . conf . on nonlinear dynamics , chaotic and complex systems ( zakopane , 1995 ) , eds . e. infeld , r. zelazny and a. galkowski , cambridge univ . press , 1997 , p.149 ; b.v . chirikov and f. vivaldi , an algorithmic view of pseudochaos , preprint budker inp 98 - 66 , novosibirsk , 1998 ; physica d * 129 * , 223 ( 1999 ) .
b.v . chirikov , phys . reports * 52 * , 263 ( 1979 ) ; b.v . chirikov , f.m . izrailev and d.l . shepelyansky , sov . sci . rev . c * 2 * , 209 ( 1981 ) .
r. mackay , physica d * 7 * , 283 ( 1983 ) .
b.v . chirikov , lect . notes in physics * 179 * , 29 ( 1983 ) ; proc . conf . on plasma physics ( lausanne , 1984 ) * 2 * , 761 ( 1984 ) ; chaos , solitons and fractals * 1 * , 79 ( 1991 ) .
i.i . shevchenko and h. scholl , celestial mech . dynamical astron . * 68 * , 163 ( 1997 ) ; i.i . shevchenko , phys . scripta * 57 * , 185 ( 1998 ) .
a. rechester and r. white , phys . rev . lett . * 44 * , 1586 ( 1980 ) ; a. rechester , m. rosenbluth and r. white , phys . rev . a * 23 * , 2664 ( 1981 ) .
a.d . galanin , _ the theory of nuclear reactors with thermal neutrons _ , glavatom , moscow , 1959 ( in russian ) .
g. casati , g. maspero and d.l . shepelyansky , relaxation process in a regime of quantum chaos , phys . rev . lett . * 82 * , 524 ( 1999 ) .
s. ruffo and d.l . shepelyansky , phys . rev . lett . * 76 * , 3300 ( 1996 ) .
b.v . chirikov , zh . eksp . teor . fiz . * 110 * , 1174 ( 1996 ) .
r. artuso , correlation decay and return time statistics , physica d * 131 * , 68 ( 1999 ) .
b.v . chirikov and d.l . shepelyansky , asymptotic statistics of poincaré recurrences in hamiltonian systems with divided phase space , phys . rev . lett . * 82 * , 528 ( 1999 ) . | the mechanism of the exponential transient statistics of poincaré recurrences in the presence of a chaos border with its critical structure is studied using two simple models : the separatrix map and the kicked rotator ( microtron ) . for the exponential transient to exist , two conditions
have been shown to be crucial : fast ( ballistic ) relaxation , and a small measure of the critical structure .
the latter was found to include a new peripheral part ( halo ) of a surprisingly large size .
first preliminary empirical evidence is presented for a new regime of poincaré recurrences , including a transition from one exponential statistics to another . * poincaré recurrences in microtron and * + * the global critical structure * + b.v . chirikov +
[ 5 mm ] _ budker institute of nuclear physics + 630090 novosibirsk , russia _ |
next - generation sequencing ( ngs ) offers unparalleled opportunities to study the causes and consequences of transposable element ( te ) activity across an ever - widening range of host species .
consequently , a large number of computational methods have recently been developed to identify both artificially and naturally induced te insertions using ngs data .
these methods use diverse approaches , but share the fundamental aim of assessing whether a particular te insertion is present in a given resequenced genome , or in a pool of resequenced genomes . te insertions discovered in resequenced genomes can either be known ( i.e. , insertions present in the reference genome ) or novel ( i.e. , de novo insertions not present in the reference genome ) .
since known te insertions occupy an identifiable span in the reference genome , representing them as a range of coordinates is no different from representing any other annotated genomic feature , such as genes or regulatory elements .
however , de novo te insertions are by definition not present in the genome sequence and their representation , while conceptually simple , is technically not straightforward . here
i discuss the challenges relating to the reference - based annotation of de novo te insertions .
i then propose a solution for representing de novo te insertions that accommodates known mechanisms of te insertion and established coordinate systems for genome annotation .
before considering issues relating to the reference - based annotation of de novo te insertions , it is necessary to introduce the two major coordinate systems for genome annotation .
the so - called base coordinate system anchors genomic features to nucleotide positions in the genome .
in contrast , the interbase ( also known as zero - based or space - based ) coordinate system anchors genomic features to the spaces between nucleotide positions in the genome . while they may seem trivially different , these two alternate representations have important implications for the mapping of de novo te insertions relative to a reference genome , and often cause confusion in the genomics community . for example , the ucsc genome bioinformatics team provides an answer to a frequently asked question ( http://genome.ucsc.edu/faq/faqtracks.html#tracks1 ) about this issue , since the site uses the base coordinate system ( which they refer to as one - based , fully - closed ) in the ucsc genome browser display but the interbase coordinate system ( referred to as zero - based , half - open ) in its analysis tools and file formats .
base coordinate systems are in many ways more intuitive biologically , since features encoded by specific nucleotides in the genome are mapped to corresponding regions of the reference sequence . as
such , most genome annotation portals ( e.g. , ncbi or ensembl ) , bioinformatics software ( e.g. , blast ) and annotation file formats ( e.g. , gff ) use the base coordinate system .
interbase coordinate systems , despite being biologically non - intuitive , have a number of features that make them more computationally attractive , and thus are used by a growing number of genome bioinformatics systems , such as the ucsc genome browser ( http://genome.ucsc.edu/faq/faqtracks.html#tracks1 ) , chado ( http://gmod.org/wiki/introduction_to_chado#interbase_coordinates ) , and das2 ( http://biodas.org/documents/das2/das2_get.html#segment_ranges ) . to see why many genome informatics systems use the interbase coordinate system , it is first necessary to see how base and interbase coordinates are represented numerically for an annotation that is present in the reference genome ( such as a known te insertion that is present in the reference sequence ) .
let 's assume that we have an annotated feature spanning the nucleotides gggccc in a hypothetical reference genome shown in figure 1a . under the base coordinate system , this feature would be represented as a pair of coordinates : start = 3 and end = 8 . under the interbase coordinate system , the same feature would be represented as start = 2 and end = 8 .
the numerical difference between the two coordinate systems lies in how the start coordinate is represented and how the coordinate range is interpreted .
genome coordinate systems and the annotation of te insertions . the location of an arbitrary genomic feature encoded by the sequence gggccc is represented differently in base and interbase coordinate systems ( a ) . since de novo te insertions occur between bases in the reference genome , they are more naturally represented by interbase coordinate systems . on the widely - used base coordinate system , mapping a de novo te insertion requires the invocation of arbitrary rules ( either before or after the insertion site ) ( b ) .
these arbitrary rules can lead to ambiguity in the mapping and interpretation of de novo te insertions . as noted above , there are several advantages to using the interbase coordinate system , including : ( 1 ) the ability to represent features that occur between nucleotides ( like a splice site or de novo te insertion ) , ( 2 ) simpler arithmetic for computing the length of features ( e.g. , the length of a coordinate span is end - start , rather than end - start+1 as it is for base coordinates ) , ( 3 ) simpler arithmetic for calculating range overlaps , and ( 4 ) more rational conversion of coordinates from the positive to the negative strand ( for further discussion , see http://genomewiki.ucsc.edu/index.php/coordinate_transforms ) .
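to make the coordinate arithmetic above concrete , here is a minimal python sketch ( an illustration , not code from the article ; the genome string and feature are the hypothetical gggccc example of figure 1a ) :

```python
# minimal sketch of the two coordinate conventions discussed above , using
# the hypothetical feature gggccc at base coordinates start = 3 , end = 8
# ( interbase 2 .. 8 ) . genome string is invented for the example .

genome = "atgggcccat"  # hypothetical reference ; base positions 3 - 8 are gggccc

def base_to_interbase(start, end):
    # base ( one - based , fully - closed ) -> interbase ( zero - based , half - open )
    return start - 1, end

def interbase_length(start, end):
    # simpler arithmetic : no "+ 1" correction needed as with base coordinates
    return end - start

def interbase_slice(seq, start, end):
    # interbase coordinates map directly onto python string slicing
    return seq[start:end]

ib_start, ib_end = base_to_interbase(3, 8)
assert (ib_start, ib_end) == (2, 8)
assert interbase_length(ib_start, ib_end) == 6      # = (8 - 3) + 1 in base coords
assert interbase_slice(genome, ib_start, ib_end) == "gggccc"
```

note that interbase coordinates map directly onto python 's half - open slicing , which is one practical reason many genome informatics systems prefer them .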
so why is the choice of coordinate system important for the annotation of de novo te insertions mapped to a reference sequence ?
the short answer is that de novo te insertions are not a part of the reference sequence and occur between nucleotides in the reference coordinate system .
therefore it is intrinsically difficult to accurately represent the location of a de novo te insertion on base coordinates . nevertheless , one - base coordinate systems dominate most genome bioinformatics systems and are an established framework that one has to work within .
so how then should we annotate de novo te insertions on base coordinates ? answering this question leads to several unanticipated considerations , and explains why i believe that a standard must be established in the field of te genomics if we wish to create easily interpretable annotations of de novo te insertions identified using ngs technologies .
moreover , solving this problem is particularly crucial for applications where we wish to map te insertions with nucleotide - level precision , such as extracting information about the exact nature of a te - induced mutation or a detailed understanding of the target site preferences of a te family . to begin , let 's consider a te that inserts between positions x and x + 1 in a genome .
under a base coordinate system , if we wish to map a te insertion to single base resolution , we quickly encounter our first problem .
do we annotate both the start and stop coordinates at position x , or both coordinates at position x + 1 ( fig . 1b ) ?
if we choose to annotate the insertion at position x , then we need to invoke a rule that the te inserts after nucleotide x to interpret this annotation correctly .
conversely , if we choose to annotate the insertion at position x + 1 , then we need to invoke a rule that the te inserts before nucleotide x + 1 to interpret this annotation correctly .
so , should we instead annotate the te as a two base span starting at x and ending at x + 1 , with the interpretation that the insertion occurs between the start and end positions ? this too is an unsatisfactory solution since at face value it incorrectly implies that the te insertion spans two base pairs in the genome or that it is imprecisely mapped .
in addition to the fact that tes insert between bases in the reference genome and therefore present an intrinsic challenge to base coordinate systems , a second problem concerning the annotation of de novo insertions arises from the joint effects of ( 1 ) the presence of target site duplications ( tsds ) and ( 2 ) the sequence information used to map te insertions to a reference genome . first , most tes create staggered cuts in the genomic dna that are filled in on te integration , leading to short tsds at the ends of the te insertion .
tsds , however short they may be , represent duplication of sequence that is present as a single copy in the pre - insertion sequence represented by the reference genome .
second , methods used to map de novo te insertions to precise coordinates in the genome use sequence information in the junction region between a te and its unique flanking sequence ( such methods are sometimes referred to as split - read methods ) .
these te - flank junction sequences can be obtained from either the 5 or 3 end of the te insertion ( fig . 2 ) .
because the tsd is present on both ends of the te insertion but only occurs once in the reference genome , it turns out that where a de novo te insertion is annotated depends on whether one uses the te - flank sequence from the 5 or 3 end and the orientation of the te insertion in the genome .
unique dna in the reference genome ( e.g. , positions 3 - 7 for a 5 bp tsd ) is duplicated on insertion of a te for both insertions on the positive strand ( > > > ) and negative strand ( < < < ) . when ngs reads ( solid gray arrows ) that span the te - flanking region junction are used to map de novo te insertions on the positive strand , the placement of the insertion relative to the tsd differs for reads from the 5 ( after tsd ) and 3 ( before tsd ) ends of the te .
differential annotation of te insertion sites is also observed for negative strand insertions , but placement relative to the tsd is reversed relative to positive strand insertions .
these tsd - induced effects can lead to ambiguity in the mapping and interpretation of de novo te insertions .
an example of how these effects together create problems for mapping te insertions is shown in figure 2 . in this case , imagine that a te creates a five base pair tsd on insertion , represented once in the reference genome but in two places in the genome with the te insertion . for an insertion on the positive strand ( > > > ) , a te - flank sequence from the 5 end is annotated to occur at the 3 end of the tsd .
in contrast , an insertion mapped using information from the 3 te - flank sequence is placed at the 5 end of the tsd . on the other hand , for an insertion on the negative strand ( < < < ) , the opposite effect occurs .
regardless of orientation , te - flank junction sequences from the 5 or 3 end map the te insertion to different locations in the genome , which is highly undesirable and could lead to differences in interpretation among researchers .
in fact , depending on ( 1 ) the orientation of the te insertion and ( 2 ) which end of the te is mapped to the genome , a given target site can lead to a total of four potential mappings . as a consequence ,
both the one- and two - base coordinate representations suggested above to map insertion sites are flawed , since even with consistent rules about mapping from either the 5 or 3 end , tes that insert into the same target site but occur on different strands would be annotated at two different locations .
this is precisely the case for the annotation of artificial p - element insertions into the d. melanogaster genome ( which have the same reference - based mapping problems as te insertions discovered using ngs ) , and why we previously observed an unexpected excess of insertions spaced exactly eight base pairs apart ( the length of the tsd for the p - element ) in the genome annotation on opposite strands for this te . as a solution to the problem of mapping de novo te insertions on base coordinate systems ,
i propose that we abandon the idea of annotating the insertion site and instead annotate the genomic sequence that is duplicated to give rise to the tsd ( the pre - tsd sequence ) .
specifically , i suggest annotating the start and end of the pre - tsd sequence as the feature span and labeling the orientation of the te in the strand field .
this formulation works because the pre - tsd actually does exist in the genome and therefore can be naturally annotated on base coordinate systems .
moreover , this solution bypasses having to choose an arbitrary rule about where to locate the te relative to the tsd , as is required under the one - base / two - base annotation framework ( see , for example , ref . ) .
furthermore , it represents insertions into the same target site , but which occur on different strands , at the same location in the genome .
finally , under this framework one can use both 5 and 3 te - flanking sequence information jointly to map de novo te insertion sites .
in fact , the overlap in genome coordinates between the sequences supporting the 5 and 3 te - flanking regions defines the pre - tsd . this solution is flexible enough to accommodate most mechanisms of te integration , since it requires no prior information about tsd length for a given te family , and it also works for te families that generate variable - length tsds , since the pre - tsd is annotated on a per - insertion basis .
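to make the pre - tsd idea concrete , here is a small python sketch ( an illustration , not code from any published tool ; the interval convention and all numbers are assumptions for the example ) that derives the pre - tsd span as the overlap of the genomic intervals supported by the 5 and 3 junction reads :

```python
# hypothetical sketch : annotate a de novo te insertion by its pre - tsd span ,
# computed as the overlap of the flank intervals supported by junction reads .
# intervals are interbase ( zero - based , half - open ) ; all numbers are invented .

def pre_tsd(flank5, flank3):
    """overlap of the 5 - supported and 3 - supported genomic intervals .
    returns ( start , end ) of the pre - tsd , or None when the two intervals
    do not overlap ( e.g. , a te family that creates no tsd )."""
    start = max(flank5[0], flank3[0])
    end = min(flank5[1], flank3[1])
    return (start, end) if start < end else None

# junction reads anchor the 5 flank up to the insertion and the 3 flank
# from the insertion onward ; for a 5 bp tsd both cover the pre - tsd once .
flank5 = (0, 7)   # genomic span matched by the 5 te - flank junction read ( invented )
flank3 = (2, 12)  # genomic span matched by the 3 te - flank junction read ( invented )
span = pre_tsd(flank5, flank3)
assert span == (2, 7)   # interbase 2 .. 7 = base positions 3 - 7 , length 5
```

because the same overlap is obtained regardless of the te 's orientation or of which end is sequenced , insertions into the same target site receive a single , unambiguous annotation under this scheme .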
one problem left open by this solution is that posed by exceptional te families that do not create a tsd , which do exist .
however , since these families by definition do not generate a tsd , several of the key problems with the one - base / two - base representations discussed above do not apply .
thus either of these strategies could suffice and would be in principle compatible with the pre - tsd annotation scheme advocated here .
i suggest using the one - base representation , with insertions mapped consistently to the x position regardless of strand .
finally , the framework proposed here should be seen not as the ultimate solution to the problem of representing de novo te insertions , but as a step toward establishing a standard for studies that harness the power of ngs technology to answer fundamental questions about the role of tes in functional and evolutionary genomics . by raising the issues relating to the seemingly simple task of mapping tes to a reference genome , it is hoped that further consideration of this matter will lead to the adoption of a general solution that allows for the annotation of te insertions in a concerted and uniform manner in the field . | understanding the causes and consequences of transposable element ( te ) activity in the genomic era requires sophisticated bioinformatics approaches to accurately identify individual insertion sites .
next - generation sequencing technology now makes it possible to rapidly identify new te insertions using resequencing data , opening up new possibilities to study the nature of te - induced mutation and the target site preferences of different te families . while the identification of new te insertion sites is seemingly a simple task , the mechanisms of transposition present unique challenges for the annotation of de novo transposable element insertions mapped to a reference genome .
here i discuss these challenges and propose a framework for the annotation of de novo te insertions that accommodates known mechanisms of te insertion and established coordinate systems for genome annotation . |
for tumors to survive , grow and disseminate , they must be able to secrete critical growth factors and cytokines .
some of these cytokines can positively influence neovascularization or angiogenesis , and others negatively regulate this process .
it is thought that an " angiogenic switch " , or the balance between positive and negative regulators , regulates the process of angiogenesis .
the neovascularization process ultimately serves as a conduit to bring in nutrients that promote growth and metastasis .
many of these cytokines are also used under normal physiological conditions in various cells and tissues ; therefore , direct interference with these cytokines is not a viable option .
it has been recently shown that signaling events mediated by bfgf in endothelial cells target raf-1 to the mitochondria , which protects these cells from apoptosis .
this provides a mechanism that effectively explains why targeting the tumor neovasculature with a mutant raf-1 gene exerts anti - angiogenic effects .
a number of cytokines and growth factor polypeptides have been shown to act as survival factors during angiogenesis , including the acidic and basic fibroblast growth factors ( fgfs ) and vascular endothelial growth factor ( vegf ) .
basic fgf ( bfgf ) and vegf are two of the cytokines that have been most widely studied , because of their ability to induce many physiological responses , including survival and tumor growth . both in vitro and in vivo studies suggest that these mediators play a role in angiogenesis .
furthermore , these factors and their receptors are up - regulated under ischemic conditions in vivo , and administration of these proteins in vivo enhances capillary morphogenesis .
the intracellular signaling components regulated by these cytokines have been studied both in cultured cells and in vivo .
the production of vegf by tumors is known to occur in response to various upstream factors , including hypoxia , elevated concentrations of bfgf , epidermal growth factor ( egf ) , insulin - like growth factor ( igf ) and hydrogen peroxide ( h2o2 ) ( figure 1 ) .
several lines of evidence show that vegf is one of the most important factors in tumor cell survival and neovascularization .
for example , deletion of vegf or its receptor in mice results in the loss of functional blood vessels and early embryonic lethality .
furthermore , blocking vegf or vegf receptor functions can induce regression of tumor vasculature in vivo .
the answer may lie in the fact that tumors secrete multiple cytokines and growth factors .
the apoptotic signals exerted by " intrinsic " and " extrinsic " pathways could be rescued by bfgf and vegf .
these cytokines activate the pak-1 and src kinases , and phosphorylation of specific amino acid residues ( indicated by the lightning symbols ) within the raf-1 kinase signals it to be targeted to the mitochondria , which promotes endothelial cell survival [ adapted from alavi , a. et al . ,
vegf and bfgf act through specific cell surface receptor tyrosine kinases , which both utilize the canonical ras / raf / mitogen - activated protein kinase ( map ) / extracellular - signal - regulated kinase ( erk ) signaling events that link growth factor receptors to nuclear events .
the raf signaling pathway has been highly conserved throughout evolution , and activation of the raf protein kinase is considered to be a primary event in the ras signaling pathway .
depending on the specific stimulus and cell type involved , this signaling pathway can promote cell survival , proliferation , or apoptosis .
the raf genes encode cytoplasmic protein serine / threonine kinases that play a critical role in cell growth and differentiation .
there are three raf genes , c - raf ( raf-1 ) , a - raf and b - raf .
the expression of a - raf and b - raf are known to be somewhat restricted .
structural and functional studies have shown that raf is composed of two distinct domains , an n - terminal ras interacting domain and a c - terminal serine / threonine kinase domain .
the gtp - bound form of ras directly interacts with n - terminal region of raf-1 ( figure 1 ) .
raf-1 has been shown to be phosphorylated on tyrosines 340 and 341 , as well as on serines 43 , 259 , 499 , and 621 and threonine 269 .
another recent study suggested that phosphorylation of threonine 491 and serine 494 , two phosphorylation sites within the catalytic domain of raf-1 , may be required for its activation , but not inhibition .
while these phosphorylation events positively regulate raf-1 activity , phosphorylation induced by protein kinase a and possibly erk may negatively impact raf-1 functions .
recently , alavi et al . demonstrated that bfgf and vegf utilize the same target kinase , i.e. raf-1 , with distinct specificity .
in this elegant study , alavi et al . examined the role of bfgf - and vegf - induced activation of p21-activated protein kinase-1 ( pak-1 ) and src kinase in the activation of raf-1 kinase .
both cytokines induced activation of focal adhesion kinase ( fak ) and erk , but only bfgf - induced phosphorylation of serines 338 and 339 promoted endothelial cell survival .
this pathway requires the action of bfgf through pak-1 kinase , which directly phosphorylates the serine 338 and 339 residues of raf-1 and targets it to the mitochondria ( figure 1 ) . once in the mitochondria , raf-1 engages bcl-2 - mediated anti - apoptotic mechanisms that protect endothelial cells against ' intrinsic ' apoptotic signals .
in contrast to the effects of bfgf , vegf induced src - mediated phosphorylation of tyrosines 340 and 341 .
this required activation of mek-1 and erk1/2 , and conferred protection of endothelial cells against apoptosis induced by exposure to tumor necrosis factor ( extrinsic pathway ) .
it appears that a loss - of - function mutant form of raf-1 i.e. , raf-1 ss338/339aa+yy340/341ff blocks both bfgf- and vegf - mediated protection of endothelial cells against ' intrinsic ' and ' extrinsic ' apoptotic events .
based on this study , it will be of considerable interest to investigate the regulation of raf-1 kinase in response to egf , igf and platelet - derived growth factor ( pdgf ) , and to examine how these growth factors affect endothelial cell survival .
furthermore , it will also be important to evaluate the role of the adhesion receptors αvβ3 and α5β1 integrins in the regulation of raf-1 kinase activity .
previously , hood et al . showed that the nanocrystal - aided targeting of the neovasculature with mutant raf-1 exerts anti - angiogenic effects . together with the recent work by alavi et al . , these studies suggest new possibilities for targeting the tumor neovasculature with small - molecule drugs directed against raf-1 that could promote the apoptosis of endothelial cells and cause regression of the tumor vasculature .
these studies also suggest opportunities for inducing therapeutic angiogenesis in tissues where unwanted apoptosis could be prevented by promoting the translocation of activated raf-1 kinase into the mitochondria . taken together , these studies clearly bring nanotechnology - aided anti - angiogenic molecular therapeutics a step closer to reality .
the author ( k.k.w ) acknowledges research support obtained from american heart association , national council .
k.k.w is a member of mission connect ( tirr foundation ) ; cardiovascular research institute ( cvri ) , texas a & m university system health science center ; and u.t .

a recent study demonstrated that vascular endothelial growth factor ( vegf ) and basic fibroblast growth factor ( bfgf ) activate raf-1 kinase in an experimental neovasculature system .
the study showed that bfgf and vegf activate p21-activated protein kinase-1 ( pak-1 ) and src kinase , respectively .
pak-1 and src kinases phosphorylate specific serine and tyrosine residues within the activation loop of raf-1 kinase .
their findings further suggest that phosphorylation at these sites protects endothelial cells from apoptosis induced by both intrinsic and extrinsic factors .
the tumor neovasculature provides specific molecular markers or " zip codes " .
this group of investigators has previously shown that nanosphere - aided targeting of the neovasculature with mutant raf-1 causes regression of the tumor vasculature .
thus , nanoparticles coated with " zip code "-specific homing biomolecules may be useful for delivering anti - angiogenic molecules that can induce tumor regression .
the notions of quantum liquids and their instabilities are paradigmatic for condensed matter physics @xcite . for multicomponent fluids , an important set of instabilities is associated with interactions between components .
a classic example is the cooper instability of a spin-@xmath0 fermi liquid : even an infinitesimal attractive coupling between fermions of opposite spins drives a phase transition into the bardeen - cooper - schrieffer superconductor @xcite .
a one - dimensional ( 1d ) counterpart of the fermi liquid , the spinful luttinger liquid , has a similar instability , where an attractive inter - spin coupling opens a gap in the spin channel @xcite .
traditionally , the bulk of the discussion on two - species liquids assumed the su(2 ) spin symmetry .
the recent years have witnessed a growing availability of experimental studies of mixtures of unlike particles .
this includes loading ultracold atoms to spin - dependent optical lattices @xcite , and trapping atoms of different masses @xcite or even different statistics @xcite .
while most of the experimental progress so far is in the domain of ultracold atoms , we stress that the relevance of such _ asymmetric _ mixtures is not confined to the realm of cold gases : dealing with more traditional solid - state systems , one faces an asymmetric mixture situation as soon as the fermi level spans several bands ( which _ a priori _ need not be equivalent ) .
this setup is typical for such diverse materials as semi - metallic compounds , mixed - valence materials , organic superconductors @xcite , small radius nanotubes @xcite , and even graphene - based heterostructures @xcite .
a generic question immediately arises : given a two - component mixture , what is the role of ( the lack of ) su(2 ) symmetry ? or , more precisely , does the symmetry between components limit the set of instabilities of a liquid ? clearly , the answer might depend on the universality class of the liquid , and on the particular way the symmetry is broken .
the simplest albeit non - trivial way of breaking the symmetry is to assume species - dependent masses of the particles .
even if we consider the few - body problem , this is known to bring new physics like the efimov phenomenon : while for an equal - mass fermi liquid the only allowed bound state is a cooper pair , three - body bound states ( trimers ) appear once the mass ratio exceeds a certain threshold @xcite .
the atom - dimer scattering is strongly affected by the mass asymmetry @xcite and the ultimate fate of a fermi liquid in presence of the efimov effect is currently an open question being actively investigated @xcite .
all these theoretical considerations are strongly motivated by cold - atom experiments which have recently achieved degeneracy of fermi gases with different masses @xcite and spin - imbalanced two - component fermionic gases @xcite .
the physics of 1d quantum many - body systems offers powerful methods @xcite , both analytical and numerical , to have quantitative predictions on the fate of the luttinger liquid in the presence of perturbations .
the role of mass asymmetry for a two - component luttinger liquid has been investigated in the renormalization group ( rg ) framework originally in the context of solid state physics @xcite , and recently revisited mostly in the context of cold atoms @xcite , and supplemented by numerical investigations @xcite .
overall the consensus was that the only new instability arising due to asymmetry is the collapse ( demixing ) instability for large asymmetry and/or strong interspecies attraction ( repulsion ) .
recently , a novel family of instabilities was predicted @xcite to exist due to the interplay between _ polarization and asymmetry _ : these instabilities only take place for _ polarized _ mixtures of either statistics , and are characterized by the locking of the ratio of the densities to a _ rational _ value .
subsequent work in ref . elucidated the relation of these instabilities and the existence of few - body bound states . a qualitative picture of the mode - locking mechanism and the strong - coupling limit of the trimer formation is given in fig . [ fig : trimers ] .
the latter regime recalls another approach to multi - particle bound states , namely the use of many - color ( @xmath1-component ) fermions @xcite , with which the physics of the trimers shares qualitative features .
this paper is divided in two main parts : the first one investigates in detail the bosonization approach and the mode - locking mechanism mentioned above , while the second is dedicated to the specific but important example of the 1d asymmetric hubbard model using the density - matrix renormalization group ( dmrg ) technique @xcite .
the predictions of the first part account for most of the numerical data , but a more phenomenological bose - fermi picture is proposed as a complementary analysis .
other important questions such as the effect of a trapping potential or the emergence of crystal phases are eventually addressed .
in this section , we describe the salient features of the effective bosonic field theory appropriate to a 1d mixture of two distinct fermionic ( or bosonic ) atoms .
the aim of this section is to give a bosonization interpretation for the formation of few - body bound states and their effective behavior through a mode - locking mechanism between the two species .
predictions on the nature of the resulting phase are then made .
the theory is a priori valid for models in the continuum or the continuous version of lattice models at generic ( i.e. non - commensurate ) densities .
the effects of the presence of the lattice on certain commensurate densities will be briefly discussed in section [ sec : lattice ] .
notation conventions are standard and taken from ref . .
the two species are labeled by a pseudo - spin index @xmath2 and their corresponding densities @xmath3 such that @xmath4 is the total density .
each species can be described by a scalar field @xmath5 and its dual @xmath6 .
the creation operators can be expressed as a function of these fields , with , for fermions @xmath7 and , for bosons , @xmath8 we have included all higher harmonics : as a consequence , the summation is over all integers @xcite .
the `` fermi momenta '' , @xmath9 are a priori not equal to each other , corresponding to a spin - imbalanced situation . the density operators @xmath10 read @xmath11 the effective low - energy hamiltonian can be written in terms of the fields @xmath5 and their canonically conjugate momentum @xmath12 . in the case of absence of inter - species interactions
the effective bosonic theory is given by @xmath13 , where @xmath14\ ; , \label{h0}\ ] ] where @xmath15 is the sound velocity and @xmath16 the so - called luttinger parameter , which is equal to one in the free fermions or free hard - core bosons cases .
taking into account density - density interactions between species , of the generic form @xmath17 , changes the effective theory and brings new kinds of terms : zero - momentum terms in the density representation couples the two spin species through a bilinear operator @xmath18 where @xmath19 is a forward scattering constant , and higher harmonics terms involving multiples of the spatial frequencies @xmath20 : @xmath21}\end{aligned}\ ] ] where @xmath22 are non - universal coupling constants .
clearly , if the generalized commensurability condition @xmath23 is satisfied ( with @xmath24 and @xmath25 coprime integers ) , and provided these terms are relevant , they will tend to lock the up and down fields together .
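as a minimal numerical illustration of the generalized commensurability condition , one can test whether two given densities lock to a simple rational ratio . the function below uses its own coprime pair ( p , q ) with n_down / n_up = p / q ; this labeling is an assumed convention and need not match the paper's assignment of the coprime integers .

```python
from fractions import Fraction

def commensurability(n_up, n_down, max_order=10):
    """return a coprime pair (p, q) such that n_down / n_up = p / q,
    or None if the ratio is not a simple rational up to max_order.
    (illustrative convention only; it need not match the paper's
    assignment of the coprime integers.)"""
    ratio = (Fraction(n_down).limit_denominator(10**6)
             / Fraction(n_up).limit_denominator(10**6))
    p, q = ratio.numerator, ratio.denominator
    return (p, q) if max(p, q) <= max_order else None
```

for instance , densities 0.25 and 0.5 lock with ( p , q ) = ( 2 , 1 ) , the trimer - like commensurability , while densities whose ratio is not a small rational return None .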
when the densities are fine - tuned to the definite commensurability , all other cosine operators in the sum are oscillating , in which case they do not contribute in the continuum limit ( or they are less relevant for multiples of @xmath24 and @xmath25 ) .
the remaining important operator in the sum is thus the sine - gordon term @xmath26 with the combination @xmath27 . for attractive interactions @xmath28 ( we will argue below that this choice favors the relevance of the term ) , the energy will be minimized when the field is pinned to @xmath29 .
notice again that the above argument on the mode - locking mechanism does not rely on the presence of a lattice .
lastly , the cosine locks a combination of the bosonic modes , but at a generic total density @xmath30 there remains another bosonic mode , leaving the excitation spectrum gapless . we will see that the latter describes the effective behavior of the bound states . in the following , we dub @xmath31 this massless bosonic mode .
a last remark on the operators : they have high scaling dimensions near the free - fermion fixed point and are expected to be irrelevant apart from some special circumstances , which are the object of this work . in the fermionic language , they involve @xmath32-body interactions of the form : @xmath33 where the summation over @xmath34 runs over all combinations of @xmath35 momenta due to the total momentum conservation law : @xmath36 .
such interactions appear at high order in perturbation theory in a hubbard model , for example , or after several steps of an rg treatment . for practical purposes it is simpler to work with the bosonic formulation given by eq . , and this is what we do from now on . in the following , we assume that the densities are commensurate via the condition , and analyze the simplified effective theory written in terms of the @xmath37 and @xmath38 fields @xmath39 where the velocities @xmath15 and luttinger parameters @xmath16 are determined by the intra - species interactions . the quadratic part @xmath40 can be diagonalized by a bogoliubov transformation @xcite , which could give a starting point for a perturbative rg calculation @xcite . due to the velocity asymmetry , additional couplings are generated and the velocities are renormalized along the flow .
the discussion of the nature of the gapped phases and their correlations remains unclear .
in particular , diagonalizing the quadratic part @xmath41 of the hamiltonian does not give , apart from special choice of the parameters , the combination that appears in the cosine term @xmath42 .
in the next section , we take the following strategy : we look for the conditions under which the quadratic part and the cosine term are simultaneously diagonalizable . at the price of a restriction on the parameter range , the analysis can be done safely both for the criterion of relevance of the cosine and for the correlation functions in the single - mode phase . in spite of the limitation of the approach , we believe the scenario does occur without this restriction : as shown numerically in sec . [ sec : trimers ] on a realistic model , the single - mode multimer phase can span a wide region of the phase diagram .
lastly , we notice that , similarly to the phase separation criteria in two - component mixtures ( when one of the mode velocities vanishes ) , the single - mode phase will undergo a phase separation instability when the gapless mode velocity @xmath43 vanishes .
we thus expect to find the single - mode phase surrounded with the two - mode phase and a demixed phase .
we have qualitatively discussed the fact that the physics should generically be described by two fields @xmath44 where @xmath45 , with @xmath46 being the one entering the cosine term . in general , it is hard to obtain a complete form for the transformation between the @xmath44 and the @xmath47 . such a transformation is important both for the rg analysis and for the calculation of physical correlators , which are naturally expressed in terms of the @xmath48 fields . below , we discuss a special case where the transformation can be performed , together with its range of validity . the simplest , and yet rather general , transformation one can work with is a linear combination of the fields with coefficients that are independent of the position : @xmath49 when @xmath50 , excitations corresponding to the eigenmodes @xmath51 carry both spin and charge modes which are respectively the sum and the difference of the @xmath37 and @xmath38 modes .
as the transformation must preserve the commutation relations @xmath52 & = i\pi\delta_{\sigma\sigma ' } \delta(x - x ' ) \,,\\ [ \phi_{\sigma}(x ) , \nabla \theta_{\sigma'}(x ' ) ] & = i\pi\delta_{\sigma\sigma ' } \delta(x - x ' ) \,,\end{aligned}\ ] ] we get that @xmath53 .
then , we have @xmath54 with the determinant @xmath55 in a shortened version , we have @xmath56 , where @xmath57 is the matrix of the @xmath58 , and @xmath59 .
if @xmath57 is unitary , the @xmath60 and the @xmath61 undergo the same transformation .
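the constraint that the change of basis preserves the canonical commutators can be checked numerically : if the @xmath5-type fields transform with a matrix m , the dual fields must transform with its inverse transpose , so that the product of the two matrices is the identity . this is a sketch under standard bosonization conventions ; the concrete matrix entries below are arbitrary examples , not the paper's coefficients .

```python
import numpy as np

def dual_transform(M):
    """given the matrix M acting on the phi-like fields, return the
    matrix that must act on the dual theta-like fields so that the
    commutators [phi_i, grad theta_j] = i*pi*delta_ij are preserved."""
    return np.linalg.inv(M).T

# arbitrary example: the first row mimics a 'locked' combination
# phi_up + 2*phi_down, the second an independent combination.
M = np.array([[1.0, 2.0],
              [0.3, -0.5]])
N = dual_transform(M)

# the preservation condition is M @ N.T == identity
assert np.allclose(M @ N.T, np.eye(2))
```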
we now impose that @xmath62 which gives @xmath63 as we want to cancel the cross - terms in eq . , we require that : @xmath64 which can be rewritten as : @xmath65 there exists a non - zero solution only if the condition : @xmath66 is satisfied .
when this condition is satisfied , we have a one - parameter family of transformations with the desirable property of having only one eigenmode in the argument of the cosine operator .
the parameter is just the choice of scale of the field @xmath31 : in 1d , we can change the scale of the bose field provided we change accordingly its luttinger parameter @xmath67 . here , we choose the scale of @xmath31 so that : @xmath68 the condition strongly reduces the range of applicability of the transformation : for a given coupling @xmath19 , the luttinger parameters and velocities of each species must satisfy the above relation .
when the transformation can be used , splits into a free boson field for @xmath69 and a sine - gordon model for @xmath70 : @xmath71 with @xmath72 . in this case ,
the new velocities and luttinger parameters associated with the @xmath73 modes are given by the following relations : @xmath74 where we have defined : @xmath75 with our definition of @xmath76 and provided the sine - gordon description is applicable , the requirement for the cosine to be relevant , and thus to enter the single - mode phase , is simply @xmath77 one qualitatively observes that a velocity much smaller than the other favors a small @xmath78 and that large attractive interactions @xmath79 will help increase @xmath80 and reduce @xmath78 . in the following , we consider limiting cases in which the discussion simplifies , in order to identify how the parameters would favor the formation of a gap in the @xmath70 sector .
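the relevance requirement can be phrased in the standard sine - gordon language : a cosine perturbation is relevant in ( 1 + 1 ) dimensions when its scaling dimension is smaller than 2 . the sketch below assumes a normalization in which the scaling dimension is directly given by a luttinger - like parameter of the locked mode ; this identification is an assumption , since the paper's normalization is fixed by the condition above .

```python
def cosine_relevant(scaling_dim, spacetime_dim=2):
    """a perturbation of scaling dimension `scaling_dim` is
    rg-relevant in (1+1)d when scaling_dim < spacetime_dim."""
    return scaling_dim < spacetime_dim

# under the assumed normalization, a locked-mode parameter below 2
# opens the gap, while a larger one leaves the two-mode massless regime.
assert cosine_relevant(1.5)
assert not cosine_relevant(2.5)
```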
the limit @xmath81 ( attained with attractive interactions ) signals the transition to the phase - separated or falicov - kimball regime from the multimer phase .
when @xmath82 , the condition imposes that either ( i ) @xmath83 or ( ii ) @xmath84 .
the transformation and new velocities and luttinger parameters then take a simple form : in case ( i ) , we have @xmath85 and : @xmath86 while in case ( ii ) , we have @xmath87 and @xmath88 in both cases , having @xmath89 would require a very small @xmath90 ( assuming @xmath91 for example ) .
this could be realized with long - range intra - species interactions but may not be easily achievable .
notice a peculiarity of the formula for the massive mode @xmath31 in : while @xmath92 and @xmath93 are length - scale dependent ( in the rg sense ) , the expression in holds on all length - scales . in order to identify the influence of the velocity ratio on @xmath78 , one can introduce the dimensionless quantities @xmath94 , @xmath95 and @xmath96 . then , and are rewritten as @xmath97 if one takes into account only , @xmath78 vanishes in the limits of large velocity ratio @xmath98 or @xmath99 and passes through a maximum in between , so that there are two windows of @xmath100 such that @xmath89 . the smaller the maximum , the wider these windows are ; clearly , large negative interactions ( @xmath101 ) favor the mode - locking mechanism . yet , imposes another constraint , and we just consider the @xmath102 limit for simplicity . there , this limit is possible provided @xmath103 , i.e. in the case of attractive interactions only . as a consequence , this analysis shows that we should expect the formation of multimers in the attractive and large - interaction regime , favored by large asymmetry .
deep in the massive-@xmath70 phase , one can make a crude quadratic approximation to the cosine operator in by replacing it with a mass term @xmath104 .
this leads to approximate expressions for the velocity and luttinger parameter of the remaining mode @xmath69 : @xmath105 which reduces to the correct result for equal velocities . in one - dimensional models ,
the classification of the groundstates is determined by their dominant correlations .
one can break discrete symmetries ( for instance translational symmetry on a lattice model ) but order parameters associated with continuous symmetries are always zero .
the naming of a phase then corresponds to the connected equal - time correlator with the slowest decay in space .
quite generally , these correlators are asymptotically decaying either algebraically or exponentially .
such algebraic correlations are usually referred to as quasi long - range order ( qlro ) . the slowest - decay ( or smallest - exponent ) criterion is based on an rpa argument considering a set of weakly coupled luttinger liquids @xcite , which shows that order will build up provided the exponent of the correlator is smaller than two , and that the main instability is associated with the smallest exponent .
however , if the green 's function , associated with @xmath106 which is not an order parameter , has the slowest - decaying exponent , an rg analysis shows that coupling the luttinger liquids yields a fermi - liquid phase ( provided again that the decay exponent is smaller than two ) .
if all physical correlators are exponentially decaying ( apart from the density one , which always keeps , at least , a quadratic decay ) , the term liquid is often used .
this approach remains phenomenological , as the higher - dimensional situation is much more involved . in this section ,
we follow the standard practice and consider the correlation functions of various observables to discuss the nature of the phases that are realized in the single - mode and two - mode regimes .
the asymptotic decay of the connected correlation functions associated with the order parameter @xmath107 typically reads @xmath108 with some exponent @xmath109 . in order to compute the correlators , we only keep the first harmonics in eq . and begin with the richer case of fermions , where we use the representation in terms of right and left movers : @xmath110 we use the result that when a field @xmath76 is pinned , @xmath111 and its dual @xmath112 is disordered , leading to an exponential decay . in the case of algebraic correlations ,
the decay exponents are obtained using the result that , for a field @xmath61 described by @xmath113 , the equal - time correlator associated with @xmath114}$ ] behaves asymptotically as @xmath115 we now give the leading contributions of the order parameters as a function of the @xmath73 fields , assuming general transformation coefficients of the @xmath37 and @xmath38 modes : @xmath116 } & \text{green 's function}\\ \label{eq : density } \hat{n}_{\sigma}(x ) & \sim -\mathfrak{p}_{a\sigma}\nabla\phi_a - \mathfrak{p}_{b\sigma}\nabla\phi_b + \lambda^{-1}\cos(2k_{\sigma}x-2(\mathfrak{p}_{a\sigma}\phi_a + \mathfrak{p}_{b\sigma}\phi_b ) ) & \text{density}\\ \label{eq : singlet } \psi_{\uparrow}(x ) \psi_{\downarrow}(x ) & \sim e^{i(k_{{\uparrow}}-k_{{\downarrow}})x } e^{-i[(\mathfrak{p}_{a{\downarrow}}- \mathfrak{p}_{a{\uparrow}})\phi_a + ( \mathfrak{p}_{b{\downarrow}}-\mathfrak{p}_{b{\uparrow}})\phi_b+ ( \mathfrak{t}_{a{\downarrow}}+\mathfrak{t}_{a{\uparrow}})\theta_{a}+(\mathfrak{t}_{b{\downarrow}}+ \mathfrak{t}_{b{\uparrow}})\theta_{b } ] } & \text{singlet pairing}\\ \label{eq : triplet } \psi_\sigma(x ) \psi_\sigma(x ) & \sim e^{2i[\mathfrak{t}_{a\sigma}\theta_{a}+ \mathfrak{t}_{b\sigma}\theta_{b } ] } & \text{triplet pairing}\end{aligned}\ ] ] where @xmath117 is a short - range cutoff . among the multiple combinations of right and
left movers , we have chosen the ones which should lead to the lowest decay exponents , by having the lowest @xmath118 and @xmath30 constant .
they usually correspond to the smallest wave - vector . in the two - mode regime , all correlators are algebraic and the leading one will strongly depend on the actual coefficients of the transformation .
the expressions of eqs . are here understood with general transformation coefficients of the @xmath37 and @xmath38 modes , as one does not necessarily have to impose the restriction , since @xmath76 does not here identify with .
the transformation coefficients can be computed exactly @xcite in the absence of the cosine term .
the correlation functions can as well be computed directly using a green s function approach @xcite . in the presence of , the coefficients will be renormalized in this two - mode phase to unknown values .
this regime is rather generic , with many competing orders depending on the interactions and densities , among which are a fermi - liquid - like phase , a superconducting singlet or triplet fflo phase @xcite ( pairing correlations displaying the typical @xmath119 ) , a spin - density wave ( sdw ) or a charge - density wave ( cdw ) phase .
the case of equal densities , @xmath120 , has the dominant channels @xcite among the superconducting , cdw , and sdw fluctuations . in the cases where spin and charge degrees of freedom separate , cdw and sdw states are mutually exclusive .
furthermore , for su(2)-symmetric models , @xmath121- , @xmath122- and @xmath123-components of the sdw order parameter are degenerate .
these last remarks are no longer valid in our situation .
another regime corresponds to the case where the cosine in eq . is relevant in the rg sense .
then , the system has a massive mode @xmath76 given by , and a massless mode @xmath31 .
the massless mode is described in the low - energy limit by a free bosonic theory with a velocity @xmath124 and a luttinger parameter @xmath125 . in this single - mode luttinger liquid , algebraic decays , when they occur , will be ruled by this @xmath67 luttinger parameter . when the parameters of the problem satisfy eq . , the massless mode can be found explicitly . in this section we use these results to discuss in detail the behavior of the correlation functions . when @xmath76 gets pinned , the above correlators all decay exponentially because of the presence of @xmath112 in their expressions , with the exception of the density one . in particular , all two - body pairing channels are suppressed , even in the presence of attractive interactions . in order to construct an operator which has algebraic correlations , the prefactor in front of @xmath112 must vanish .
this is realized by taking the @xmath32-mer combination @xmath126 ( bound states of @xmath24 @xmath38-fermions with @xmath25 @xmath37-fermions ) which has the prefactor @xmath127 which is clearly zero from : @xmath128 } \;,\\
\intertext{and , in the special case of trimers , } \label{eq : trimer } \psi_{\uparrow}(x)\psi_{\downarrow}(x)\psi_{\downarrow}(x ) & \sim e^{ik_{{\uparrow}}x } e^{i[(\mathfrak{t}_{a{\uparrow}}+ 2\mathfrak{t}_{a{\downarrow}})\theta_{a } + ( \mathfrak{t}_{b{\uparrow}}+2\mathfrak{t}_{b{\downarrow}})\theta_{b } - \mathfrak{p}_{a{\uparrow}}\phi_a - \mathfrak{p}_{b{\uparrow}}\phi_b]}\;,\end{aligned}\ ] ] in which @xmath129 and @xmath130 , @xmath131 are integers accounting for the combination of left and right movers .
we have used a somewhat symbolic notation : by @xmath132 , we mean @xmath133 , where @xmath134 where @xmath117 is the short - range cutoff .
we stress that the family of operators is different from the `` polaronic '' operators introduced in ref . : the latter are constructed specifically to minimize the decay exponents in the massless phase of . on the contrary , the family arises naturally in the massive phase of eq . , as a many - body consequence of the existence of @xmath32-body bound states in the microscopic counterpart of . the effective theory of this @xmath32-mer object is then governed by the gapless mode @xmath69 .
remarkably , as @xmath135 , the exponent is parametrized only by @xmath67 , @xmath136 and @xmath137 . in order to have the smallest exponent , we have to select the combination @xmath138 which minimizes the coefficient in front of @xmath31 ( one can not have the combination @xmath139 ) and which is proportional to @xmath140 .
we list below the coefficients and corresponding wave - vectors for the simplest commensurabilities . the exponent of the propagator of the @xmath32-mer then reads @xmath141 with the effective luttinger parameter @xmath142 . in this phase , the connected density correlations @xmath143 remain algebraic with the following dominant contributions : @xmath144 where @xmath145 are non - universal amplitudes .
the main remarks are that ( i ) the ratio of the zero - momentum fluctuations is exactly @xmath146 while the ratio of the densities is @xmath147 , and ( ii ) the wave - vectors are different , since @xmath148 and @xmath149 , as are their exponents , whose ratio should be exactly @xmath146 . notice that for the sine - gordon model , the ratio of the amplitudes @xmath150 is _ exponentially _ small in @xmath151 @xcite . when @xmath152 , we see that the multimer effectively behaves as a spinless fermion ( as expected from the combination of a total odd number of fermions ) whose fermi level is @xmath153 and luttinger exponent is @xmath154 . for instance , trimers belong to this ensemble .
the effective interaction between these spinless fermions , which are spatially extended objects , is highly non - trivial and certainly depends on the distance , density and microscopic parameters ( a discussion of such interactions in the case of a boson mixture can be found in ref . ) .
however , its overall effect can be captured by @xmath154 with effective repulsion expected when @xmath155 ( dominant cdw fluctuations ) , and effective attraction expected if @xmath156 ( dominant trimer - pairing fluctuations ) .
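the classification of the dominant fluctuations of the effective spinless trimer liquid by its luttinger parameter can be summarized in a small helper . the convention that the free - spinless - fermion point sits at an effective parameter equal to 1 is assumed here ; the paper's symbols are placeholders .

```python
def trimer_phase(K_eff):
    """classify the dominant fluctuations of the effective spinless
    trimer liquid from its luttinger parameter (assumed convention:
    K_eff = 1 is the free-fermion point)."""
    if K_eff < 1:
        return "cdw"                 # effective repulsion between trimers
    if K_eff > 1:
        return "trimer superfluid"   # effective attraction, pairing dominates
    return "free-fermion point"
```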
the latter turns out to be a superfluid phase of trimers . by associating an even total number of fermions , one should effectively expect to build a bosonic - like multimer . yet , we see that , in the propagator of the multimer , one cannot suppress the contribution from the @xmath31 field ( as @xmath157 ) , and the exponent is not simply @xmath158 and thus not simply related to the one of the density correlations , as one would get for a simple bosonic propagator . furthermore , while the momentum distribution of a boson would usually have a peak at zero momentum , we see that this observable will here diverge at @xmath159 .
as previously mentioned , the effective theory under study can as well be applied to the situation where the particles are bosons . in the single - mode phase , a bosonic multimer phase will emerge via the mode - coupling mechanism , and the motivation of this small section is to discuss the form of the corresponding correlators . we assume repulsive interactions in the intra - species channels ( for stability reasons and also to lower the @xmath160 so as to be able to fulfill the @xmath89 requirement ) but attractive interactions in the inter - species channel ( as for the fermions ) .
the boson creator operators are bosonized as @xmath161 ( dropping the higher harmonics term of eq . ) which immediately yields @xmath162}\;.\ ] ] the @xmath32-mer is then a true bosonic molecule with an effective luttinger parameter which is exactly given by .
the density correlations do not depend on the statistics and still have the form of .
so far , we have only considered two - component fluids in the continuum limit , which is expected at generic densities on a lattice or in continuum space . in this section , we briefly discuss the additional effects arising from the presence of a lattice . an underlying lattice with period @xmath163 can be viewed as a periodic external potential , in which the momentum of a particle is only defined modulo the reciprocal lattice vector @xmath164 . therefore , umklapp processes with momentum transfer of a multiple of @xmath164 are allowed at low energy .
if a fermi momentum @xmath20 of a species @xmath165 , is itself a multiple of @xmath164 , i.e. if a density of species @xmath165 is commensurate with the lattice , @xmath166 , with an integer @xmath167 , an additional term @xmath168 appears in the low - energy hamiltonian .
the effects stemming from such a cosine operator _
alone _ are well known : for @xmath169 the cosine is relevant in the rg sense and the system undergoes a mott transition into a density wave state with the unit cell of @xmath167 lattice sites . in a two - component system , it is possible to have two operators of this sort , one for each species . furthermore
, if the densities are such that @xmath170 is an integer ( we set @xmath171 from now on ) for some integers @xmath167 and @xmath172 , there is yet another term in the low - energy hamiltonian , namely @xmath173 ( cf .
eq . ) . here
, we analyze a simple special case where @xmath174 or @xmath175 and @xmath176 , with integers @xmath24 , @xmath25 , @xmath177 and @xmath178 . given
, eq . yields the hamiltonian in the form @xmath179 with @xmath180 , where @xmath181 , ... , @xmath182 are non - universal amplitudes
. the interpretation of eqs . is straightforward : eq .
stems from the condition and is thus insensitive to the presence of the lattice ( cf .
sec . [ sec : modecoupling ] ) ; eqs . and
favor the mott localization of the species @xmath38 and @xmath37 , respectively . on the other hand , the operator is unique to two - component lattice systems and owes its existence to the peculiar commensurability condition .
the physical meaning of is clear : by analogy with sec .
[ subsec : corr ] , it favors the quasi long - range ordering of the operator @xmath183 . , @xmath184 , i.e. @xmath185 and @xmath186 .
for @xmath187 the interaction with the lattice leads to the formation of a trimer crystal state . for larger ( smaller ) values of @xmath93 the system undergoes a phase transition from a massless phase into a mott insulator of the @xmath37 ( @xmath38 ) component .
the trimer operator @xmath188 is always subdominant . ] in the following , for the sake of simplicity , we assume equal velocities of the two components and drop the @xmath189 term . the dominant instability of the massless theory @xmath190 is due to the operator with the largest positive scaling dimension .
depending on the values of @xmath92 and @xmath93 , the following inequalities define which of the operators is relevant : @xmath191 respectively . in figs .
[ latt : fig_tommaso ] and [ latt : fig_nup15ndown25 ] we plot the @xmath192 diagrams corresponding to eqs .
for two values of the densities .
we see that which instability takes place depends on the values of the bare luttinger parameters @xmath92 and @xmath93 , and thus on microscopic details of an underlying lattice model .
numerically , a crystal phase has been reported @xcite in a two - component bosonic hubbard model , and a similar result is presented for the fermionic counterpart in sec .
[ sec : crystal ] for the commensurabilities discussed in fig .
[ latt : fig_tommaso ] .
these phases do correspond to the locking of several combinations of the modes according to eqs . , but they are achieved only at very large asymmetry .
consequently , the above criteria , determined for equal velocities , are not directly applicable in these situations .
the quantitative predictions of eqs . could be relevant to the case of a strongly renormalized @xmath160 , for instance with long - range intra - species interactions .
a striking feature of the phase diagrams [ latt : fig_tommaso ] and [ latt : fig_nup15ndown25 ] is the appearance of multicritical points where several instabilities compete . in the above treatment we have only considered the effect of each operator _
alone_. the interplay between different operators is non - trivial and may lead to consequences not captured by the simple power counting of eqs . . hence , the applicability of the above analysis in the vicinity of the multicritical points is not guaranteed .
there are several possible scenarios of the phase transitions at such multicritical points .
for one thing , it is easy to construct fine - tuned theories where two continuous transitions occur simultaneously .
another possibility is a first - order transition , as has been observed in numerical simulations of higher - dimensional bosonic systems @xcite .
detailed analysis of these multicritical points is beyond the scope of the present paper . for @xmath193 , @xmath194 .
in this case , eqs. allow _ two _ sets of solutions : ( a ) @xmath195 and @xmath196 , and ( b ) @xmath197 and @xmath198 , with @xmath185 and @xmath91 in both cases .
solution ( a ) is always subdominant , while ( b ) dominates in the window @xmath199 . for @xmath200 ,
the dominant instability is the formation of a luttinger liquid of trimers . ]
in this second part , we study the emergence of a trimer phase in a particular microscopic model : the 1d asymmetric attractive hubbard model . after defining the model and providing its phase diagram as a function of the parameters , we discuss some limitations of the bosonization approach for this model and an alternative phenomenological description that completes the interpretation of the obtained data .
we consider two species of fermions whose internal degree of freedom is denoted by a spin index @xmath165 .
they hop on a lattice with spin - dependent amplitudes @xmath201 ( which would experimentally correspond to different optical lattices for each species ) and interact locally only in the inter - species channel through a hubbard term @xmath202 , which we take negative , as suggested by the arguments of sec .
[ sec : boso ] and as a natural choice to favor bonding between particles .
the hamiltonian is then : @xmath203 + u\sum_i n_{i,{\uparrow}}n_{i,{\downarrow}}\;.\ ] ] one of the key parameters for the physics is the ratio between the hoppings @xmath204 . in order to have the possibility of forming trimers , we take the commensurate condition @xmath205 , but the total density @xmath30 varies freely and is another important parameter of the physics . using the notations of sec .
[ sec : boso ] , we thus have @xmath185 and @xmath91 ( the simplest new combination one can have ) . the fermi momenta are @xmath206 and the free - fermion fermi velocities read @xmath207 .
since @xmath208 , the maximum total density one can have for this commensurability is @xmath209 .
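the binding mechanism behind this hamiltonian can be illustrated on a toy system . the following sketch ( hypothetical toy parameters , not the dmrg setup used below ) exactly diagonalizes the asymmetric hubbard chain with one light ( up ) and two heavy ( down ) fermions on a short open chain ; an attractive @xmath202 lowers the ground - state energy with respect to the free case , binding the light particle to the heavy pair :

```python
import itertools
import numpy as np

def ground_energy(L=3, t_up=1.0, t_dn=0.2, U=-4.0):
    """Exact diagonalization of the two-species Hubbard chain (open BC)
    with one spin-up and two spin-down fermions (toy illustration)."""
    up_states = list(range(L))                             # site of the up fermion
    dn_states = list(itertools.combinations(range(L), 2))  # sites of the two down fermions
    idx = {(u, d): k for k, (u, d) in
           enumerate(itertools.product(up_states, dn_states))}
    H = np.zeros((len(idx), len(idx)))
    for (u, d), k in idx.items():
        # on-site interaction: U per site occupied by both species
        H[k, k] += U * (1 if u in d else 0)
        # up-fermion hopping to neighboring sites
        for v in (u - 1, u + 1):
            if 0 <= v < L:
                H[idx[(v, d)], k] += -t_up
        # down-fermion hopping: move one down fermion to an empty neighbor;
        # a nearest-neighbor move crosses no other fermion, so the sign is +1
        for s in d:
            for v in (s - 1, s + 1):
                if 0 <= v < L and v not in d:
                    d2 = tuple(sorted(set(d) - {s} | {v}))
                    H[idx[(u, d2)], k] += -t_dn
    return np.linalg.eigvalsh(H)[0]

e_free = ground_energy(U=0.0)
e_att = ground_energy(U=-4.0)
print(e_free, e_att)  # attraction lowers the energy: e_att < e_free
```

at @xmath254 the two species decouple and the ground - state energy is simply the sum of the two free - fermion energies , which provides a consistency check of the sketch .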
the above hamiltonian has been widely studied in the case of balanced @xcite and imbalanced densities @xcite , but the special commensurability at which trimers emerge has only been investigated for one set of parameters in ref . , showing that the pairing correlations were indeed suppressed , in agreement with the bosonization approach . when the asymmetry is very large , one species behaves quasi - classically ( it gets localized ) and the model is in the regime of the falicov - kimball ( fk ) model @xcite , where there exist many quasi - degenerate states at low energies , analogous to a phase - separation regime .
we generically expect a first - order transition to this segregated ( or demixed ) phase when lowering @xmath210 in the phase diagrams .
the fk regime can display rather rich physics , recently investigated in ref . , which will not be analyzed here : our aim is only to draw the boundary of this regime .
numerically , the transition to the fk regime is rather sharp and all observables clearly display segregation . for @xmath211 , the arguments of sec .
[ sec : boso ] suggest that the two - mode regime will be generically realized .
qualitatively , in a strong - coupling picture where two spin-@xmath38 fermions are localized on neighboring sites , the delocalization of a spin-@xmath37 electron over these sites is favored by attractive interactions , forming a very local trimer state .
this picture is correct at small enough densities and , in fact , at not too large @xmath202 and not too small @xmath210 ; otherwise such bound states agglomerate with other spin-@xmath37 and @xmath38 fermions , leading to the fk regime .
we thus expect the formation of the trimer phase in the vicinity of the fk regime , but at both finite @xmath202 and finite @xmath210 . within the framework of sec .
[ sec : boso ] , and considering that the starting point of bosonization is free fermions , the ratio between the velocities @xmath212 suggests that small @xmath210 clearly favors the formation of trimers while small densities should not . ) for a fixed interaction and density .
the magnitude of the gap ( in units of @xmath213 ) is small in comparison to @xmath202 and @xmath213 .
the grey areas are estimates of the transition points .
_ inset : _ finite size extrapolations of the gap .
the upper dashed curve shows the behavior for @xmath214 when entering in the fk regime . ]
the phase diagrams of the model are numerically determined using standard dmrg with open boundary conditions ( obc ) and keeping up to @xmath215 states . in order to discriminate between the different possible regimes
, we use both `` global '' probes and local observables and correlation functions . among the `` global '' probes
, one can use the trimer gap @xmath216 associated with the formation of the bound state .
it can be defined following ref . as : @xmath217 with @xmath218 the ground - state energy with @xmath219 fermions .
results as a function of the asymmetry @xmath210 for an incommensurate density @xmath220 and a large interaction @xmath221 have been extrapolated to the thermodynamical limit and are given in fig .
[ fig : gap ] .
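the finite - size extrapolation shown in the inset of fig . [ fig : gap ] amounts to a polynomial fit of the gap in the inverse system size . a minimal sketch , on synthetic ( hypothetical ) gap values rather than the actual dmrg energies :

```python
import numpy as np

# hypothetical finite-size gaps; the true values come from dmrg ground-state energies
sizes = np.array([32, 48, 64, 96, 128])
gaps = 0.05 + 0.8 / sizes + 3.0 / sizes**2   # synthetic data with a limit of 0.05

# quadratic fit in 1/L, extrapolated to the thermodynamical limit (1/L -> 0)
coeffs = np.polyfit(1.0 / sizes, gaps, deg=2)
gap_inf = np.polyval(coeffs, 0.0)
print(gap_inf)  # ≈ 0.05
```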
the slow opening of the trimer gap is _ qualitatively _ compatible with the sine - gordon behavior of sec .
[ sec : boso ] although the transformation is not directly applicable for any @xmath210 .
notice that the whole system remains gapless .
the slow opening of the gap makes it difficult to precisely locate the transition point .
in such a situation , a usual approach would be to use the prediction for the critical luttinger parameter @xmath222 at the transition point .
furthermore , the determination of @xmath78 using correlators in the two - mode phase is very difficult , as it would require knowing , and then disentangling , the complicated expressions of the exponents as functions of @xmath78 and @xmath67 to extract them independently . and
@xmath223 , as expected . ]
therefore , we use another global approach to distinguish between the two - mode and single - mode phases , which is particularly well suited for this model , and more generally in similar contexts .
using universal results on the entanglement entropy ( ee ) , the central charge @xmath224 of the model can be extracted , which directly gives access to the number of bosonic modes , without further information on their nature .
hence , we expect @xmath225 in the two - mode regime while @xmath226 in the single - mode trimer phase .
this stair - like expectation in the thermodynamical limit will be smoothed out by finite - size effects .
the central charge is obtained on finite systems using the following ansatz for the ee between a left block of size @xmath121 and a right block of length @xmath227 with obc : @xmath228 where @xmath229 is the chord function @xmath230 , @xmath231 is the local kinetic energy on bond @xmath232 ( obtained numerically ) , and @xmath233 are fitting parameters .
the first log term is the leading and universal one @xcite , while the second accounts for finite - size oscillations due to obc , which can have a significant magnitude @xcite .
it is thus essential to take them into account to improve the quality of the fits . in the end
, there are only three parameters in the procedure , and typical examples in both the two - mode and single - mode phases are given in fig .
[ fig : eeexamples ] .
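since the ansatz above is linear in its three parameters once the block sizes are given , a plain least - squares solve suffices . the sketch below uses a synthetic entropy profile with a known central charge and a hypothetical oscillating stand - in for the local kinetic - energy term ( which in practice is measured in the dmrg simulation ) :

```python
import numpy as np

L = 64
ell = np.arange(1, L)                      # size of the left block (obc)
chord = (2 * (L + 1) / np.pi) * np.sin(np.pi * ell / (L + 1))

# hypothetical stand-in for the local kinetic energy on bond ell
t_osc = (-1.0) ** ell * np.sin(np.pi * ell / (L + 1)) ** 0.5

# synthetic entropy with a known central charge c = 2 (two-mode regime)
c_true, a_true, s0_true = 2.0, 0.15, 0.7
S = (c_true / 6) * np.log(chord) + a_true * t_osc + s0_true

# the ansatz is linear in (c, a, s0): ordinary least squares recovers them
X = np.column_stack([np.log(chord) / 6, t_osc, np.ones_like(chord)])
(c_fit, a_fit, s0_fit), *_ = np.linalg.lstsq(X, S, rcond=None)
print(c_fit)  # ≈ 2.0
```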
systematic fits on finite - size systems provide an estimate of @xmath224 as a function of the parameters . as seen in fig .
[ fig : centralchargeu-4 ] , the @xmath234 curves cross around the transition point .
although we do not have any quantitative prediction for the finite - size corrections of @xmath234 obtained in this way , we can argue that if @xmath235 is smaller than the correlation length associated with the trimer gap , @xmath234 will be larger than one as the system is effectively in a two - mode regime .
thus , @xmath234 should decrease with @xmath235 towards one in the single - mode phase , as observed . in the two - mode phase
, there is no equally simple argument : we only expect that the larger the system , the better the agreement with the continuum limit .
one can also check the effect of the number of kept states on the fits and see that it does not have a dominant effect in this model , which converges well numerically .
we have estimated the transition point by extrapolating the crossing points between successive sizes ( see fig . [
fig : centralchargeu-4]*(a ) * ) as a function of the inverse size ( see fig . [
fig : centralchargeu-4]*(b ) * ) . from this approach and the opening of the gap
, we get a critical value @xmath236 for the mass asymmetry on this cut .
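the crossing - point analysis can be sketched as follows , with synthetic ( hypothetical ) @xmath234 curves standing in for the dmrg data ; crossings between successive sizes are located by linear interpolation and then extrapolated linearly in the inverse size :

```python
import numpy as np

def crossing(x, y1, y2):
    """Linearly interpolated crossing point of two curves sampled on x."""
    d = y1 - y2
    i = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0][0]
    return x[i] - d[i] * (x[i + 1] - x[i]) / (d[i + 1] - d[i])

# hypothetical c(eta, L) curves: larger sizes give steeper steps whose
# finite-size midpoints drift towards the true transition point
eta = np.linspace(0.1, 0.5, 41)
sizes = np.array([32, 48, 64, 96])
eta_c_true = 0.30
crossings = []
for L1, L2 in zip(sizes[:-1], sizes[1:]):
    c1 = 1 + 1 / (1 + np.exp(-L1 * (eta - (eta_c_true + 1.0 / L1))))
    c2 = 1 + 1 / (1 + np.exp(-L2 * (eta - (eta_c_true + 1.0 / L2))))
    crossings.append(crossing(eta, c1, c2))

# linear extrapolation of the crossing points in the inverse size
pair_sizes = np.sqrt(sizes[:-1] * sizes[1:])   # effective size of each pair
slope, intercept = np.polyfit(1.0 / pair_sizes, crossings, 1)
print(intercept)  # extrapolated transition point, ≈ eta_c_true
```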
although the gap is rather small , the trimer region appears to be rather wide .
vs interaction @xmath202 and asymmetry @xmath210 for a system with @xmath237 at four different densities . for @xmath238 ,
the lines with error bars are the ones estimated from figs .
[ fig : gap ] and [ fig : centralchargeu-4 ] .
the @xmath239 cuts correspond to data obtained with a very low but non - zero value @xmath240 . ] using the central charge calculation , one can map out the phase diagram in the @xmath241 plane for a fixed density , or in the @xmath242 plane for a fixed interaction @xmath202 .
results are gathered in fig .
[ fig : phasediagn ] and [ fig : phasediagu ] respectively .
these diagrams display bare data for a given system with a rather large size @xmath237 and the previous estimate of the cut is given as error bars .
these diagrams show that a wide trimer phase can be achieved at large enough interactions and small enough @xmath210 , as expected , and also that low densities strongly favor its formation . at large densities @xmath243 ,
the trimer region vanishes within our grid resolution , so that it is at most confined to a very tiny region between the two - mode phase and the fk regime .
while the large-@xmath244 situation is rather clear , the competition between the three regimes at small @xmath202 is more involved .
indeed , two scenarios can occur in the @xmath241 plane : either the trimer phase always separates the fk and two - mode regimes , corresponding to two boundaries starting from the @xmath245 corner , or there is a critical @xmath246 above which the trimer phase emerges , corresponding to a tricritical point @xmath247 .
we could not numerically discriminate between the two scenarios , but we do find a small trimer region at relatively small @xmath202s ( @xmath248 ) for most densities : we thus have no evidence for a tricritical point with a large @xmath249 . as the density plays a central role in the stabilization of the trimer phase , we give in fig .
[ fig : phasediagu ] the central charge map for a fixed interaction @xmath221 as a function of the total density @xmath30 and the mass asymmetry . a similar question about an intervening trimer phase between the two - mode and fk regimes can be raised .
while the two - mode and fk regimes are clearly separated at small densities , we found that if an intermediate trimer phase exists at large densities up to the @xmath250 point , its extension is particularly small ( not seen within our numerical calculations ) .
in addition to the three main phases , commensurability effects are also present in this diagram . when the maximum density @xmath251 is reached , the @xmath38-band is completely filled while the @xmath37-band is half - filled , leading to a single - mode phase well - captured by the central charge approach .
lastly , as it will be discussed in sec . [ sec : crystal ] ,
a crystal phase ( fully gapped ) exists for the commensurate density @xmath252 at very small @xmath210 and is pointed on fig .
[ fig : phasediagu ] .
other commensurabilities could yield additional crystal - like phases in this diagram but this is beyond the scope of this study . vs asymmetry @xmath210 and total density @xmath30 for fixed interaction @xmath221 on a system with @xmath253 .
the lines with error bars are the ones estimated from figs .
[ fig : gap ] and [ fig : centralchargeu-4 ] . ] in this section , we give the behavior of several observables in order to see how they are affected by the entrance into the trimer phase or the fk regime and , as well , to investigate the effective behavior of the trimer fermion . at total density @xmath238 .
the non - interacting expectation ( @xmath254 line ) has been subtracted in order to unveil the effect of the interaction . ] first , we select a set of local correlators ( living on sites or on bonds ) which illustrate the phenomenological picture of the different parts of the phase diagram .
we compute the local double occupancy @xmath255 , the local trimer operator @xmath256 ( since the light particle is in principle delocalized over two heavier ones ) , and the density correlators @xmath257 and @xmath258 .
this choice of local correlators is well suited to a strong - coupling picture as pairs or trimers should in principle correspond to a narrow bound - state , spread over only a few lattice sites .
these local correlators should then pick up a reasonable weight of the local bound - state .
the results are averaged over all lattice sites and plotted in fig .
[ fig : local_maps ] .
the expectation value at @xmath254 has been subtracted so that the reference state is the free fermions limit at a given @xmath210 ( the pairing or trimer local correlators defined above are obviously non - zero even in the free fermions limit ) .
[ fig : local_maps ] displays behaviors in qualitative agreement with our picture of trimer formation : the @xmath259 density correlator is nearly zero everywhere but in the fk regime , signaling phase separation . on
the contrary , the @xmath260 density correlator increases significantly in the region corresponding to the trimer phase , surrounding the fk pocket , together with the local trimer density @xmath261 .
lastly , we see that the double occupancies , or pairs , acquire a strong weight at negative @xmath202 everywhere in the two - mode and single - mode regions : they are either `` independent '' or embedded in the trimer bound - state . their coherence can only be probed by measuring correlations , as discussed below . and entering the trimer phase along a cut at @xmath221 in the phase diagram ( absolute values are displayed ) .
the inset of * ( a ) * shows the same data but in log - linear scale to highlight the exponential decay . ]
we now turn to the behavior of correlation functions across the phase diagram . from sec .
[ subsec : corr ] , and as already observed for a particular point in refs . , the pairing correlations change from algebraic to exponential decay when entering into the trimer phase .
these correlations are here computed in the singlet channel and for local pairs @xmath262 .
in addition , we compute the trimer correlator using the local trimer operator @xmath263 defined on neighboring sites .
the associated correlation functions @xmath264 and @xmath265 are computed with @xmath266 taken at the center of the chain .
increasing the mass asymmetry along the same cut at @xmath221 as in the previous figures , the suppression of pairing correlations is clearly seen in fig . [ fig : tpcorrelations]*(a)*. on the contrary , trimer correlations , which are subdominant in the two - mode regime , are boosted by smaller @xmath210 , both in amplitude ( as for the local correlators discussed above ) and in the decay exponent , which gets smaller ( see fig . [
fig : tpcorrelations]*(b ) * ) . notice that the wave - vector is the same for both correlators since @xmath267 for this commensurability .
we have tentatively extracted the decay exponents of both correlators by fitting the functions with a power law modulated by cosine oscillations .
the correlation length @xmath268 of the pairing correlator in the trimer phase is obtained using an exponential envelope @xmath269 .
the results are gathered in fig .
[ fig : exponent ] , showing the evolution in both phases .
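the extraction of the decay exponent and of the correlation length @xmath268 can be sketched as follows , on synthetic correlators ; the oscillation wave - vector is assumed known here ( hypothetical value ) and divided out , whereas in practice it would be fitted together with a phase :

```python
import numpy as np

x = np.arange(2, 200).astype(float)
k = 0.3 * np.pi                     # oscillation wave-vector (hypothetical)
mask = np.abs(np.cos(k * x)) > 0.5  # stay away from the nodes of the oscillation

# synthetic trimer correlator: power law modulated by a cosine, exponent 1.3
alpha_true = 1.3
C_trimer = np.cos(k * x) * x ** (-alpha_true)

# synthetic pairing correlator in the trimer phase: exponential envelope, xi = 15
xi_true = 15.0
C_pair = np.cos(k * x) * np.exp(-x / xi_true)

# divide out the known oscillation, then fit straight lines
ratio_t = np.log(np.abs(C_trimer[mask] / np.cos(k * x[mask])))
alpha_fit = -np.polyfit(np.log(x[mask]), ratio_t, 1)[0]   # log-log slope

ratio_p = np.log(np.abs(C_pair[mask] / np.cos(k * x[mask])))
xi_fit = -1.0 / np.polyfit(x[mask], ratio_p, 1)[0]        # log-linear slope
print(alpha_fit, xi_fit)  # ≈ 1.3 and ≈ 15
```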
we must stress that the data computed on a finite - size system display a transition at a lower @xmath210 than in the thermodynamical limit . from sec .
[ subsec : corr ] , we expect that the decay exponent of the trimer propagator is of the form @xmath270 while the @xmath37-density correlations have a decay exponent of @xmath271 .
a first consequence is that the trimer exponent should always be larger than one , which is not reproduced for the lowest @xmath210s and which we attribute to numerical inaccuracies in the fits of the trimer correlations . besides
, inverting @xmath270 to get @xmath154 is subject to strong errors when @xmath272 and does not tell whether @xmath156 or @xmath155 , which is essential for the effective behavior of the trimers . in order to get a better estimate of @xmath154 , we rather use the friedel oscillations of the @xmath37-density operator ,
whose decay exponent @xmath273 is equal to @xmath154 in the trimer phase according to sec .
[ subsec : corr ] .
finite - size system under study .
the exponent of the friedel oscillations of @xmath274 is also displayed , together with the expected trimer exponent derived from it ( see text for discussion ) . ]
even though the approach of sec .
[ subsec : corr ] is not applicable for most parameters , the fact that there exists an effective luttinger exponent @xmath154 describing the physics of the fermionic trimer , with a propagator @xmath270 and friedel oscillations governed by @xmath154 , is more general : the limitation of the bosonization approach is rather that @xmath154 will not take the form of eq . .
local observables are believed to have less numerical errors associated with a finite number of kept states than correlations @xcite .
thus , we fit the friedel oscillations of the @xmath37-density using the following symmetric ansatz @xmath275^{\alpha}}\ ] with @xmath276 and only four fitting parameters @xmath277 , @xmath278 , @xmath25 and @xmath273 . in principle @xmath279 , but at low densities it is usually better to take @xmath25 and @xmath277 as free independent fitting parameters , due to the depletion of the density at the edges which effectively increases it in the bulk .
for instance , one has @xmath280 for free spinless fermions on a finite system with @xmath1 fermions , which is not exactly the average density @xmath281 , particularly at small @xmath281 . ] . in the trimer phase ,
we expect @xmath282 .
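since the ansatz above is linear in the amplitude and phase once the exponent and wave - vector are fixed , a robust fitting strategy is a grid search over the exponent combined with linear least squares for the remaining parameters . a sketch on a synthetic ( hypothetical ) density profile :

```python
import numpy as np

L = 128
x = np.arange(1, L + 1, dtype=float)
chord = ((L + 1) / np.pi) * np.sin(np.pi * x / (L + 1))
k = 2 * np.pi * 0.15          # oscillation wave-vector (hypothetical value)

# synthetic friedel oscillations with a known decay exponent
n0, A, phi, alpha_true = 0.15, 0.05, 0.4, 1.2
n = n0 + A * np.cos(k * x + phi) / chord ** alpha_true

def residual(alpha):
    """Least-squares residual at fixed alpha; the other parameters are linear."""
    X = np.column_stack([np.ones_like(x),
                         np.cos(k * x) / chord ** alpha,
                         np.sin(k * x) / chord ** alpha])
    _, res, *_ = np.linalg.lstsq(X, n, rcond=None)
    return res[0]

# grid search over the only nonlinear parameter
alphas = np.linspace(0.5, 2.0, 151)
alpha_fit = alphas[np.argmin([residual(a) for a in alphas])]
print(alpha_fit)  # ≈ 1.2
```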
some typical fits are plotted in fig .
[ fig : density]*(a)*. from them , we extract the decay exponent @xmath273 and plot it on fig .
[ fig : density]*(b ) * as a function of the total density . a cusp is found around @xmath283 , signaling the transition from the two - mode regime to the trimer phase .
we have seen that a low density favors the formation of the trimer phase .
this figure shows that , in the trimer phase , we have @xmath155 for the larger densities , corresponding to a repulsive effective interaction between trimers ( dominant cdw order of trimers ) .
we observe that the exponent increases with decreasing density , compatible with the fact that at low densities in the trimer regime , the trimers should be close to free spinless fermions having @xmath272 .
bare data for the smallest density on a system with @xmath284 even display an exponent @xmath285 larger than one .
-density for @xmath221 , @xmath286 and @xmath284 for various densities @xmath30 .
full lines are fits using eq . .
* ( b ) * decay exponents obtained from the fits as a function of the density .
the cusp at @xmath287 roughly corresponds to the transition from the two - mode to the single - mode regime . *
( c ) * large exponents at low densities , close to the fk regime when lowering @xmath210 : increasing the size tends to reduce the exponent below one . ] interestingly , the trimers in this model are necessarily objects with a finite extension of at least two sites , and two close trimers may overlap by delocalizing their @xmath37 electrons . the distance dependence and sign of the effective interaction between trimers are non - trivial ; a perturbation theory to derive them looks challenging , as it involves many sites and degrees of freedom . yet , since the trimer phase is found close to the fk regime , we can expect the effective interaction to become attractive close to this boundary , leading to @xmath156 .
such physics would correspond to a superfluid phase of trimers .
this would be physically very remarkable since the microscopic hamiltonian would contain both the formation of bound - states , or molecules , and their effective superfluid behavior .
however , the behavior close to a phase separation at small densities is numerically involved .
indeed , increasing the system size @xmath235 shows that @xmath273 actually tends to decrease below one , as reported in fig .
[ fig : density]*(c ) * , or one enters the fk regime for larger sizes .
we did not find clear evidence of a stabilization of @xmath156 in the thermodynamical limit and interpret the observed @xmath156 as a finite - size effect .
a superfluid droplet picture can be qualitatively put forward .
starting from the fk regime and looking at the local density pattern , one sees that the fermions are clustered into droplets while other parts of the box are empty . approaching the trimer phase from the fk regime tends to increase the size of these droplets to gain kinetic energy .
when a box confinement is present ( finite system with obc ) , it naturally favors the overlap between trimers by depleting the edges , and can prevent the droplets from forming ( for instance if their typical size is larger than the box size ) . increasing further the size of the box ( at constant density ) can lead to droplet formation .
this is a possible interpretation of the data observed in fig .
[ fig : density]*(c)*. in addition , we must stress that there are many competing low - energy states in the fk regime , so dmrg , as an essentially variational method , can be trapped in metastable states .
even though the thermodynamical limit is unclear , it is experimentally motivating to have signatures of superfluidity in mesoscopic confined systems such as those realized in cold - atom setups .
we further mention that a recent careful study of the t - j model on a chain @xcite , which could qualitatively contain a phenomenon similar to pair clustering , did not find evidence for such clustering .
lastly , the comparison of density - density correlations in the @xmath37 and @xmath38 channels is another interesting point of this model .
in fact , the bosonization results of sec . [ subsec : corr ] predict that the exponent of @xmath288 should be four times larger than the exponent of @xmath289 ( if both remain smaller than two ) , and that the dominant wave - vectors should differ by a factor of two .
numerically , the typical behavior for a rather large interaction @xmath221 is given on fig .
[ fig : densitycorr ] for two values of the asymmetry @xmath210 in the single - mode and two - mode regimes .
we see that in the two - mode phase ( fig .
[ fig : densitycorr]*(b ) * ) , the two fluctuations have slightly different exponents and quite different amplitudes ( including the natural factor of four ) . yet , the dominant wave - vectors are both @xmath290 . in the trimer phase ,
the disagreement with the bosonization picture is even worse , since the two densities are locked together and nearly identical ( fig .
[ fig : densitycorr]*(a ) * ) .
this latter fact can not be explained by the @xmath291 decay since the leading term is the oscillating one , with an exponent clearly smaller than two .
yet , it is physically not surprising in the strong - coupling picture of fig .
[ fig : trimers ] : trimers are local bound - states separated by the typical distance @xmath292 , which does correspond to the @xmath293 fluctuations but can not be accounted for by any of the harmonics of the @xmath38-density operator of eq .
( we work at an incommensurate filling ) .
this short - distance binding can not be captured by the bosonization results of sec .
[ sec : boso ] but a phenomenological bose - fermi approach described in the next section can account for this strong - coupling regime .
lastly , the same comment can be made about the friedel oscillations of the @xmath38-component : they are locked to the @xmath37-component in the strong - coupling picture .
one might argue that there could be a crossover from the weak - coupling to strong - coupling picture of fig .
[ fig : trimers ] as @xmath244 increases , so that the bosonization results could be valid in the small-@xmath202 region . however , the trimer region is very narrow at small @xmath202s and we could not find evidence for such weak - coupling behavior in our numerical data , although we can not exclude this possibility . we here propose a simple picture that reconciles the numerical observations with a bosonization approach , at the price of a strong assumption that is physically reasonable at large negative @xmath202 but difficult to justify rigorously starting from the microscopic model .
this picture has for instance been proposed in the large - interaction and low - density limit @xcite .
bose - fermi mixtures in 1d have been extensively studied in recent years @xcite and a similar picture emerges in certain regimes of three - component fermi gases @xcite .
when @xmath246 is large , @xmath37 and @xmath38 fermions naturally form onsite pairs which are effectively hard - core bosons which we label @xmath69 .
we phenomenologically assume that the system is equivalent to a luttinger liquid of hard - core bosons with density @xmath294 , an effective velocity @xmath43 and luttinger parameter @xmath67 , while the remaining unpaired @xmath38 fermions behave as a luttinger liquid of fermions labeled by @xmath295 and with parameters @xmath296 , @xmath297 and @xmath298 .
these two luttinger liquids interact through an effective interaction which will contain terms such as @xmath299 } \;,\ ] ] which have the tendency to lock the fields @xmath300 and @xmath31 together ( with @xmath301 for attractive interaction ) , provided @xmath302 .
such an effect has already been discussed in the context of bose - fermi mixtures @xcite .
clearly , the latter relation is the same as the trimer commensurability condition @xmath303 ( because bosons carry two particles ) so that the formation of trimer is now interpreted as a bound - state between the bosons and the fermions .
following the same reasoning as in sec .
[ sec : fieldtransfo ] , we can introduce a general transformation of the @xmath304 fields into two new fields @xmath305 where @xmath306 .
writing the matrix transformation from the @xmath307 to the @xmath308 as @xmath309 for the @xmath61s and @xmath310 for the dual @xmath60s , we have @xmath311 which is slightly different from eq . .
yet , the 2@xmath312-like fluctuating part of the density correlators for the fermions and the bosons will have the leading contributions ( dropping the @xmath313 terms ) : @xmath314 which have both the same wave - vector associated with the fermi levels @xmath315 and same decay exponents since @xmath316 from the canonical transformation relations . in this picture , the trimer is simply a bound - state between the bosons and the fermions so its propagator reads @xmath317}\,.\end{aligned}\ ] ] from eq . and the determinant of the transformation matrix , which gives that @xmath318
, we obtain that the propagator is of the spinless fermionic type with an effective luttinger parameter @xmath319 .
clearly , both the @xmath295-fermions and @xmath69-bosons propagators become short - range , the latter corresponding to the pairing correlations in the native fermionic model .
consequently , we recover the physics of the trimer phase developed in sec . [
sec : boso ] , with a better agreement with the numerical observations in the strong coupling regime .
however , splitting the initial gas of @xmath38 fermions into two parts can only be done phenomenologically and could be questionable in a microscopic derivation .
this highlights the limitation of the bosonization approach of sec .
[ sec : boso ] at short distances ( high energies ) . , @xmath286 , @xmath320 , and @xmath321 fermions .
* ( a ) * local density profiles . * ( b ) * correlation functions from the center of the trap . _
inset _ : same in log - linear scale . ] in this section , we briefly discuss the conditions favoring the trimer phase in the presence of a parabolic confinement , as used in cold - atom experiments .
our goal is only to exhibit some parameters for which the trimer phase is stabilized and to give some qualitative comments .
the trapping potential is taken into account by adding the quadratic term @xmath322 to eq .
[ eq : hubbard ] , with the trapping frequency @xmath323 and the center of the lattice @xmath324 . according to a local - density approximation ( lda ) picture and using the phase diagram of fig .
[ fig : phasediagu ] , the trimer phase is likely to be found at small enough densities and not too small @xmath210 to prevent the occurrence of the fk regime .
however , we find that the average density of the trapped system depends strongly on the hamiltonian parameters : at fixed particle number @xmath1 and trapping frequency @xmath323 , changing @xmath202 and @xmath210 strongly affects the radius of the cloud and the density at the center .
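the lda reasoning above can be illustrated with a minimal sketch : the trap enters through a local chemical potential , and the local density is read off from a homogeneous equation of state . both the equation of state and the trap parameters below are placeholders for illustration , not the values of the model studied here .

```python
import numpy as np

def density_from_mu(mu):
    """toy equation of state (hypothetical): density grows like sqrt(mu) above threshold."""
    return np.sqrt(np.clip(mu, 0.0, None))

def lda_profile(x, mu0=1.0, V=1e-3, x0=0.0):
    """local-density-approximation profile in a parabolic trap:
    mu_local(x) = mu0 - V*(x - x0)**2, then n(x) = n_hom(mu_local(x))."""
    mu_local = mu0 - V * (x - x0) ** 2
    return density_from_mu(mu_local)

x = np.arange(-64, 65)   # lattice sites around the trap center
n = lda_profile(x)
# the density is maximal at the trap center and vanishes at the cloud edge,
# so different regions of the cloud probe different points of the phase diagram
```

within this picture , the phase realized locally is the one of the homogeneous phase diagram at density @xmath274 , which is why small central densities favor the trimer phase .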
we only exhibit in fig .
[ fig : trapped ] parameters for which the main features of the trimer phase are reproduced in the presence of a parabolic confinement .
the density profiles of fig .
[ fig : trapped]*(a ) * illustrate the locking of the @xmath37 and @xmath38 densities ( up to exactly a factor of two ) , and the emergence of an appreciable density of local trimers ( each local maximum roughly corresponding to a trimer ) . in fig .
[ fig : trapped]*(b ) * , the pairing and trimer correlations differ strongly from those of a superfluid phase : we find dominating trimer correlations and exponential pairing correlations , as in the homogeneous counterpart . in agreement with the lda picture ,
since we have seen that @xmath154 decreases with density , the trimer correlations are boosted at long distances .
similarly , the pairing correlations decrease slightly faster than an exponential close to the edge of the cloud .
these results are encouraging in the perspective of a possible achievement of the trimer phase in actual experiments .
[ figure : crystal phase ( @xmath252 ) . * ( a ) * short - range behavior of both pairing and trimer correlations . * ( b ) * ordering of the local density @xmath274 with the expected period of three sites . ]

according to the analysis of sec .
[ sec : lattice ] , a crystalline phase of trimers can occur in this lattice model when the total density @xmath30 is commensurate .
evidence for this scenario , together with a phase diagram for @xmath252 , has been proposed in ref . for a mixture of two - component bosons with large enough asymmetry . as
the order parameter ( the density ) associated with this transition is independent of the statistics , we expect a similar scenario ( see sec .
[ sec : lattice ] ) and a similar location of the transition in the fermionic version of the model under study .
indeed , we give in fig . [
fig : crystal ] an example of the crystal phase .
notice that very small @xmath210 are required to stabilize such a phase .
we have not investigated the extent of this phase , which should be rather small on the scale of the phase diagram of fig .
[ fig : phasediagu ] , nor its neighboring phases , which could be either the two - mode ll or the trimer phase .
a crude argument can be proposed to understand this crystallization within the bose - fermi picture of sec .
[ subsec : bosefermi ] : when the mass asymmetry is very large ( very small @xmath210 ) , it is reasonable to assume that the mass of the boson will be essentially that of the heaviest particle , which is the same as that of the unpaired fermions , so that @xmath325 . in terms of commensurability effects , one has @xmath326 , so that standard umklapp terms at @xmath327 do not account for the crystallization .
one rather has to look for higher order terms with commensurabilities such as @xmath328 , which are typically associated with terms like @xmath329 in addition to the one of eq . .
such a term can lock the field @xmath330 and make the system fully gapped .
lastly , we would like to stress that such commensurabilities are rather surprising in terms of the initial fermion densities as they belong to _ odd _ filling fractions @xmath331 and @xmath184 .
in summary , the consideration of unusual commensurability conditions in density - density interactions for 1d two - component gases leads to a very rich physics with the possibility of building bound - states of @xmath32-particles as the leading order .
such a mode - locking mechanism can be described within the framework of luttinger liquid theory , which reveals the main ingredients needed to stabilize such a new phase . in particular , mass or velocity asymmetry
is shown to efficiently drive the transition into the multimer phase .
novel fully gapped phases are proposed when taking into account umklapp couplings specific to lattice models at commensurate densities .
these ideas are illustrated and confronted with the asymmetric 1d attractive hubbard model for the special commensurability @xmath332 for which the formation of trimers is found .
the features of the phase diagram are computed , displaying the important role of the density in favoring the trimer phase .
the behavior of trimers , which are effectively spinless fermionic objects , is very sensitive to the density and the mass asymmetry .
although the model seems to have promising features to sustain a superfluid phase of trimers , we did not find clear evidence for it in the thermodynamical limit , while finite - size systems display a `` superfluid droplets '' physics .
notice that superfluidity of bound - states made of four fermions ( quartets ) can be achieved reliably in 1d with a four - color hubbard model @xcite .
there , the bound - states are bosons whose natural `` free '' regime ( attained in the low - density limit ) is a superfluid phase
. a superfluid phase of trimers would , in this respect , be even more exotic , but it is in strong competition with phase separation .
lastly , we found that a trapping confinement supports the trimer phase for reasonably high densities and that surprising crystal phases can emerge at commensurate densities .
we would like to thank giuliano orso for earlier collaborations on related problems .
g. r. thanks françois crépin , fabian heidrich - meisner , and alexei kolezhuk for fruitful discussions .
e. b. gratefully acknowledges the hospitality of lptms , where the majority of this work was done .
we have benefited from the supports of the _ institut francilien de recherche sur les atomes froids _ ( ifraf ) and anr under grant 08-blan-0165 - 01 .
j. levinsen , t. g. tiecke , j. t. m. walraven , and d. s. petrov , phys . rev . lett . * 103 * , 153202 ( 2009 ) ; f. werner and y. castin , arxiv e - prints , 1001.0774 ; d. blume and k. m. daily , phys . rev . lett . * 105 * , 170403 ( 2010 ) ; y. castin , c. mora , and l. pricoupenko , phys . rev . lett . * 105 * , 223201 ( 2010 ) .

g. b. partridge , w. li , r. i. kamar , y .- a . liao , and r. g. hulet , science * 311 * , 503 ( 2006 ) ; g. b. partridge _ et al . _ , phys . rev . lett . * 97 * , 190407 ( 2006 ) ; y .- a . liao _ et al . _ , nature * 467 * , 567 ( 2010 ) .

c. wu , phys . rev . lett . * 95 * , 266404 ( 2005 ) ; p. lecheminant , e. boulat , and p. azaria , phys . rev . lett . * 95 * , 240402 ( 2005 ) ; s. capponi , g. roux , p. azaria , e. boulat , and p. lecheminant , phys . rev . b * 75 * , 100503 ( 2007 ) ; a. rapp , g. zaránd , c. honerkamp , and w. hofstetter , phys . rev . lett . * 98 * , 160405 ( 2007 ) ; p. azaria , s. capponi , and p. lecheminant , phys . rev . a * 80 * , 041604 ( 2009 ) ; t. sogo , g. röpke , and p. schuck , phys . rev . c * 81 * , 064310 ( 2010 ) .

k. yang , phys . rev . b * 63 * , 140511 ( 2001 ) ; h. hu , x .- j . liu , and p. d. drummond , phys . rev . lett . * 98 * , 070403 ( 2007 ) ; g. orso , phys . rev . lett . * 98 * , 070402 ( 2007 ) ; a. e. feiguin and f. heidrich - meisner , phys . rev . b * 76 * , 220508 ( 2007 ) ; m. tezuka and m. ueda , phys . rev . lett . * 100 * , 110403 ( 2008 ) ; g. g. batrouni , m. h. huntley , v. g. rousseau , and r. t. scalettar , phys . rev . lett . * 100 * , 116405 ( 2008 ) ; a. lüscher , r. m. noack , and a. m. läuchli , phys . rev . a * 78 * , 013637 ( 2008 ) ; f. heidrich - meisner , g. orso , and a. e. feiguin , phys . rev . a * 81 * , 053602 ( 2010 ) .

n. laflorencie , e. s. sørensen , m .- s . chang , and i. affleck , phys . rev . lett . * 96 * , 100603 ( 2006 ) ; i. affleck , n. laflorencie , and e. s. sørensen , j. phys . a * 42 * , 504009 ( 2009 ) ; j. cardy and p. calabrese , j. stat . mech . * 2010 * , p04023 ( 2010 ) . | we consider two - component one - dimensional quantum gases at special imbalanced commensurabilities , which lead to the formation of multimers ( multi - particle bound - states ) as the dominant order parameter .
luttinger liquid theory supports a mode - locking mechanism in which mass ( or velocity ) asymmetry is identified as the key ingredient to stabilize such states . while the scenario is valid both in the continuum and on a lattice , the effects of umklapp terms relevant for densities commensurate with the lattice spacing are also mentioned .
these ideas are illustrated and confronted with the physics of the asymmetric ( mass - imbalanced ) fermionic hubbard model with attractive interactions and densities such that a trimer phase can be stabilized .
phase diagrams are computed using density - matrix renormalization group techniques , showing the important role of the total density in achieving the novel phase .
the effective physics of the trimer gas is studied as well .
lastly , the effect of a parabolic confinement and the emergence of a crystal phase of trimers are briefly addressed .
this model has connections with the physics of imbalanced two - component fermionic gases and bose - fermi mixtures as the latter gives a good phenomenological description of the numerics in the strong - coupling regime . |
fetal alcohol spectrum disorders ( fasd ) are characterized by a broad range of physical and behavioral impairments , including poorer learning and memory ( burden et al . , 2005 ; jacobson et al . , 1993 ; mattson et al . , 2011 ) and lower iq ( jacobson et al . , 2004 ; mattson et al . , 1997 ) .
fetal alcohol syndrome ( fas ) , the most severe fasd , is characterized by a distinctive craniofacial dysmorphology , including a flat philtrum , thin upper lip and small palpebral fissures , smaller head circumference and growth retardation ( hoyme et al . , 2005 ) .
a partial fas ( pfas ) diagnosis requires the presence of at least two of the facial features as well as either small head circumference , retarded growth , or neurobehavioral deficits and confirmation that the mother drank during pregnancy . heavily exposed ( he )
nonsyndromal children may also exhibit neurobehavioral and attention deficits but are more difficult to identify because they lack the characteristic facial features ( hoyme et al . , 2005 ) .
in the 5-year follow - up assessment of the cape town longitudinal cohort , we found a remarkably striking deficit in eyeblink conditioning performance in children with prenatal alcohol exposure ( jacobson et al . , 2008 ) , findings subsequently confirmed in a school - aged cohort ( jacobson et al . , 2011a ) .
none of the children in the longitudinal cape town sample with full fas met criterion for delay conditioning ( blinking in anticipation of the air puff in at least 40% of the trials in a given session ) at the end of three training sessions at 5 years , compared to 75% of the healthy controls .
only 33.3% of the children with pfas and 37.9% of the he nonsyndromal children met criterion for conditioning .
eyeblink conditioning is a nonverbal elemental learning paradigm , in which a conditioned stimulus ( cs ) , typically a pure tone , is presented 500 ms before a brief air puff to the eye ( unconditioned stimulus ( us ) ) that elicits a reflexive blink .
after repeated pairings , the tone comes to elicit a conditioned eyeblink response just prior to the puff , as the subject is able to use the cs to anticipate the timing of the onset of the air puff .
the cerebellar - brain stem neural pathways that mediate eyeblink conditioning have been studied extensively in animal models ( christian and thompson , 2003 ; lavond and steinmetz , 1989 ) .
successful conditioning relies on a well - functioning cerebellar - mediated internal timing mechanism in order to produce responses with millisecond accuracy .
alcohol - related eyeblink conditioning deficits have also been demonstrated in rodents and sheep ( goodlett et al . , 2000 ;
stanton and goodlett , 1998 ) and in another human study ( coffin et al . , 2005 ) .
the cerebellum has been identified as playing a key role in producing and maintaining timed movements with millisecond accuracy ( ivry et al .
, 2003 ; tesche and karhu , 2000 ) . ivry and keele ( 1989 )
used a paced / unpaced finger tapping task during which subjects were required to maintain a rhythm after a pacing metronome terminated to compare performance among patients with parkinson 's disease , cerebellar- , cortical- and peripheral neuropathy , and healthy controls .
patients with cerebellar lesions performed worst of all , with a 50% increase in the standard deviation ( sd ) of the inter - tapping interval ( iti ) compared to controls .
subsequently , it was demonstrated that poor maintenance of rhythm in patients with lateral cerebellar lesions was attributable to deficits in the internal timing mechanism ( wing et al .
, 1984 ) , whereas in patients with medial cerebellar lesions it was attributable to impaired motor response ( ivry et al . ) . later , it was confirmed that cerebellar patients showed greater temporal variability during rhythmic discrete movements , but no timing deficits during continuous finger movement ( spencer et al . , 2003 ) .
key areas identified as being involved in timed movements in adults using functional mri ( fmri ) include superior vermis and cerebellar lobules v / vi , all of which show greater activation during discrete finger flexion / extension compared to continuous movements ( spencer et al . , 2007 ) .
another study ( 2005 ) performed a conjunction analysis to localize brain regions involved in timing , independent of the effector used .
six tasks were performed by the subjects , including sequential bilateral finger tapping , bilateral isochronous finger tapping , and sequential and isochronous silent speech paced by auditory stimuli .
fmri results showed increased ipsilateral activation in vermis v / vi and lateral lobule vi during timed activity .
neuroimaging studies have indicated that children often activate different or more extensive neural circuitry when performing simple tasks , compared with adults ( davis et al . , 2009 ; konrad et al . , 2005 ; meintjes et al . ) .
similarly , children have been shown to activate more cerebellar regions than adults during unpaced rhythmic finger tapping , including right lobule viib and ix , bilateral crus ii and vermis vi , viib , viii and crus ii ( de guio et al . ) .
we were interested in examining whether the impaired eyeblink conditioning performance observed in children with fasd may , in part , be attributed to a deficit in the internal timing mechanism in these children and whether children prenatally exposed to alcohol recruit areas involved in the maintenance of timed responses with millisecond accuracy to the same extent as controls .
we used fmri in children prenatally exposed to alcohol and healthy non- or minimally - exposed controls during a finger tapping task , which interleaves blocks of rhythmic and non - rhythmic tapping in response to an auditory cue , to examine differences in cerebellar blood oxygen level dependent ( bold ) activations related to timing in these children .
we hypothesized that significant differences in activation between the rhythmic and non - rhythmic conditions would be seen between the children prenatally exposed to alcohol and the control children in areas involved in the maintenance of timed responses .
pregnant women from the cape coloured ( mixed ancestry ) community in cape town , south africa , were recruited between 1999 and 2002 at their first visit to an antenatal clinic ( jacobson et al . , 2008 ) .
the incidence of fasd in this population is among the highest reported in the world ( may et al . , 2000 , 2007 ) . the cape coloured population , comprised of descendants of white european settlers , malaysian slaves , khoi - san aboriginals , and black africans , historically constituted the large majority of workers in the wine - producing region of the western cape .
the high prevalence of fas in this community is attributable to very heavy maternal drinking during pregnancy ( croxford and viljoen , 1999 ; jacobson et al .
, 2006 ; jacobson et al . , 2008 ) , due to poor psychosocial circumstances and residual impact of the now - outlawed dop system , in which farm laborers were paid , in part , with wine . all pregnant women who reported consuming at least 14 standard drinks / week or engaging in binge drinking ( at least 5 drinks / occasion ) during pregnancy were invited to participate in the study .
in addition , pregnant women who abstained or drank minimally during pregnancy were invited to participate as controls .
women younger than 18 years of age , as well as women with diabetes , epilepsy , or cardiac problems requiring treatment , and religiously observant muslim women , whose religious practices prohibit alcohol consumption , were excluded from the study .
infant exclusionary criteria were major chromosomal anomalies , neural tube defects , multiple births , and seizures .
maternal alcohol consumption was assessed using a timeline follow - back approach ( jacobson et al . , 2002 ) .
at recruitment the mother was interviewed regarding the incidence and amount of her drinking on a day - by - day basis during a typical 2-week period at time of conception .
she was also asked whether her drinking had changed since conception ; if so , when the change occurred and how much she drank on a day - by - day basis during the preceding 2-week period .
this procedure was repeated in mid - pregnancy and again at 1 month postpartum to provide information about drinking during the latter part of pregnancy .
volume was recorded for each type of beverage consumed each day , converted to absolute alcohol ( aa ) using multipliers proposed by bowman et al .
( 1975 ) , and averaged to provide three summary measures of alcohol consumption at conception and during pregnancy : average ounces of aa consumed / day , aa / drinking day ( dose / occasion ) and frequency ( days / week ) . the number of cigarettes smoked on a daily basis , as well as the frequency of marijuana and other drug use were also recorded .
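the three summary measures described above can be sketched as follows , working from a day - by - day record over a 14-day reporting window . the absolute - alcohol multipliers in this sketch are placeholders for illustration , not the values of bowman et al . ( 1975 ) .

```python
# hypothetical oz-of-absolute-alcohol per oz of beverage (placeholder values)
AA_PER_OZ = {"beer": 0.04, "wine": 0.12, "liquor": 0.40}

def summary_measures(daily_log):
    """daily_log: list of 14 dicts mapping beverage -> ounces consumed that day.
    returns oz AA/day, AA/drinking day (dose/occasion), and drinking days/week."""
    aa_per_day = [
        sum(AA_PER_OZ[bev] * oz for bev, oz in day.items())
        for day in daily_log
    ]
    drinking_days = [aa for aa in aa_per_day if aa > 0]
    n_days = len(aa_per_day)
    return {
        "aa_per_day": sum(aa_per_day) / n_days,
        "aa_per_drinking_day": (sum(drinking_days) / len(drinking_days)
                                if drinking_days else 0.0),
        "drinking_days_per_week": 7 * len(drinking_days) / n_days,
    }
```

for example , a mother reporting 60 oz of beer on two days of a 14-day window would score 1.0 drinking day / week , with dose / occasion given by the beer multiplier times 60 .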
each child was examined for growth and fas dysmorphology by two u.s .- based expert dysmorphologists following the revised institute of medicine criteria ( hoyme et al . , 2005 ) during a 6-day clinic in 2005 ( jacobson et al . , 2008 ) .
four children who did not attend the clinic ( 1 fas , 2 he and 1 control ) were examined by a cape town - based dysmorphologist with expertise in fas diagnosis .
there was substantial agreement among the dysmorphologists on the assessment of all dysmorphic features , including the three principal fetal alcohol - related characteristics : philtrum and vermilion , measured using the lip - philtrum guide ( astley and clarren , 2001 ) , and palpebral fissure length ( median r = 0.78 ) .
each of the children was assigned to one of the following diagnostic groups at a case conference ( conducted by heh , lkr , swj , cdm , and jlj ) : fas , pfas , nonsyndromal he , or control .
the mother and child were transported to our university of cape town ( uct ) child development research laboratory by a staff driver and research nurse for the iq and eyeblink conditioning ( ebc ) assessments , which were administered by an ma - level neuropsychologist .
iq data were collected from the children on the wechsler intelligence scale for children - iv ( wisc - iv ) at 10 years ( diwadkar et al . , 2013 ; jacobson et al . , 2011b ) .
in the 5-year follow - up of the children from our longitudinal cohort , we administered the junior south african individual scales ( jsais ; madge et al . , 1981 )
, which is available in afrikaans and english and has been normed for south african children .
159 of those children were administered the wechsler intelligence scales for children , 4th ed .
iq scores from the jsais were strongly correlated with the wisc scores , r = 0.73 , p < 0.001 , confirming the validity of our translation of the wisc for use with this population ( jacobson et al . , 2011a ) .
ebc assessments were administered using a commercially available human ebc system ( model # 2325 - 0145-w , san diego instruments , san diego , ca ; see jacobson et al . , 2008 , 2011a ) .
facing a monitor displaying a video , the child wore a light - weight headgear , which supported a flexible plastic tube that delivered an air puff to the right eye and a photodiode which measured eyelid closure .
two 50-trial sessions were administered on the same day about 2 h apart with two more sessions on a second day within the same week . in delay
ebc , the air puff was administered during the last 100 ms of the 750 ms tone .
the trace conditioning procedure , which was administered 1.3–1.8 years after the delay task , was the same as in the delay task except that a 500-ms stimulus - free interval occurred between the offset of the 750-ms tone and the onset of the air puff .
eyeblinks executed within 350 ms prior to the air puff onset were considered crs .
ebc performance was assessed here in terms of percent conditioned responses during the third ebc session .
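the conditioned - response score described above can be sketched as follows . a blink within the 350 ms window before air - puff onset counts as a cr , and the session score is the percentage of trials containing at least one cr . the timing convention ( tone onset = 0 ms , puff onset = 650 ms in the delay task , i.e. during the last 100 ms of the 750 ms tone ) is our reading of the text , not an exact reproduction of the scoring software .

```python
def percent_crs(trials, puff_onset_ms=650.0, window_ms=350.0):
    """trials: list of lists of blink-onset times, in ms from tone onset.
    a trial counts as a CR trial if any blink falls in the 350 ms window
    immediately preceding puff onset."""
    def has_cr(blinks):
        return any(puff_onset_ms - window_ms <= t < puff_onset_ms
                   for t in blinks)
    return 100.0 * sum(has_cr(b) for b in trials) / len(trials)
```

with this convention , a blink at 400 ms is a cr ( it anticipates the puff ) , while a blink at 700 ms is a reflexive response to the puff itself .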
mothers and children were transported on a separate day to the cape universities brain imaging centre ( cubic ) for neuroimaging .
82 ( 10 fas , 19 pfas , 29 he , 24 controls ; 47 boys ) right - handed children were scanned on the 3 t allegra ( siemens , erlangen , germany ) mri scanner at cubic between january 2009 and december 2011 ( mean age ± standard deviation ( sd ) = 10.7 ± 0.6 years , age range 9.5–12.0 ) .
we acquired high - resolution structural images and functional mri data during rhythmic and non - rhythmic finger tapping .
all examiners were blind regarding prenatal alcohol exposure history and fasd diagnosis during the uct and cubic assessments , except for a few severe cases .
the experimental tasks were programmed using e - prime software ( psychology software tools , inc . , pittsburgh , usa ) and
were presented through a wave guide in - line with the bore of the magnet in the rear wall of the scanner room using a data projector and a rear projection screen mounted at the end of the magnet bore .
the child was able to talk to the examiner using an intercom that is built into the scanner and could stop the scan at any time by squeezing a ball held in his / her left hand .
all children were accompanied into the scanner room by a research nurse / assistant who stayed with them throughout the scan .
all children practiced the task before the scan to ensure that they understood the instructions and could perform the task .
children also lay down in a mock scanner prior to the scan to listen to a recording of the scanner noises , which helped reduce anxiety .
the experimental task was designed to distinguish between brain regions activated during rhythmic tapping compared to non - rhythmic tapping .
our task , based on the paradigm ( fig . 1 ) used by lutz et al . ( 2000 ) , employed an auditory rather than a visual stimulus .
each block comprises two different active conditions ( rhythmic and non - rhythmic finger tapping ) interleaved with rest blocks .
the children are instructed to press a button with their right index finger every time they hear a tone . the first block is preceded by a rest block of 8 s , during which four dummy scans are acquired and an instruction to get ready is displayed . during the rhythmic blocks , tones are equally spaced ( sd = 0 ms ) with an inter - stimulus interval ( isi ) of 736 ms .
the non - rhythmic blocks comprise tones at irregular intervals ( mean isi = 736 ms , sd = 256 ms ) .
both the rhythmic- and non - rhythmic blocks last for 16 s and are interleaved with 10 s of rest between active blocks .
each set of blocks ( rhythmic , rest , non - rhythmic , rest ) is repeated four times .
the principal performance measure is rhythmicity of tapping , determined by averaging for each condition the sds of the inter - tap intervals ( itis ) within each block of that condition .
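the rhythmicity measure described above can be sketched as follows : for each block , take the standard deviation of the inter - tap intervals ( itis ) , then average the per - block sds within a condition . the text does not specify sample vs. population sd ; the sketch uses the sample sd .

```python
import statistics

def block_iti_sd(tap_times):
    """sd of the inter-tap intervals within one block (tap times in ms)."""
    itis = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return statistics.stdev(itis)  # sample sd; an assumption, not from the text

def condition_rhythmicity(blocks):
    """blocks: list of tap-time lists, one per block of a condition.
    the condition score is the mean of the per-block ITI sds."""
    return statistics.mean(block_iti_sd(b) for b in blocks)
```

a perfectly paced block ( taps every 736 ms ) therefore scores 0 , and larger scores indicate less rhythmic tapping .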
high - resolution t1-weighted structural mr images were acquired using a 3d echo planar imaging ( epi ) navigated ( tisdall et al . , 2009 )
multi - echo mprage ( van der kouwe et al . , 2008 ) sequence that had been optimized for morphometric analyses using freesurfer software .
imaging parameters were : fov 256 × 256 mm ; 128 sagittal slices ; tr 2530 ms ; te 1.53/3.21/4.89/6.57 ms ; ti 1100 ms ; flip angle 7° ; voxel size 1.3 × 1.0 × 1.3 mm .
the 3d epi navigator provided real - time motion tracking and correction , which served to substantially reduce the presence of any motion artifacts in structural imaging data , despite significant subject motion .
a t2*-weighted gradient - echo epi sequence was used to acquire 114 functional volumes sensitive to bold contrast ( tr 2000 ms , te 30 ms , 34 interleaved slices , 3 mm slice thickness , gap 1.5 mm , fov 200 × 200 mm , in - plane resolution 3.125 × 3.125 mm ) while the children performed the task . despite the low resolution of the fmri data , the cerebellar analysis succeeded in resolving the complex geometry of the cerebellum and its respective lobules .
all procedures were performed according to protocols that had been approved by the institutional review board of wayne state university and the faculty of health sciences human research ethics committee at the university of cape town .
all parents / guardians provided informed written consent , and all children provided oral assent . to ensure that only data from blocks in which the child was fully engaged in the task
were included in the fmri data analysis , we applied performance criteria based on inspection of the distribution of the sds of the itis in the rhythmic and non - rhythmic blocks .
sds displayed a bimodal distribution and the local minimum was used to select thresholds for each condition . in the rhythmic tapping condition , only blocks with sds less than 150 ms , mean itis between 500 and 1000 ms , and 6 or fewer missed taps were included in the analyses .
itis during the rhythmic blocks that exceeded 1200 ms were assumed to occur due to one or more missed taps , which occasionally occurred when a child did not press the button firmly enough . in such instances , for the purposes of computing sd
, additional taps were inserted with an iti as close to 736 ms as possible to ensure that missed taps were interpolated with the appropriate rhythm .
inserted taps were counted as missed in determining whether to include the block in the analysis .
non - rhythmic tapping blocks were included in the analysis only if their sds were greater than 170 ms and if the difference between the number of tones presented and the number of button presses did not exceed 9 .
blocks that did not meet inclusion criteria were labeled as bad blocks and treated as separate predictors in the general linear model ( glm ) . only children who met behavioral performance criteria for two or more blocks in each condition
were included in the analysis as only these children were considered to be fully engaged in the task .
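the block - inclusion rules above can be sketched as follows . the thresholds are taken from the text ; the interpolation details are our reading of it : a rhythmic iti above 1200 ms is split by inserting taps so that each interpolated iti is as close to 736 ms as possible , and the inserted taps count as missed taps .

```python
import statistics

def interpolate_itis(itis, target=736.0, cutoff=1200.0):
    """split over-long ITIs into k sub-intervals near the target rhythm;
    return the corrected ITIs and the number of inserted (missed) taps."""
    out, inserted = [], 0
    for iti in itis:
        if iti > cutoff:
            k = max(1, round(iti / target))  # number of sub-intervals
            out.extend([iti / k] * k)
            inserted += k - 1
        else:
            out.append(iti)
    return out, inserted

def keep_rhythmic_block(itis, missed_taps=0):
    itis, inserted = interpolate_itis(itis)
    missed = missed_taps + inserted
    return (statistics.stdev(itis) < 150.0
            and 500.0 <= statistics.mean(itis) <= 1000.0
            and missed <= 6)

def keep_nonrhythmic_block(itis, n_tones, n_presses):
    return statistics.stdev(itis) > 170.0 and abs(n_tones - n_presses) <= 9
```

blocks failing these checks would be the `` bad blocks '' modeled as separate predictors in the glm .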
fmri data analyses were performed in brain voyager qx ( brain innovation , maastricht , the netherlands ) .
pre - processing included motion correction relative to the first volume that was acquired during the functional scan , linear scan time correction , temporal filtering with a high pass filter of 3 cycles / point , and linear trend removal .
scans with motion exceeding 3 mm translation or 3° rotation within a functional run were excluded from all further analyses .
whole - brain group analyses were performed with a random effects analysis of variance using the general linear model with predictor time courses for the successful rhythmic and non - rhythmic tapping blocks convolved by the standard hemodynamic response function .
the six motion correction parameters were z - transformed and added as predictors of no interest together with the predictors for the excluded ( bad ) rhythmic and non - rhythmic tapping blocks .
beta maps were created for each subject for the contrast comparing bold activation during rhythmic and non - rhythmic finger tapping .
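the construction of such convolved block predictors can be sketched as follows : a boxcar for the 16 s active blocks is convolved with a canonical double - gamma hrf and sampled at tr = 2 s. the hrf parameters here are generic spm - style defaults , not necessarily those used by brain voyager , and the onset times follow the block timing described above ( first block at 8 s , one 52 s cycle per block set ) .

```python
import numpy as np
from math import gamma

def canonical_hrf(t, a1=6.0, a2=16.0, ratio=1.0 / 6.0):
    """double-gamma hemodynamic response function evaluated at times t (s)."""
    return (t ** (a1 - 1) * np.exp(-t) / gamma(a1)
            - ratio * t ** (a2 - 1) * np.exp(-t) / gamma(a2))

def block_predictor(n_vols, onsets_s, dur_s=16.0, tr=2.0, dt=0.1):
    t = np.arange(int(round(n_vols * tr / dt))) * dt  # fine time grid
    boxcar = np.zeros_like(t)
    for onset in onsets_s:
        boxcar[(t >= onset) & (t < onset + dur_s)] = 1.0
    hrf = canonical_hrf(np.arange(0, 32, dt))
    conv = np.convolve(boxcar, hrf)[: t.size] * dt    # continuous-time approx.
    return conv[:: round(tr / dt)]                    # sample at each tr

# e.g. the rhythmic blocks: onset 8 s, then every 52 s (16+10+16+10 cycle)
pred = block_predictor(n_vols=114, onsets_s=[8, 60, 112, 164])
```

the non - rhythmic predictor is built the same way with onsets shifted by 26 s ; the motion parameters and bad - block predictors then enter the design matrix as additional columns .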
the beta maps were exported into analyze format for second level analyses using the spatially unbiased atlas template ( suit ) toolbox ( diedrichsen et al . , 2009 ) in spm5 ( statistical parametric mapping ) to obtain more detailed information on activation patterns in the cerebellum .
this atlas , which is based on the structural data of 20 healthy individuals , has been shown to significantly improve the alignment of individual fissures in the cerebellum when compared to normalization to the mni whole - brain template ( diedrichsen et al . , 2009 ) .
each subject 's cerebellum was initially isolated in the structural images by calculating the probability of each voxel belonging to the cerebellum or brain - stem .
the isolation maps were then used to transform each subject 's cerebellum to the suit template in the subsequent step , which normalized the data .
manual correction was applied using mricron ( rorden and brett , 2000 ) for each subject to eliminate contamination from the visual cortex .
the functional data for the cerebella were then resliced according to the isolated and normalized structural data for each subject to render the data in the suit atlas space .
a one - sample t - test was used to identify clusters where percent signal change values comparing rhythmic and non - rhythmic tapping were significantly different from zero in the control children .
cluster size correction with a cluster - defining threshold of 0.05 on the normalized group images was applied to reduce the risk of multiple comparisons , and a minimum cluster size of 193 mm³ was found to be statistically significant . to determine whether normalizing the children 's cerebella to an adult template would lead to excessively small effective regions of interest ( rois ) , cerebellar volumes generated by freesurfer ( version 5.1.0 , http://surfer.nmr.mgh.harvard.edu ) were compared with published adult values .
woodruff - pak et al . ( 2001 ) calculated cerebellar volumes ranging from 122.73 to 142.37 ml in eight adults ( age 21–35 years ) , and luft et al .
( 1999 ) found a range of 99.86–170.6 ml in 48 adults ( age 19.8–73.1 years ) .
the children included in our functional study had an average cerebellar volume of 130.85 ± 13.03 ml ( range 107.18–170.11 ml ) , which is within the limits of the aforementioned studies .
the children from all diagnostic groups were included in this analysis , as previous studies have shown that children prenatally exposed to alcohol have reduced cerebellar volumes ( archibald et al . , 2001 ; mattson et al . , 1994 ) .
it was , therefore , necessary to establish whether the volumes in these children were also comparable to the cerebellar volumes of adults .
the overall effect of normalization to an adult template was , therefore , deemed negligible .
rois were defined with radius 3 mm , centered on the peak coordinates , in these regions . due to the large cluster sizes in the vermal lobules ,
percent signal changes were extracted around the center of mass instead of the peak voxels in these two clusters .
mean percent signal change values were extracted in these rois for each child and exported to spss ( version 20 ; ibm , new york , usa ) to examine differences in activation in these regions as a function of diagnosis as well as associations with the extent of prenatal alcohol exposure . differences between diagnostic groups in each roi were examined using analysis of variance .
eight control variables were considered as potential confounders : child 's sex , age at assessment , postnatal lead exposure , iq and cerebellar volume ; maternal education , smoking ( cigarettes / day ) during pregnancy and age at delivery .
pearson correlations were used to examine the relations of the mean percent signal change values in the rois to each of the potential confounders .
all control variables related to a given outcome at p < 0.10 were considered possible confounders .
these variables were entered into an analysis of covariance ( ancova ) to determine whether group differences in the rois remained significant after controlling for these measures .
although the continuous measures of the control group were essentially all zero , the data for these children were included in the correlation analyses to avoid artificially truncating the range of exposure .
the alcohol measure was entered in the first step of each analysis for each outcome .
all control variables related to the outcome at p < 0.10 were entered in the second step to determine if the effect of the continuous alcohol measure on activation patterns continued to be significant after statistical adjustment for potential confounders .
pearson correlations were used to examine the relation between bold activations in the rois and ebc performance .
pregnant women from the cape coloured ( mixed ancestry ) community in cape town , south africa , were recruited between 1999 and 2002 at their first visit to an antenatal clinic ( jacobson et al . , 2008 ) .
the incidence of fasd in this population is among the highest reported in the world ( may et al . , 2000 , 2007 ) . the cape coloured population , comprised of descendants of white european settlers , malaysian slaves , khoi - san aboriginals , and black africans , historically constituted the large majority of workers in the wine - producing region of the western cape .
the high prevalence of fas in this community is attributable to very heavy maternal drinking during pregnancy ( croxford and viljoen , 1999 ; jacobson et al .
, 2006 ; jacobson et al . , 2008 ) , due to poor psychosocial circumstances and residual impact of the now - outlawed dop system , in which farm laborers were paid , in part , with wine . all pregnant women who reported consuming at least 14 standard drinks / week or engaging in binge drinking ( ≥ 5 drinks / occasion ) during pregnancy were invited to participate in the study .
in addition , pregnant women who abstained or drank minimally during pregnancy were invited to participate as controls .
women younger than 18 years of age , as well as women with diabetes , epilepsy , or cardiac problems requiring treatment , and religiously observant muslim women , whose religious practices prohibit alcohol consumption , were excluded from the study .
infant exclusionary criteria were major chromosomal anomalies , neural tube defects , multiple births , and seizures .
maternal alcohol consumption was assessed using a timeline follow - back approach ( jacobson et al . , 2002 ) .
at recruitment the mother was interviewed regarding the incidence and amount of her drinking on a day - by - day basis during a typical 2-week period at time of conception .
she was also asked whether her drinking had changed since conception ; if so , when the change occurred and how much she drank on a day - by - day basis during the preceding 2-week period .
this procedure was repeated in mid - pregnancy and again at 1 month postpartum to provide information about drinking during the latter part of pregnancy .
volume was recorded for each type of beverage consumed each day , converted to absolute alcohol ( aa ) using multipliers proposed by bowman et al .
( 1975 ) , and averaged to provide three summary measures of alcohol consumption at conception and during pregnancy : average ounces of aa consumed / day , aa / drinking day ( dose / occasion ) and frequency ( days / week ) . the number of cigarettes smoked on a daily basis , as well as the frequency of marijuana and other drug use were also recorded .
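the conversion from diary records to the three summary measures can be sketched as follows . the beverage - specific multipliers shown are illustrative placeholders , not the actual values from bowman et al . ( 1975 ) , and the function names are ours :

```python
# sketch of deriving the three alcohol summary measures from the
# timeline follow-back diary. the beverage-to-absolute-alcohol (AA)
# multipliers are illustrative placeholders, not the actual values
# from bowman et al. (1975).
AA_PER_OZ = {"beer": 0.045, "wine": 0.121, "liquor": 0.411}  # assumed

def summarize_drinking(diary):
    """diary: list of days, each a dict of beverage -> ounces consumed."""
    daily_aa = [sum(AA_PER_OZ[bev] * oz for bev, oz in day.items())
                for day in diary]
    drinking = [aa for aa in daily_aa if aa > 0]
    oz_aa_per_day = sum(daily_aa) / len(daily_aa)        # average AA/day
    aa_per_drinking_day = (sum(drinking) / len(drinking)  # dose/occasion
                           if drinking else 0.0)
    freq_days_per_week = 7.0 * len(drinking) / len(daily_aa)  # days/week
    return oz_aa_per_day, aa_per_drinking_day, freq_days_per_week
```

for example , a 2-week diary with 36 oz of beer on 4 of 14 days yields an aa / drinking day of 1.62 oz and a frequency of 2 days / week under these assumed multipliers .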
each child was examined for growth and fas dysmorphology by two u.s .- based expert dysmorphologists following the revised institute of medicine criteria ( hoyme et al . , 2005 ) during a 6-day clinic in 2005 ( jacobson et al . , 2008 ) .
four children who did not attend the clinic ( 1 fas , 2 he and 1 control ) were examined by a cape town - based dysmorphologist with expertise in fas diagnosis .
there was substantial agreement among the dysmorphologists on the assessment of all dysmorphic features , including the three principal fetal alcohol - related characteristics : philtrum and vermilion , measured using the lip - philtrum guide ( astley and clarren , 2001 ) , and palpebral fissure length ( median r = 0.78 ) .
each of the children was assigned to one of the following diagnostic groups at a case conference ( conducted by heh , lkr , swj , cdm , and jlj ) : fas , pfas , nonsyndromal he , or control .
the mother and child were transported to our university of cape town ( uct ) child development research laboratory by a staff driver and research nurse for the iq and eyeblink conditioning ( ebc ) assessments , which were administered by an ma - level neuropsychologist .
iq data were collected from the children on the wechsler intelligence scale for children - iv ( wisc - iv ) at 10 years ( diwadkar et al . , 2013 ; jacobson et al . , 2011b ) .
in the 5-year follow - up of the children from our longitudinal cohort , we administered the junior south african individual scales ( jsais ; madge et al . , 1981 )
, which is available in afrikaans and english and has been normed for south african children .
159 of those children were administered the wechsler intelligence scales for children , 4th ed .
iq scores from the jsais were strongly correlated with the wisc scores , r = 0.73 , p < 0.001 , confirming the validity of our translation of the wisc for use with this population ( jacobson et al . , 2011a ) .
ebc assessments were administered using a commercially available human ebc system ( model # 2325 - 0145-w , san diego instruments , san diego , ca ; see jacobson et al . , 2008 , 2011a ) .
facing a monitor displaying a video , the child wore a light - weight headgear , which supported a flexible plastic tube that delivered an air puff to the right eye and a photodiode which measured eyelid closure .
two 50-trial sessions were administered on the same day about 2 h apart with two more sessions on a second day within the same week . in delay
ebc , the air puff was administered during the last 100 ms of the 750 ms tone .
the trace conditioning procedure , which was administered 1.31.8 years after the delay task , was the same as in the delay task except that a 500-ms stimulus - free interval occurred between the offset of the 750-ms tone and the onset of the air puff .
eyeblinks executed within 350 ms prior to the air puff onset were considered crs .
ebc performance was assessed here in terms of percent conditioned responses during the third ebc session .
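the cr - scoring rule above can be sketched as follows ; the 650 ms puff onset is inferred from the puff being delivered during the last 100 ms of the 750 ms tone , and the function name is ours :

```python
# sketch of CR scoring in eyeblink conditioning: a blink with onset in
# the 350 ms window before air-puff onset counts as a conditioned
# response (CR); session performance is the percentage of trials with
# a CR. puff onset of 650 ms is inferred from the puff occupying the
# last 100 ms of the 750 ms tone (delay conditioning).
def percent_crs(trials, puff_onset_ms=650.0, cr_window_ms=350.0):
    """trials: list of per-trial blink-onset times (ms from tone onset)."""
    n_cr = sum(
        1 for blinks in trials
        if any(puff_onset_ms - cr_window_ms <= t < puff_onset_ms
               for t in blinks)
    )
    return 100.0 * n_cr / len(trials)
```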
mothers and children were transported on a separate day to the cape universities brain imaging centre ( cubic ) for neuroimaging .
82 ( 10 fas , 19 pfas , 29 he , 24 controls ; 47 boys ) right - handed children were scanned on the 3 t allegra ( siemens , erlangen , germany ) mri scanner at cubic between january 2009 and december 2011 ( mean age ± standard deviation ( sd ) = 10.7 ± 0.6 years , age range 9.5–12.0 ) .
we acquired high - resolution structural images and functional mri data during rhythmic and non - rhythmic finger tapping .
all examiners were blind regarding prenatal alcohol exposure history and fasd diagnosis during the uct and cubic assessments , except for a few severe cases .
the experimental tasks were programmed using e - prime software ( psychology software tools , inc . , pittsburgh , usa ) and were presented through a wave guide in - line with the bore of the magnet in the rear wall of the scanner room using a data projector and a rear projection screen mounted at the end of the magnet bore .
responses were recorded using a lumitouch response system ( photon control inc . , burnaby , canada ) .
the child was able to talk to the examiner using an intercom that is built into the scanner and could stop the scan at any time by squeezing a ball held in his / her left hand .
all children were accompanied into the scanner room by a research nurse / assistant who stayed with them throughout the scan .
all children practiced the task before the scan to ensure that they understood the instructions and could perform the task .
children also lay down in a mock scanner prior to the scan to listen to a recording of the scanner noises , which helped reduce anxiety .
the experimental task was designed to distinguish between brain regions activated during rhythmic tapping compared to non - rhythmic tapping .
the task ( fig . 1 ) , adapted from the paradigm used by lutz et al . ( 2000 ) , employed an auditory rather than a visual stimulus .
each block comprises two different active conditions ( rhythmic and non - rhythmic finger tapping ) interleaved with rest blocks .
the children are instructed to press a button with their right index finger every time they hear a tone .
the first block is preceded by a rest block of 8 s , during which four dummy scans are acquired and an instruction to get ready is displayed . during the rhythmic blocks , tones are equally spaced ( sd = 0 ms ) with an inter - stimulus interval ( isi ) of 736 ms .
the non - rhythmic blocks comprise tones at irregular intervals ( mean isi = 736 ms , sd = 256 ms ) .
both the rhythmic- and non - rhythmic blocks last for 16 s and are interleaved with 10 s of rest between active blocks .
each set of blocks ( rhythmic , rest , non - rhythmic , rest ) is repeated four times .
the principal performance measure is rhythmicity of tapping , determined by averaging for each condition the sds of the inter - tap intervals ( itis ) within each block of that condition .
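the rhythmicity computation can be sketched as follows ; whether the original analysis used the population or sample sd is not stated , so the population sd is assumed here :

```python
import statistics

# sketch of the rhythmicity measure: the SD of inter-tap intervals
# (ITIs) is computed within each block, and those SDs are averaged
# across the blocks of a condition. population SD is an assumption.
def rhythmicity(blocks):
    """blocks: list of blocks, each a list of tap times in ms."""
    sds = []
    for taps in blocks:
        itis = [b - a for a, b in zip(taps, taps[1:])]
        sds.append(statistics.pstdev(itis))
    return sum(sds) / len(sds)
```

perfectly regular tapping ( every 736 ms ) yields a rhythmicity of 0 ; larger values indicate more variable tapping .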
high - resolution t1-weighted structural mr images were acquired using a 3d echo planar imaging ( epi ) navigated ( tisdall et al . , 2009 )
multi - echo mprage ( van der kouwe et al . , 2008 ) sequence that had been optimized for morphometric analyses using freesurfer software .
imaging parameters were : fov 256 × 256 mm ; 128 sagittal slices ; tr 2530 ms ; te 1.53 / 3.21 / 4.89 / 6.57 ms ; ti 1100 ms ; flip angle 7° ; voxel size 1.3 × 1.0 × 1.3 mm .
the 3d epi navigator provided real - time motion tracking and correction , which served to substantially reduce the presence of any motion artifacts in structural imaging data , despite significant subject motion .
a t2*-weighted gradient echo epi sequence was used to acquire 114 functional volumes that are sensitive to bold contrast ( tr 2000 ms , te 30 ms , 34 interleaved slices , 3 mm slice thickness , gap 1.5 mm , fov 200 × 200 mm , in - plane resolution 3.125 × 3.125 mm ) while the children performed the task .
despite the low resolution of the fmri data , this analysis succeeded in resolving the complex geometry of the cerebellum and its respective lobules .
all procedures were performed according to protocols that had been approved by the institutional review board of wayne state university and the faculty of health sciences human research ethics committee at the university of cape town .
all parents / guardians provided informed written consent , and all children provided oral assent .
to ensure that only data from blocks in which the child was fully engaged in the task were included in the fmri data analysis , we applied performance criteria based on inspection of the distribution of the sds of the itis in the rhythmic and non - rhythmic blocks .
sds displayed a bimodal distribution and the local minimum was used to select thresholds for each condition . in the rhythmic tapping condition , only blocks with sds less than 150 ms , mean itis between 500 and 1000 ms , and 6 or fewer missed taps were included in the analyses .
itis during the rhythmic blocks that exceeded 1200 ms were assumed to occur due to one or more missed taps , which occasionally occurred when a child did not press the button firmly enough . in such instances , for the purposes of computing sd
, additional taps were inserted with an iti as close to 736 ms as possible to ensure that missed taps were interpolated with the appropriate rhythm .
inserted taps were counted as missed in determining whether to include the block in the analysis .
non - rhythmic tapping blocks were included in the analysis only if their sds were greater than 170 ms and if the difference between the number of tones presented and the number of button presses did not exceed 9 .
blocks that did not meet inclusion criteria were labeled as bad blocks and treated as separate predictors in the general linear model ( glm ) . only children who met behavioral performance criteria for two or more blocks in each condition
were included in the analysis as only these children were considered to be fully engaged in the task .
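the block - inclusion rules above can be sketched as follows . the exact interpolation scheme for missed taps is an assumption : itis above 1200 ms are split into equal sub - intervals as close to 736 ms as possible , and each inserted tap counts as a miss :

```python
import statistics

TARGET_ITI = 736.0  # ms, matching the rhythmic inter-stimulus interval

# assumed interpolation: an ITI > 1200 ms is split into equal
# sub-intervals as close to 736 ms as possible; each inserted tap
# counts toward the missed-tap total.
def interpolate_missed_taps(itis, max_iti=1200.0):
    fixed, inserted = [], 0
    for iti in itis:
        if iti > max_iti:
            n = max(1, round(iti / TARGET_ITI))
            fixed.extend([iti / n] * n)
            inserted += n - 1
        else:
            fixed.append(iti)
    return fixed, inserted

# rhythmic blocks: SD < 150 ms, mean ITI 500-1000 ms, <= 6 missed taps
def rhythmic_block_ok(itis, missed_taps=0):
    itis, inserted = interpolate_missed_taps(itis)
    return (statistics.pstdev(itis) < 150.0
            and 500.0 <= statistics.fmean(itis) <= 1000.0
            and missed_taps + inserted <= 6)

# non-rhythmic blocks: SD > 170 ms and |tones - presses| <= 9
def non_rhythmic_block_ok(itis, n_tones, n_presses):
    return statistics.pstdev(itis) > 170.0 and abs(n_tones - n_presses) <= 9
```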
fmri data analyses were performed in brain voyager qx ( brain innovation , maastricht , the netherlands ) .
pre - processing included motion correction relative to the first volume that was acquired during the functional scan , linear scan time correction , temporal filtering with a high pass filter of 3 cycles / point , and linear trend removal .
scans with motion exceeding 3 mm translation or 3° rotation within a functional run were excluded from all further analyses .
whole - brain group analyses were performed with a random effects analysis of variance using the general linear model with predictor time courses for the successful rhythmic and non - rhythmic tapping blocks convolved by the standard hemodynamic response function .
the six motion correction parameters were z - transformed and added as predictors of no interest together with the predictors for the excluded ( bad ) rhythmic and non - rhythmic tapping blocks .
beta maps were created for each subject for the contrast comparing bold activation during rhythmic and non - rhythmic finger tapping .
the beta maps were exported into analyze format for second level analyses using the spatially unbiased atlas template ( suit ) toolbox ( diedrichsen et al . , 2009 ) in spm5 ( statistical parametric mapping ) to obtain more detailed information on activation patterns in the cerebellum .
this atlas , which is based on the structural data of 20 healthy individuals , has been shown to significantly improve the alignment of individual fissures in the cerebellum when compared to normalization to the mni whole - brain template ( diedrichsen et al . , 2009 ) .
each subject 's cerebellum was initially isolated in the structural images by calculating the probability of each voxel belonging to the cerebellum or brain - stem .
the isolation maps were then used to transform each subject 's cerebellum to the suit template in the subsequent step , which normalized the data .
manual correction was applied using mricron ( rorden and brett , 2000 ) for each subject to eliminate contamination from the visual cortex .
the functional data for the cerebella were then resliced according to the isolated and normalized structural data for each subject to render the data in the suit atlas space .
a one - sample t - test was used to identify clusters where percent signal change values comparing rhythmic and non - rhythmic tapping were significantly different from zero in the control children .
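at a single voxel or roi , this test amounts to comparing the per - subject percent - signal - change differences against zero ; a dependency - free sketch ( function name ours ) :

```python
import math
import statistics

# illustrative one-sample t-test at a single voxel / ROI: per-subject
# percent-signal-change differences (rhythmic minus non-rhythmic) are
# tested against zero. the real analysis runs this voxelwise and then
# applies cluster-size correction.
def one_sample_t(values, mu=0.0):
    n = len(values)
    t = (statistics.fmean(values) - mu) / (statistics.stdev(values)
                                           / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom
```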
cluster - size correction with a cluster - defining threshold of p < 0.05 on the normalized group images was applied to control for multiple comparisons , and a minimum cluster size of 193 mm³ was found to be statistically significant . to determine whether normalizing the children 's cerebella to an adult template would lead to excessively small effective regions of interest ( rois ) , cerebellar volumes generated by freesurfer ( version 5.1.0 , http://surfer.nmr.mgh.harvard.edu ) were compared to values reported in adult studies ( luft et al . , 1999 ; woodruff - pak et al . , 2001 ) . woodruff - pak et al . ( 2001 ) calculated cerebellar volumes ranging from 122.73 to 142.37 ml in eight adults ( age 21–35 years ) and luft et al . ( 1999 ) found a range of 99.86–170.6 ml in 48 adults ( age 19.8–73.1 years ) .
the children included in our functional study had an average cerebellar volume of 130.85 ± 13.03 ml ( range 107.18–170.11 ml ) , which is within the limits of the aforementioned studies . the children from all diagnostic groups were included in this analysis , as previous studies have shown that children prenatally exposed to alcohol have reduced cerebellar volumes ( archibald et al . , 2001 ; mattson et al . , 1994 ) .
it was , therefore , necessary to establish whether the volumes in these children were also comparable to the cerebellar volumes of adults .
the overall effect of normalization to an adult template was , therefore , deemed negligible .
rois with a radius of 3 mm , centered on the peak coordinates , were defined in these regions . due to the large cluster sizes in the vermal lobules , percent signal changes were extracted around the center of mass instead of the peak voxels in these two clusters .
mean percent signal change values were extracted in these rois for each child and exported to spss ( version 20 ; ibm , new york , usa ) to examine differences in activation in these regions as a function of diagnosis as well as associations with the extent of prenatal alcohol exposure . differences between diagnostic groups in each roi were examined using analysis of variance .
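the roi - averaging step can be sketched as follows ; for simplicity the image is represented as a coordinate - to - value mapping rather than a nifti volume , and the function name is ours :

```python
# sketch of extracting the mean percent signal change in a 3 mm-radius
# spherical ROI around a peak coordinate. `img` here is a dict mapping
# (x, y, z) coordinates in mm to percent-signal-change values; a real
# pipeline would index a NIfTI volume instead.
def sphere_mean(img, center, radius_mm=3.0):
    cx, cy, cz = center
    vals = [v for (x, y, z), v in img.items()
            if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
            <= radius_mm ** 2]
    return sum(vals) / len(vals)
```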
eight control variables were considered as potential confounders : child 's sex , age at assessment , postnatal lead exposure , iq and cerebellar volume ; maternal education , smoking ( cigarettes / day ) during pregnancy and age at delivery .
pearson correlations were used to examine the relations of the mean percent signal change values in the rois to each of the potential confounders .
all control variables related to a given outcome at p < 0.10 were considered possible confounders .
these variables were entered into an analysis of covariance ( ancova ) to determine whether group differences in the rois remained significant after controlling for these measures .
although the continuous measures of the control group were essentially all zero , the data for these children were included in the correlation analyses to avoid artificially truncating the range of exposure .
the alcohol measure was entered in the first step of each analysis for each outcome .
all control variables related to the outcome at p < 0.10 were entered in the second step to determine if the effect of the continuous alcohol measure on activation patterns continued to be significant after statistical adjustment for potential confounders .
pearson correlations were used to examine the relation between bold activations in the rois and ebc performance .
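the confounder - screening step can be sketched as follows . to keep the sketch dependency - free , the critical correlation for p < 0.10 at the study's n is passed in rather than derived from the t distribution , and both function names are ours :

```python
import math
import statistics

# sketch of the confounder-screening step: pearson correlations between
# the ROI outcome and each control variable, retaining variables whose
# |r| exceeds the critical value for p < 0.10 at the given sample size.
def pearson_r(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def screen_confounders(outcome, controls, r_crit):
    """controls: dict of name -> values; returns names to carry into
    the ancova / second regression step as covariates."""
    return [name for name, vals in controls.items()
            if abs(pearson_r(outcome, vals)) > r_crit]
```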
after applying exclusion criteria , we report data for 50 ( 30 male , 20 female ) right - handed children ( mean age 10.7 ± 0.6 years ) , including 7 children with full fas , 10 with pfas , 17 nonsyndromal he children , and 16 non- or minimally - exposed controls .
the data for 8 children were excluded due to excessive motion ( 1 pfas , 2 he , 5 controls ) , as were data from 24 children who did not meet performance criteria ( 3 fas , 8 pfas , 10 he , 3 controls ) . due to the smaller number of children with fas , the fas and pfas groups were combined in the data analysis .
the children in the he group were slightly older than children in the other two groups .
the low iq scores of all of the children reflect the highly disadvantaged backgrounds and poor education of the children in this community ; nevertheless , as expected , the lowest scores were seen in the fas / pfas group .
mothers reported that none of their children ever received medication for adhd , and only four children had been given over - the - counter medications ( e.g. , aspirin for headache ; antihistamine for allergy ) .
the mothers of the children in the fas / pfas group were older , as has been reported in previous studies ( jacobson , 1998 , 2004 ; may , 1991 ) , and had completed fewer years of formal education .
prenatal alcohol exposure was very high , averaging 8.2 standard drinks / occasion for the fas / pfas group and 5.4 for the nonsyndromal he group across pregnancy .
all but 1 ( 93.8% ) of the 16 control mothers abstained from drinking during pregnancy , and that mother drank only 2 drinks on 3 occasions .
no group differences were found for maternal smoking during pregnancy or lead exposure . in accordance with previous findings ( archibald et al . , 2001 ; mattson et al . , 1994 ) , significant differences in cerebellar volumes were seen between the diagnostic groups , and post hoc analyses showed that this result was driven by the significantly reduced cerebellar volume of the most heavily exposed children compared to both the he and control groups . after exclusions , the groups did not differ on performance during rhythmic or non - rhythmic tapping ( table 2 ) .
prior to exclusions , the only significant group differences were greater variability ( f = 4.05 , p = 0.02 ) in the rhythmic tapping blocks by the he group compared to controls ( p < 0.01 ) and increased number of missed taps ( f = 6.01 , p < 0.01 ) in the rhythmic tapping blocks by the he group compared to the fas / pfas ( p = 0.02 ) and control ( p < 0.01 ) children .
since this study focuses on effects of prenatal alcohol exposure on functional activation , the behavioral results were used only to identify children who were able to perform adequately on the task , as evidenced from the absence of group differences in table 2 .
four regions in the cerebellum showed greater activation during rhythmic tapping compared to non - rhythmic tapping in the control children ( table 3 and fig .
2 ) . table 4 summarizes mean percent signal change values in rois defined in these regions for each group .
a significant group difference was detected in right crus i. post hoc analyses showed that the activation in right crus i was significantly higher in control children than in both the fas / pfas ( p < 0.01 ) and he ( p = 0.01 ) groups , with no difference between the fas / pfas and he groups ( p > 0.20 ) . a group difference falling short of statistical significance ( f = 2.68 ) was seen in vermis iv–v , due to lower activation in the fas / pfas group compared with the controls ( post hoc p = 0.05 ) .
pearson correlation analyses identified two potential confounding variables : girls showed greater activations in right crus i ( r = 0.32 , p < 0.05 ) , while maternal smoking during pregnancy was associated with lower activations in vermis iv–v ( r = −0.26 , p < 0.10 ) . the group difference in right crus i remained significant ( f = 5.47 , p = 0.01 ) after adjustment for sex , and the effect on vermis iv–v was not reduced after adjustment for maternal smoking ( f = 2.63 , p = 0.06 ) .
none of the control variables were related to activations in vermis vi or right lobule vi .
relations of extent of prenatal alcohol exposure to differences in activation between rhythmic and non - rhythmic finger tapping in the four cerebellar rois are summarized in table 5 .
greater prenatal alcohol exposure was associated with smaller differences in brain activation between rhythmic and non - rhythmic finger tapping in right crus i. the strongest association was with frequency of drinking across pregnancy ( fig . 3 ) , a correlation that was also evident when the controls were omitted from the analysis , r = −0.42 , p = 0.013 .
multiple regression analyses showed that the relation in right crus i remained significant after controlling for sex .
in right lobule vi , greater absolute alcohol consumed per occasion , both around conception and across pregnancy , was associated with smaller differences in activation between rhythmic and non - rhythmic tapping ( see fig .
4 ) , a correlation that was also seen when the controls were omitted from the analysis , r = −0.43 , p = 0.011 .
greater alcohol consumption per drinking occasion around conception and during pregnancy was also associated with lower percentage signal change in both vermal regions .
multiple regression analysis showed that the effect of drinking per occasion across pregnancy on activation in vermis iv–v continued to be significant after adjustment for maternal smoking . at 9 years , higher levels of delay and trace eyeblink conditioning ( measured by % conditioned responses during session 3 ) were associated with lower levels of activation in right lobule vi in the control group ( table 6 ) .
by contrast , there were no significant associations between activation of these regions and ebc performance for the exposed children .
this study used fmri to investigate differences in the neural circuitry involved in performing timed movements in children prenatally exposed to alcohol compared with healthy controls .
the controls showed increased bold activations during rhythmic tapping compared to non - rhythmic tapping in four cerebellar regions that have been implicated in the production of timed movements in previous studies with adults ( gerwig et al . , 2003 ; grodd et al .
our continuous measure of maternal alcohol intake per occasion during pregnancy was associated with reduced differences in activation between rhythmic and non - rhythmic tapping in all four regions .
when the children were compared by diagnostic group , both the fas / pfas and nonsyndromal he groups showed significantly less of an increase in brain activation during rhythmic tapping in right crus i compared with controls , while only the children with fas or pfas showed significantly smaller differences in activation between rhythmic and non - rhythmic tapping in vermis iv–v than the controls .
vermis v and vi have been previously implicated in timing in a study in which these regions showed greater activation during discrete rhythmic finger extension / flexion than during continuous finger movements ( spencer et al . , 2007 ) .
this finding , with the addition of the involvement of hemispheric lobule vi , was corroborated by the aforementioned study by bengtsson et al . ( 2005 ) .
in a recent study of paced / unpaced finger tapping in children , both these regions also showed increased activation during unpaced tapping compared with rest ( de guio et al . , 2012 ) . in a study of adults using a procedure very similar to our task , lutz et al .
( 2000 ) also found differences in activation in vermis vi , as well as in the right cerebellar nuclei , when comparing rhythmic vs. non - rhythmic finger tapping .
however , in contrast to the findings in the previous studies ( bengtsson et al . , 2005 ; spencer et al . , 2007 ) , as well as to those in our study , lutz et al . ( 2000 ) found more activity during non - rhythmic than rhythmic finger tapping in these regions .
( 2006 ) administered four timing conditions to adults in a rhythmic / non - rhythmic finger tapping task one regular and three irregular that ranged from low to high isi variability .
activation was generally higher in the anterior lobe and lateral lobule vi for the regular and most highly variable conditions , compared to the low and moderate variability conditions , indicating increased activation for processing both regular and highly irregular temporal patterns .
( 2006 ) studies suggest that the increased activation during the irregular tapping condition in vermis vi and lateral lobule vi may reflect greater effort to predict the timing of the onset of the next stimulus when the timing is irregular .
we did not see this increase during highly irregular tapping in the children in our study .
it is noteworthy that lateral lobule vi has been shown to be of major importance in eyeblink conditioning in numerous animal studies ( miller et al . , 2003 ;
steinmetz , 2000 ; yeo and hesslow , 1998 ) , as well as fmri studies in humans ( dimitrova et al . , 2002 ; ramnani et al .
, 2000 ) , including a recent study from our cohort ( cheng et al . ,
2014 ) . in the present study we found that the differences in activation between rhythmic and non - rhythmic tapping in ipsilateral lobule vi were most strongly related to alcohol consumed per drinking occasion , an exposure measure that predicted lower activations in all four regions identified in the control group .
this finding suggests that cerebellar timing is more sensitive to heavy episodic binge - like drinking than sustained moderate drinking around the time of conception and throughout pregnancy .
by contrast to vermis vi and lateral lobule vi , which have been most directly implicated in cerebellar - mediated timing , activations in vermis iv - v have been associated with the execution of intentional movements ( grodd et al . ,
2001 ) as well as somatosensory processing of motor response ( allen et al . , 1997 ; desmond et al .
, 1997 ; nitschke et al . , 1996 ) . the greater response in vermis iv - v during rhythmic compared to non - rhythmic tapping by the control children in our study may be attributable to greater somatosensory demands in the rhythmic condition .
our finding that heavier maternal drinking during pregnancy is associated with lower activation in this region is consistent with a previous report that this region is smaller in alcohol - exposed children ( sowell et al . , 1996 ) .
these data also suggest that the impaired eyeblink conditioning observed in children with fasd may involve both deficits in timing and impaired somatosensory function .
although activation in ipsilateral crus i has not been implicated in timing during finger tapping tasks in either adults ( jueptner et al . , 1995 ; lutz et al . , 2000 ) or children ( de guio et al .
, 2012 ) , it has been shown to play a role during both reflexive eyeblinks ( dimitrova et al . , 2002 ) and eyeblink conditioning ( cheng et al . , 2014 ; gerwig et al .
this area corresponds to trigeminal projection areas and blink reflex control areas that have been identified in animal studies ( hesslow , 1994 ; pellegrini and evinger , 1997 ) , suggesting a possible role in motor control and coordination for tasks that require millisecond accuracy .
this study was part of a larger study examining the neural bases of ebc ( jacobson et al .
we have previously reported that microstructural abnormalities in the cerebellar peduncles appear to partially mediate the effect of prenatal alcohol exposure on ebc performance ( fan et al .
2011 ) . in the present study we found a pattern of better delay and trace
eyeblink conditioning performance associated with smaller increases in activation during rhythmic tapping compared to non - rhythmic tapping in right lobule vi and right crus i among the control children , which was not seen in children with prenatal alcohol exposure .
thus , these data suggest that the timing required for successful ebc performance may be mediated by other , less efficient brain regions in the alcohol - exposed groups .
eyeblink conditioning is an elemental form of learning that is highly sensitive to prenatal alcohol exposure and requires precise millisecond timing . in this study
, we used an fmri finger tapping paradigm to examine effects of alcohol exposure on cerebellar timing with millisecond accuracy . increased maternal alcohol intake per drinking occasion during pregnancy
was associated with lower bold activation increases during rhythmic compared with non - rhythmic tapping in several cerebellar regions that have been implicated in millisecond timing in studies with adults .
in addition , in comparisons by fetal alcohol diagnostic group , children in the fas / pfas group , which is particularly affected in eyeblink conditioning , showed lower activation particularly in vermis iv - v .
this region has been implicated in the execution of intentional movements and somatosensory processing of motor response , suggesting that a deficit in those aspects of function may be more pronounced in children with fas or pfas . in summary , these data provide evidence linking binge - like drinking during pregnancy to poorer function in specific cerebellar regions involved in timing and somatosensory processing .

abstract . objectives : classical eyeblink conditioning ( ebc ) , an elemental form of learning , is among the most sensitive indicators of fetal alcohol spectrum disorders .
the cerebellum plays a key role in maintaining timed movements with millisecond accuracy required for ebc . functional mri ( fmri ) was used to identify cerebellar regions that mediate timing in healthy controls and the degree to which these areas are also recruited in children with prenatal alcohol exposure . experimental design : fmri data were acquired during an auditory rhythmic / non - rhythmic finger tapping task .
we present results for 17 children with fetal alcohol syndrome ( fas ) or partial fas ( pfas ) , 17 heavily exposed ( he ) nonsyndromal children and 16 non- or minimally exposed controls . principal observations : controls showed greater cerebellar blood oxygen level dependent ( bold ) activation in right crus i , vermis iv - vi , and right lobule vi during rhythmic than non - rhythmic finger tapping . the alcohol - exposed children showed smaller activation increases during rhythmic tapping in right crus i than the control children , and the most severely affected children , with either fas or pfas , showed smaller increases in vermis iv - v .
higher levels of maternal alcohol intake per occasion during pregnancy were associated with reduced activation increases during rhythmic tapping in all four regions associated with rhythmic tapping in controls . conclusions : the four cerebellar areas activated by the controls more during rhythmic than non - rhythmic tapping have been implicated in the production of timed responses in several previous studies .
these data provide evidence linking binge - like drinking during pregnancy to poorer function in cerebellar regions involved in timing and somatosensory processing needed for complex tasks requiring precise timing . |
the formation of stars proceeds with the outflow of gas and the formation of disks .
the disks serve as reservoirs of matter for accretion onto the protostar and the growth of its mass
. the remnants of the disk may be the home of protoplanetary systems . in this
process magnetic fields play a crucial role , determining disk rotation , the rotation of the protoplanets , and the rate of accretion ( uchida & shibata 1985 , shu et al . 1994 ) .
oh masers are very sensitive indicators of the magnetic fields due to the large landé _ g_-factor compared to other molecules detectable in protostellar objects .
previous observations have determined magnetic field strengths in oh - maser spots in the range from 1 to 8 milligauss ( mg ) ( fish et al . 2005 ) .
one of the best studied star - forming regions is w75n which is embedded in a dense molecular cloud ( hunter et al .
1994 ) . in the core of the cloud
there are several young massive stars exciting compact h ii regions .
these are related to oh , h@xmath0o and ch@xmath1oh masers ( haschick et al . 1981 , hunter et al .
1994 , minier , conway & booth 2001 ) .
high angular resolution mapping of the masers has shown that the maser emission is concentrated in clusters which tend to be associated with the known ultra - compact h ii regions vla1 and vla2 ( nomenclature is from torrelles et al .
( 1997 ) ) .
most of the oh maser spots are associated with vla1 , forming an elongated arc with a velocity gradient , which can be modelled by a rotating disk ( slysh et al .
the magnetic field in these spots has a typical oh - maser value of several mg .
near vla2 only two 1665-mhz oh maser spots were found in 1998 . in 2000
a strong flare of oh maser emission from w75n was discovered .
it was detected in vlba and evn observations , as well as in our single dish observations in april 2001 , at the declining phase of the flare .
it is possible that this flare was a precursor of an even stronger , 1000-jy flare , which started two years later ( alakoz et al .
the oh maser w75n is unusual in showing several polarized spectral features with a high degree of linear polarization ( slysh et al .
2002 ) . here
we report on high angular resolution vlbi observations of the oh maser in w75n , in which several maser spots with a very strong magnetic field were found to accompany a major flare of oh maser emission .
the new observations of the oh maser w75n were conducted on 2001 january 01 with the vlba in the snapshot mode of 6-min duration .
the velocity resolution was 0.176 km s@xmath2 , with 256 spectral channels covering 45 km s@xmath2 in each of the oh main lines at 1665 and 1667 mhz .
in addition , we reduced and analyzed the evn observations , with the same velocity resolution , from the evn archive ( project ep037b ) of 2000 september 27 , performed three months earlier , with a velocity coverage of 90 km s@xmath2 .
other relevant observations are available from the public vlba archive .
these are for 2000 november 22 and 2001 january 6 , and were recently published ( project bf064 , fish et al .
these observations are especially interesting because they were conducted two months before and 5 days after our observations .
the vlba archive observations had the same spectral resolution as ours but the velocity coverage was only 22.5 km s@xmath2 which is a factor of two less .
this velocity coverage was not sufficient for detecting widely separated zeeman pairs with strong magnetic field .
however at 1667 mhz , where the _ g_-factor is 0.6 of the 1665-mhz _ g_-factor , the velocity coverage is large enough even for the zeeman pairs with the strong magnetic field which are the topic of the present study .
all three sets of data have been obtained during the maximum phase of the oh maser flare .
the data were reduced in the standard way using nrao software package aips .
images of w75n were constructed for all spectral channels which had enough signal . only those maser spots which were present in at least two spectral channels
were considered as detected .
gaussian fitting and beam deconvolution were carried out using the task sad of aips .
most of the maser spots were unresolved by the synthesized beam . the absolute position given in table 1
was measured through fringe rates using aips task frmap .
both the evn spectrum of 2000 september 27 and the vlba spectrum of 2001 january 1 , taken 96 days later , of w75n show all the spectral features which were identified in the 1998 spectrum by slysh et al . ( 2002 ) .
fig . 1 shows the 1665-mhz spectra in stokes _ i_. the labels are the same as in the 1998 spectrum .
additionally , two new strong spectral features have appeared since the 1998 observations , at the low - velocity side of the spectrum .
they are _ p1 _ ( _ p_recursor ) , with the flux density of 120 jy becoming the strongest feature , and _
p2_. on the other hand , two relatively strong spectral features from the 1998 spectrum , _ j _ and _ k _ , became a factor of 3 weaker in 2000 - 2001 spectra .
in fact the feature _ k _ is so weak compared to _ p2 _ , which is at nearly the same velocity , that it is not seen in the spectrum ; however it has been found on maps as is shown in fig . 2 .
it is also evident from the spectra of fig . 1 that the new flare features were rapidly evolving in about a three months time interval , between 2000 september 27 ( evn ) and 2001 january 1 ( vlba ) : _ p1 _
has increased by a factor of 2 , and _
p2 _ has strengthened even more , almost by an order of magnitude .
four months later , on 2001 april 12 _ p1 _ and _ p2 _ have become weaker by a factor of 5 and 2 respectively as observed with the bear lake 64-m single dish telescope ( alakoz et al .
2005 ) . in the same time interval
the rest of the spectral features , from a to h , remained unchanged .
all constant features are connected to the ultra - compact h ii region vla1 while the variable features are connected to vla2 ( see next subsection ) .
the radial velocities of _
p1 _ and _ p2 _ are only about 0.7 km s@xmath2 lower than those of _
j _ and _ k _ , respectively .
one possibility is that the flare occurred at the site of _ j _ and _ k_. mapping results , however , show that _ p1 _ ,
p2 _ and _ j _ , _ k _ are separate features , and all of them are present in the 2001 image ( fig . 2 ) . both _
j _ and _ k _ remain at the same position as in 1998 , but are much weaker .
_ p1 _ , _ p2 _ and a weaker feature _ p3 _ have emerged not far from _ j _ and _ k _ , closer to the continuum source vla2 .
compared to the 1998 map ( slysh et al .
2002 ) several additional spots have been detected , partly due to a higher sensitivity of new observations .
one of the new features has completed a zeeman pair with the spot _ h _ ( see next subsection ) .
the other features are really new because they are related with the flare which took place between 1998 and 2000 .
also , more accurate absolute positions of oh spots were obtained and are given in table 1 , as well as the positions of the 1667 mhz spots relative to the 1665 mhz spots . the new map presented in fig .
2 shows the position of the oh maser spots relative to the position of continuum source vla2 ( adopted from shepherd et al .
( 2004 ) ) .
the combined error of the absolute positions of the continuum source vla2 and oh masers is estimated here as 40 milliarcseconds ( mas ) while the separation between vla2 and the nearest oh maser spots is 55 mas .
the relative position errors of the maser spots are typically less than 1 mas .
the seven zeeman pairs found in w75n 2000 - 2001 spectra , are given in table 1 .
all of them have been identified as pairs of zeeman components based on positional coincidence to within a fraction of the beam .
the pairs are @xmath3-components , with opposite sense of circular polarization .
the requirement of the positional coincidence between zeeman components is very stringent and is based on the nature of zeeman splitting .
each molecule is emitting all zeeman components simultaneously as the molecule is precessing in the magnetic field .
the precession causes modulation of the emitted wave which results in splitting of the monochromatic spectrum into several components with different amplitude , frequency , and polarization .
thus each group of molecules with the same orientation relative to the magnetic field direction emits the same pattern of zeeman spectrum . in the case of oh molecules
this means that every molecule emits both rcp and lcp @xmath3-components , at their respectively shifted frequencies . on the map positions of true @xmath3-components
must coincide . since in w75n at 1665 mhz most of the maser spots are unresolved by the vlba beam and appear as point - like sources , any observed position difference between @xmath3-components can be attributed to position measurement errors caused by low signal - to - noise ratio or to a misalignment between rcp and lcp beams . in this study
we adopt 5 mas as the maximum separation between the components of a zeeman pair , which is small enough to exclude chance coincidences , even in the presence of clustering of the maser spots . fig .
3 shows one of the new zeeman pairs @xmath4 in the 1665 mhz spectrum of w75n , composed of the lcp - spot _
p3 _ at 0.48 km s@xmath2 ( grey scale ) and the rcp - spot at 24.40 km s@xmath2 ( contours ) .
both spots are unresolved by the vlba beam , and their positions coincide within 1.6 mas which is comparable to the position measurement errors .
this pair of the maser spots can be regarded as a pair of true zeeman @xmath3-components , as can the rest of the pairs in table 1 . in addition to zeeman pairs found in 1998 observations ( slysh et al .
2002 ) several new ones are reported here .
the first is the zeeman counterpart to spot h , at 7.24 km s@xmath2 corresponding to a magnetic field strength of 3.6 mg ( @xmath5 in table 1 ) .
this pair is associated with the ultra - compact h ii region vla1 as well as two other pairs @xmath6 and @xmath5 , in table 1 .
four new zeeman pairs are found in association with the ultra - compact h ii region vla2 .
the pairs @xmath7 associated with vla2 are quite remarkable .
they show a very strong magnetic field , from 36.3 to 42.5 mg .
another property of the vla2 zeeman pairs is the apparent uniformity of the magnetic field across at least 100 au ( 50 mas ) : the field strength is equal within 10 per cent in four spots , and has the same direction .
such a large field is unusual for oh masers , typically it does not exceed 8 mg ( fish et al .
two outstanding zeeman pairs in the survey of fish et al .
( 2005 ) in w51e2 have magnetic field strength of 20 mg which is only half of the field of the new zeeman pairs in w75n .
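the field strengths quoted here follow directly from the velocity separation of the rcp and lcp σ-components of a zeeman pair . a minimal sketch of that conversion , assuming the standard oh ground - state splitting coefficients ( 0.590 km s@xmath2 per mg at 1665 mhz and 0.354 at 1667 mhz , the 0.6 ratio noted above ) :

```python
# zeeman splitting coefficients for the oh ground-state main lines,
# in km/s per mG of line-of-sight field (standard literature values)
SPLIT = {1665: 0.590, 1667: 0.354}

def zeeman_field_mg(v_rcp, v_lcp, line_mhz):
    """Line-of-sight field (mG) from the RCP-LCP velocity separation."""
    return abs(v_rcp - v_lcp) / SPLIT[line_mhz]

# the pair in fig. 3: LCP spot p3 at 0.48 km/s, RCP spot at 24.40 km/s
print(round(zeeman_field_mg(24.40, 0.48, 1665), 1))  # -> 40.5
```

applied to the fig . 3 pair this gives about 40 mg , consistent with the 36.3 - 42.5 mg range of the vla2 pairs .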
there have been several reports of flares for the high - velocity 1667-mhz masers .
for example , the spectrum of w75n obtained in 1973 at onsala shows the so - called elldér transient in the right circular polarization in the velocity range from 20 to 30 km s@xmath2 ( yngvesson et al . 1975 ) .
the high velocity instead may be a result of the large zeeman velocity shift in the strong magnetic field , such as reported in this paper .
hutawarakorn , cohen & brebner ( 2002 ) located a flare in 1986 near vla2 with the lower resolution of merlin .
in addition to zeeman pairs @xmath8 and @xmath9 listed in their table 7 there is probably one more zeeman pair in table 4 consisting of an rcp - component at 22.5 km s@xmath2 and an lcp - component(s ) at radial velocities 1.2 , 2.6 , or 4.2 km s@xmath2 , with a separation between 8 and 21 mas which is less than the estimated relative position error of 45 mas .
if real , this flaring zeeman pair has a magnetic field strength of about 55 mg .
the location of the flare is within 15 mas from our 1667-mhz zeeman pair @xmath10 in table 1 .
these authors made an interesting comment that a powerful magnetic field might be an explanation , among others , of the high velocities in the flare .
the flare was not present in our 1998 data .
similar to oh - masers , the water masers in w75n are located in two clusters around vla1 and vla2 .
torrelles et al .
( 2003 ) have found a shell of water masers around the ultra - compact h ii region vla2 with a radius of 160 au . the shell is expanding with a velocity of 28 km s@xmath2 , perhaps episodically , as one in a recurrent outflow .
the high magnetic field oh maser spots @xmath7 are located very close to vla2 ( fig .
2 ) , at a distance of 55 @xmath11 mas , or at the projected distance of 110 @xmath12 au . therefore ,
the oh masers may well be located in the same shell as the water masers .
the magnetic field in water masers associated with star - forming regions is typically around 100 mg ( sarma et al . 2002 ) . somewhat higher magnetic field , up to 500 mg , was measured in the cepheus a water maser ( vlemmings et al . 2006 ) . for w75n fiebig & güsten ( 1989 ) give an upper limit of the line - of - sight magnetic field strength of 34 mg for the maser line at 12.3 km s@xmath2 . with a single - dish telescope
they could not relate this line with a particular ultra - compact h ii region .
both vla1 and vla2 might have a line with such radial velocity ( see torrelles et al .
1997 ) . assuming that 100 mg is a typical value for the magnetic field strength of water masers , which is an order of magnitude higher than in typical oh masers , we find that the field is of the same order as in the oh maser flare of w75n reported here . the ultra - compact h ii regions vla1 and vla2 are excited by massive b - stars of about 10 m@xmath13 ( shepherd et al . 2004 ) .
apparently , the exciting star for vla2 is more active than that of vla1 .
the appearance of new , strong maser features _
p1 _ and _ p2 _ near vla2 , as well as the almost simultaneous dimming of nearby features _ j _ and _ k _ was interpreted as a passage of an mhd shock from _
j _ and _ k _ to _ p1 _ and _ p2 _ ( alakoz et al . 2005 ) .
the shock was probably generated by the exciting star of vla2 , and was propagating in the gas of the stellar wind .
later , the shock reached another site of oh molecule concentration and produced an even more powerful flare of the maser emission , with a flux density of 1000 jy ( fig . 5 in alakoz et al .
( 2005 ) ) .
the results of observations of this flare will be a subject of a separate publication . the magnetic field strength measured in oh maser features may have its origin in the 10 m@xmath13 exciting stars of vla1 and vla2 .
the flare features with a magnetic field strength of 40 mg are located at the projected distance of 110 au from vla2 .
the star may be emitting a stellar wind whose density has an @xmath14 dependence on distance .
if the magnetic field energy density @xmath15 has a similar dependence on distance as would be the case of the energy equipartition then the magnetic field @xmath16 would scale with distance as @xmath17 .
hence , 40 mg at 110 au from the star would correspond to 110 g at the base of the stellar wind , at a distance 6@xmath18 cm from the centre of the star , close to the star surface .
if the true distance is larger than the projected distance the surface magnetic field of the star would be accordingly stronger , say 500 g. such a field seems to be quite reasonable for young massive stars .
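the scaling argument above can be checked numerically ; a minimal sketch , assuming b scales as 1/r ( equipartition with an r^-2 wind ) and taking the wind base at 6 × 10^11 cm , the exponent being recovered from the 110 g figure quoted above :

```python
AU_CM = 1.496e13  # 1 astronomical unit in cm

def wind_field(b_ref_mg, r_ref_cm, r_cm):
    """Scale a magnetic field as B ∝ 1/r (equipartition with an r^-2 wind)."""
    return b_ref_mg * (r_ref_cm / r_cm)

# 40 mG measured at the 110 au projected distance of the flare spots,
# extrapolated inward to an assumed wind base of 6e11 cm
b_base_mg = wind_field(40.0, 110 * AU_CM, 6e11)
print(round(b_base_mg / 1000.0))  # in gauss -> 110
```

the same scaling applied outward to 2000 au reproduces the few - mg fields of the non - flare pairs discussed below .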
recently , a magnetic field of 1300 g was measured in one of the orion trapezium cluster o - stars @xmath19 orionis c ( wade et al . 2006 ) .
other ( non - flare ) zeeman pairs , which show weaker magnetic field strengths , are located farther away from the stars , at the projected distance of about 2000 au . at this distance the stellar wind 's magnetic field strength is a factor of 20 weaker , that is about 2.5 mg .
this is a typical value for the magnetic field strength in the non - flare oh maser features .
therefore , the high value of the magnetic field strength in the flare features is due to their proximity to the exciting star .
another possible source for the strong magnetic field in oh masers is a shock compression of gas .
the zeeman pairs are located in the common shell with water masers .
if we assume the magnetic field strength in water masers to be 100 mg and a gas density of @xmath20 @xmath21 then for oh masers with a magnetic field strength of 40 mg the gas density must be 1.6@xmath22 @xmath21 , if the density scales as _
@xmath23_. this is a factor 50 to 100 higher than theoretical density estimates for oh masers , although gray , hutawarakorn & cohen ( 2003 ) used an even higher density of 4.95@xmath22 @xmath21 in their model of polarized oh maser emission in w75n .
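the density implied by shock compression can be checked the same way ; a minimal sketch , assuming the field scales as the square root of the density ( i.e. n ∝ b^2 ) and the water - maser reference values quoted above :

```python
def compressed_density(n_ref, b_ref_mg, b_mg):
    """Density implied by B ∝ n**0.5 (i.e. n ∝ B**2) shock compression."""
    return n_ref * (b_mg / b_ref_mg) ** 2

# water masers: ~100 mG at n ~ 1e9 cm^-3; oh flare spots: 40 mG
n_oh = compressed_density(1e9, 100.0, 40.0)
print(f"{n_oh:.1e} cm^-3")  # -> 1.6e+08 cm^-3
```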
the magnetic field of the maser spots is the general interstellar field enhanced by gas compression .
both direct and reverse mhd shocks may be generated by recurrent outflows leading to the density enhancement at maser spot locations .
in such a model one is not expecting to observe a large physical motion for the maser spots in proper motion measurements .
rather , the shock motion causes consecutive brightening of the maser spot objects encountered by the shock .
this is the christmas tree model of maser emission variations , with lights flashing at fixed positions , as opposed to a model of physical motion for the maser spots .
the maser spot objects can be pre - existing gas condensations which are excited by the passing shock . the magnetic field of the maser spots can also be intrinsic to the maser spot objects .
every maser spot object may have its own magnetic field originating in a dense magnetized , solid or liquid core such as a rotating planet orbiting the central star .
the maser emission is generated in an extended water - methanol gas envelope which is formed by sublimation from the solid icy cover of the planet .
oh molecules are produced from water by dissociation as in comets .
such a model of masers was proposed by slysh et al . ( 1999 ) .
these envelopes can be energized by mhd shocks from the stellar wind in the same way as in previous model .
the maser emission is generated in those planetary envelopes which have been impacted by the shock at a particular moment . in this model
the masers are identified as icy planets rotating around young massive stars in the proto - planetary disk .
the disk is shielded from ionizing radiation of the star by the ultra - compact h ii region which rests inside the disk and absorbs all uv - photons , and by a cold dense dust core which absorbs visible and near - ir photons .
the core reemits all the absorbed energy in the far - infrared .
this emission may serve as a pump for the masers .
the masers in star forming regions are found to be associated with protostars or young massive stars of zams types o and b. although more than 200 extrasolar planets have been discovered around solar - type stars , little is known about the existence of planets orbiting young massive stars .
planets around a pulsar may perhaps serve as indirect evidence for planets belonging to more massive stars ( wolszczan & frail 1992 ) .
dusty disks with possible crystalline grains were discovered around two very massive stars in lmc ( kastner et al .
2006 ) , at the distance between 120 and 2500 au from the stars .
the kuiper belt - like structures composed of debris may exist around these stars , and formation of larger objects planets can not be excluded .
a very strong magnetic field of 40 mg has been detected in several oh maser spots which have appeared during a flare of oh maser emission in 2000 , within 110 au from the ultra - compact h ii region vla2 .
the magnetic field probably originates in the exciting star where its intensity is about 500 g , or from the compression of interstellar gas by mhd shock , or in icy planetary bodies serving as nuclei for the maser spot emission .
more frequent high angular resolution observations of future flares may help to distinguish between these models .
we acknowledge nrao and evn for providing efficient access to vlbi archive data .
nrao is a facility of the nsf operated under cooperative agreement by associated universities , inc .
the european vlbi network is a joint facility of european , chinese , south african and other radio astronomy institutes funded by their national research councils .
the work of vis was supported by rfbr grant no 04 - 02 - 17057a .
alakoz a.v . , slysh v.i . , popov m.v . , valtts i.e. , 2005 , astron . letters , 31 , 375
baart e.e . , cohen r.j . , davies r.d . , norris r.p . , rowland p.r . , 1986 , mnras , 219 , 145
fiebig d. , güsten r. , 1989 , a&a , 214 , 333
fish v.l . , reid m.j . , argon a.l . , zheng x .- w . , 2005 , apjs , 160 , 220
gray m.d . , hutawarakorn b. , cohen j.r . , 2003 , mnras , 343 , 1067
haschick a.d . , reid m.j . , burke b.f . , moran j.m . , miller g. , 1981 , apj , 244 , 76
hunter t.r . , taylor g.b . , felli m. , tofani g. , 1994 , a&a , 284 , 215
hutawarakorn b. , cohen r.j . , brebner g.c . , 2002 , mnras , 330 , 349
kastner j.h . , buchanan c.l . , sargent b. , forrest w.j . , 2006 , apj , 638 , l29
minier v. , conway j.e . , booth r.s . , 2001 , a&a , 369 , 278
sarma a.p . , troland t.h . , crutcher r.m . , roberts d.a . , 2002 , apj , 580 , 928
shepherd d.s . , kurtz s.e . , testi l. , 2004 , apj , 601 , 952
shu f. , najita j. , ostriker e. , wilkin f. , ruden s. , lizano s. , 1994 , apj , 429 , 781
slysh v.i . , migenes v. , valtts i.e. , lyubchenko s.yu . , horiuchi s. , altunin v.i . , fomalont e.b . , inoue m. , 2002 , apj , 564 , 317
slysh v.i . , valtts i.e. , kalensky s.v . , larionov g.m . , 1999 , astron . reports , 43 , 657
torrelles j.m . , gómez j.f . , rodríguez l.f . , curiel s. , vázquez r. , 1997 , apj , 489 , 744
torrelles j.m . et al . , 2003 , apj , 598 , l115
uchida y. , shibata k. , 1985 , pasj , 37 , 515
vlemmings w.h.t . , diamond p.j . , van langevelde h.j . , torrelles j.m . , 2006 , a&a , 448 , 597
wade g.a . , fullerton a.w . , donati j .- f . , landstreet j.d . , petit p. , strasser s. , 2006 , a&a ( in press , astro - ph/0601623 )
wolszczan a. , frail d.a . , 1992 , nat , 355 , 145
yngvesson k.s . , cardiasmenos a.g . , shanley j.f . , rydbeck o.e.h . , elldér j. , 1975 , apj , 195 , 91

abstract . a flare of oh maser emission was discovered in w75n in 2000 .
its location was determined with the vlba to be within 110 au from one of the ultra - compact h ii regions , vla2 .
the flare consisted of several maser spots .
four of the spots were found to form zeeman pairs , all of them with a magnetic field strength of about 40 mg .
this is the highest ever magnetic field strength found in oh masers , an order of magnitude higher than in typical oh masers .
three possible sources for the enhanced magnetic field are discussed : ( a ) the magnetic field of the exciting star dragged out by the stellar wind ; ( b ) the general interstellar field in the gas compressed by the mhd shock ; and ( c ) the magnetic field of planets which orbit the exciting star and produce maser emission in gaseous envelopes .
keywords : magnetic fields -- masers -- polarization -- stars : formation -- ism : individual : w75n -- ism : disks .
per mile driven , adults over age 65 are more likely to be involved in motor vehicle collisions than are younger experienced drivers , and declines in attention are related to older driver impairment [ 2 , 3 ] .
when a walking or balance task is combined with a cognitively challenging secondary task ( e.g. , memorizing a list of words ) , performance decrements are found for both tasks relative to performing each task separately .
these dual - task costs suggest that walking competes for shared attentional resources , as predicted by resource models of attention [ 5 - 9 ] .
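dual - task costs of this kind are conventionally expressed as the proportional change from single - task performance ; a minimal sketch of that computation , with purely illustrative walking - speed values ( not data from the studies cited ) :

```python
def dual_task_cost(single, dual):
    """Proportional dual-task cost (%): positive = worse under dual-task."""
    return 100.0 * (single - dual) / single

# illustrative walking speeds (m/s): 1.20 alone vs 1.02 while memorizing
print(round(dual_task_cost(1.20, 1.02), 1))  # -> 15.0
```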
older adults often have increased difficulty when multitasking , including paradigms that involve balancing or walking [ 10 - 12 ] .
for example , older adults show larger dual - task costs on a walking while memorizing task than do younger adults [ 13 - 16 ] . such declines in multitasking ability can have serious real - world consequences : approximately 30% of community - dwelling older adults experience one or more falls annually [ 17 , 18 ] .
age - related declines in the ability to multitask are related to an increase in falls risk .
for example , performance on a counting while walking task predicts falls in older adults ( see also [ 20 - 22 ] ) .
similarly , older adults at high risk for falls are less successful than low falls risk adults when crossing the street in a simulated environment while talking on a hands - free cell phone .
differences in multitasking ability that have been associated with falls risk are theorized to result from declines in executive control , the functions which select , schedule , and coordinate task processes .
low falls risk older adults outperform high falls risk adults on tasks theorized to index executive control abilities [ 24 - 26 ] .
this suggests that older adults with poorer executive control are worse at managing complex task demands pertaining to balance or gait and are , therefore , more likely to fall .
executive control is also important for other real - world tasks , such as driving .
drivers must attend to several areas of the environment and plan and execute responses to avoid collisions .
indeed , poorer performance on executive control tasks is predictive of retrospective crashes in a sample of older male drivers , and models of crash risk often comprise multidisciplinary factors , including physical ability , attention , and health [ 29–31 ] .
a history of falls is associated with older driver crashes , and some models of crash risk actually incorporate falls risk among other physical and cognitive measures .
importantly , however , the results linking falls risk and driving performance rely on crash reports and subjectively rated driving performance and do not identify the behaviors related to unsafe driving in high falls risk older adults .
the goal of the present study was to explore the relationship between falls risk and driving in older adults in greater detail using a high - fidelity driving simulator , which allowed us to place drivers in potentially dangerous situations and to collect objective performance measures .
we also included a battery of cognitive tasks to examine the relationship between falls risk , cognition , and simulated driving . given previous findings of heightened crash risk , we predicted that low falls risk drivers would outperform high falls risk drivers on our simulator driving assessments .
we further predicted that high falls risk drivers would show the greatest performance decrements in simulated driving performance under high multitasking load ( i.e. , when responding to unexpected events ) .
finally , we predicted that low falls risk older adults would outperform high falls risk adults on a desktop computer dual - task paradigm .
36 independent - living older adults were recruited from the urbana - champaign community and paid $ 8 per hour for participating .
all participants demonstrated normal or corrected - to - normal visual acuity ( 20/30 or better using a snellen chart ) and normal color vision ( ishihara color vision test ) and scored above 28 ( of 30 ) on the folstein minimental state exam .
the beckman institute driving simulator at the university of illinois ( http://isl.beckman.illinois.edu/ ) was used to assess simulated driving performance .
viewing distance was approximately 77 cm for all tasks , although participants were free to move their heads .
participants completed a falls history questionnaire ( i.e. , have you fallen in the last 6 months ? how many times ? ) .
in addition , we classified participants as high or low falls risk based on scores from the physiological profile assessment ( ppa ) , as described by lord and colleagues .
the ppa is a composite falls risk score based on measures of edge contrast sensitivity , hand reaction time , proprioception , leg muscle strength , and sway , which have shown to reliably predict falls in community and institutional settings [ 36 , 37 ] .
we set an a priori cutoff score of 0.6 to classify high and low falls risk ( i.e. , high falls risk ≥ 0.6 ) .
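as an illustration , the cutoff - based classification can be sketched as follows ( a minimal sketch ; the ppa composite itself is computed from the five physiological measures listed above and is not reproduced here , and the cutoff is assumed to be inclusive ) :

```python
# minimal sketch of the study's a priori classification rule:
# participants with a ppa composite score of 0.6 or above are
# treated as high falls risk (assumed inclusive cutoff).
PPA_CUTOFF = 0.6

def classify_falls_risk(ppa_score: float) -> str:
    """classify a participant from a ppa composite score."""
    return "high" if ppa_score >= PPA_CUTOFF else "low"

print(classify_falls_risk(0.8))  # -> high
print(classify_falls_risk(0.2))  # -> low
```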
fifteen participants were classified as high falls risk ( mean age = 75.8 , age range = 71–80 ) , and 15 participants were classified as low falls risk ( mean age = 74.4 , age range = 67–80 ) .
the high and low falls risk groups were statistically similar in age , driving experience , and current driving habits .
ppa scores were significantly correlated with times on the timed up - and - go ( tug ) test ( r = .61 , p = .001 ) .
for one task , participants determined whether a letter was an a or b and pressed corresponding keys with their right hand . in the second task
, participants determined whether a number was a 2 or 3 and pressed a corresponding key with their left hand . on single - task trials ( 50% ) , participants performed only one task . on dual - task trials ( 50% ) , they performed both tasks .
participants completed single - task and dual - task practice trials , followed by a block of forty intermixed single- and dual - task test trials .
participants searched for a white triangle within a circle among square distracters in a briefly ( 44 ms ) presented display .
targets were presented with equal probability on one of 8 radial spokes at eccentricities of 10° , 20° , and 30° from fixation .
the search display was followed by a 100 ms mask consisting of random black and white lines and shapes .
this task is similar to the peripheral localization subtask of the useful field of view , and we theorized that this measure might be predictive of older drivers ' ability to respond to peripheral events .
stimuli were 80 pairs of photographs of real driving scenes taken from the driver 's perspective .
each pair of images differed in one detail ( e.g. , a car in one image was removed from the other image ) . on each trial , participants saw a repeating cycle of 4 images : the first image ( 240 ms ) , a gray mask screen ( 80 ms ) , the modified image ( 240 ms ) , and a gray mask screen ( 80 ms ) . participants pressed a key when they detected the change ; the screen then froze , and participants clicked on the change location with the mouse .
participants had 30 seconds to respond and completed 1 practice trial followed by 40 test trials .
drivers followed a lead vehicle ( lv ) along a straight , two - lane highway for approximately 15 minutes .
participants were instructed to maintain a 5-second gap from the lv , which traveled at 45 mph , and practiced doing so during a practice drive .
at 20 random times during the test drives , the lv 's brake lights illuminated and its speed decreased .
when the driver pressed the brake , the lv accelerated back to 45 mph .
performance measures included response time to lv braking , following distance , and lane keeping .
drivers responded to potentially hazardous events as they drove along a straight , two - lane urban road for approximately 15 minutes .
ambient traffic and pedestrians were randomly generated such that there was a constant stream of traffic in the opposite lane , and the sidewalks were crowded with pedestrians .
hazards comprised pedestrians crossing the roadway and cars on the right shoulder beginning to pull out and stopping ( figure 1 ) .
the order of the task conditions was counterbalanced across participants . in the drive - only condition , participants drove without secondary task distraction . in the drive + 1-back condition , participants performed a cognitively demanding secondary task while driving : a continuous 1-back task in which they heard a letter every 3 seconds and indicated , via buttons on the steering wheel , whether the letter was the same as or different from the previous letter .
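the continuous 1-back judgment can be sketched as follows ( a toy sketch with made - up letters ; in the actual task the letters were presented auditorily every 3 seconds ) :

```python
# toy sketch of the 1-back judgment: for each letter after the first,
# the correct response is "same" if it matches the preceding letter.
def one_back_answers(letters):
    return ["same" if cur == prev else "different"
            for prev, cur in zip(letters, letters[1:])]

print(one_back_answers(["a", "a", "b", "c", "c"]))
# -> ['same', 'different', 'different', 'same']
```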
session 1 consisted of a screening drive for simulator sickness , descriptive measures , and falls risk assessment ( 6 potential participants showed signs of simulator sickness and were not included in the study ) . in sessions 2 and 3 , participants completed the three computer - based cognitive tasks , followed by practice with the secondary task and two driving assessments in the simulator .
three participants ( 2 high falls risk and 1 low falls risk ) did not complete the cognitive battery ( due to technical issues ) and were not included in analyses of the cognitive tasks .
dual - task cost was calculated by subtracting the single - task reaction time from the dual - task reaction time .
high falls risk participants had a significantly higher dual - task cost compared to the low falls risk group , f(1,23 ) = 6.88 , p < .05 .
single - task reaction times on the computer dual - task paradigm did not differ between the groups , f(1,23 ) = .19 , p = .67 , indicating that differences were not due to general slowing .
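the dual - task cost measure described above is a simple difference score ; a minimal sketch ( the reaction times below are hypothetical means , in ms ) :

```python
# dual-task cost = dual-task rt minus single-task rt, per participant;
# larger values mean greater interference from doing both tasks at once.
def dual_task_cost(single_task_rt, dual_task_rt):
    return dual_task_rt - single_task_rt

# hypothetical participant means (ms)
print(dual_task_cost(520.0, 710.0))  # -> 190.0
```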
localization accuracy on the ffov task did not differ between the falls risk groups ( p > .70 ) , nor did reaction time or accuracy on the change detection task ( p 's > .35 ; see table 1 ) .
driving measures were entered into an anova with falls risk group ( high versus low ) as a between - subjects factor and task condition ( drive - only versus drive + 1-back ) as a within - subjects factor .
two participants ( 1 high falls risk and 1 low falls risk ) who had passed the screening drive showed signs of simulator sickness during the experimental drives , did not complete the study , and were excluded from analyses .
collisions were infrequent in both driving tasks ( table 1 ) , precluding statistical analysis .
response time ( rt ) was defined as the time it took a driver to press the brake pedal following the onset of the lv brake lights or the triggering of a hazard event ( figure 2 ) .
low falls risk drivers responded significantly faster than high falls risk drivers to lv braking events , f(1,26 ) = 11.28 , p < .01 .
low falls risk drivers also responded faster than high falls risk drivers to the onset of hazard events , f(1,26 ) = 9.32 , p < .01 . in the following task , performing the auditory 1-back task slowed responses , f(1,26 ) = 5.24 , p < .05 , though this was not the case for the hazard drive , f(1,26 ) = .083 , p = .78 .
we ran separate analyses using hand reaction time , contrast sensitivity , and the combination of hand reaction time and contrast sensitivity as covariates to investigate the impact of specific components of the ppa . with hand reaction time as a covariate ,
low falls risk drivers still had significantly faster rts than did high falls risk drivers ( p 's < .05 ) . when contrast sensitivity was included as a covariate , low falls risk drivers responded faster than did high falls risk drivers in the hazard ( f(1,26 ) = 4.90 , p < .05 ) but not the following ( f(1,26 ) = 1.21 , p > .10 ) simulated driving tasks . when both hand reaction time and contrast sensitivity were included as covariates , brake rt differences between groups were no longer significant ( p 's > .10 ) .
high and low falls risk drivers did not differ in average velocity or lane keeping performance ( all p 's > .10 ) . on the following task ,
headway distance was defined as the average distance between the driver 's vehicle and the lv .
drivers increased their headway in the drive + 1-back condition , f(1,26 ) = 4.078 , p = .05 .
however , performing the concurrent secondary task did not differentially impair high falls risk drivers ( p > .40 ) . to examine
whether performance on the computer dual - task paradigm predicted simulated driving , we computed the correlation between dual - task cost on the computer paradigm and rt in the driving tasks .
participants with a lower computer dual - task cost responded faster to both lv braking events ( r = .42 , p < .05 ) and to hazard events ( r = .45 , p < .05 ) .
conversely , single - task performance in the computer dual - task paradigm was not correlated with rt in the simulated driving assessments ( p 's > .20 ; figures 3(c ) and 3(d ) ) .
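the correlational analysis relating computer dual - task cost to driving rt is a standard pearson correlation ; a self - contained sketch on hypothetical values :

```python
# minimal pearson correlation sketch (hypothetical data), mirroring the
# analysis relating computer dual-task cost to simulated-driving rt.
def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

xs = [100, 150, 200, 250, 300]   # hypothetical dual-task costs (ms)
ys = [1.5, 1.6, 1.9, 2.0, 2.2]   # hypothetical brake rts (s)
print(round(pearson_r(xs, ys), 2))  # -> 0.99
```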
we compared accuracy on the auditory 1-back task during 1-back only ( during the last half of practice ) and 1-back + driving performances to examine whether there were costs to secondary task performance when driving ( figure 4 ) .
there was a significant cost to 1-back accuracy in both the following drive ( f(1,26 ) = 254.5 , p < .001 ) and the hazard drive ( f(1,26 ) = 173.0 , p <
.001 ) , though there was no difference between falls risk groups ( p > .15 ) . to determine
if a group difference existed when the driving task was most demanding , we divided 1-back accuracy into critical segments ( i.e. , during peripheral hazard or lv braking events ) and noncritical segments ( i.e. , periods between critical events ) .
high falls risk participants were marginally worse than were low falls risk participants during critical periods in both the following ( f(1,26 ) = 3.27 , p = .084 ) and the hazard drives ( f(1,26 ) = 3.57 , p = .071 ) .
there were no differences in 1-back accuracy between groups in the noncritical segments for either drive ( p 's > .70 ; see figure 3 ) .
this indicates that , when responding to critical events , high falls risk drivers showed larger costs to secondary task performance than did low falls risk drivers .
though this may have been a compensatory strategy , it did not eliminate group differences in rt to critical driving events .
the current study compared the driving performance of high and low falls risk older adults in a high - fidelity driving simulator .
of greatest importance is the finding that high falls risk drivers responded approximately 400 ms slower than did low falls risk drivers to critical events .
high falls risk drivers responded slower than did low falls risk drivers to both central lead vehicle braking events and to peripheral hazards .
such slower responses may be a contributing factor to heightened crash rates for high falls risk older adults reported elsewhere .
our data extend the literature that examines multitasking performance in older adults at high and low risk for falls [ 19–23 ] . in our cognitive battery ,
high falls risk participants had greater dual - task costs on the computer paradigm than did low falls risk older adults , and , importantly , this was not due to differences in general slowing . in much the same way as walking , responding to critical events while driving requires the ability to multitask ; drivers must scan the environment and plan and execute evasive responses while controlling the vehicle . in our study , the ability to efficiently allocate attention among different tasks was most critical to responding to unexpected driving events in the simulator .
this is supported by the finding that performance on the computer dual - task paradigm predicted driving rts and further suggests that these multitasking differences between high and low falls risk older adults are somewhat general in nature .
previous research suggests that deficits in executive function likely underlie declines in multitasking performance and mediate the relationship between balance and falls [ 24 , 25 , 27 ] , as well as crash risk for older drivers .
changes in executive control likely contribute to the general multitasking differences shown in our cognitive battery and simulator driving assessments .
our results indicate that contrast sensitivity and response time were the most important components of the ppa relating to simulated driving rt .
previous work has found that contrast sensitivity and response time are important abilities in responding to driving hazards .
the present data suggest that these abilities are important to both walking and simulated driving .
we failed to find differences between high and low falls risk drivers on other simulator driving performance measures such as lane keeping .
previous research has shown that multitasking differences in older adults and differences between high and low falls risk older adults arise primarily at the highest levels of task demand [ 13 , 16 , 23 ] .
thus , in the present study , high and low falls risk drivers performed equally well during relatively low - demand driving intervals , but high falls risk drivers were more impaired in high - demand situations , resulting in slower responses . in the driving + 1-back condition , dual - task costs were found primarily in 1-back accuracy .
this may reflect a strategy whereby older adults compensated for higher task demands by sacrificing performance on the less safety - critical task . during responses to critical events ,
high falls risk drivers sacrificed 1-back accuracy more than did low falls risk drivers , which again suggests high falls risk participants struggled under high multitasking demand .
future work should explore the contribution of different components of executive control ( e.g. , switching and inhibition ) to deficits in real - world tasks such as walking and driving .
eye tracking techniques could inform as to whether high and low falls risk drivers differ in the way they deploy attention within a driving scene .
research should also explore other driving tasks where older adults are differentially involved in crashes , such as busy intersections .
the examination of on - road driving measures is also needed to validate that response time differences in the simulator translate to real - world driving . in summary ,
our results demonstrate that high falls risk older drivers respond slower than do low falls risk drivers when responding to potential dangers in a driving simulator .
a multidimensional approach that includes falls risk may be useful in more accurately assessing older driver impairment . | declines in executive function and dual - task performance have been related to falls in older adults , and recent research suggests that older adults at risk for falls also show impairments on real - world tasks , such as crossing a street .
the present study examined whether falls risk was associated with driving performance in a high - fidelity simulator .
participants were classified as high or low falls risk using the physiological profile assessment and completed a number of challenging simulated driving assessments in which they responded quickly to unexpected events .
high falls risk drivers had slower response times ( ~2.1 seconds ) to unexpected events compared to low falls risk drivers ( ~1.7 seconds ) .
furthermore , when asked to perform a concurrent cognitive task while driving , high falls risk drivers showed greater costs to secondary task performance than did low falls risk drivers , and low falls risk older adults also outperformed high falls risk older adults on a computer - based measure of dual - task performance .
our results suggest that attentional differences between high and low falls risk older adults extend to simulated driving performance . |
excessive bleeding during extensive endoscopic surgery of the paranasal sinuses can compromise the safety and efficiency of the surgical procedure . to ensure hemodynamic balance and patient safety , blood loss must therefore be monitored and controlled throughout the operation .
fortunately , the amount of blood lost during surgery can be easily determined by measuring the total quantity of blood suctioned from the operative field .
good surgical field visibility is one of the basic prerequisites for a precise and safe otolaryngological operation , and the main obstacle to good visibility is excessive perioperative bleeding .
several factors , including arterial blood pressure , heart rate and coagulation disorders , have a large impact on perioperative bleeding .
for this reason , every effort should be made to maintain these cardiovascular parameters at low levels .
one of the main methods of reducing perioperative bleeding during functional endoscopic sinus surgery ( fess ) is the use of controlled hypotension .
however , poorly controlled hypotension can lead to lower blood flow to organs that are sensitive to fluctuations in perfusion pressure .
the use of precisely dosed modern anaesthetic agents , together with proper patient positioning , allows us to manage haemodynamic parameters during surgery and thus to easily control hypotension .
although several such modern anaesthetic agents and methods are available , one method in particular , total intravenous anaesthesia ( tiva ) , has become increasingly popular in recent years [ 2–6 ] . in our practice , we have been using tiva with great success for several years now .
the aim of the present study was to compare tiva to two other types of conventional anaesthesia delivery in order to assess the impact on perioperative bleeding control and on mean arterial pressure and heart rate in patients before , during and after endoscopic paranasal sinus surgery .
in the 2-year period from 2008 to 2010 , 502 patients ( 209 women [ 41.6 % ] and 293 men [ 58.4 % ] ) , aged 18–85 years , underwent fess at the department of otolaryngology of the military medical academy in lodz . of these 502 patients , 90 ( 30 women and 60 men ) were included in the present study .
the women ranged in age from 18 to 75 years ( mean 46.6 ± 11.12 ) and the men from 20 to 85 years ( mean 54.4 ± 13.29 ) .
the inclusion criteria were based on the american society of anesthesiologists ( asa ) physical status scale ratings and were the same for all analgesic procedures .
the patients were allocated to one of three groups ( 30 patients each ) based on the intended general anaesthesia approach ( all with intubation , mechanical ventilation , and monitoring of vital and ventilation parameters ) .
both inhalation anaesthesia ( sevoflurane for sedation ) and intravenous anaesthesia were used for patients in groups i and ii . the only difference between groups i and ii was in the intravenous anaesthetic agent used ( fentanyl in group i , remifentanil in group ii ) .
the third group ( group iii ) received anaesthesia administered solely via the intravenous route ( tiva ) , with propofol used for sedation and remifentanil for analgesia .
target - controlled infusion ( tci ) was used to deliver the anaesthetics in group iii , with the infusion pump programmed in the usual manner .
briefly , this is done by entering the value of the target plasma drug level along with the patient s age and weight .
the device contains the appropriate pharmacokinetic profile and , using these input data , calculates and adjusts ( several times per minute ) the correct speed for intravenous drug delivery to achieve the targeted plasma level .
the pharmacokinetic profile programmed in the pump is specific for each company and development of this profile is based on several thousand studies of plasma drug concentration during anaesthesia in patients of various ages and weights [ 6 , 7 ] . for intubation , the same muscle relaxant was used in all three groups .
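the rate calculation that a tci pump performs can be illustrated , very roughly , with a one - compartment steady - state sketch ; the clearance coefficient below is an illustrative assumption , not the proprietary pharmacokinetic profile described above :

```python
# hedged, highly simplified one-compartment sketch of the tci idea:
# at steady state, the infusion rate needed to hold a target plasma
# level equals target concentration x clearance, with clearance
# scaled by patient weight. the coefficient is an assumed example
# value, not the vendor's pharmacokinetic model.
def maintenance_rate_mg_per_min(target_ug_per_ml, weight_kg,
                                clearance_l_per_min_per_kg=0.027):
    clearance_l_per_min = clearance_l_per_min_per_kg * weight_kg
    # ug/ml equals mg/l, so mg/l * l/min = mg/min
    return target_ug_per_ml * clearance_l_per_min

# assumed example: 3.0 ug/ml target in a 70 kg patient
print(round(maintenance_rate_mg_per_min(3.0, 70.0), 2))  # -> 5.67
```

a real pump re-solves a multi - compartment model several times per minute , as the text notes ; this sketch only shows the steady - state core of that computation .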
controlled hypotension was applied to maintain the systolic blood pressure below 100 mm hg , with analgesic and sedative doses administered accordingly .
the following parameters were assessed : duration of anaesthesia , duration of surgery , total perioperative blood loss and perioperative blood loss rate ( ml / min ) . patients who met the criteria for surgery were randomly allocated to one of the three groups .
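the blood loss rate parameter is simply total perioperative blood loss divided by surgery duration ; a minimal sketch with hypothetical values :

```python
# perioperative blood loss rate (ml/min) = total blood loss / surgery time
def blood_loss_rate(total_blood_loss_ml, surgery_duration_min):
    return total_blood_loss_ml / surgery_duration_min

# hypothetical case: 350 ml lost over a 70-minute operation
print(blood_loss_rate(350.0, 70.0))  # -> 5.0
```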
surgery was performed in the reverse trendelenburg position , at a 15–20° angle , without use of a shaver .
inclusion criteria for surgery were as follows : arterial blood pressure 140/90 mm hg and general anaesthesia risk class 2 on the asa scale .
patients took antihypertensive agents up to the day of the surgery to ensure a stable blood pressure during the perioperative period . on hospital admission ,
blood pressure was measured and a clinical examination was performed to assess cardiovascular status and the possible need for additional treatment .
in addition , the following laboratory tests were performed : blood cell counts , coagulogram , esr ( erythrocyte sedimentation rate ) and crp ( c - reactive protein ) .
all patients underwent a ct scan of the paranasal sinuses and , depending on comorbidities and blood type , additional tests were performed as necessary .
all patients received premedication ( benzodiazepine ) to minimize the impact of the sympathetic nervous system on the cardiovascular system .
all surgical interventions were conducted under general anaesthesia , induced with remifentanil and propofol ( anaesthesia induction ) .
cardiovascular system parameters were monitored every 15 min for at least 4 h following surgery . the mean arterial blood pressure
( map ) and heart rate ( hr ) values , as well as standard deviations , were calculated .
results were analysed with chi - squared tests , with a significance level of p < 0.05 . the student s t test was used for inter - group comparisons .
pearson s correlation coefficient was used to assess the strength of linear relationship between the variables .
group i consisted of 8 ( 26.7 % ) women and 22 ( 36.7 % ) men ; group ii 9 ( 30.0 % ) women and 21 ( 35.0 % ) men ; and group iii 13 ( 43.3 % ) women and 17 ( 28.3 % ) men .
the statistical analysis showed no significant differences between men and women in total blood loss or blood loss rate . at baseline , 55 of the 90 patients ( 61.1 % ) had normal arterial blood pressure , 25 ( 27.8 % ) were receiving treatment for hypertension and 10 ( 11.1 % ) had untreated hypertension .
mean anaesthesia duration was as follows ( fig . 1 ) : group i , 108.7 ± 20.8 min ; group ii , 112.6 ± 22.2 min ; and group iii , 103.7 ± 17.5 min . mean surgery duration was as follows ( fig . 2 ) : group i , 71.3 ± 16.7 min ; group ii , 78.8 ± 24.2 min ; and group iii , 66.5 ± 15.5 min . tukey 's hsd post hoc test revealed a statistically significant association between anaesthesia type and surgical duration : surgical time was significantly shorter in group iii versus groups i and ii ( p < 0.05 ; coefficient of determination , r² = 6.7 % ) .
[ figs . 1 and 2 : mean anaesthesia duration ( min ) and mean surgery duration ( min ) in individual anaesthesia types by patient gender ]
mean blood loss during surgery was as follows ( fig . 3 ) : group i , 365.0 ± 176.2 ml ; group ii , 340.0 ± 150.5 ml ; and group iii , 225.0 ± 91.7 ml . mean blood loss was significantly lower in group iii compared to groups i and ii ( p < 0.001 ; coefficient of determination , r² = 15.7 % ) ; no significant differences between groups i and ii were observed for this variable . mean blood loss rates were as follows ( fig . 4 ) : group i , 5.1 ± 2.4 ml / min ; group ii , 4.5 ± 2.2 ml / min ; and group iii , 3.4 ± 1.1 ml / min . the rate of blood loss was significantly lower in group iii versus groups i and ii ( p < 0.005 ; coefficient of determination , r² = 11.6 % ) .
[ figs . 3 and 4 : mean blood loss ( ml ) and rate of blood loss ( ml / min ) in individual anaesthesia types by patient gender ]
map values before , during and after surgery are shown in table 1 for all three study groups . in patients with normal blood pressure at baseline , differences in map values were not statistically significant ( p > 0.05 ) . in patients with
treated hypertension , blood pressure decreased ( in both men and women ) during surgery , although these values returned to pre - surgical levels after surgery .
notwithstanding these changes , the observed differences in map were not statistically significant ( p > 0.05 ) . in patients with untreated hypertension , blood pressure normalized ( i.e. became non - hypertensive ) during surgery in both men and women , but returned to pre - surgical levels after surgery ; however , this variation in blood pressure was not significant ( p > 0.05 ) .
table 1 . mean arterial pressure ( map ± sd , mm hg ) in the pre- , peri- and postoperative periods , given as women ( f ) / men ( m ) :
- patients with normal blood pressure : preoperative systolic 107.4 ± 20.2 / 115.2 ± 14.6 , diastolic 67.0 ± 9.5 / 80.4 ± 8.5 ; perioperative systolic 120.9 ± 15.3 / 121.4 ± 14.4 , diastolic 71.8 ± 11.2 / 73.6 ± 9.9 ; postoperative systolic 127.7 ± 15.7 / 125.5 ± 13.6 , diastolic 74.3 ± 12.0 / 81.0 ± 9.5
- patients with treated hypertension : preoperative systolic 136.3 ± 9.8 / 140.4 ± 12.9 , diastolic 87.8 ± 8.1 / 85.7 ± 9.7 ; perioperative systolic 121.0 ± 13.5 / 128.0 ± 14.6 , diastolic 64.4 ± 13.5 / 72.6 ± 9.9 ; postoperative systolic 141.6 ± 14.2 / 140.7 ± 13.2 , diastolic 82.3 ± 9.2 / 83.0 ± 9.9
- patients with untreated hypertension : preoperative systolic 143.0 ± 12.4 / 139.1 ± 11.5 , diastolic 84.7 ± 7.3 / 86.7 ± 9.8 ; perioperative systolic 126.2 ± 14.2 / 128.4 ± 15.6 , diastolic 77.3 ± 12.2 / 78.7 ± 11.5 ; postoperative systolic 140.8 ± 17.5 / 144.1 ± 16.2 , diastolic 78.7 ± 9.4 / 86.7 ± 9.8
as table 1 shows , map after surgery was higher than the baseline values before surgery ( regardless of the type of anaesthesia ) ; however , these differences were not significant ( p > 0.05 ) .
the greatest differences in map values occurred in patients with treated and untreated hypertension ( table 2 ) .
table 2 . mean heart rate ( hr ± sd , beats / min ) in the pre- , peri- and postoperative periods , given as women ( f ) / men ( m ) :
- patients with normal blood pressure : preoperative 77.1 ± 8.5 / 79.1 ± 10.4 ; perioperative 80.4 ± 9.7 / 81.7 ± 10.4 ; postoperative 73.8 ± 12.6 / 75.9 ± 10.2
- patients with treated hypertension : preoperative 81.9 ± 13.4 / 84.2 ± 12.9 ; perioperative 81.7 ± 11.8 / 83.5 ± 8.6 ; postoperative 76.4 ± 14.2 / 77.1 ± 11.8
- patients with untreated hypertension : preoperative 83.5 ± 10.4 / 85.8 ± 12.2 ; perioperative 84.1 ± 12.6 / 84.2 ± 16.2 ; postoperative 78.4 ± 10.6 / 80.2 ± 9.6
all of the outcome variables evaluated in this study ( surgical time , anaesthesia time , total blood loss and mean blood loss rate ) were lower in the group treated with tiva .
these findings strongly suggest that tiva is superior to conventional anaesthetic techniques for controlling perioperative bleeding and is a strong argument for the use of tiva in endoscopic paranasal sinus surgery .
surgical treatment of massive mucosal inflammation of the nose and paranasal sinuses with polyps is particularly challenging , even when endoscopic methods are used .
a two - stage surgical procedure is recommended , especially in patients who have previously had polyps removed from the nasal meatus ( due to the possibility of adhesions ) , or when topographical identification of the surgical field is impeded .
local bleeding , which is difficult to control due to anatomical and pathological characteristics , affects the visibility of the surgical field during fess .
excessive bleeding , which can be evaluated by the quantity of blood suctioned from the surgical field , further impairs this visibility .
therefore , it is important that anaesthesiology teams pay close attention to the hemodynamic parameters of the cardiovascular system , in addition to the usual monitoring of appropriate ventilation and analgesic procedures . according to the united states national library of medicine , controlled hypotension
is defined as a pharmacologically induced reduction of the systolic blood pressure to 80–90 mm hg , a reduction of map to 50–65 mm hg or a 30 % reduction in baseline map .
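these three criteria can be checked numerically ; the map estimate below uses the standard non - invasive formula ( an assumption , not stated in the text ) :

```python
# hedged sketch: standard non-invasive map estimate (assumed formula):
# map ~ diastolic + (systolic - diastolic) / 3
def mean_arterial_pressure(systolic, diastolic):
    return diastolic + (systolic - diastolic) / 3.0

# the three controlled-hypotension criteria quoted above: systolic
# 80-90 mm hg, map 50-65 mm hg, or map reduced 30% from baseline
def is_controlled_hypotension(systolic, diastolic, baseline_map):
    map_value = mean_arterial_pressure(systolic, diastolic)
    return (80 <= systolic <= 90
            or 50 <= map_value <= 65
            or map_value <= 0.70 * baseline_map)

print(round(mean_arterial_pressure(120, 80), 1))          # -> 93.3
print(is_controlled_hypotension(85, 55, baseline_map=93.3))  # -> True
```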
controlled hypotension can be induced by patient - controlled epidural anaesthesia and a number of different hypotensive drugs . of the various hypotensive medications available , vasoactive drugs ( nitroglycerin , beta blockers , calcium channel antagonists ) , clonidine and ace inhibitors are the most commonly used .
however , controlled hypotension is best achieved with precisely dosed , modern anaesthetic drugs , which have an immediate impact on hemodynamic parameters [ 10–12 ] .
the combinations of remifentanil and propofol or remifentanil and inhaled anaesthetics ( isoflurane , desflurane or sevoflurane ) seem to be ideal due to their good safety profile and the ease with which the precise dose can be prepared and delivered .
moreover , these drugs do not accumulate in the body and have no impact on post - anaesthetic recovery .
in addition , their use ensures that consciousness and psychomotor functions are recovered quickly upon termination of anaesthesia administration .
fentanyl , which is also commonly used , has an important drawback : it does not allow for a precise manipulation of haemodynamic parameters and can accumulate in the body depending on the dose applied [ 13–15 ] .
a major limitation of the conventional administration of intravenous drugs is that even though the total drug dose administered is determined by the anaesthetist , its concentration in the brain depends on the volume and rate of distribution , the drug s affinity for brain tissue and the speed of drug elimination from the body .
moreover , it is difficult to determine the appropriate drug infusion rate to ensure the appropriate level of sedation [ 16 , 17 ] . due to the disadvantages associated with
the conventional intravenous techniques described above , tiva and similar techniques have become popular in recent times and offer a viable alternative to inhalational anaesthesia .
the emergence and feasibility of tiva are due to the favourable pharmacokinetic and pharmacodynamic properties of newer drugs such as propofol .
similarly , new computer technology has allowed for the development of sophisticated delivery systems ( target - controlled infusion ) that enable anaesthetists to easily control intravenous delivery .
control and modification of cardiovascular system parameters ensure that arterial blood pressure and heart rate remain stable , thus facilitating good visibility of the surgical field , which in turn enables surgeons to work with greater confidence during surgery . in poland ,
nearly all laryngological centres perform surgery under general anaesthesia , and use of this type of general anaesthesia has become so widespread that it can now be considered the standard in laryngological surgery .
siekiewicz and colleagues evaluated the relationship between mean arterial pressure and perioperative bleeding during fess ( functional endoscopic sinus surgery ) in patients with a low heart rate .
they found that intraoperative bleeding is largely a function of map and hr : when the hr is maintained at 60 beats / min , there is no need , in many cases , to intensively reduce bleeding to achieve optimal surgical conditions .
surgery performed with drug - induced hypotension ( arterial blood pressure of 50–60 mm hg ) does not always assure the desired reduction of perioperative bleeding , because dilation of peripheral vessels and automatic tachycardia can ultimately increase bleeding .
therefore , it is essential to prevent recurrent automatic tachycardia and to maintain heart rate at 60 beats / min [ 18 , 19 ] . in that same study , siekiewicz and
colleagues used the fromm and boezzart scale to evaluate perioperative bleeding and surgical field visibility .
however , generalized use of such a low blood pressure might be risky depending on the age of the patient ; consequently , an individualized approach should be used .
theoretically , the decreased heart rate extends the diastolic duration and increases filling in the vessels , which ultimately results in increased cardiac output and bleeding in the operative field [ 18 , 19 ] . a postoperative increase in arterial blood pressure , as occurred in our study ,
is usually associated with pain and for this reason analgesic treatment should be implemented in the intensive care unit ( icu ) . in our case
, we did this by devising an algorithm to administer postoperative analgesics according to pain intensity levels . using a visual analogue scale ( vas ) ranging from 0 ( no pain ) to 10 ( very severe pain ) ,
the intensity of the postoperative pain was divided into four categories , as follows : i slight pain ( vas < 4 ) ; ii moderate pain ( vas 4–6 , pain up to 3 days ) ; iii severe pain ( vas 4–6 , pain for over 3 days ) ; and iv very severe pain ( vas > 6 ) . depending on the patient 's medical history ( hepatic failure , renal insufficiency , asthma , gastrointestinal disorders and blood clotting ) , we first administered non - opioid drugs , followed by opioids if the pain was long lasting .
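the four pain categories can be sketched as a small classifier . this is illustrative only ( the function name and numeric cut - offs follow the text ; the duration thresholds for vas 4–6 are read as "up to 3 days" versus "over 3 days" ) :

```python
def pain_category(vas, pain_days):
    # i  slight pain:      vas < 4
    # ii moderate pain:    vas 4-6, pain up to 3 days
    # iii severe pain:     vas 4-6, pain for over 3 days
    # iv very severe pain: vas > 6
    if vas < 4:
        return "i"
    if vas <= 6:
        return "ii" if pain_days <= 3 else "iii"
    return "iv"
```

for example , a patient reporting vas 5 on postoperative day 5 would fall into category iii ( severe pain ) .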
the use of technologically advanced dosing techniques ( tci ) during entirely intravenous general anaesthesia in group iii provides better control of hypotension , leading to less bleeding in the operative field , and a shorter operating time .
blood pressure in patients with hypertension ( whether treated or not ) should be pharmacologically normalized by an anaesthetist during the operation
. however , this is often not possible and in such cases intensive bleeding will likely hamper the operation . | the aim of the study was to assess the effect of three different types of anaesthesia on perioperative bleeding control and to analyse the mean arterial blood pressure and heart rate in patients undergoing endoscopic paranasal sinus surgery .
ninety patients ( 30 women and 60 men , aged 18–85 years ) scheduled to undergo functional endoscopic sinus surgery in the years 2008–2010 were identified as candidates for inclusion in the study .
patients were randomly assigned to one of three groups ( 30 patients each ) according to the type of general anaesthesia to be administered .
groups i and ii both received inhalation anaesthesia ( sevoflurane for sedation ) and intravenous anaesthesia ( fentanyl in group i , remifentanil in group ii ) .
anaesthesia was delivered solely via intravenous route ( tiva ) in group iii , with propofol used for sedation and remifentanil for analgesia .
blood pressure and heart rate were monitored during surgery and post - surgically for 4 h. mean anaesthesia duration in groups i , ii and iii was 108.7 ± 20.8 , 112.6 ± 22.2 and 103.7 ± 17.5 min and the surgery duration was 71.3 ± 16.7 , 78.8 ± 24.2 and 66.5 ± 15.5 min , respectively .
mean blood loss during surgery was 365.0 ± 176.2 , 340.0 ± 150.5 and 225.0 ± 91.7 ml , with a mean blood loss rate of 5.1 ± 2.4 , 4.5 ± 2.2 and 3.4 ± 1.1 ml / min in groups i , ii and iii , respectively .
technologically advanced control of the drug dose with the tiva technique allows for better control of perioperative bleeding . |
the study population consisted of individuals who had a comprehensive health examination at baseline ( 2003 ) and were reexamined 5 years later ( 2008 ) at kangbuk samsung hospital , college of medicine , sungkyunkwan university , south korea .
initially , 15,638 participants were identified and 416 were excluded for having type 2 diabetes at baseline ( based on any one or more of self - report , medical history and fasting plasma glucose results ) .
individuals with data missing at baseline for the following variables were also excluded : plasma glucose ( n = 1 ) , serum insulin ( n = 1,346 ) , bmi ( n = 26 ) , alcohol consumption ( n = 399 ) , smoking ( n = 361 ) , education ( n = 581 ) , and exercise ( n = 309 ) .
after all the exclusions , 12,853 participants were eligible for this analysis , of whom 223 were diagnosed with diabetes by 2008 .
questionnaires were used to ascertain information regarding alcohol consumption ( g / day ) , smoking ( never , ex- , current ) , duration of education ( school ≤ 12 years , college 13–14 years , university > 14 years ) , and frequency of exercise ( none , less than once a week , at least once a week ) .
blood samples for laboratory examinations were collected after an overnight fast . fasting plasma glucose , total cholesterol , triglyceride , and hdl cholesterol concentrations
were measured using bayer reagent packs on an automated chemistry analyzer ( advia 1650 autoanalyzer ; bayer diagnostics , leverkusen , germany ) .
insulin concentration was measured with an immunoradiometric assay ( biosource , nivelle , belgium ) with intra- and interassay coefficients of variation of 2.1–4.5% and 4.7–12.2% , respectively . the homeostasis model assessment ( homa ) index was calculated by the following equation : homa - ir = [ fasting insulin ( µiu / ml ) × fasting glucose ( mmol / l ) ] / 22.5 . since there are no population - specific thresholds to indicate ir in a korean population , we stratified the population using the 75th centile to establish an insulin - resistant group ( homa - ir ≥ 75th centile ) , which was compared with a more insulin - sensitive group ( homa - ir < 75th centile ) .
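the homa - ir calculation and the cohort - specific 75th - centile stratification can be sketched as follows . this is an illustrative sketch , not the study 's code ; the percentile is computed with a simple nearest - rank approximation and all names are invented for the example :

```python
def homa_ir(fasting_insulin_uiu_ml, fasting_glucose_mmol_l):
    # homa-ir = (fasting insulin [uIU/ml] x fasting glucose [mmol/l]) / 22.5
    return fasting_insulin_uiu_ml * fasting_glucose_mmol_l / 22.5

def insulin_resistant(cohort_homa_values, value):
    # cohort-specific threshold: 75th centile of homa-ir
    # (reported to be about 2.0 in this cohort); nearest-rank approximation
    ordered = sorted(cohort_homa_values)
    cutoff = ordered[int(0.75 * (len(ordered) - 1))]
    return value >= cutoff
```

for example , fasting insulin of 9 µiu / ml with fasting glucose of 5 mmol / l gives homa - ir = 45 / 22.5 = 2.0 , the approximate 75th - centile cut - off in this population .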
abdominal ultrasonography ( logic q700 mr ; general electric , milwaukee , wi ) using a 3.5-mhz probe was performed in all subjects by experienced clinical radiologists , and fatty liver was diagnosed based on standard criteria , including hepatorenal echo contrast , liver brightness , and vascular blurring ( 20 ) .
continuous variables were expressed as mean sd for normally distributed variables or median ( interquartile range ) if not normally distributed .
continuous variables were compared using independent t tests , non - normally distributed variables were compared using mann - whitney u tests , and categorical variables were expressed as percentages and compared between groups using the χ² test .
characteristics at baseline were compared between individuals who developed diabetes during follow - up and those remaining free from diabetes at follow - up .
comparisons between groups were also undertaken stratified by ir ( homa - ir ≥ 75th centile , homa - ir ≥ 2.0 ) and overweight / obesity ( bmi ≥ 25 kg / m² ) .
we used logistic regression to determine odds ratios ( ors ) for developing diabetes according to the presence of 1 ) a single baseline risk factor of interest , i.e. , insulin resistance , overweight / obesity , fatty liver ; 2 ) all combinations of two of these three baseline risk factors ; and 3 ) all three baseline risk factors compared with the group with none of these risk factors .
analyses were repeated after adjustment for age , sex , educational status , smoking status ( never , ex- , current ) , exercise frequency ( less than once a week or at least once a week ) , alcohol consumption ( g / day ) , alanine aminotransferase ( alt ) , and triglyceride levels . all data analysis was performed using spss , version 15.0 ( spss , chicago , il ) .
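as a simplified illustration of the odds ratios this analysis reports , an unadjusted or and its 95% ci can be computed from a 2×2 table . the study 's fully adjusted ors come from multivariable logistic regression , which this hedged sketch does not reproduce ; the woolf / wald ci method and all names are the example 's own assumptions :

```python
import math

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    # unadjusted odds ratio from a 2x2 table:
    # (cases/non-cases among exposed) / (cases/non-cases among unexposed)
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))

def wald_ci_95(or_value, a, b, c, d):
    # 95% confidence interval on the log-odds scale (woolf / wald method)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(math.log(or_value) - 1.96 * se),
            math.exp(math.log(or_value) + 1.96 * se))
```

for instance , with 20 cases among 100 exposed and 10 cases among 100 unexposed , the unadjusted or is ( 20 / 80 ) / ( 10 / 90 ) = 2.25 , with a wald ci spanning that point estimate .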
there were 223 cases of incident diabetes during follow - up , and the characteristics of these individuals compared with the remainder of the cohort are shown in table 1 .
the cohort was of working age with a preponderance of men . in the group with diabetes at follow - up , 69% of subjects had ir compared with 24% in the group remaining free from diabetes at follow - up ( p < 0.001 ) . in the group with diabetes at follow - up , 69% were overweight or obese and 68% had fatty liver at baseline , compared with 33% and 27% , respectively , for the group remaining free from diabetes ( p < 0.001 for all comparisons ) .
baseline characteristics in individuals with and without incident diabetes at follow - up
table 2 describes the characteristics of people in the following strata of bmi and insulin sensitivity ( baseline characteristics stratified by overweight / obesity and ir ) : normal weight and insulin sensitive ( group a ) ; normal weight and insulin resistant ( group b ) ; overweight / obese and insulin sensitive ( group c ) ; and overweight / obese and insulin resistant ( group d ) . the prevalence of fatty liver increased incrementally across these four groups .
the proportion of people with fatty liver in groups a , b , c , and d was 12 , 29 , 42 , and 68% , respectively .
we examined the association between each of the three risk factors of interest at baseline with incident diabetes at follow - up after adjustment for age , sex , educational status , smoking , alcohol , exercise , triglyceride , and alt .
each factor was independently associated with incident diabetes when all three were included in the model ( ir : or 3.92 [ 95% ci 2.86–5.37 ] , p < 0.0001 ; overweight / obesity : 1.62 [ 1.17–2.24 ] , p = 0.004 ; fatty liver : 2.42 [ 1.74–3.36 ] , p < 0.0001 ) .
next we examined the numbers of subjects ( with and without incident diabetes ) who had different combinations of the risk factors of interest at baseline .
there are seven potential combinations of the three risk factors of interest , and the ors for each of these combinations are shown in table 3 and are adjusted for 1 ) age and sex ; 2 ) age , sex , alcohol , smoking status , and exercise and educational levels ; and 3 ) age , sex , alcohol , smoking status , exercise and educational levels , and triglyceride and alt levels . adjustment for the factors in the second model had little effect but further adjustment for triglyceride and alt levels attenuated the ors slightly .
of the 223 incident cases of diabetes identified at follow - up , 26 people had none of the risk factors of interest , 37 had one , 56 had two , and 104 had three risk factors at baseline . in the fully adjusted model , the or ( 95% ci ) for incident diabetes for the presence of all three risk factors at baseline was 14.13 ( 8.99–22.21 ) .
the data in table 3 also describe how the three factors of interest cluster together . among people with one or more risk factors of interest in the whole cohort , the largest proportion ( 34% )
had overweight / obesity alone compared with 28% with fatty liver and 25% with ir as single risk factors .
the least frequent combination of two risk factors , occurring among 3% of people , was the combination of ir and fatty liver in the absence of overweight / obesity .
all three factors occurred together in 10% of people in the whole cohort at baseline .
in contrast , in the group with incident diabetes , the cluster of all three risk factors together occurred in 104/223 ( 47% ) of subjects , whereas only 26/223 ( 12% ) had none of these risk factors of interest .
table 3 : or for incident diabetes at follow - up for different combinations of ir , overweight / obesity , and fatty liver
we have quantified for the first time the powerful impact of the combined presence of ir , overweight / obesity , and fatty liver on the odds of developing diabetes .
importantly , we have established that each of these factors is independently associated with incident diabetes after adjustment for the other two risk factors and other relevant factors .
almost half of the subjects with incident type 2 diabetes at 5-year follow - up had all three risk factors at baseline , but this cluster occurred in only approximately 10% of the population that did not develop diabetes .
only 12% of incident cases of diabetes at follow - up did not have any of these three risk factors at baseline compared with 47% in the general population .
thus , the presence of all three risk factors occurring together was common in subjects who develop diabetes , emphasizing the importance and the frequency of the clustering of these three risk factors for type 2 diabetes .
we have shown previously that fatty liver is a predictor of diabetes , independently of ir ( 11 ) , and others have shown that fatty liver is a risk factor for incident diabetes ( 2123 ) . in a study of japanese men of similar age to the participants in our study , shibata et al .
( 21 ) showed that fatty liver at baseline was associated with an age- and bmi - adjusted hazard ratio of 5.5 ( 95% ci 3.6–8.5 , p < 0.001 ) for incident diabetes at 4-year follow - up .
our results extend the work of these authors as we show that there is also an additional strong association between fatty liver and incident diabetes , independently of ir , and we have quantified the risk of having all three risk factors . a diagnosis of fatty liver can be established noninvasively using techniques such as magnetic resonance spectroscopy , computed tomography , or ultrasound but , recently , proxy markers for nonalcoholic fatty liver disease ( e.g. , the nonalcoholic fatty liver disease fatty liver score and the fatty liver index that are generated from anthropometric and biochemical measurements ) have also been found to be associated with incident diabetes independently of potential confounding factors ( 24 ) . of the three risk factors of interest ,
overweight / obesity had the weakest association with incident diabetes ( fully adjusted or for overweight / obesity alone : 1.29 [ 0.62–2.71 ] ) and ir had the strongest association ( fully adjusted or for ir alone : 3.66 [ 1.89–7.08 ] ) .
it is possible that measures of central obesity such as waist circumference would have a stronger relationship with diabetes than bmi , but unfortunately waist measurements were not available for all cohort participants .
the or for incident diabetes was highest for the combination of ir , overweight / obesity , and fatty liver ( fully adjusted or 14.13 [ 8.99–22.2 ] ) .
tests for interaction ( data not shown but available from authors ) showed no statistically significant superadditive or synergistic association of the three factors with incident diabetes , but this may reflect the limited power of the study to detect statistically significant interactions .
although the most frequent combination of risk factors among subjects that developed diabetes was the presence of all three factors , 56/223 ( 25% ) had only two of the three risk factors .
of the different possible combinations of two risk factors , the data suggested that the combination of overweight / obesity and fatty liver ( in the absence of ir ) was associated with the lowest odds of diabetes ( or 3.23 [ 95% ci 1.78–5.89 ] ) and the combination of ir and fatty liver had the strongest association with diabetes ( 6.73 [ 3.49–12.73 ] ) , although cis are wide and overlap for these estimates .
fatty liver is emerging as an independent risk factor for diabetes , and our data suggest that its association with incident diabetes may be stronger than that of overweight / obesity and weaker than that of ir .
however , regardless of the relative strengths of these risk factors for diabetes , there was a striking and marked increase in odds of diabetes with the occurrence of all three risk factors .
the fact that they all have independent effects of each other suggests that targeted specific approaches to ameliorating the effects of each individual risk factor may have a considerable impact on decreasing risk of diabetes . in support of the notion that ir , obesity , and fatty liver each act via different mechanisms to increase risk of diabetes
, it has been shown recently that combined metformin and rosiglitazone treatment has discordant effects on central obesity , hepatic ir , and fatty liver ( 25 ) .
these investigators showed that although the rosiglitazone and metformin combination had no effect on central obesity , the combination has a transient effect on hepatic insulin sensitivity and a sustained effect on alt ( as a proxy marker for fatty liver ) .
overweight / obesity may increase fat accumulation in key insulin - sensitive tissues such as liver ( 26 ) and when fat accumulation occurs in liver , hepatic ir occurs via mechanisms that increase gluconeogenesis , decrease glycogen synthesis , and inhibit insulin signaling ( 15,16 ) .
physical inactivity is associated with hepatic ir ( 27 ) and modest increases in physical activity have recently been shown to be very effective in improving liver enzymes ( 28 ) and decreasing liver fat ( 2933 ) .
it is likely that relatively small increases in physical activity levels may decrease risk of type 2 diabetes in middle - aged individuals , not only through accepted improvements in glucose utilization and the promotion of weight loss , but also via a beneficial impact on liver fat and hepatic insulin sensitivity .
thus , the marked benefit on diabetes risk of increases in physical activity may be acting favorably to modify each of the three major risk factors that we have investigated in the current study .
we have used routine clinical data from an occupational cohort with a preponderance of men . although ultrasonography is a reasonably accurate technique for detecting modest amounts of liver fat ( > 30% liver fat infiltration ) , ultrasound has limited sensitivity to detect minor amounts of fatty infiltration .
oral glucose tolerance tests were not performed , so subjects with isolated 2-h postchallenge hyperglycemia at follow - up will not have been identified .
data were not available on family history of diabetes , participants lifetime exposure to alcohol , or use of drugs known to be associated with increased risk of diabetes ( although heavy alcohol consumption and use of drugs of interest is likely to be present only in a small percentage of people in this middle - aged occupational cohort ) .
data on waist circumference and inflammatory markers were incomplete ( only available for approximately 18% of the cohort ) , and therefore we were unable to use these data . additionally , we only had basic self - reported information on physical activity levels in this cohort , and consequently estimates are highly likely to be subject to measurement error .
the study is limited to one ethnic group , and the distribution of risk factors and their association with diabetes may differ by ethnic group .
our study was not large enough to investigate whether the identification of fatty liver provides a valuable addition to diabetes risk scores to improve risk prediction of diabetes , and further research in several populations is required to address this important issue . in conclusion , in a middle - aged occupational cohort study , we have shown that ir , overweight / obesity , and fatty liver commonly occur together and that each is independently associated with increased odds of developing type 2 diabetes .
we have quantified the cumulative impact of different combinations of ir , overweight / obesity , and fatty liver , and shown that the occurrence of all three risk factors together markedly increases the risk of developing diabetes .
further research is needed to understand the separate pathogenetic mechanisms by which ir , overweight / obesity , and fatty liver contribute individually to the development of type 2 diabetes .
it is also necessary to identify whether the effectiveness of lifestyle and pharmaceutical interventions varies for people with different combinations of risk factors . | objective : there is dissociation between insulin resistance , overweight / obesity , and fatty liver as risk factors for type 2 diabetes , suggesting that different mechanisms are involved .
our aim was to 1 ) quantify risk of incident diabetes at follow - up with different combinations of these risk factors at baseline and 2 ) determine whether each is an independent risk factor for diabetes . research design and methods : we examined 12,853 subjects without diabetes from a south korean occupational cohort , with insulin resistance ( ir ) ( homeostasis model assessment - ir ≥ 75th centile , ≥ 2.0 ) , fatty liver ( defined by standard ultrasound criteria ) , and overweight / obesity ( bmi ≥ 25 kg / m² ) identified at baseline .
odds ratios ( ors ) and 95% confidence intervals ( cis ) for incident diabetes at 5-year follow - up were estimated using logistic regression . results : we identified 223 incident cases of diabetes , from which 26 subjects had none of the three risk factors , 37 had one , 56 had two , and 104 had three . in the fully adjusted model ,
the or and ci for diabetes were 3.92 ( 2.86–5.37 ) for ir , 1.62 ( 1.17–2.24 ) for overweight / obesity , and 2.42 ( 1.74–3.36 ) for fatty liver .
the or for the presence of all three factors in a fully adjusted model was 14.13 ( 8.99–22.21 ) . conclusions : the clustering of ir , overweight / obesity , and fatty liver is common and markedly increases the odds of developing type 2 diabetes , but these factors also have effects independently of each other and of confounding factors .
the data suggest that treatment for each factor is needed to decrease risk of type 2 diabetes . |
Others say it is not clear that pornography itself causes the negative effects reported in studies of its outcomes, and some see Utah's measures as a religious state putting a public health spin on a private issue. ||||| (CNN) A state with a national reputation for wholesomeness is taking aim at a medium with quite a different reputation: the pornography industry.
Utah Gov. Gary R. Herbert signed two pieces of legislation on Tuesday that aim to combat what's called "a sexually toxic environment" caused by porn.
"Pornography is a public health crisis. Today I signed two bills that will bring its dangers to light. S.C.R. 9 calls for additional research and education so that more individuals and families are aware of the harmful effects of pornography," said Herbert on the governor's Facebook page.
One is technically a resolution, and the other one is a bill:
-- S.C.R. 9 Concurrent Resolution on the Public Health Crisis.
This resolution declares that pornography is "a public health hazard leading to a broad spectrum of individual and public health impacts and societal harms."
The resolution claims Utah would be the first state in the nation to make such a declaration.
It cites what is says are numerous detrimental effects of porn, including the treatment of "women as objects and commodities for the viewer's use."
It also says pornography "equates violence toward women and children with sex and pain with pleasure, which increases the demand for sex trafficking, prostitution, child sexual abuse images, and child pornography."
The resolution has no punishing powers; it doesn't specifically ban pornography in the state.
Jon Cox, spokesman for the Republican governor, said Monday the point of the resolution is to raise awareness and education. "We want Utah youths to understand the addictive habits" of porn that are "harmful to our society."
-- H.B. 155 Reporting of Child Pornography.
This bill is more specific, and has enforcement muscle.
It requires that computer technicians who find child pornography during their work should report it to law enforcement officials. The bill further stipulates that "the willful failure to report the child pornography" would be a class B misdemeanor.
H.B. 155 also specifies that Internet service providers are not liable if the provider "reports child pornography in compliance with specified federal law."
Claims of addiction
The Utah Coalition Against Pornography hailed the move on its Facebook page Monday. It encourages people to head to the Capitol and "celebrate and recognize this historic moment!"
The bills have the support of people such as Dawn Hawkins, the executive director of National Center on Sexual Exploitation in Washington, who is scheduled to appear at the signing.
In an interview in 2015 , she said, "Pornography encourages viewers to view their sexual partners in a dehumanized way, and it increases the acceptance and enjoyment of sexual violence and harmful beliefs about women, sex and rape."
In a video interview on the Salt Lake Tribune website in February, State Sen. Todd Weiler, chief sponsor of both pieces of legislation, said, "Pornography today is like tobacco was 70 years ago," comparing the addictive effects.
An interesting backdrop to this legislation: In 2009, a Harvard Business School study found that residents of Utah were the highest per capita purchasers of online adult entertainment in the United States.
Not everyone agrees porn is necessarily and automatically a problem.
Dan Savage, author of a nationally syndicated sex advice column , said porn can be a tool when dealing with discrepant desires or libidos, such as in the case of new fathers, who can turn to porn for variety or stimulation.
"We have a hard-wired desire for variety. Porn allows you to scratch that itch without physically cheating on your partner," he told CNN in 2015. ||||| Starting in 1996, Alexa Internet has been donating their crawl data to the Internet Archive. Flowing in every day, these data are added to the Wayback Machine after an embargo period. ||||| The question of how to raise kids in a world of ubiquitous pornography of the violent and misogynistic bent has long been on the radar of the American Academy of Pediatrics. It is of particular concern to pediatrician David Hill, who chairs the organization’s Council on Communications and Media. His group has been less than explicit about issuing guidelines on pornography versus non-pornography, seeing vital character-shaping cues from all types of media as germane to health. “We do recommend that parents keep screens out of kids bedrooms, to the extent they’re able to,” Hill told me. “We encourage parents to co-view TV and movies with kids, to give perspective. Movies and TV shows often do not show consequences for high-risk behaviors.” In many cases, high-risk behaviors of all sorts run together.
“That is not to suggest that you co-view pornography,” he said with a laugh. Obvious as that may seem, what we do know about the effects of viewing pornography on kids remains largely speculative. If access to pornography is categorically threatening to public health, he posits, why would it be that the U.S. is seeing historic lows in rates of teenage pregnancy and sexually transmitted infectious diseases? Why, too, would rates of domestic violence and rape be continuing to fall?
“I think the conclusions we can draw from the science are very limited,” said Hill. Usually, public-health crises are based on outcomes rather than risk factors, however plausible. While Dines and others cite many correlations between pornography consumption and negative health outcomes, the causal relationship is rarely explicit. Making that leap is especially tenuous when studies rely on subjects recalling and reporting information about taboo behaviors and thoughts, a notoriously unreliable approach. Yet, Hill notes, no one is going to do a prospective trial where kids are given porn in a controlled environment to see how they are affected.
“Now, as a parent,” he pivoted, “I am concerned. My experience with parental controls has been disappointing at best.”
Weiler has the same sense. “A lot of people say this is a parental thing, but I've had mothers tell me that they block pornography in their homes, and their kids get tablets at school, take them to McDonald's and log onto Wi-Fi, and they're sitting in McDonald's watching porn.”
McDonald's is working to stem this issue of errant purveyance of porn to minors, Weiler said. “But the same is true with libraries. They put their hand over their heart and say it's a first amendment issue. And it is! But we would be appalled if libraries and McDonald’s were handing out cigarettes to children.”
Weiler prefers to intervene upstream of the First Amendment. In the U.K. in 2013, David Cameron asked Internet service providers to create an opt-in option for pornography. Hence Weiler’s interest in a national movement. “If we can get 15 states to take this stand,” he said, “I think we can start putting pressure on Congress to do what England has done.” ||||| Sen. Todd Weiler, R-Woods Cross, is sponsoring a resolution that would designate pornography addiction as a "public health hazard." The resolution is scheduled to be considered at the Capitol on Friday at 4 p.m.
Weiler has compared porn addiction to tobacco addiction and has explained why he thinks porn is "dangerous." | – In an effort to fight today's "sexually toxic environment," the governor of Utah is expected on Tuesday to sign both a state resolution and a bill that declare pornography a "health hazard," CNN reports. Gov. Gary Herbert's resolution, more a general statement than a punishable mandate, says porn is "a public health hazard leading to a broad spectrum of individual and public health impacts and societal harms," with "detrimental effects" including the objectification of women, the linking of violence toward women and children with pleasure, and the promotion of sex trafficking, prostitution, and child porn, as well as biological addiction, emotional issues, and even changes in brain development, per the New York Daily News. Although the resolution won't actually come with penalties for porn-related infractions, the related HB 155 will, specifically for computer techs who don't report child porn while maintaining or fixing equipment. A "willful failure to report the child pornography" could result in a Class B misdemeanor. State Sen. Todd Weiler, sponsor of both measures, told the Salt Lake Tribune in February that "pornography [addiction] today is like tobacco was 70 years ago." Not everyone's on the anti-porn bandwagon, including sex columnist Dan Savage, who told CNN last year that humans have a "hard-wired desire for variety" that can be harmlessly satisfied through porn. Still others don't like Utah putting "a public health spin on a private issue," as the Daily News frames it. And some researchers remain skeptical that porn is causing some of the issues cited. "I think the conclusions we can draw from the science are very limited," American Academy of Pediatrics doctor David Hill tells the Atlantic. One interesting fact, per CNN: Utah was found to be the "pornography capital of America" in a 2009 Harvard Business School study. 
(A Utah professor was accused of watching child porn on his flight out of Salt Lake City.) |
eukaryotic cells can enclose their own cytoplasmic components in a double - membrane structure , the autophagosome , and deliver it to a lytic compartment , the vacuole / lysosome , where the contents are then degraded .
this conserved system is involved not only in the recycling of proteins under starvation conditions but also in the clearance of organelles and aberrant aggregate - prone proteins , digestion of invading pathogens , and so on [ 1 - 4 ] .
genes involved in autophagy were first identified by yeast genetic screenings [ 5 - 7 ] . at present , more than 30 autophagy - related ( atg ) genes have been identified in yeast , and of them at least 18 are essential for autophagosome formation , a crucial process in autophagy .
most of these 18 genes are conserved in mammals , suggesting that the mechanism of autophagosome formation is basically conserved from yeast to mammals .
the 18 atg proteins can be divided into five groups according to their functions [ 8 , 9 ] .
one group consists of subunits of a class iii phosphatidylinositol ( ptdins ) 3-kinase complex ( hereafter , ptdins 3-kinase indicates the class iii ptdins 3-kinase ) .
atg14 and vps30/atg6 are two such proteins and are included in this group together with vps34 and vps15 , the catalytic and regulatory subunits , respectively ( the functions of vps34 and vps15 in the vacuolar protein sorting pathway have been studied in detail , and thus they are not designated as atg proteins although they are essential for autophagy ) .
atg14 is a key subunit in determining the function of the ptdins 3-kinase complex and is the focus of this paper ( see following sections ) .
ptdins 3-kinase phosphorylates ptdins at the d-3 position of the inositol ring , generating ptdins(3)p . in yeast , vps34 is the sole ptdins 3-kinase .
vps34 is essential to both the vacuolar protein - sorting pathway and to autophagy [ 11 , 12 ] .
autophagic activity is completely abolished in vps34∆ cells expressing a lipid kinase - dead form of vps34 , indicating that production of ptdins(3)p is essential for autophagy . in yeast , it was shown that ptdins(3)p is enriched in the inner surface of the isolation membrane and autophagosome ( figure 1 ) .
produced ptdins(3)p recruits downstream molecules , such as atg18 , that are considered to be directly involved in autophagosome formation [ 14 , 15 ] .
for a general introduction to the function of ptdins 3-kinase and ptdins(3)p in autophagy , please refer to other reviews [ 16 , 17 ] . inhibitors of ptdins 3-kinase , such as wortmannin and 3-methyladenine , suppress autophagy in mammalian cells .
conversely , supplementation with ptdins(3)p , but not other phosphoinositides , enhances autophagic degradation in ht-29 cells .
like in yeast , ptdins 3-kinase is also required for vesicular trafficking toward the lytic compartment , the lysosome [ 23 , 24 ] .
as mentioned above , ptdins 3-kinase is required for both autophagy and vacuolar protein sorting [ 11 , 12 ] .
the means by which the sole ptdins 3-kinase , vps34 , is involved in these two distinct processes is explained by the existence of multiple ptdins 3-kinase complexes ( figure 1(b ) ) . in yeast , vps34 forms two distinct ptdins 3-kinase complexes ( complexes i and ii ) that are involved in different processes .
complex i specifically functions in autophagy while complex ii is required in the vacuolar protein sorting pathway .
both complexes have ptdins 3-kinase activity and share three common subunits , vps34 , vps15 , and vps30/atg6 .
the ptdins 3-kinase activity of vps34 is not required to form the ptdins 3-kinase complexes but is essential for autophagy and for the vacuolar protein sorting pathway .
vps15 is a serine / threonine protein kinase that phosphorylates vps34 and recruits it to the membrane fraction .
vps15 is myristoylated , but the membrane association of the ptdins 3-kinase complexes does not solely depend on myristoylation .
beclin 1 , a mammalian homolog of vps30/atg6 , interacts with various proteins involved in other processes and is proposed to serve as a platform upon which multiple cellular signals converge , thereby regulating the balance between autophagy and other biological processes [ 29 , 30 ] .
atg14 is specifically integrated into complex i , while complex ii contains vps38 as a specific subunit .
these specific factors play an essential role in sorting the ptdins 3-kinase complexes to the distinct processes .
atg14 bridges vps30/atg6 and vps34 to allow formation of complex i. similarly , vps38 bridges vps30/atg6 and vps34 to form complex ii .
thus , these unique subunits serve as connectors to form the ptdins 3-kinase complexes . in addition , deletion of atg14 does not affect vacuolar protein sorting , and , conversely , disruption of vps38 does not suppress autophagy [ 12 , 31 ] .
thus , the ptdins 3-kinase complexes are strictly sorted to distinct functions depending on the specific subunits , atg14 and vps38 .
complex i is present on the vacuolar membrane and at a perivacuolar structure called the preautophagosomal structure ( pas ) .
most atg proteins localize to the pas , and thus the pas is considered to be closely related to autophagosome formation .
pas localization of vps34 ( the catalytic subunit ) and vps30/atg6 is abolished in atg14∆ cells , indicating that complex i is targeted to the pas in an atg14 - dependent manner ( figure 1(b ) ) . on the other hand , deletion of vps38 does not affect pas localization of complex i. complex ii localizes to the vacuolar membrane and the endosome .
whether or not the endosomal localization of complex ii depends on vps38 is not a simple question . in vps38∆ cells , the endosomal localization of vps30/atg6 is lost .
thus , the endosomal localization of the complete complex ii , including vps30/atg6 , is dependent on vps38 , whereas that of the catalytic and regulatory subunits , vps34 and vps15 , is not .
in summary , atg14 has at least two functions in autophagy : ( i ) atg14 acts as a connector to form the ptdins 3-kinase complex , and ( ii ) atg14 directs complex i to function in autophagy by targeting it to the pas .
a simple blast database search failed to identify a mammalian homolog of atg14 . recently , however , several groups succeeded in identifying mammalian atg14 .
itakura et al . identified a candidate for mammalian atg14 by psi - blast , a method for detecting weak but biologically relevant sequence similarities that is more sensitive than conventional blast , and experimentally confirmed that it was the bona fide mammalian atg14 .
other groups identified mammalian atg14 using a biochemical approach , that is , by identifying proteins that are copurified with beclin 1 ( mammalian atg14 is hereafter called barkor / atg14(l ) ) [ 19 - 21 ] .
similar to yeast atg14 , barkor / atg14(l ) forms a ptdins 3-kinase complex with beclin 1 and mammalian homologs of vps34 and vps15 . on the other hand , uvrag , originally identified as a protein related to uv resistance , is considered to be a counterpart of vps38 .
thus , ptdins 3-kinase complexes that correspond to yeast complexes i and ii exist in mammals . in addition , mammalian cells have another ptdins 3-kinase complex composed of vps34 , vps15 , beclin 1 , uvrag , and rubicon , a rab7 effector containing a run domain [ 20 , 21 ] . the rubicon - containing complex negatively regulates maturation of the autophagosome . as is the case in yeast , the mammalian ptdins 3-kinase complexes share the common core subunits ( vps34 , vps15 , and beclin 1 ) .
discovery of barkor / atg14(l ) and subsequent research uncovered a similar mechanism that sorts the ptdins 3-kinase complexes to distinct functions .
barkor / atg14(l ) targets the ptdins 3-kinase complex to an er subdomain [ 35 , 36 ] .
a pas has not been identified in mammalian cells . instead , at least in some cases , autophagosomes are formed inside omegasomes , which are formed at specialized er subdomains prior to autophagosome formation [ 37 - 39 ] .
thus , in both yeast and mammals , atg14 directs one of the ptdins 3-kinase complexes to function specifically in autophagy by targeting it to the proposed site of autophagosome formation .
the difference in this sorting system between yeast and mammals is the targeting site of the ptdins 3-kinase complex ; yeast complex i is targeted to the pas , whereas mammalian autophagy - specific ptdins 3-kinase complex is targeted to the er subdomain .
the amino acid sequence of atg14 is not significantly conserved between yeast and mammals , which is consistent with the fact that a sensitive psi - blast method was needed to identify barkor / atg14(l ) .
nevertheless , we can still find similarities between yeast atg14 and barkor / atg14(l ) at the levels of local amino acid sequences and of secondary structures .
both yeast atg14 and barkor / atg14(l ) contain a cysteine - rich domain at the n - terminal region ( figure 2 ) .
conserved cysteine residues in this region are essential for er targeting of the ptdins 3-kinase complex in mammals . at the secondary structural level , both yeast atg14 and barkor / atg14(l ) contain coiled - coil domains .
deletion analysis of yeast atg14 revealed that the n - terminal half region containing the coiled - coil domains is sufficient to support autophagic activity , although at a significantly reduced level .
the n - terminal half of atg14 still has the ability to form complex i and to localize to the pas .
these results suggest that the two functions of atg14 ( bridging vps34 and vps30/atg6 , and targeting complex i to the pas ) are exerted by the n - terminal half region .
atg14 is unstable without the interaction with vps30/atg6 and vps34 through these coiled - coil domains .
although the c - terminal half of atg14 is not essential for the minimal level of autophagy , it is required to support a normal level of autophagic activity ( this issue is discussed later ) .
similarly , the coiled - coil domains of barkor / atg14(l ) play an essential role in autophagy .
coiled - coil domains are required for formation of the autophagy - specific ptdins 3-kinase complex [ 18 - 21 ] . of these , the second coiled - coil domain is involved in the interaction with beclin 1 [ 21 , 36 ] , as is the case in yeast atg14 .
endogenous barkor / atg14(l ) is stabilized by binding to beclin 1 and vps34 through the coiled - coil domains .
interestingly , an exogenously expressed barkor / atg14(l ) variant that lacks the coiled - coils still localizes to the isolation membrane or its precursor , suggesting that other regions are also important for proper localization of barkor / atg14(l ) .
recently , fan et al . identified a novel domain , called the barkor / atg14(l ) autophagosome targeting sequence ( bats ) domain , at the c - terminal region of barkor / atg14(l ) .
the bats domain is essential and sufficient for localizing barkor / atg14(l ) to the autophagosome .
an amphiphilic alpha helix resides at the c - terminus of the bats domain , and its hydrophobic side plays a crucial role in localizing to the isolation membrane and the autophagosome .
it was proposed that the bats domain senses membrane curvature and binds to the membrane through the hydrophobic side of the amphiphilic alpha helix , thereby targeting barkor / atg14(l ) to the isolation membrane to efficiently produce ptdins(3)p there .
notably , several prediction programs anticipate that a clear amphiphilic helix also resides within the c - terminal half of yeast atg14 ( figure 2 ) .
overexpression of barkor / atg14(l ) enhances autophagic activity even under nutrient - rich conditions .
thus , atg14 seems to be one of the limiting factors regulating autophagic activity .
autophagic activity is reduced in yeast cells expressing an atg14 variant lacking the c - terminal half ( hereafter , atg14-c ) compared to cells expressing the full - length atg14 ( atg14-fl ) .
cells expressing the atg14-c variant accumulate smaller autophagic bodies , indicating that atg14 has a close relationship with the size of the autophagosome .
we performed electron microscopy and measured the diameter of the smaller autophagic bodies accumulated in atg14-c cells ( figure 3 ) .
the average diameter of autophagic bodies accumulated in atg14-c cells is approximately 66% of that in cells expressing atg14-fl .
thus , the volume of each autophagic body in atg14-c cells is estimated to be 29% ( the cube of 66% ) of that in atg14-fl cells .
consistent with the estimation based on this electron microscopy , the actual autophagic activity in atg14-c cells , measured by an established biochemical assay , is approximately 33% of that in atg14-fl cells .
thus , the c - terminal half of atg14 is likely to be required to form a normal - sized autophagosome rather than to regulate the number of autophagosomes .
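the diameter - to - volume scaling described above can be checked with a few lines of arithmetic . the following snippet is an illustrative sketch using only the figures quoted in the text ( 66% diameter ratio , ~33% measured activity ) ; it reproduces the ~29% volume estimate .

```python
# illustrative check of the scaling argument in the text:
# autophagic-body volume scales with the cube of the diameter ratio.
diameter_ratio = 0.66            # atg14-c / atg14-fl, from electron microscopy
volume_ratio = diameter_ratio ** 3
print(f"predicted volume per body: {volume_ratio:.0%}")   # -> 29%

# the biochemically measured autophagic activity in atg14-c cells
measured_activity = 0.33
print(f"measured activity: {measured_activity:.0%}")      # -> 33%
```

the close agreement between the geometric prediction ( ~29% ) and the biochemical measurement ( ~33% ) is what supports the size - rather - than - number interpretation .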
it is possible that the c - terminal half of atg14 is directly involved in the modulation of autophagosome size . in this sense , it would be interesting to examine whether the amphiphilic helix within the c - terminal half is involved in modulating the curvature of the isolation membrane .
alternatively , the c - terminal half of atg14 may regulate autophagosome size indirectly through one or more downstream molecules .
deletion of atg14 affects the localization of atg8 , the atg12-atg5-atg16 complex , and the atg2-atg18 complex .
similarly , the size of the autophagosome correlates with the protein levels of atg8 [ 43 , 44 ] .
thus , it is possible that atg14 regulates autophagosome size indirectly through modulating atg8 recruitment to the pas . as mentioned above , overexpression of barkor / atg14(l ) activates autophagy in mammalian cells even under nutrient - rich conditions , indicating that barkor / atg14(l ) is one of the key players regulating autophagic activity in mammals .
other subunits of ptdins 3-kinase complexes are also involved in the regulation of autophagic activity .
for example , rubicon negatively regulates maturation of the endosome and the autophagosome by sequestering uvrag from the class c - vps / hops complex .
beclin 1 interacts with multiple proteins in addition to the core subunits of the ptdins 3-kinase complexes .
one of these , ambra1 , positively regulates autophagy and plays a crucial role in neural development .
beclin 1 also interacts with bcl-2 , an antiapoptotic protein , that is believed to regulate the balance between autophagy and apoptosis .
thus , beclin 1 may serve as a platform upon which cellular signals converge and function to regulate the crosstalk of multiple processes , including autophagy .
this function of regulating the balance of multiple cellular events has not been reported for yeast vps30/atg6 , which implies that beclin 1 acquired this regulatory role during evolution .
in addition to generating ptdins(3)p by the autophagy - specific ptdins 3-kinase , dephosphorylation of ptdins(3)p also plays an important role in regulating autophagy in mammals .
overexpression of ptdins(3)p phosphatases decreases autophagic activity , while the knockdown or the expression of a dominant - negative form of the phosphatases enhances autophagy [ 45 , 46 ] .
taken together , atg14 regulates autophagic activity , at least partially , both in yeast and mammals .
however , the barkor / atg14(l)-containing ptdins 3-kinase complex seems to play a more crucial role in determining autophagic activity than yeast complex i , and the regulation of the barkor / atg14(l ) complex may have evolved to function in a more sophisticated manner .
a conserved function of atg14 in autophagy is to target the ptdins 3-kinase complex to the probable site of autophagosome formation . an important problem to be solved is the mechanism whereby atg14 targets the ptdins 3-kinase complex to the pas in yeast and to the er subdomain in mammals .
there are some reports concerning the regulation of ptdins 3-kinase complex localization . according to a comprehensive analysis of the hierarchy of atg protein localization , proper targeting of atg14 is dependent on atg17 in yeast and on fip200 in mammals [ 35 , 41 ] , both of which are scaffold proteins for atg protein assembly [ 47 , 48 ] .
the conserved cysteine residues at the n - terminal region of barkor / atg14(l ) are required for er localization of barkor / atg14(l ) .
yeast vps15 , a regulatory subunit of the ptdins 3-kinase complexes , can localize to the pas even in atg14∆ cells while vps34 and vps30/atg6 cannot , indicating that vps15 also contains information related to targeting to the pas .
the function of the n - terminal half of atg14 has been largely , if not completely , determined . on the other hand , the function of the c - terminal half of atg14 is still unclear .
the c - terminal half of atg14 is likely involved in forming a normal - sized autophagosome , directly or indirectly . in this sense , it is interesting that the c - terminal bats domain of barkor / atg14(l ) binds to the membrane through the hydrophobic surface of the amphiphilic alpha helix .
the bats domain favors highly curved membranes that contain ptdins(3)p , which are considered to be the properties of the isolation membrane .
although the amino acid sequence of the bats domain is not conserved in yeast atg14 , a clear amphiphilic alpha helix is predicted within the c - terminal half of yeast atg14 . whether these amphiphilic alpha helices are involved in the regulation of autophagosome size or not is an interesting issue for future research . | phosphorylation of phosphatidylinositol ( ptdins ) by a ptdins 3-kinase is an essential process in autophagy .
atg14 , a specific subunit of one of the ptdins 3-kinase complexes , targets the complex to the probable site of autophagosome formation , thereby sorting the complex to function specifically in autophagy .
the n - terminal half of atg14 , containing coiled - coil domains , is required to form the ptdins 3-kinase complex and target it to the proper site .
the c - terminal half of yeast atg14 is suggested to be involved in the formation of a normal - sized autophagosome .
the c - terminal half of mammalian atg14 contains the barkor / atg14(l ) autophagosome - targeting sequence ( bats ) domain that preferentially binds to the highly curved membranes containing ptdins(3)p and is proposed to target the ptdins 3-kinase complex efficiently to the isolation membrane .
thus , the n- and c - terminal halves of atg14 are likely to have an essential core function and a regulatory role , respectively . |
in japan , tetrodotoxin ( ttx ) is the most common natural marine toxin to cause food poisoning , and it poses a serious hazard to public health . this toxin ( c11h17n3o8 ; figure 1 ) is a potent neurotoxin with a molecular weight of 319 , whose various derivatives have been separated from pufferfish , newts , frogs , and other ttx - bearing organisms .
when ingested by humans , ttx acts to block the sodium channels in the nerve cells and skeletal muscles , and to thereby block excitatory conduction , resulting in the occurrence of typical symptoms and signs ( table 1 ) and even death in severe cases .
the lethal potency is 5000 to 6000 mu / mg ( 1 mu ( mouse unit ) is defined as the amount of toxin required to kill a 20-gram male mouse within 30 min after intraperitoneal administration ) , and the minimum lethal dose for humans is estimated to be approximately 10,000 mu ( 2 mg ) . since 1964 , ttx has been detected in animals other than pufferfish , including newts , gobies , frogs , octopuses , gastropods , starfish , crabs , flatworms , and ribbon worms ( table 2 ) [ 5 , 6 ] .
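two of the figures quoted above can be cross - checked by simple arithmetic : the molecular weight of c11h17n3o8 , and the ~2 mg human minimum lethal dose implied by a potency of 5000 - 6000 mu / mg . the snippet below is an illustrative check ; the standard atomic masses are an assumption , not taken from the text .

```python
# cross-check of figures quoted in the text (standard atomic masses assumed)
atomic_mass = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
ttx_formula = {"C": 11, "H": 17, "N": 3, "O": 8}   # tetrodotoxin, C11H17N3O8

mw = sum(atomic_mass[el] * n for el, n in ttx_formula.items())
print(f"molecular weight: {mw:.0f}")               # -> 319, as stated

# minimum lethal dose for humans: ~10,000 MU at a potency of 5000-6000 MU/mg
for potency_mu_per_mg in (5000, 6000):
    dose_mg = 10_000 / potency_mu_per_mg
    print(f"{dose_mg:.1f} mg at {potency_mu_per_mg} MU/mg")   # ~2 mg
```

the computed mass ( ~319.3 ) and dose ( 1.7 - 2.0 mg ) agree with the stated values of 319 and approximately 2 mg .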
pufferfish are thought to accumulate ttx through several steps of the food chain , starting from ttx production by marine bacteria ( figure 2 ) [ 6 , 7 ] .
ttx poisoning due to marine gastropods occurs not only in japan , but also in china , taiwan , europe , and new zealand , suggesting further diversification of ttx - bearing organisms and therefore geographic expansion of ttx poisoning . in the present paper , we review ttx poisoning cases due to the ingestion of pufferfish and gastropods , and discuss the ttx intoxication mechanism of these organisms in an effort to contribute to the development of effective means of protecting humans against ttx poisoning .
marine pufferfish of the family tetraodontidae generally contain a large amount of ttx in their skin and viscera , especially the liver and ovary .
accordingly , edible species , their edible body tissues , and the allowable pufferfish fishing areas have been clearly stipulated in japan since 1983 , but still several tens of people are poisoned by pufferfish annually , and 2 to 3 people die as a result ( table 3 ) .
the incidence in specialized restaurants is rare ; most cases of poisoning occur when people with little knowledge of pufferfish toxicity cook a pufferfish that they caught or received from someone else and mistakenly eat strongly toxic parts such as the liver and ovary .
some pufferfish fans even dare to ingest the liver , believing that the toxin can be eliminated by their own special detoxification methods .
in october 2008 , one such fatal case occurred . the victim stated that he had cooked a usubahagi ( a sort of thread - sail filefish , kawahagi ) that he caught by himself and ate its raw meat ( sashimi ) after dipping it in a mixture of the liver and soy sauce .
approximately 30 minutes after ingestion , he felt numbness in his limbs , and 30 minutes later , he vomited and became comatose before being transported by an ambulance to the hospital .
the doctor confirmed his death approximately 4 hours after ingestion ; the initial diagnosis was ciguatera due to the ingestion of kawahagi liver , although the possibility of ttx poisoning could not be excluded .
thereafter , it was determined that the patient cooked a kinfugu ( local name of pufferfish ) with the usubahagi , but the liver was missing among the leftovers .
we investigated the leftovers , and revealed that the usubahagi was nontoxic , but the kinfugu was actually a highly toxic species , komonfugu takifugu poecilonotus , and 600 mu / g of ttx was detected in the skin .
furthermore , 0.7 mu / ml , 2 mu / ml , and 45 mu / g of ttx was detected in the blood , urine , and vomit of the patient , respectively , leading to the conclusion that this was a case of ttx intoxication due to the mistaken ingestion of t. poecilonotus liver .
recently , the nonedible pufferfish lagocephalus lunaris , which usually inhabits tropical or subtropical waters , has been frequently mixed up with edible species in japanese coastal waters , posing a serious food hygiene problem .
this pufferfish , which bears a very similar appearance to the almost nontoxic species l. wheeleri , also possesses high levels of ttx in its muscles [ 6 , 11 ] , and caused 5 poisoning incidents involving 11 patients due to mistaken ingestion in the kyushu and shikoku islands during 2008 - 2009 .
though not as frequent as in japan , many food poisoning cases due to ingestion of wild pufferfish have also occurred in china and taiwan [ 3 , 6 ] .
ttx - bearing gastropods and the food poisoning incidents due to their ingestion are summarized in tables 4 and 5 , and figure 3 . although the trumpet shell charonia sauliae is not usually sold on the market , it is sometimes eaten locally in japan . in december 1979 , a man in shimizu , shizuoka prefecture , japan , ingested the digestive gland of c. sauliae and was seriously poisoned .
he showed paralysis of his lips and mouth , and respiration failure , which are the typical symptoms and signs of pufferfish poisoning .
ttx was detected for the first time in a marine snail , that is , the leftovers of c. sauliae , and the causative agent was therefore concluded to be ttx .
similar poisonings occurred in 1 patient in wakayama prefecture in december 1982 , and in 2 patients in miyazaki prefecture in january 1987 . in c. sauliae , ttx localizes in the digestive gland , and other organs , including the muscle , are almost nontoxic .
the digestive gland toxicity of c. sauliae collected from shimizu bay in 1981 ranged from 77 to 350 mu / g . a subsequent toxicity survey based on a total of 1406 digestive glands of c. sauliae from 7 prefectures indicated that the frequency of toxic specimens in each prefecture ranged from 19% to 87% .
ttx or its derivatives have also been detected in closely related species , such as the frog shell tutufa lissostoma and the european trumpet shell charonia lampas lampas , the latter of which caused ttx poisoning in spain in 2007 .
the ivory shell babylonia japonica is usually ingested as a side dish with sake . in june 1957 , 5 persons were poisoned due to ingestion of this shellfish in teradomari , niigata prefecture , and 3 of them died .
the causative substance was estimated to be ttx based on the facts that the symptoms and signs of the victims were similar to those of the pufferfish poisoning , and that ttx was later detected in b. japonica collected from kawajiri bay , fukui prefecture in may 1980 . in april 2004 , a food poisoning incident resulting from the ingestion of the necrophagous marine snail nassarius ( alectricon ) glans occurred in tungsa island located in the south china sea , taiwan .
the causative agent was identified as ttx by instrumental analyses [ 16 , 17 ] . in a toxicity survey of 20 n. glans specimens collected from the same sea area , high toxicity was observed not only in the digestive gland , but also in the muscle ( averages of 538 and 1167 mu / g , respectively ) .
ttx poisonings due to n. glans have also occurred in japan recently . in july 2007 in nagasaki , nagasaki prefecture , a 60-year - old female developed a feverish feeling in the limbs , abdominal pain , and an active flush and edema in the face 15 minutes after ingesting the shellfish , and was administered intravenous fluids at a clinic near her home .
thereafter , her condition worsened , and she developed dyspnea , whole - body paralysis , and mydriasis ; she was finally transported to an emergency hospital .
the patient required an artificial respirator for the first 3 days , but recovered enough to take breakfast on the 4th day .
she unexpectedly relapsed after lunch , however , and developed respiratory arrest and was placed on the respirator again .
she gradually recovered and was discharged from the hospital 3 weeks later . immediately after the incident , we investigated the leftover gastropods and detected a maximum of 4290 mu / g of ttx in the cooked muscles and digestive glands of n. glans .
moreover , during subsequent investigations , an extremely high concentration of ttx and a putative ttx derivative , that is , a maximum of 10,200 mu / g ( 15,100 mu / individual ) in the viscera and 2370 mu / g ( 9860 mu / individual ) in the muscle , were detected in n. glans specimens collected from the same sea area as the ingested snails . in this case ,
although the reason is not clear , the recurrence might have been due to the digestion of a highly toxic , previously undigested tissue fragment of n. glans and its absorption upon the resumption of meals , again exposing her respiratory center to a high concentration of ttx .
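to put the n. glans toxicity values above in perspective , the snippet below converts them into the grams of tissue that would contain the estimated human minimum lethal dose ( ~10,000 mu , quoted earlier ) . this is illustrative arithmetic only , not a safety threshold .

```python
# grams of n. glans tissue containing ~10,000 MU (the estimated human
# minimum lethal dose quoted earlier); illustrative arithmetic only.
lethal_dose_mu = 10_000
toxicity_mu_per_g = {
    "cooked leftovers (max)": 4290,
    "viscera (max)": 10_200,
    "muscle (max)": 2370,
}
for tissue, mu_per_g in toxicity_mu_per_g.items():
    print(f"{tissue}: {lethal_dose_mu / mu_per_g:.1f} g")
# -> roughly 2.3 g, 1.0 g, and 4.2 g, respectively
```

in other words , at the maximum reported toxicities , on the order of 1 to 4 grams of tissue would suffice to reach the estimated lethal dose , which is consistent with the severity of the case described above .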
in association with the occurrence of ttx poisoning by c. sauliae in shizuoka prefecture in 1979 , ttx screening was performed in several species of small marine snails in japan .
zeuxis siquijorensis and niotha clathrata were found to possess ttx or a ttx - like substance .
there have been , however , no poisoning cases in japan , as japanese people do not typically feed on these species . on the other hand , inhabitants along the coast of the east china sea in china and taiwan have a long history of eating small marine snails , and zeuxis spp . are generally sold at supermarkets or fish markets in these areas . from 1977 to 2004 , more than 419 people were poisoned by ingesting these snails , and over 19 people died in zhoushan , fujian , and the ningxia hui autonomous region in china [ 3 , 21 - 23 ] .
furthermore , poisoning cases have spread along the coasts from fujian to tsingtao . in 1994 and 2001 , similar poisonings occurred in the southern and northern parts of taiwan , respectively , and the main causative substance was identified as ttx [ 23 - 25 ] . from july to november 2009 , 15 dogs were suddenly poisoned at beaches adjacent to the hauraki gulf , auckland , new zealand , all exhibiting similar symptoms , and 5 of them died .
mcnabb et al . detected a very high level of ttx in the grey side - gilled sea slug pleurobranchaea maculata found in tide pools near the beach and claimed that the dogs were poisoned with ttx by contact with the sea slugs .
ttx was found in the eggs and larvae and distributed over the whole body with increasing concentrations toward the outer tissues in the adult sea slugs .
marked individual and regional variations are observed in pufferfish toxicity . in addition , the facts that the ttx of c. sauliae and b. japonica comes from the food chain , as described below , and that several shell fragments of z. siquijorensis have been detected in the digestive tract of the toxic pufferfish takifugu pardalis suggest that the ttx contained in pufferfish is exogenous , acquired via the food chain [ 6 , 7 ] .
moreover , many studies of ttx have revealed that ( 1 ) ttx is distributed over various organisms other than pufferfish , ( 2 ) ttx is primarily produced by marine bacteria ( table 6 ) , ( 3 ) pufferfish become nontoxic when they are fed ttx - free diets in a closed environment in which there has been no invasion of ttx - bearing organisms , ( 4 ) such nontoxic pufferfish efficiently accumulate ttx when it is orally administered , and ( 5 ) pufferfish are equipped with high resistance to ttx . these findings support the exogenous intoxication theory , the hypothesis that ttx is originally produced by marine bacteria and that pufferfish accumulate it through the food chain that starts with these bacteria [ 6 , 7 ] . to test ( 3 ) , we investigated the toxicity of more than 8700 individual pufferfish that had been reared in an environment in which the invasion of ttx - bearers was prevented and were provided nontoxic diets , in netcages in the sea or in tanks with an open or closed circulation system on land , and confirmed that all the livers remained nontoxic ( table 7 ) [ 6 , 25 ] .
production of nontoxic pufferfish can reduce the risk of food poisoning from eating toxic pufferfish and reduce the mortality rate . moreover , this method might also contribute to maintaining japanese food culture by reviving pufferfish liver dishes as a safe traditional food ; although eaten previously , pufferfish liver has been prohibited as a food since the 1983 regulation in japan . the transfer , accumulation , and elimination mechanisms of ttx taken up into the pufferfish body via food organisms remain unclear .
we recently found that ttx administered intramuscularly to nontoxic cultured specimens of the pufferfish takifugu rubripes was transferred first to the liver and then to the skin via the blood .
matsumoto / nagashima et al . demonstrated that , unlike general nontoxic fish , the liver tissue of t. rubripes is equipped with a specific ttx - uptake mechanism [ 2729 ] , and using a pharmacokinetic model showed that ttx introduced into the pufferfish body is rapidly taken up into the liver via the blood [ 30 , 31 ] .
these findings indicate that marine pufferfish are endowed with a mechanism by which they transport ttx specifically and actively .
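the pharmacokinetic picture sketched above ( rapid uptake of ttx from the blood into the liver ) can be illustrated with a toy two - compartment calculation . this is only a schematic sketch with invented rate constants k_up and k_el , not the actual model of the cited studies :

```python
# Toy two-compartment kinetics: TTX in the blood is taken up by the liver.
# k_up (blood -> liver) and k_el (elimination from the blood) are invented,
# purely illustrative rate constants, not fitted values from the literature.
def simulate(ttx_dose=1.0, k_up=0.5, k_el=0.05, dt=0.01, t_end=24.0):
    blood, liver = ttx_dose, 0.0
    t = 0.0
    while t < t_end:
        uptake = k_up * blood * dt   # transfer into the liver via the bloodstream
        loss = k_el * blood * dt     # nonspecific elimination from the blood
        blood -= uptake + loss
        liver += uptake
        t += dt
    return blood, liver

blood, liver = simulate()
print(f"blood: {blood:.2e}, liver: {liver:.3f}")
```

with these invented constants essentially the whole dose ends up in the liver compartment , mirroring the qualitative statement that ttx introduced into the pufferfish body is rapidly taken up into the liver via the blood .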
ttx - binding proteins have been isolated from the blood plasma of marine pufferfish , and may be involved in the transportation mechanism [ 32 , 33 ] . in wild pufferfish ,
the liver and ovary usually have strong toxicity , whereas the muscle and testis are weakly toxic or nontoxic .
in addition , the toxicity varies with the season , usually reaching the highest level during the spawning season ( march to june in japan ) , indicating sexual differences in pufferfish toxicity and that maturation may affect toxin kinetics in the pufferfish body .
recently , we investigated seasonal changes in tissue toxicity and the amount and forms of ttx in the blood plasma using wild specimens of the pufferfish t. poecilonotus and demonstrated that maturation greatly affects the intertissue transfer and/or accumulation of ttx via the bloodstream .
the trumpet shell c. sauliae is a carnivorous marine snail , and fragments of the starfish astropecten polyacanthus were detected in the digestive tract of the specimens collected from shimizu bay in association with the food poisoning in 1979 .
moreover , an experiment in which nontoxic c. sauliae were fed toxic starfish demonstrated that the ttx of c. sauliae is derived from these starfish , namely , their food source [ 35 , 38 ] .
the starfish of genus astropecten are also carnivorous , and their toxin is also estimated to come from their food .
the ivory shell b. japonica is necrophagous and feeds on the muscles and viscera of dead fish . in the hokuriku and joetsu districts along the japan sea , where sakajiri bay is located and where the ttx intoxication of b. japonica was recognized in 1980 , fishermen are familiar with the feeding habits of b. japonica and catch them using the viscera of the dead toxic pufferfish takifugu niphobles as bait .
we performed an experiment similar to that with c. sauliae and observed that b. japonica preferentially ate dead pufferfish viscera , thereby accumulating ttx .
it is presumed that the b. japonica that caused poisoning in teradomari of the joetsu district were intoxicated with ttx by a similar mechanism .
although the ttx intoxication mechanisms of n. glans in tsungsa island as well as nagasaki and kumamoto prefectures are unclear , the necrophagous characteristics of the snail suggest that dead pufferfish viscera are one of the origins of ttx .
the toxicity of the nagasaki / kumamoto specimens of n. glans collected from september to january was highest in september , and gradually decreased thereafter ( figure 4 ) [ 10 , 18 ] .
there are no data on the other months , but both poisoning incidents in nagasaki and kumamoto occurred in july , indicating that the n. glans had already accumulated a high concentration of ttx that month . in japan , t. niphobles comes en masse to the seashore to spawn their eggs in june , and die shortly thereafter .
the spawning season of t. niphobles almost corresponds to the intoxication season of n. glans , indicating a possibility that n. glans is intoxicated by feeding on the mass of dead t. niphobles at the sea bottom .
the occurrence of food poisoning cases in china and taiwan is concentrated from spring to early summer ( table 5 ) , somewhat earlier than that of the nagasaki / kumamoto incidents . on the other hand ,
the season during which toxic pufferfish approach the seacoast in a group to spawn is earlier in china and taiwan than in japan , as the latitude of the area where the poisonings occur is lower than that of japan proper ( figure 3 ) .
therefore , the season when poisonings occur appears to correspond to the spawning season of toxic pufferfish .
the small marine snails that have caused food poisonings in china and taiwan are all necrophagous , having the same feeding habit as b. japonica and n. glans , and seem to be intoxicated by the same mechanism ; they accumulate ttx by feeding on the viscera of toxic pufferfish that died after spawning . in this context , ttx has been found to act as an attractant to toxic marine snails . in our experiment using 8 toxic and 2 nontoxic snail species to investigate the attracting effect of ttx , we observed a significantly positive correlation between toxicity and comparative attracting activity in the toxic species , whereas the nontoxic species showed a negative response to ttx .
carnivorous or necrophagous marine snails generally live at the sea bottom , and their habitat , including their prey and food sources , is very limited . under such conditions ,
the snails may be endowed with the ability to detect ttx - bearing foods and to ingest them selectively as a species - specific characteristic .
although necrophagous small snails ingest ttx - containing foods selectively , they also have access to diets contaminated with paralytic shellfish poison ( psp ) , a group of neurotoxins produced by certain species of dinoflagellates whose main component , stx , has a molecular size and action mechanism almost equivalent to those of ttx . in such cases , they accumulate not only ttx but also psp , as seen in natica lineata , niotha clathrata [ 23 , 24 ] , and zeuxis scalaris [ 23 , 24 ] in pingtung , taiwan .
this is also the case in the toxic crabs zosimus aeneus in the philippines and taiwan , and atergatis floridus in taiwan .
according to mcnabb et al . , sea slugs are carnivorous scavengers living in shallow subtidal crustose turf / benthic algal communities .
sea slugs are generally not used as human food , but the dog poisonings may be viewed as a warning for public hygiene : if their intoxication is caused by a route other than the presently known food chain , this may point to a novel source organism of ttx , and the food chain that begins with this organism may contaminate with ttx seafood previously thought to be safe .
ttx was originally named after the family name , tetraodontidae , of pufferfish as their exclusive toxin , and ttx poisoning due to ingestion of pufferfish has long been recognized .
ttx poisoning due to gastropods , however , has also begun to occur frequently , posing a serious food hygiene problem .
ttx is exogenous to both pufferfish and gastropods , and they are thought to ingest it from toxic food organisms and to accumulate the ttx in specific organs .
interestingly , it is presumed that live pufferfish ingest / accumulate ttx from necrophagous small or medium marine snails , while on the other hand , these snails ingest / accumulate the toxin from dead pufferfish .
thus , it is possible that the ttx produced by bacteria not only transfers to higher organisms through the food chain , but that it also partly circulates between certain organisms ( figure 2 ) . as described above , the pufferfish l. lunaris , originally inhabiting tropical to subtropical sea areas ,
now frequently appears in the temperate coastal waters of japan , and dog poisonings due to sea slugs have suddenly begun to occur in the southern hemisphere .
such facts indicate the possibility of further geographic expansion and/or diversification of ttx - bearing organisms , or of ttx contamination of seafood caused by a change in the marine environment , such as an increase in the water temperature due to global warming .
careful attention must be paid to this point from the food hygiene perspective in the future .

marine pufferfish generally contain a large amount of tetrodotoxin ( ttx ) in their skin and viscera , and have caused many incidences of food poisoning , especially in japan .
edible species and body tissues of pufferfish , as well as their allowable fishing areas , are therefore clearly stipulated in japan , but still 2 to 3 people die every year due to pufferfish poisoning .
ttx is originally produced by marine bacteria , and pufferfish are intoxicated through the food chain that starts with the bacteria .
pufferfish become nontoxic when fed ttx - free diets in a closed environment in which there is no possible invasion of ttx - bearing organisms . on the other hand , ttx poisoning due to marine snails
has recently spread through japan , china , taiwan , and europe .
in addition , ttx poisoning of dogs due to the ingestion of sea slugs was recently reported in new zealand .
ttx in these gastropods also seems to be exogenous ; carnivorous large snails are intoxicated by eating toxic starfish , and necrophagous small - to - medium snails by eating the viscera of pufferfish that have died after spawning .
close attention must be paid to the geographic expansion and/or diversification of ttx - bearing organisms , and to the sudden occurrence of other forms of ttx poisoning due to their ingestion . |
the renormalization group ( rg ) transformation is one of the most powerful and frequently used conceptual as well as practical tools in statistical physics and quantum field theory . while conceptually the idea of combining variables on neighbouring sites into complexes is very simple , in practice it almost always turns out to be rather complicated . historically , the usefulness of rg transformations was realized after the d=2 ising model on the triangular lattice was very elegantly solved by niemeijer and van leeuwen @xcite using block spinning . the nonlinear rg method they used was peculiar to that particular system and did not allow generalization to more complicated cases . for more general systems wilson @xcite proposed to use the weak coupling ( low temperature ) perturbation theory in momentum space .
this was first applied to scalar @xmath6 models and subsequently to spin systems @xcite and lattice gauge theories for thinning by a factor of 2 @xcite . for complicated systems like those with local gauge symmetries , approximate methods were developed , such as the migdal - kadanoff approximation @xcite , variational rg @xcite , mean - field rg @xcite , or block spinning using monte carlo numerical methods @xcite .
however , unlike perturbation theory , these approximations are uncontrollable in the sense that it is not clear how to estimate their errors .
exact rg transformations are generally not known ( exceptions are decimations in @xmath0 spin chains and the very special cases in @xmath2 mentioned above ) . moreover , after one rg transformation the resulting action generically contains an infinite number of interaction terms , and therefore one is forced to make an additional approximation , dropping some of them ( hopefully the less relevant ones ) .
rg transformations are especially useful when applied repeatedly .
this requires self similarity of the approximate effective action and is justified only around fixed points .
note however that accurate thinning of the lattice even just by factor @xmath5 can greatly facilitate the study of a model by means of subsequent mc simulation .
there are several types of the rg transformations .
the conceptually simplest one is the decimation or thinning of degrees of freedom in the configuration space .
some degrees of freedom located , for example , on sites with at least one odd coordinate are simply integrated out .
fig . 1 . decimation : full circles belong to the sublattice @xmath7 , while empty circles @xmath8 denote the integrated - out sites .
an example is given in fig . 1 , in which the spins at the empty - circle points are integrated out .
@xmath9}=\sum_{\phi(x):x \in { \cal l}^*}e^{-a^{dec}[\phi(x ) ] } \label{z}\ ] ] here and in what follows points of the coarse lattice are denoted by capital letters .
the resulting effective action @xmath10 generally contains interactions of any range . here @xmath11 is the original @xmath12 - dimensional lattice , while @xmath7 is a sublattice .
note that the remaining variables are a subset of the old variables . this is not the case for the so - called block spin transformations .
one defines a linear or a nonlinear combination of the variables on @xmath11 , the block spin : @xmath13 \label{<o>}\ ] ] for example , for @xmath1 classical spins @xmath14 one can define @xcite @xmath15 this combination is highly ambiguous , and the success of the transformation depends critically on its choice .
the main problem is that it is extremely difficult in practice to perform such a transformation even perturbatively : relations like eq . ( [ block ] ) are very nonlinear and even singular @xcite . another type of rg transformation , used especially extensively in field theory , is the momentum space rg @xcite .
one defines the momentum space variables @xmath16 now one performs the integration over high frequency modes ( strictly speaking , chopping the brillouin zone , but more often the approximate spherically symmetric momentum cutoff @xmath17 is utilized @xcite ) .
this type of rg transformation , while convenient for the @xmath6 model , turns out to be especially inconvenient for constrained systems like the @xmath1 symmetric spin models . the reason is the following : while in x - space the constraints are generally local , for example @xmath18 in p - space the constraint becomes a convolution . what , then , do `` high frequency physical modes '' mean ? the constraint mixes low and high frequencies .
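the statement that the constraint becomes a convolution can be made concrete : for @xmath1 spins the local condition that each spin has unit length turns , after a discrete fourier transform , into a relation coupling all momentum modes . a small numerical illustration , assuming o(2) spins on a periodic chain :

```python
import numpy as np

# an O(2) spin configuration on a periodic chain: s(x) = (cos t(x), sin t(x)),
# so the local constraint s(x) . s(x) = 1 holds at every site
rng = np.random.default_rng(1)
N = 32
theta = rng.uniform(0, 2 * np.pi, N)
s = np.stack([np.cos(theta), np.sin(theta)])   # shape (2, N)

sk = np.fft.fft(s, axis=1)                     # fourier transform of each component

# in momentum space the local constraint becomes a convolution:
#   (1/N) * sum_q s(q) . s(k - q) = N * delta_{k,0}
conv = np.array([
    sum(np.dot(sk[:, q], sk[:, (k - q) % N]) for q in range(N)) / N
    for k in range(N)
])
target = np.zeros(N)
target[0] = N
print(np.allclose(conv, target))               # the identity ties all modes together
```

every momentum mode enters the constraint , so there is no @xmath1 symmetric way to split the modes into independent `` low '' and `` high '' sets .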
since most systems of interest belong to this class one has to circumvent the difficulty .
one way is to solve the constraint and make the momentum space rg for physical quantities only .
then the mode - integrated effective action generally contains `` non - covariant terms '' . the original global symmetry is lost , since the high frequency modes do not constitute an @xmath1 symmetric set . problems are even more acute with local gauge symmetries . in practice this type of thinning out of degrees of freedom is often used for demonstration purposes only , and very rarely for actual calculations .
decimations are extremely difficult to perform even in a free theory in more than one dimension ( see for example @xcite ) .
it might sound surprising that something is difficult in free theory since all the integrals are gaussian and `` doable in principle '' .
of course it is still a gaussian integral , but a very complicated one .
let us consider a free massless boson nearest neighbours action @xmath19=-\frac { a^{(d-2)}}{2}\sum_{xy}\phi(x)\box(x - y)\phi(y ) \label{f(p)}\ ] ] where the lattice laplacian is defined by @xmath20.\ ] ] if one tries to integrate out a point @xmath21 the gaussian integral involves all its @xmath22 nearest neighbours .
@xmath23=\ ] ] @xmath24\ ] ] this is very simple .
however , when one subsequently tries to integrate out another point , say @xmath25 , all the previous point 's neighbours enter the gaussian integral , and so on .
the gaussian integration requires inverting increasingly larger matrices .
since we have to integrate out all the points not belonging to the sublattice , some other methods are required .
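the growth of the gaussian integrals can be made concrete numerically : marginalizing one site of a gaussian lattice model is a schur complement on the precision ( inverse propagator ) matrix , and it generates new couplings among all the neighbours of the eliminated site . the sketch below is only an illustration , assuming a small 2d periodic lattice with a small mass term added for invertibility :

```python
import numpy as np

def lattice_precision(L, m2=0.1):
    """precision matrix (-laplacian + m^2) of a free boson on an L x L periodic lattice"""
    n = L * L
    J = np.zeros((n, n))
    idx = lambda x, y: (x % L) * L + (y % L)
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            J[i, i] = 4.0 + m2
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                J[i, idx(x + dx, y + dy)] -= 1.0
    return J

def integrate_out(J, site):
    """schur complement: marginalize one gaussian variable with precision matrix J"""
    keep = [i for i in range(J.shape[0]) if i != site]
    A = J[np.ix_(keep, keep)]
    b = J[np.ix_(keep, [site])]
    return A - b @ b.T / J[site, site]

J = lattice_precision(6)
J1 = integrate_out(J, 0)
# fill-in: the four neighbours of the eliminated site are now mutually coupled,
# so the effective action is no longer of nearest-neighbour form
print("nonzero couplings before:", np.count_nonzero(J), "after:", np.count_nonzero(J1))
```

the step is exact ( the inverse of the reduced matrix agrees with the corresponding block of the original propagator ) , but repeating it makes the fill - in , and hence the matrices to be inverted , grow , which is precisely the difficulty described above .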
an exception is the @xmath0 case . here
the size of the matrix does not grow : integration of a point leads just to interactions of the neighbouring remaining points .
this is the reason why it is possible in many cases to explicitly find decimations in @xmath0 . in this paper
we perform the decimation for multidimensional free theories . the result does not coincide with the naive continuum limit , even in the limit of large @xmath26 .
this is discussed in section 2 . then , using this result , we develop in section 3 a general perturbative formalism for weakly interacting models . it is applied in sections 4 , 5 , and 6 to the @xmath1 symmetric nearest - neighbour interaction spin model ( the nonlinear @xmath27-model ) . in section 4 we derive the general diagrammatic technique for such models . in section 5 the solvable @xmath0 model is considered and the results are compared with the usual perturbative ones , while in section 6 the two - dimensional asymptotically free model is studied .
the perturbative method is generally better suited for asymptotically free models ( i.e. , with the phase transition at @xmath28 ) like this one , because on fine lattices the coupling becomes small . the rg - transformed model can then be investigated , say by the mc method , on a coarser lattice . even the simplest decimation with @xmath5 , reducing the number of points by a factor of 4 , greatly simplifies the numerical work .
a coarse - grained effective action is generally obtained as a series in the `` closeness '' of the interacting spins : nearest neighbours , next - to - nearest , etc . in order to make any practical calculation possible , one has to truncate it at some point . we restrict our consideration here to terms of fourth order in the fields and up to fourth derivatives . we conclude in section 7 by discussing the complexity of such calculations and some of their uses .
let us start with free boson theory on the lattice .
effective action , after decimation with parameter @xmath29 generally has a form : @xmath30=\frac{1}{2}\sum_{x\in r^d}\phi(x){\bf \delta}(x - y)\phi(y)\ ] ] where bold letters denote sublattice functions .
of course it is quadratic in @xmath31 .
let us now perform fourier transforms of the original and sublattice fields @xmath32 @xmath33 this convention fixes the fourier transforms of propagators : @xmath34 @xmath35 @xmath36 @xmath37 and inverse propagators @xmath38 @xmath39 @xmath40 due to translation invariance , we can invert these in momentum space to obtain propagators on the lattice and sublattice correspondingly : @xmath41 @xmath42 the two models should result in equivalent correlator between two sublattice points : 0 and @xmath43 : @xmath44 .
this leads to the following relation between the fourier transforms : @xmath45 summation over @xmath43 results in a sum over @xmath46 functions @xmath47 which are used to perform the momentum integrations : @xmath48 the limits of summation in the last expression follow from the different sizes of the brillouin zones for the two lattices ( see fig . 2 ) . note that due to periodicity the limits of summation in @xmath49 can be shifted by the period @xmath29 . in @xmath0 we recover the previous result , since the sum is doable @xcite . in @xmath50 ,
the summation over one of the variables , @xmath51 , can be performed similarly , but the remaining summations should be done numerically .

fig . 2 . brillouin zones for the original lattice ( @xmath57 ) and the sublattice ( @xmath58 ) ; the lines correspond to @xmath56 .

in particular , for
@xmath2 we have the propagator : @xmath59 where @xmath60 for large @xmath29 euclidean invariance is restored , @xmath61 and numerical calculations show that it can be fitted by @xmath62\ ] ] ( see fig . 3 ) with an accuracy better than 1 percent over the whole brillouin zone .
the first term is the continuum propagator .
note that even for large @xmath29 the decimated propagator does not coincide with the naive continuum limit . the constant contact term with its logarithmic dependence on @xmath29 is typical for @xmath2 and is nothing else but the bubble integral .
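the momentum space construction of the decimated propagator ( the original propagator summed over momenta that fold into the smaller brillouin zone ) can be checked numerically in the exactly solvable @xmath0 case . the sketch below assumes a massive free boson on a periodic chain and compares the folded propagator with the one obtained directly from the sublattice correlator :

```python
import numpy as np

N, m2 = 64, 0.5
k = 2 * np.pi * np.arange(N) / N
G = 1.0 / (2.0 - 2.0 * np.cos(k) + m2)   # momentum-space lattice propagator

# real-space correlator and its restriction to the even sublattice
C = np.fft.ifft(G).real                  # C(x), x = 0 .. N-1
C_sub = C[::2]                           # C(2m): correlator between sublattice points

# coarse propagator obtained directly from the sublattice correlator ...
G_coarse = np.fft.fft(C_sub).real

# ... and obtained by folding the original propagator into the halved brillouin zone
G_folded = 0.5 * (G[: N // 2] + G[N // 2 :])

print(np.max(np.abs(G_coarse - G_folded)))   # agreement up to rounding error
```
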
the polynomial coefficients are very small and almost coincide with the laurent expansion of the propagator around @xmath64 .

fig . 3 . @xmath2 free decimated propagator ( upper line ) and the fit of eq . ( [ fit ] ) ( lower line ) for @xmath63 .

for finite @xmath29
the symmetry remains of course just the discrete subgroup of the rotations . in higher dimensions
similar expressions can be written .
a similar procedure can be extended to free fermion fields ( see appendix a ) . an especially interesting aspect of this is species doubling @xcite . in the simplest case of a one - dimensional massless boson field we can explicitly integrate out all the odd points , since the integrals do not intertwine : we therefore obtain the original form with twice the lattice spacing : @xmath65 , where @xmath66 is an inverse temperature . the action is a perfect one @xcite . for arbitrary @xmath67 we get similar results , @xmath68 in higher dimensions we can still perform the decimation , using momentum space at the intermediate steps to diagonalize the matrices .
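the @xmath0 statement can be verified directly : integrating out the odd sites of a massless gaussian chain is a schur complement on the precision matrix , and the result is again a nearest - neighbour chain with the coupling halved . a minimal numerical check , assuming a periodic chain :

```python
import numpy as np

N, beta = 16, 1.3                        # even chain length and inverse temperature
S = np.roll(np.eye(N), 1, axis=1)        # shift matrix on the fine chain
J = beta * (2 * np.eye(N) - S - S.T)     # precision matrix of the massless chain

even, odd = np.arange(0, N, 2), np.arange(1, N, 2)
Jee = J[np.ix_(even, even)]
Jeo = J[np.ix_(even, odd)]
Joo = J[np.ix_(odd, odd)]                # diagonal: odd sites couple only to even ones

# integrating out the odd sites = schur complement onto the even sublattice
J_eff = Jee - Jeo @ np.linalg.inv(Joo) @ Jeo.T

Sc = np.roll(np.eye(N // 2), 1, axis=1)
J_coarse = (beta / 2) * (2 * np.eye(N // 2) - Sc - Sc.T)
# the decimated chain is again a nearest-neighbour chain with beta' = beta / 2
print(np.allclose(J_eff, J_coarse))
```
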
now we would like to build a perturbation theory for the decimation - type rg transformations in the interacting case .
for concreteness we discuss the lattice @xmath6 model @xmath69=a^{(d-2 ) } \sum_x \left [ \frac{1}{2}(\nabla \phi_x)^2+\frac{m^2}{2}\phi^2_x + \frac{\lambda}{4!}\phi^4_x \right],\ ] ] where @xmath70 .
low temperature ( weak coupling ) perturbation theory for this model can be represented via feynman diagrams built from the propagator and the four - point vertex . in momentum space rg , when the high frequency modes from @xmath17 to @xmath71 are integrated out , the resulting effective action on the scale @xmath72 has the general form @xmath73=\sum_{n=1}^{\infty } \frac { 1}{(2 n ) ! } \sum_{x_1, .. ,x_{2n } } \gamma^{(2n)}(x_1, .. ,x_{2n})\phi_{x_1} ... \phi_{x_{2n}}\ ] ] the coefficient functions @xmath74 are sums of all the one - particle - irreducible contributions with @xmath75 ends .
the external momenta are all below @xmath72 while all the integrated internal momenta are between @xmath72 and @xmath17 @xcite .
since we will significantly modify the procedure in x - space , let us briefly outline the p - space diagrammatics for rg .
this is most easily done if the original vertices and propagators are split into several pieces . the vertex decomposes into : a vertex connecting only high - momentum modes ( fig . 4(c ) ) , one connecting only low - momentum modes ( fig . 4(d ) ) , and vertices mixing the two ( fig . 4(e ) , 4(f ) , 4(g ) ) .
fig . 4 . momentum space rg propagators ( a , b ) and vertices ( c , d , e , f , g ) for the @xmath6 model . low momenta are indicated by bold lines .
propagators are decomposed analogously into low and high frequency parts : @xmath76 .
note that the coupling between the modes is by means of the vertex only .
this will be completely different in real space rg .
the integral over high frequencies ( denoted @xmath77 ) @xmath78}=\ ] ] @xmath79 } \int_{\phi_k } { \rm exp } \left[-\int_{k>\lambda ' } \left(\frac{1}{2}\phi_k(k^2+m^2)\phi_{-k } \right)-v[\phi_k,\phi_k ] \right],\ ] ] with @xmath80 = \frac{\lambda}{4 ! } \left(\int\limits_{\scriptstyle(|k|,|l|,|m|)>\lambda'\atop\scriptstyle|k+l+m|>\lambda ' } \phi_k \phi_l \phi_m \phi_{(-k - l - m ) } \right.\ ] ] @xmath81 @xmath82 apart from `` classical '' parts independent of @xmath83 is exponent of the vacuum energy of the high frequency theory with @xmath77 playing a role of the external sources .
this is the sum of all the vacuum diagrams in this theory .
however , as we remarked before the lines do not connect low to high frequency modes and consequently all the one particle reducible diagrams vanish . for the real space
rg the perturbation theory can be built in a similar way .
the fields @xmath84 , @xmath85 will be treated as `` external sources '' , while all the internal points will belong to @xmath86 . for @xmath6 model
this means the following decomposition ( fig . 5 ) .

fig . 5 . real space rg ( decimation ) propagators ( a , b , c ) and vertices ( d , e ) for the @xmath6 model . full circles belong to the sublattice ( external fields ) , while empty circles denote `` internal '' fields ( @xmath87 ) .

the action is divided into three parts :
the `` classical action '' of the `` external '' field @xmath84 ( fig . 5(a ) , 5(d ) ) , @xmath88=\sum_x\left ( \frac { a^{d } m^2}{2 } \phi^2_x+ a^{d-2 } d\ ; \phi^2_x+ \frac { a^{d } \lambda}{4!}\phi^4_x\right)\ ] ] @xmath89 , \label{ext}\ ] ] the cross - term ( fig . 5(b ) ) @xmath90=-a^{(d-2)}\sum_{x , x } \phi_x \dbar(x , x ) \phi_x\equiv -a^{(d-2)}\sum_{x , x } \phi_x b_{x x } \phi_x\ ] ] with `` external legs '' @xmath91 , and an internal part for which all the vertices belong to @xmath92 ( the `` decorated '' model ) ( fig . 5(c ) , 5(e ) ) : @xmath93=-\frac { a^{(d-2)}}{2 } \sum_{x , y } \phi_x d(x , y ) \phi_y+\frac { a^{d-2 } \lambda}{4!}\sum_x \phi^4_x . \label{int}\ ] ] note that unlike in momentum space rg , here the external fields @xmath84 are coupled to the internal part only via derivative couplings , like the off - diagonal part of the propagator ( fig . 5(b ) ) or the derivative interaction in the nonlinear @xmath27-model ( see the next sections ) .
all the local vertices completely decouple into internal ( fig . 5(e ) ) and external ( fig . 5(d ) ) ones . integrating out all the fields @xmath94 leads to an effective action for the fields @xmath84 on the sublattice of the form : @xmath95=\sum_{x_1, .. ,x_{2n } } \frac { 1}{(2 n ) ! } h^{(2n)}(x_1, .. ,x_{2n})\phi_{x_1} ... \phi_{x_{2n } } , \label{acteff}\ ] ] where the coefficient functions @xmath96 , contrary to the momentum space rg , are sums of all the connected contributions with @xmath75 ends . these connected functions do not reduce to one - particle - irreducible ones . to integrate over the field @xmath94 perturbatively
, we need to find its propagator , which is the matrix inverse of @xmath97 . at the same time , all we know explicitly is the full laplacian @xmath98 and the original propagator @xmath99 .
moreover , since @xmath100 does not constitute a sublattice , it is impossible to make use of fourier analysis on it .
therefore it is useful to represent all the summations over @xmath100 via summations over @xmath11 and @xmath7 .
this can be done using the following algebraic trick .
the matrix @xmath98 as well as the matrix @xmath101 can be decomposed into the following blocks @xmath102 @xmath103 where the infinite - dimensional matrices @xmath104 are defined as follows : @xmath105 is defined by the quadratic part of the action , eqs . ( [ ext ] , [ cross ] , [ int ] ) ( fig . 5(a ) , 5(b ) , 5(c ) ) ; the matrices @xmath106 are the usual propagator matrices ( fig . 5(b ) , 5(c ) ) ; and @xmath107 is the inverse propagator between the points of the sublattice , that is , an expression inverse to @xmath108 .
now we can invert @xmath97 using the fact that matrices @xmath98 and @xmath101 are inverse to each other .
this implies a set of algebraic relations for their submatrices : @xmath109 and , after straightforward transformations , we obtain an expression for the `` internal '' propagator : @xmath110 or , returning to the previous notations , @xmath111 here and in what follows the bold line denotes the free inverse decimated propagator @xmath114 , while the thin line corresponds to the propagator of the original theory @xmath115 . using this representation of @xmath116 as an internal line
, we can now begin to build the perturbation theory .
we would like to stress that the use of @xmath116 , eq . ( [ int - prop ] ) , enables us to extend the summation over @xmath117 to the original lattice @xmath11 rather than the `` decorated '' subset @xmath100 .
indeed , one can see that this expression is equal to zero when at least one of the points @xmath118 in @xmath116 belongs to @xmath7 ( fig . 6 ) . therefore , any additional contribution due to this extension is equal to zero as well .
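both the matrix form of the internal propagator and the vanishing property just stated can be checked numerically for an arbitrary gaussian model , since the internal propagator is simply the conditional covariance of the internal fields given the sublattice fields . a sketch , assuming a random positive definite precision matrix standing in for the full laplacian :

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10
A = rng.normal(size=(N, N))
J = A @ A.T + N * np.eye(N)     # full inverse propagator (positive definite)
G = np.linalg.inv(J)            # full propagator g(x, y)

sub = [0, 3, 7]                 # sublattice points
inner = [i for i in range(N) if i not in sub]

Delta_dec = np.linalg.inv(G[np.ix_(sub, sub)])    # inverse decimated propagator
G_int = G - G[:, sub] @ Delta_dec @ G[sub, :]     # candidate internal propagator

# (1) rows and columns at sublattice points vanish, so internal summations
#     may safely be extended over the whole lattice;
print(np.allclose(G_int[sub, :], 0.0))
# (2) restricted to the internal points it inverts the internal block of the
#     full inverse propagator, i.e. it is the conditional covariance
print(np.allclose(G_int[np.ix_(inner, inner)],
                  np.linalg.inv(J[np.ix_(inner, inner)])))
```
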
[Fig. 6: an additional contribution from the extended summation; small circles denote the points of the sublattice @xmath7.]
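The vanishing of the internal propagator when an end point lies on the sublattice is easy to check numerically for a free field. The sketch below is a minimal illustration, not the paper's actual computation: it builds the propagator of a 1D periodic Gaussian model (a small mass term is added so the Laplacian is invertible) and forms the internal line as the fine propagator minus fine propagator times inverse decimated propagator times fine propagator, consistent with the structure described above.

```python
import numpy as np

N, eta, m2 = 16, 2, 0.1                  # sites, decimation step, mass regulator
K = np.zeros((N, N))                     # fine-grained inverse propagator
for x in range(N):
    K[x, x] = 2.0 + m2
    K[x, (x + 1) % N] -= 1.0             # periodic nearest-neighbour Laplacian
    K[x, (x - 1) % N] -= 1.0
G = np.linalg.inv(K)                     # fine-grained propagator
sub = np.arange(0, N, eta)               # sublattice L
Kdec = np.linalg.inv(G[np.ix_(sub, sub)])        # inverse decimated propagator
D = G - G[:, sub] @ Kdec @ G[sub, :]             # "internal" propagator
# D vanishes whenever either of its arguments belongs to the sublattice
print(np.abs(D[sub, :]).max(), np.abs(D[:, sub]).max())
```

Both printed numbers are at machine-precision level, so any diagram summed over the full lattice picks up nothing extra from sublattice points.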
The calculation of an n-point function @xmath120 in the effective action eq. ([acteff]) for the coarse-grained field @xmath84 proceeds as follows. All connected diagrams with @xmath49 end points @xmath121, @xmath122 vertices and @xmath123 internal lines are drawn in real space with the following components: (a) all vertices are situated at the points @xmath124, and to every vertex at a point @xmath125 corresponds a summation @xmath126; (b) @xmath127 are assigned to the external ends @xmath43; and (c) the internal lines @xmath116 are represented via eq. ([int-prop]).
This representation splits each diagram into @xmath128 subdiagrams, and each of these subdiagrams should be calculated separately. This calculation includes summation over all the internal points @xmath125 on the fine-grained lattice and over the internal sublattice points @xmath129 [the end points of the inverse decimated propagator @xmath130, see eq. ([int-prop])]. Finally, the ``classical contribution'' to the coefficient function should be added. For the sake of simplicity, let us discuss the calculation of the decimation diagrams on a concrete example. Namely, we consider one of the contributions to the four-point function @xmath131 in the @xmath6 model (fig. 7). Using the propagator @xmath116, the original diagram splits into five subdiagrams (fig. 7(a), 7(b), 7(c), 7(d), 7(e)).
A typical subdiagram here (for instance, subdiagram 7(d)) can be written as @xmath132 @xmath133 As usual, in practical calculations it is very convenient to employ Fourier-transformed functions at the intermediate steps. In this way we operate with the vertices, the legs @xmath134, the propagators @xmath135 and the inverse decimated propagator @xmath136, eq. ([decprop]). However, it is not convenient to Fourier transform the expression eq. ([fourp]) immediately, because it contains functions defined on different lattices and therefore obeying different transformation rules (cf. eqs. ([finefourier1],[finefourier2],[coarsefourier1],[coarsefourier2])). Instead, we can use the fact that this expression breaks into blocks in which all internal points lie on the fine-grained lattice @xmath11 and sublattice points enter only as end points. These blocks are connected by @xmath137. In terms of such ``@xmath11-connected'' parts, the diagram eq. ([fourp]) has the form: @xmath138 @xmath139 with blocks @xmath140 @xmath141 and @xmath142 Now we can proceed as follows. First, we Fourier transform each block in eq. ([blockdiagr]) with respect to its internal points.
This results in the diagram @xmath143 as a function on the sublattice, and we can apply to it the Fourier transformation rules for the sublattice, eqs. ([coarsefourier1],[coarsefourier2]).
The first step does not differ from the usual perturbative lattice calculations, and the resulting functions read, for example, as @xmath144
[Fig. 7: the four-point contribution to the @xmath6 effective action, split into subdiagrams (a)-(e).]
This, however, is not the case when the remaining Fourier transforms and summations over internal sublattice points are performed. Due to the difference between the Brillouin zones, the results will be sums rather than monomials.
For instance, for the block @xmath150 we obtain: @xmath151 \frac { 1}{\eta^2 \sum_\mu 4{\rm sin}^2 ( \frac{k_\mu+2 \pi n_\mu}{2\eta})}\ ] ] Such expressions we will call ``decimated'' and denote by @xmath152 $ ]: @xmath153\equiv \sum_{(n_{k\mu},n_{l\mu}, .. )=1}^\eta f(k+2 \pi n_k , l+2 \pi n_l , ... ) \label{decimator}\ ] ] Notice that the decimated function does not possess the original translation invariance (with period @xmath154) but instead turns out to be @xmath155-periodic.
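The appearance of sums over Brillouin-zone copies can be illustrated numerically: on a finite lattice, the momentum-space propagator of the field restricted to the sublattice is exactly the average of the fine-grained propagator over the @xmath26 copies of the coarse Brillouin zone. The sketch below uses a 1D periodic lattice with a small mass regulator; the conventions (FFT normalizations, lattice size) are illustrative, not the paper's.

```python
import numpy as np

N, eta, m2 = 16, 2, 0.1
Ns = N // eta                                   # coarse lattice size
p = 2 * np.pi * np.arange(N) / N
G = 1.0 / (4 * np.sin(p / 2) ** 2 + m2)         # fine momentum-space propagator
Gx = np.fft.ifft(G).real                        # position-space propagator G(x)
G_sub = Gx[::eta]                               # restriction to coarse points
G_coarse = np.fft.fft(G_sub).real               # coarse momentum-space propagator
# coarse momentum j collects the fine momenta j, j + Ns, ... (Brillouin copies)
copies = np.array([sum(G[(j + n * Ns) % N] for n in range(eta)) / eta
                   for j in range(Ns)])
print(np.allclose(G_coarse, copies))            # True
```

This is the finite-lattice counterpart of the decimation operation above: a decimated function at coarse momentum k is a sum over the fine momenta that fold onto k.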
Eventually, the diagram @xmath143 takes the following form: @xmath156 @xmath157{\bf g}^{-1}(k)[|f_1(p)|]{\bf g}^{-1}(p)[|f_3(k , p , q ,- k - p - q)|].\ ] ] Another simple example, the diagrammatic calculation of the @xmath0 free boson effective action, is given in appendix B. Note that the described algorithm is applicable not only to the tree-level contributions. When the representation of the propagator eq. ([int-prop]) is used, some of the loop diagrams contain @xmath130. In that case one should calculate the corresponding decimated functions first, and only then perform the loop integration. We will encounter such diagrams in the next section.
In this section we apply the formalism developed above to the @xmath1-symmetric nonlinear @xmath27 model. In @xmath12 dimensions, this model is described by the action @xmath158 where @xmath14 (@xmath159) is an @xmath1 vector normalized to unity, @xmath160, @xmath161 is the coupling constant (temperature) and @xmath162 is the lattice Laplacian. The partition function of this model is given by the path integral @xmath163}.\ ] ] This is an example of a theory with constraints. To develop a perturbation theory for this model, it is convenient to re-express it in terms of the unconstrained fields @xcite @xmath164, @xmath165. For this purpose one can solve the constraint for @xmath166, obtaining @xmath167, and then the partition function in terms of ``pions'' @xmath164 takes the form: @xmath168 @xmath169 .\ ] ] This last expression, when used perturbatively, gives rise to an infinite set of vertices, both local (coming from the exponentiated and expanded measure) and derivative couplings originating from the @xmath166 part of the action. For our present purpose it is sufficient to restrict the diagrammatics to order @xmath170 in the coupling constant @xmath161. To this order, the partition function takes the form: @xmath171 @xmath172\ ] ] From this expression, the basic diagrams are: the massless propagator @xmath173 and the vertices (fig. 8).
[Fig. 8: first-order (a, b) and second-order (c, d) vertices of the @xmath27 model; the curly line stands for the lattice Laplacian, the broken line for a @xmath46 function.]
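The constraint-solving step that produces the pion parametrization can be sketched in a few lines. The embedding n = (sqrt(g) pi, sigma) with sigma = sqrt(1 - g pi^2) used below is one common convention and may differ from the paper's normalization; it is meant only to show that solving the constraint leaves a field of exactly unit length.

```python
import numpy as np

rng = np.random.default_rng(1)
g = 0.25
pi = rng.normal(scale=0.2, size=(4, 2))           # unconstrained pion fields
sigma = np.sqrt(1.0 - g * (pi**2).sum(axis=1))    # constraint solved for sigma
n = np.hstack([np.sqrt(g) * pi, sigma[:, None]])  # O(3) field, |n| = 1 by construction
print(np.allclose((n**2).sum(axis=1), 1.0))       # True
```

The perturbative expansion then treats pi as the dynamical field, with the square root generating the infinite tower of vertices mentioned above.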
Now we can perform the decimation perturbatively. First of all, one can see that the ``classical'' part of the action comes from the measure and the cross term @xmath180. Indeed, the other classical terms are: @xmath181=\frac { 1}{g}.\ ] ] The same cancellation takes place, of course, also for the diagonal terms of the ``internal'' part of the action. At the same time, the cross term expands as @xmath182 @xmath183\ ] ] @xmath184+\sum_x \left [ -a^{(d-2 ) } \pi^2_x -\frac { g \;a^{2(d-2)}}{4 } ( \pi^2_x)^2 \right]\ ] ] @xmath185
[Fig. 9: the lowest-order @xmath27-model sources.]
Terms belonging to the sublattice @xmath7 contribute to the classical action, while the terms lying on @xmath100 complete the remaining off-diagonal internal part to the usual lattice action. The basic diagrammatic elements are: the free massless propagator @xmath190; the vertices, coinciding with the usual @xmath27-model vertices (fig. 8); the sources, namely the external leg @xmath191 (fig. 9(a)) and the cross interaction (fig. 9(b), 9(c), 9(d)); and the ``classical'' terms @xmath192.\ ] ] Notice that, unlike in local theories such as @xmath6, in the @xmath27 model the ``classical effective action'' contains an infinite series of such decoupled terms.
Performing the Fourier transform according to the rules of sect. 2, we obtain the corresponding functions in momentum space: the propagator @xmath193 the vertices (fig. 8(a), 8(b), 8(c), 8(d)) @xmath194 @xmath195 and the sources (fig. 9(a), 9(b), 9(c), 9(d)) @xmath196 @xmath197 @xmath198 where @xmath199 and the ``half-decimation'' @xmath200 means that once the ``L-connected'' part is formed, it should be decimated. The classical part remains, of course, constant.
Let us first consider the simplest nontrivial example: the @xmath0 @xmath27 model and decimation with @xmath5. We start from the two-point function. The tree-level contribution is, obviously, that of the free theory (see fig. 17 in appendix B); in this dimension @xmath201+[|\dbar(k ) g(k)|]^2 { \bf g}^{-1}(k ) \right).\ ] ] This contribution can be calculated analytically, and one can see that it reduces to @xmath202 This corresponds to the fact that the free massless bosonic action in @xmath0 is perfect. The one-loop contribution to the propagator consists of three classes of diagrams: diagrams coming from the measure, bubble diagrams, and the self-energy part.
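The perfectness of the free one-dimensional action can be confirmed directly: inverting the propagator restricted to the sublattice returns a purely nearest-neighbour operator, i.e. the coarse action has the same local form as the fine one. A numerical sketch (1D periodic chain; a mass term is included for invertibility, and the conventions are illustrative):

```python
import numpy as np

N, eta, m2 = 16, 2, 0.05
K = np.zeros((N, N))                      # fine inverse propagator (Laplacian + mass)
for x in range(N):
    K[x, x] = 2.0 + m2
    K[x, (x + 1) % N] -= 1.0
    K[x, (x - 1) % N] -= 1.0
G = np.linalg.inv(K)
sub = np.arange(0, N, eta)
Kdec = np.linalg.inv(G[np.ix_(sub, sub)]) # decimated inverse propagator
Ns = len(sub)
# couplings beyond nearest coarse neighbours vanish: the action stays local
far = [abs(Kdec[i, j]) for i in range(Ns) for j in range(Ns)
       if min((i - j) % Ns, (j - i) % Ns) > 1]
print(max(far))                           # machine-precision zero
```

In higher dimensions this locality is lost, which is why the two-dimensional case below requires numerical treatment.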
The measure gives the contribution shown in fig. 10, which reduces to @xmath203
[Fig. 10: contribution from the path integral measure to the one-loop effective action.]
The bubble diagrams are shown in fig. 11. They give the contribution @xmath206 Notice here that although these integrals may seem to be IR divergent, in fact this is not the case: all the divergences cancel. The cancellation is due to the fact that in decimation only short-distance effects are involved; the long-range ``tails'' remain unaffected.
[Fig. 11: one-loop bubble diagrams.]
The last group, the self-energy diagrams, gives the contributions depicted in fig. 12. This contribution reduces to @xmath209 One can see that again all the infrared divergences cancel.
[Fig. 12: self-energy-type contributions to the one-loop effective action.]
The total one-loop quadratic part of the effective action, with the coupling constant restored, therefore takes the following simple form: @xmath219 where @xmath220 is understood as a lattice operator.
Our next step is the four-point function. Let us consider the next terms in the expansion of the effective action. They contain four fields and up to four derivatives [see eq. ([sigaction])]. At the tree level, there are three contributions to these terms. The first is the ``classical'' term @xmath221, eq. ([classical]). The two other contributions are given by the diagrams of figs. 13 and 14. Again, using the representation eq. ([int-prop]) splits every diagram into several.
[Fig. 13: fourth-order contributions including an internal vertex.]
[Fig. 14: fourth-order contributions coming from the sources.]
We calculate the coefficient function @xmath228 as follows. The effective action, as a decimated one, is given in momentum space by the diagrams (figs. 13, 14) plus the classical contribution: @xmath229=\int\limits_{k , l , m } ( \pi^i_k \pi^i_l)(\pi^i_m \pi^i_{-k - l - m } ) \frac { 1}{4 ! } h^{(4)}_{ijkl}(k , l , m)\ ] ] @xmath230,\ ] ] where @xmath231 @xmath232\ ] ] @xmath233,\ ] ] @xmath234 \right . \right.\ ] ] @xmath233.\ ] ] On the other hand, this term in the effective action on the coarse-grained lattice in one dimension can have only one form: @xmath235=- \int\limits_{k , l , m } \frac { 1}{4 ! } { \bf h}^{(4)}_{ijkl}(k , l , m ) \pi^i_k \pi^j_l \pi^k_m \pi^l_{-k - l - m}\ ] ] with @xmath236 Here the first term, @xmath237, was determined in the previous subsections (it comes from the expansion of the covariant action in terms of pions), and the second term is @xmath238 @xmath239 To find the remaining coefficient @xmath215, we equate the coefficient functions (@xmath240 fourth functional derivatives in the fields @xmath241) in the expressions eqs.
([fourdec],[foureff]) at some definite momentum configuration, for example @xmath242 (the case of ``back-to-back scattering''), and for definite flavour indices (@xmath243, for instance): @xmath244 Then the coefficient @xmath215 is determined by the linear equation @xmath245 @xmath246 for this configuration: @xmath247 @xmath248 Equation ([coeff]) then gives: @xmath249 Notice that at first sight the decimated expression, eq. ([fourdec]), should behave as @xmath250 (one can check that using the representation ([int-prop]) cancels the leading term @xmath251 even without decimation). However, due to the decimation procedure all the terms up to a constant cancel, and in fact @xmath252 begins from the second-order terms. Moreover, the equality of the @xmath253 terms in @xmath254 and @xmath255 can serve as an additional consistency check.
The one-loop contributions to the four-derivative terms are shown in figs. 18, 19 and 20 (appendix D). For the sake of simplicity, we restrict our consideration to the leading @xmath256 contribution. These diagrams are calculated analogously to the one-loop two-derivative terms, and give the following contribution to the effective action: @xmath257 Again, all the potential IR divergences cancel due to the decimation procedure. To calculate the four-point function, it is more convenient to employ a somewhat different method. Namely, instead of directly calculating the effective (decimated) action, we will use a matching approach.
The key to this approach is that decimation does not change the field variables, so their correlators between points of the sublattice @xmath7, calculated in both the original (fine-grained) and effective (coarse-grained) models, must coincide. We therefore calculate correlators of fields at points @xmath258 starting from the different scales @xmath107 and @xmath259 (in units of the lattice spacing), and then require the matching conditions between these functions to be fulfilled. For practical calculations this means that we should compare the amplitudes under consideration for two cases: decimated with parameter @xmath26 for the original lattice model, and the usual lattice amplitudes calculated from a ``phenomenological'' Lagrangian including irrelevant operators with as yet free coefficients: @xmath260 In momentum space this condition takes the form @xmath261|_{(p = p , k = k, ... )}={\bf g}^{(n)}(p , k, .. ).\ ] ] In fact, the usual approach, in which lattice quantities are compared to continuum ones, is nothing but the matching for the particular value @xmath262, although its physical meaning is not as transparent as for the decimation RG. Strictly speaking, any amplitude is given as a power series in momenta, so we should truncate this series at some point and consider a truncated effective Lagrangian with a finite number of irrelevant operators rather than the exact one. The matching conditions then fix these coefficients. Here
we will restrict our consideration to the four-derivative terms in the effective action. In the @xmath0 @xmath27 model there can be only one such term, @xmath263. Thus, our ``phenomenological Lagrangian'' has the form: @xmath264 where @xmath265 is the usual @xmath27-model Lagrangian.
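To illustrate how such matching fixes coefficients in practice, here is a toy numerical sketch: an amplitude sampled at a few momenta determines the coefficients of its momentum expansion through a small linear system. The amplitude and its coefficients below are invented for illustration only; they are not the model's actual four-point function.

```python
import numpy as np

def amplitude(p):
    """Hypothetical fine-grained amplitude, expansion c6/p^6 + c4/p^4 + ..."""
    return -4.0 / p**6 - 0.25 / p**4 + 0.1 / p**2

ps = np.array([0.1, 0.2, 0.3])                      # sample momentum configurations
basis = np.stack([ps**-6.0, ps**-4.0, ps**-2.0], axis=1)
c6, c4, _ = np.linalg.solve(basis, amplitude(ps))   # matching conditions
print(c6, c4)                                       # recovers -4 and -0.25
```

Each independent momentum configuration supplies one equation, which is why the number of configurations needed equals the number of free coefficients in the truncated Lagrangian.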
Because there is only one arbitrary coefficient in the decimated Lagrangian, we need the four-point function calculated for one particular configuration of external momenta, for example @xmath266 $ ]. By the four-point function we mean the correlator of the unconstrained fields @xmath164, rather than of the constrained fields @xmath14. The second configuration (back-to-back scattering, @xmath267 $ ], for instance) can then be used as a consistency check. At the tree level, this amplitude as calculated from the scale @xmath107 is given by three contributions: @xmath268=\delta_{ij } \delta_{kl } [ |g^{(4)}_1 ( k , l , m)|]+\delta_{ik } \delta_{jl } [ |g^{(4)}_2 ( k , l , m)|]\ ] ] @xmath269\ ] ] To perform the matching, it is sufficient to consider the term @xmath270 $ ] only.
This term is given by the diagram of fig. 15 and for the configuration @xmath271 has the following expansion: @xmath272=-\frac { 4}{p^6}-\frac { 1}{4\ ; p^4}+ ... \ ] ]
[Fig. 15: the fine-grained four-point function.]
The same correlator, calculated as a sublattice quantity, has the following form (fig. 16):
[Fig. 16: the four-point function in the effective theory; strokes correspond to the Laplacian.]
@xmath283 @xmath284 @xmath285,\ ] ] and expands at @xmath271 as @xmath286 Comparing these two expansions, eqs. ([aexpan],[aexpan]), one can find @xmath287.
Thus, the only tree-level four-derivative term in the decimated effective action in @xmath0 is @xmath288 In fact, in the case @xmath5 the problem becomes especially simple. In this case the partition function has the following form: @xmath289 @xmath290 @xmath291 @xmath292 .\ ] ] Therefore, for @xmath5 an internal line reduces to the contact term @xmath293 for @xmath294. This means that both the internal propagator and all the internal vertices are local, @xmath295 etc. (the problem becomes classical). The only remaining non-local terms are the sources. Moreover, in this case @xmath100 is a lattice itself, and thus there is no need to use the representation eq. ([int-prop]) (it becomes trivial, as the only nonzero contribution comes from the basic diagram without any replacement). The loops shrink to points, and therefore there are no loop integrals. Due to all these simplifications, all the calculations can be done directly in real space.
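The reduction of the internal line to a contact term at @xmath5 can be verified numerically for the free field: restricted to the decorated (odd) sites, the internal propagator is exactly diagonal. In the sketch below (1D periodic Gaussian chain with a mass regulator; conventions illustrative) its diagonal weight comes out as 1/(2 + m2), the inverse of the on-site coefficient of the fine-grained action.

```python
import numpy as np

N, m2 = 16, 0.1
K = np.zeros((N, N))                       # fine inverse propagator
for x in range(N):
    K[x, x] = 2.0 + m2
    K[x, (x + 1) % N] -= 1.0
    K[x, (x - 1) % N] -= 1.0
G = np.linalg.inv(K)
even, odd = np.arange(0, N, 2), np.arange(1, N, 2)
D = G - G[:, even] @ np.linalg.inv(G[np.ix_(even, even)]) @ G[even, :]
Dodd = D[np.ix_(odd, odd)]                 # internal line between decorated sites
# pure contact term: diagonal with weight 1/(2 + m2)
print(np.allclose(Dodd, np.eye(len(odd)) / (2.0 + m2)))   # True
```

With the internal line local, every loop collapses to a point, which is the simplification exploited above.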
We can compare our perturbative RG results with the perturbative effective action obtained from the partition function eq. ([1daction]); one can see that these expressions coincide. The effective action obtained here differs from the exact effective action for the @xmath0 @xmath27 model @xcite. This difference is not surprising, however: an exact decimation takes into account both perturbative and instanton-like nonperturbative configurations, while here we restrict our consideration to the perturbative contribution only.
The real-space RG discussed in the previous sections is not restricted to @xmath0 models, but generalizes immediately to higher-dimensional theories. In this case the calculations become much more cumbersome, but otherwise the whole technique remains unchanged. Here we consider one of the most popular two-dimensional theories: the @xmath2 nonlinear @xmath1 @xmath27 model. The diagrammatics of this model coincides with the one-dimensional case. Therefore, we can immediately use the diagrams described above and the formalism from section 3 to calculate the effective action. The only differences are the disappearance of the factors @xmath296 from the action and, of course, two-dimensional sums in place of one-dimensional ones. In @xmath2, however, the action is no longer perfect, and thus exact analytical results cannot always be obtained; instead, numerical methods should be employed. Moreover, our discussion here will be restricted to the simplest case of decimation with the parameter @xmath297 at the tree level. Even such a simple transformation, however, can be of use where lattice calculations are concerned.
The decimation diagrammatics once again consists of the internal propagator @xmath298 the external legs @xmath299 the inverse free decimated propagator @xmath136 and the vertices (fig. 8, fig. 9). Diagrams similar to the one-dimensional ones (see the previous section), or equivalently the matching conditions, determine the form of the coefficient function for the quadratic part of the effective action @xmath300 for @xmath5 as @xmath301 where @xmath302 @xmath303 is the inverse decimated propagator. Up to order @xmath304 it can be approximated as @xmath305 On the other hand, this term should have the form (see appendix C): @xmath306 @xmath307 Comparing these two expressions, we find: @xmath308 Next, the quartic terms in the expansion of the effective action are calculated similarly to the one-dimensional case. Again
, there are three contributions to these terms: the classical term @xmath309 and two different diagrams (figs. 13, 14). The effective theory, however, now allows for three different four-derivative terms (see appendix C), contrary to the previous example where such a term was unique.
The effective action in momentum space has the form: @xmath229=-\frac{1}{2 \pi } \int\limits_{k , l , m } \frac { 1}{4 ! } { \bf h}^{(4)}_{ijkl}(k , l , m ) \pi^i_k \pi^j_l \pi^k_m \pi^l_{-k - l - m}\ ] ] with @xmath310 @xmath311 Once again the first term, @xmath312, comes from the expansion of the quadratic part of the covariant action in terms of pions. The next terms are: @xmath313 @xmath314 @xmath315 @xmath314 and @xmath316 @xmath317 To fully determine these quartic terms, we now need to fix three coefficients (@xmath318, @xmath319 and @xmath320). This can be done by calculating the coefficient functions @xmath131, @xmath228 for three linearly independent momenta configurations. (As a consequence, decimated expressions including functions like @xmath321 can be sensitive to the sign of the argument even if the original expressions are not; therefore, when calculating the decimated diagrams, one should take into account all the different momenta permutations.)
This gives a system of three linear equations similar to eq. ([coeff]) for the coefficients. It is convenient to choose the following configurations: @xmath322 @xmath323 and @xmath324 Then the coefficients @xmath325 obey the equations: @xmath326 @xmath327 @xmath328 with the solution: @xmath329 The vanishing of @xmath318 and @xmath319 is rather surprising.
We do not see any obvious reason for this. At the tree level, one can still employ the matching method to calculate the next coefficients in the expansion of the effective action: @xmath330 The four-point function as calculated in the original theory is given by the same diagram as in the @xmath0 case. The effective theory, however, now allows for three different four-derivative terms, and thus there will be four contributions to the effective correlator @xmath331 instead of two (fig. 16). Thus, to determine all the coefficients, we need to impose matching conditions for at least three independent momenta configurations. Numerical calculations gave the following results (the momenta configurations have been chosen the same as before): for @xmath332 ; @xmath333= -\frac { 1/2}{p^6}-\frac { 15/32}{p^4}+ ... ,\ ] ] @xmath334 for @xmath335 , @xmath336 , @xmath337 ; @xmath338= -\frac { 1}{p^6}-\frac { 117/128}{p^4}+ ... ,\ ] ] @xmath339 for @xmath340 ; @xmath333=-\frac { 4/3}{p^6}-\frac { 91/72}{p^4}+ ... ,\ ] ] @xmath341 The matching conditions therefore give three linear algebraic equations for the coefficients @xmath342 and once again lead to the solution: @xmath343
To summarize, we have found a systematic way to perform RG of the decimation type in @xmath50 perturbatively. We have seen in the course of our discussion that the perturbative decimation RG has a rather complicated structure, including a considerable number of extra contributions compared to more customary approaches (such as momentum-space RG), as well as some cumbersome numerical calculations. Here we would like to mention some of its uses. First of all, the formalism we propose here, being based on decimation RG transformations, possesses all the advantages of this type of RG. As we have seen, it operates with the original fields only and does not require any (linear or nonlinear) transformations of variables. Besides, it preserves all local relations, including constraints. This in turn means that the effective (coarse-grained) theory obeys exactly the same local constraints as the original one, and that no non-covariant terms appear in the effective action. Among other applications, this opens the possibility of employing this formalism to study critical phenomena, where strict control over the symmetry properties of the model becomes particularly important.
Another useful feature of the proposed formalism, compared to others @xcite, is its perturbative character. This provides a systematic method of calculation in asymptotically free models and, even more essentially, a way to make _controllable_ approximations. Hopefully, this aspect of the proposed formalism will make it applicable in situations where such control is essential, as in the above-mentioned critical phenomena or in the recently proposed double strong-weak expansion approach @xcite. It turns out that in asymptotically free theories there exists a region in parameter space where both the strong- and weak-coupling expansions are valid at the same time: namely, both the practical weak-coupling @xmath344 and strong-coupling @xmath345 expansion parameters are reasonably small. The ``loop factors'' @xmath346 in the practical weak-coupling expansion parameter @xmath347 are partly responsible for this. In this scheme, high-frequency modes are integrated out perturbatively, and the resulting effective action is treated using the strong-coupling expansion. The symmetry-preserving and controllable perturbative decimation technique is the most suitable tool for the first part of such calculations. Of course, to apply the method described here to one-dimensional models, one should correctly take into account the nonperturbative instanton-like configurations, because in @xmath0 the perturbation theory can be ill defined (one such example was considered in section 5). In higher dimensions, however, the relative contribution of the nonperturbative configurations becomes less significant.
we would like to stress also that the method described here , unlike most of the other decimation ( and exact rg in general ) techniques , enables us to perform decimations not restricted to the simplest case @xmath5 only .
the @xmath348 calculation just takes a bit more computer time .
in this appendix free wilson fermions will be considered , and the decimated fermionic action in one dimension with @xmath297 will be derived via matching .
the lattice action of the @xmath0 wilson fermions has the form : @xmath349 where @xmath350 + a m \delta ( x - y)\ ] ] @xmath351.\ ] ] the fourier transformed kernel is : @xmath352 decimated propagator then reads : @xmath353 first of all , notice that at @xmath354 this expression does not have the correct form of the fermion propagator , and is instead @xmath355 in particular , the corresponding effective action has no massless limit .
the reason for such strange behaviour is that in the massless fermion theory without the wilson term there is actually no way to build an exponential generating functional for the decimated theory , and therefore no way to define an effective action .
moreover , one can check that in such a theory the correlators between the even ( or odd ) sites vanish , so that @xmath5 decimation in this case leads to a complete loss of information . in the general case
, the propagator takes the form : @xmath356 that is , the kernel of the effective action is @xmath357 notice that for the massless theory the effective action after field rescaling has the same form with the new wilson parameter @xmath358 this means that the massless wilson action in @xmath0 is a perfect action and has a fixed point @xmath359 .
here we would like to reproduce the effective action of the @xmath0 free boson model by means of perturbation theory .
diagrammatics in this case consists of the internal propagator @xmath135 , the external leg @xmath321 and the inverse decimated propagator @xmath360 : @xmath361 the diagrams contributing to the effective action are shown in fig . 17 .
fig . 17 . diagrammatic representation of the @xmath0 decimated action .
@xmath365 with @xmath366 - \frac { 4}{a^2}[| \frac { a^2 { \rm cos } a k}{4 { \rm sin}^2 \frac { a k}{2}}|]^2 4 { \rm sin}^2 \frac { k}{2},\ ] ] where @xmath367 $ ] denotes the decimated expression : @xmath368=\frac { a}{2 \pi } \sum_x \int_{-\pi /a}^{\pi /a } e^{-i ( k - p)x } f(p ) d p = \sum_{n=1}^{\frac { 1}{a } } f(k+2 \pi n).\ ] ] the decimated blocks in the effective action are @xmath369=\frac{1}{4 { \rm sin}^2 \frac { k}{2 } } -\frac { a}{2}$ ] and @xmath370=\frac{1}{4 { \rm sin}^2 \frac { k}{2 } } -\frac { a}{2}$ ] .
thus , for the kernel of the effective action we obtain the expression @xmath371 which clearly coincides with that obtained by the matching method .
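the @xmath0 results of the two appendices can also be cross - checked numerically . the sketch below is illustrative code , not part of the paper's derivation ( the mass and lattice size are arbitrary choices ) : it decimates the propagator of a finite periodic massive gaussian chain directly . because integrating out the odd sites is a schur complement with a diagonal odd - odd block , the effective kernel must again be of nearest - neighbour form , which is why the @xmath5 decimation closes exactly in one dimension .

```python
import numpy as np

# inverse propagator (action kernel) of a massive gaussian chain:
# K[x,y] = (2 + m^2) delta_{xy} - delta_{x,y+1} - delta_{x,y-1}  (periodic)
def chain_kernel(n, m2=0.5):
    K = (2.0 + m2) * np.eye(n)
    for x in range(n):
        K[x, (x + 1) % n] -= 1.0
        K[x, (x - 1) % n] -= 1.0
    return K

n = 32                       # even number of sites
K = chain_kernel(n)
G = np.linalg.inv(K)         # full propagator

# b = 2 decimation: keep every other site, invert the decimated propagator
keep = np.arange(0, n, 2)
G_dec = G[np.ix_(keep, keep)]
K_eff = np.linalg.inv(G_dec)   # kernel of the effective action

# markov property of the d=1 chain: K_eff is again nearest-neighbour
# (tridiagonal up to the periodic wrap-around), so decimation closes exactly
mask = np.ones_like(K_eff, dtype=bool)
m = len(keep)
for x in range(m):
    for y in (x, (x + 1) % m, (x - 1) % m):
        mask[x, y] = False
print(np.max(np.abs(K_eff[mask])))   # residual beyond nearest neighbours
```

the printed residual is at machine - precision level ; the same check can be iterated on the decimated kernel itself .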
in this appendix an expression for the quartic part of the effective action of the @xmath2 @xmath27-model will be given .
this expression is necessary when the matching approach is employed .
the decimation technique is applied to the unconstrained variables , the `` pions '' @xmath164 , and therefore mainly gives us non - covariant quantities . on the other hand ,
the effective action is expressed in terms of the constrained , covariant variables @xmath14 . to reconstruct it from the non - covariant rg results , we need to re - express this effective action in terms of the fields @xmath164 ;
then its coefficients can be identified by simple comparison .
the most general covariant effective action with up to four derivatives is given by the expression @xcite @xmath372 @xmath373.\ ] ] here @xmath14 ( @xmath374 ) are @xmath1 vectors on the lattice normalized to unity : @xmath160 , and we follow the notations of @xcite with lattice spacing @xmath375 .
to reconstruct the coefficients @xmath376 , one can solve the constraint : @xmath377 and expand this action in terms of the `` pions '' .
to fourth order in @xmath164 and up to four derivatives , the action is : @xmath378 with quadratic part @xmath379\ ] ] and quartic part @xmath380 @xmath381.\ ] ]
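the expansion just described can be checked with a few lines of code . the sketch below ( numpy - based , purely illustrative , for a single pion component ) verifies numerically that solving the constraint for the constrained field gives sqrt(1 - pi^2 ) = 1 - pi^2/2 - pi^4/8 + o(pi^6 ) , i.e. the remainder after the quartic term is of sixth order :

```python
import numpy as np

# solve the constraint for the "sigma" component and check its pion expansion:
# sigma = sqrt(1 - pi^2) = 1 - pi^2/2 - pi^4/8 + O(pi^6)
p = np.array([0.1, 0.05, 0.025])          # sample pion amplitudes
sigma = np.sqrt(1.0 - p**2)
approx = 1.0 - p**2 / 2.0 - p**4 / 8.0    # expansion kept to fourth order
err = np.abs(sigma - approx)
print(err / p**6)   # bounded ratio -> the remainder is indeed O(pi^6)
```

the printed ratios settle near 1/16 , the coefficient of the next term in the series .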
here we show the one loop diagrams , leading in the @xmath256 expansion , that contribute to the four derivative terms of the effective action .
all the diagrams can be divided into four groups : diagrams with all the external points coinciding ( fig .
18(a ) , 18(b ) ) ; with two pairs of coinciding points ( fig .
18(c ) ) ; with two coinciding and two different points ( fig . 19 ) ; and with all external points different ( fig .
diagrams from different groups give different analytical expressions even for @xmath0 , where the diagrams inside each group are proportional to each other .
thus , these groups can be calculated independently .
this can provide us with an independent check of the calculations .
niemeijer and j.m.j . van leeuwen , in _ phase transitions and critical phenomena _ , vol . 6 , eds . c. domb and m.s . green ( academic , new york , 1976 ) ; h.j . hilhorst , m. schick and j.m.j . van leeuwen .
t.l . bell and k. wilson ; k.g . wilson and j. kogut .
d.r . nelson and r.a . pelcovits .
k.h . mutter and k. schilling ; t. matsui ; a.a . migdal ; l.p . kadanoff and a. houghton ; l.p . kadanoff .
a. patkos .
indekeu , a. maritan and a.l . stella ; i.p . fittipaldi .
s.h . shenker and j. tobochnik ; a. hasenfratz and a. margaritis .
griffiths and p.r .
j. polchinski ; r.s . ball and r.s . ma ; b. hu .
prudnikov , yu.a . brychkov and o.i . marichev , _ integrals and series _ ( gordon and breach science publishers , new york , 1986 ) .
c. itzykson and j .- m . drouffe , _ statistical field theory _ ( cambridge university press , new york , 1989 ) ; m. creutz , _ quarks , gluons and lattices _ ( cambridge university press , new york , 1983 ) .
p. hasenfratz and f. niedermayer .
c. sire and j. bellissard .
s. elitzur .
stanley .
b. rosenstein , _ double expansion in asymptotically free theories _ , to be published in _ phys . lett . _ * b * .
k. simanzik .

we develop a formalism for performing real space renormalization group transformations of the `` decimation type '' using low temperature perturbation theory .
this type of transformation beyond @xmath0 is highly nontrivial even for free theories .
we construct such a solution in arbitrary dimensions and develop a weak coupling perturbation theory for it .
the method utilizes the schur formula to convert summation over the decorated lattice into summation over either the original lattice or the sublattice .
we check the formalism on the solvable case of the @xmath1 symmetric heisenberg chain .
the transformation is particularly useful to study models undergoing phase transition at zero temperature ( various @xmath0 and @xmath2 spin models , @xmath2 fermionic models , @xmath3 nonabelian gauge models ... ) for which the weak coupling perturbation theory is a good approximation for sufficiently small lattice spacing .
as an example , results for one class of such spin systems , the d=2 o(n ) symmetric spin models ( @xmath4 ) , are given for decimation with scale factor @xmath5 ( when a quarter of the points is left ) .
june 14 , 1995 . ip - astp-15 . v. kushnir and b. rosenstein , institute of physics , academia sinica , taipei , 11529 , taiwan , r.o.c . pacs number(s ) : 05.50.+q .
this paper is concerned with mixed interior penalty discontinuous galerkin ( mip - dg ) approximations of the following cahn - hilliard problem : @xmath2 here @xmath3 is a bounded domain , and @xmath4 , @xmath5 is a nonconvex potential density function which takes its global minimum , zero , at @xmath6 . in this paper we only consider the following quartic potential density function : @xmath7 after eliminating the intermediate variable @xmath8 ( called the chemical potential ) , the above system reduces to a fourth order nonlinear pde for @xmath9 , which is known as the cahn - hilliard equation in the literature .
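for concreteness , the quartic potential can be probed with a minimal numeric sketch ( assuming the standard double well f(u ) = ( u^2 - 1)^2 / 4 , which takes its global minimum zero at u = ± 1 as stated above ; the exact formula sits in the placeholder ) :

```python
import numpy as np

# quartic double-well density F(u) = (u^2 - 1)^2 / 4 (assumed standard form)
F  = lambda u: 0.25 * (u**2 - 1.0)**2
dF = lambda u: u**3 - u            # F'(u), vanishing at the wells and at u = 0

u = np.linspace(-2.0, 2.0, 4001)
print(F(1.0), F(-1.0))             # global minima: both zero
print(u[np.argmin(F(u))])          # a numerical minimiser lands on one well
print(dF(-1.0), dF(0.0), dF(1.0))  # critical points: two wells, one unstable state
```

the derivative u^3 - u is the nonlinearity that typically enters the chemical potential in the mixed formulation .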
this equation was originally introduced by john w. cahn and john e. hilliard in @xcite to describe the process of phase separation , by which the two components of a binary fluid spontaneously separate and form domains pure in each component . here @xmath9 and @xmath10 denote respectively the concentrations of the two fluids , with @xmath6 indicating domains of the two components .
we note that the equation differs from the original cahn - hilliard equation in the scaling of the time , and @xmath11 here corresponds to @xmath12 in the original formulation .
the small positive parameter @xmath13 is called the interaction length . besides its important role in materials phase transitions
, the cahn - hilliard equation has been extensively studied due to its close relation with the hele - shaw problem .
it was first formally proved by pego @xcite that the chemical potential @xmath14 tends to a limit which satisfies the following free boundary problem , known as the hele - shaw problem : @xmath15 , \label{eq1.5}\\ \frac{\partial w}{\partial n } & = 0 & & \quad \mbox{on } { \partial}\omega,\ t\in[0,t ] , \label{eq1.6}\\ w & = \sigma\kappa & & \quad \mbox{on } \gamma_t,\ t\in[0,t],\label{eq1.7}\\ v & = \frac{1}{2 } \bigl [ \frac{\partial w}{\partial n } \bigr]_{\gamma_t } & & \quad \mbox{on } \gamma_t,\ t\in[0,t],\label{eq1.8 } \ ] ] as @xmath16 , provided that the hele - shaw problem has a unique classical solution . here
@xmath17 , @xmath18 and v represent the mean curvature and the normal velocity of the interface @xmath19 . a rigorous justification that @xmath20 in the interior or exterior of @xmath19 for all @xmath21 $ ] as @xmath16 was given by stoth @xcite for the radially symmetric case , and by alikakos , bates and chen @xcite for the general case . in addition , chen @xcite established the convergence of the weak solution of the cahn - hilliard problem to a weak ( or varifold ) solution of the hele - shaw problem .
moreover , the cahn - hilliard equation ( together with the allen - cahn equation ) has become a fundamental equation as well as a building block in the phase field methodology ( or the diffuse interface methodology ) for moving interface and free boundary problems arising from various applications such as fluid dynamics , materials science , image processing and biology ( cf .
@xcite and the references therein ) .
the diffuse interface approach provides a convenient mathematical formalism for numerically approximating the moving interface problems because explicitly tracking the interface is not needed in the diffuse interface formulation .
the main advantage of the diffuse interface method is its ability to handle singularities of the interfaces with ease . as in many singular perturbation problems ,
the main computational issue is to resolve the ( small ) scale introduced by the parameter @xmath13 in the equation .
computationally , the problem could become intractable , especially in three - dimensional cases if uniform meshes are used .
this difficulty is often overcome by exploiting the predictable ( at least for small @xmath13 ) pde solution profile and by using adaptive mesh techniques ( cf .
@xcite and the references therein ) , so that fine meshes are only used in the diffuse interface region .
numerical approximations of the cahn - hilliard equation have been extensively carried out in the past thirty years ( cf . @xcite and the references therein ) .
however , the majority of these works were done for a fixed parameter @xmath13 .
the error bounds , which are obtained using the standard gronwall inequality technique , show an exponential dependence on @xmath22 .
such an estimate is clearly not useful for small @xmath13 , in particular , in addressing the issue whether the computed numerical interfaces converge to the original sharp interface of the hele - shaw problem .
better and more practical error bounds should depend on @xmath22 only in some ( low ) polynomial order , because such bounds can be used to provide an answer to the above convergence question , which in fact is the best result ( in terms of @xmath13 ) one can expect . the first such a priori estimate of polynomial order in @xmath22 was obtained in @xcite for mixed finite element approximations of the cahn - hilliard problem . in addition , a posteriori error estimates of polynomial order in @xmath22 were obtained in @xcite for the same mixed finite element methods .
one of the key ideas employed in all these works is to use a nonstandard error estimate technique which is based on establishing a discrete spectrum estimate ( using its continuous counterpart ) for the linearized cahn - hilliard operator .
an immediate corollary of the a priori and a posteriori error estimates of polynomial order in @xmath22 is the convergence of the numerical interfaces of the underlying mixed finite element approximations to the hele - shaw flow , before the onset of singularities of the hele - shaw flow , as @xmath13 and the mesh sizes @xmath23 and @xmath24 all tend to zero .
the objectives of this paper are twofold . firstly , we develop some mip - dg methods , establish a priori error bounds of polynomial order in @xmath22 , and prove convergence of the numerical interfaces for the mip - dg methods .
this goal is motivated by the advantages of dg methods with regard to designing adaptive mesh methods and algorithms , an indispensable strategy within the diffuse interface methodology .
secondly , we use the cahn - hilliard equation as another prototypical model problem @xcite to develop new analysis techniques for analyzing the convergence of numerical interfaces to the underlying sharp interface for dg ( and nonconforming finite element ) discretizations of phase field models . to the best of our knowledge , no such convergence result or analysis technique is available in the literature for fourth order pdes .
the main obstacle to improving the finite element techniques of @xcite is that the dg ( and nonconforming finite element ) spaces are not subspaces of @xmath1 .
as a result , whether the needed discrete spectrum estimate holds becomes a key question to answer .
this paper consists of four additional sections . in section [ sec-2 ]
we first collect some a priori error estimates for problem - , which show the explicit dependence on the parameter @xmath13 .
we then cite two important technical lemmas to be used in the later sections ;
one of the lemmas states the spectral estimate for the linearized cahn - hilliard operator . in section [ sec-3 ]
we propose two fully discrete mip - dg schemes for problem ;
they differ only in their treatment of the nonlinear term . the first main result of this section is to establish a discrete spectrum estimate in the dg space , which mimics the spectral estimates for the differential operator and its finite element counterpart .
the second main result of this section is to derive optimal error bounds which depend on @xmath22 only in low polynomial orders , for both fully discrete mip - dg methods . in section [ sec-4 ] , using the refined error estimates of section [ sec-3 ] , we prove the convergence of the numerical interfaces of the fully discrete mip - dg methods to the interface of the hele - shaw flow before the onset of the singularities , as @xmath25 and @xmath24 all tend to zero .
finally , in section [ sec-5 ] we provide some numerical experiments to gauge the performance of the proposed fully discrete mip - dg methods .
in this section , we shall collect some known results about problem from @xcite , which will be used in sections [ sec-3 ] and [ sec-4 ] .
some general assumptions on the initial condition , as well as some energy estimates based on these assumptions , will be cited .
standard function and space notations are adopted in this paper @xcite .
we use @xmath26 and @xmath27 to denote the standard inner product and norm on @xmath28 . throughout this paper
, @xmath29 denotes a generic positive constant , independent of @xmath13 and of the space and time step sizes @xmath23 and @xmath24 , which may take different values at different occurrences .
we begin with the well known fact @xcite that the cahn - hilliard equation - can be interpreted as the @xmath30-gradient flow of the cahn - hilliard energy functional @xmath31 the following assumptions on the initial datum @xmath32 were made in @xcite ; they were used to derive a priori estimates for the solution of problem . * general assumption * ( ga ) * assume that @xmath33 where @xmath34 * there exists a nonnegative constant @xmath35 such that @xmath36 * there exist nonnegative constants @xmath37 , @xmath38 and @xmath39 such that @xmath40 under the above assumptions , the following solution estimates were proved in @xcite .
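before turning to the solution estimates , the energy functional can be illustrated numerically . the sketch below assumes the standard scaling j_eps ( u ) = \int ( eps/2 ) |\nabla u|^2 + ( 1/eps ) f(u ) with f(u ) = ( u^2 - 1)^2/4 ( the exact form is hidden in the placeholder ) , discretized on a 1d periodic grid with forward differences ; the function name is illustrative :

```python
import numpy as np

def ch_energy(u, h, eps):
    """discrete cahn-hilliard energy  h * sum_x [ eps/2 |grad u|^2 + F(u)/eps ]
    on a 1d periodic grid (forward differences)."""
    grad = (np.roll(u, -1) - u) / h
    F = 0.25 * (u**2 - 1.0)**2
    return h * np.sum(0.5 * eps * grad**2 + F / eps)

n, eps = 128, 0.1
h = 1.0 / n
x = h * np.arange(n)

E_pure = ch_energy(np.ones(n), h, eps)           # pure phase u = +1
u_mix = np.tanh(np.sin(2.0 * np.pi * x) / eps)   # configuration with two interfaces
E_mix = ch_energy(u_mix, h, eps)
print(E_pure, E_mix)   # pure phases cost nothing; interfaces cost O(1)
```

pure phases u = ± 1 carry zero energy , while any configuration with interfaces pays a positive interfacial cost ; it is this energy that the gradient flow dissipates .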
[ prop2.1 ] the solution @xmath9 of problem satisfies the following energy estimates : @xmath41 moreover , suppose that
hold , @xmath42 and @xmath43 , then @xmath9 satisfies the additional estimates : @xmath44 furthermore , if there exists @xmath45 such that @xmath46 then there hold , for @xmath47 , @xmath48 where @xmath49 the next lemma concerns a lower bound estimate for the principal eigenvalue of the linearized cahn - hilliard operator ; a proof of this lemma can be found in @xcite . [ lem3.4 ] suppose that hold .
given a smooth initial curve / surface @xmath50 , let @xmath32 be a smooth function satisfying @xmath51 and some profile described in @xcite .
let @xmath9 be the solution to problem .
define @xmath52 as @xmath53 then there exists @xmath54 and a positive constant @xmath55 such that the principal eigenvalue of the linearized cahn - hilliard operator @xmath52 satisfies @xmath56 for @xmath57 $ ] and @xmath58 .
( a ) a discrete generalization of on @xmath59 finite element spaces was proved in @xcite .
it plays a pivotal role in the nonstandard convergence analysis of @xcite . in the next section
, we shall prove another discrete generalization of on the dg finite element space .
( b ) the restriction on the initial function @xmath32 is needed to guarantee that the solution @xmath60 satisfies a certain profile at later times @xmath61 , which is required in the proof of @xcite .
one example of admissible initial functions is @xmath62 , where @xmath63 stands for the signed distance function to the initial interface @xmath50 .
such a @xmath32 is smooth when @xmath50 is smooth .
the next lemma can be regarded as a nonlinear generalization of the classical discrete gronwall lemma .
it gives an upper bound estimate for a discrete sequence which satisfies a nonlinear inequality with bernoulli - type nonlinearity ; it will be used crucially in the next section .
a proof of this lemma can be found in @xcite , and its differential counterpart can be found in @xcite .
[ lem2.3 ] let @xmath64 be a positive nondecreasing sequence , let @xmath65 and @xmath66 be nonnegative sequences , and let @xmath67 be a constant .
if @xmath68 then @xmath69 where @xmath70
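since the precise bernoulli - type constants of lemma [ lem2.3 ] sit in the placeholders , the sketch below checks the classical * linear * discrete gronwall inequality instead ( an illustrative stand - in , not the lemma itself ) : a sequence saturating the hypothesis stays below the exponential bound .

```python
import numpy as np

# classical linear discrete gronwall inequality:
# if  a_{n+1} <= (1 + lam*dt) a_n + dt*b_n,  then
#     a_n <= exp(lam*n*dt) * (a_0 + dt * sum_{k<n} b_k).
rng = np.random.default_rng(0)
lam, dt, nsteps = 2.0, 0.01, 500
b = rng.uniform(0.0, 1.0, nsteps)

a = np.empty(nsteps + 1)
a[0] = 0.3
for n in range(nsteps):
    # saturate the hypothesis (worst case for the bound)
    a[n + 1] = (1.0 + lam * dt) * a[n] + dt * b[n]

prefix = np.concatenate(([0.0], np.cumsum(b)))          # sum_{k<n} b_k
bound = np.exp(lam * dt * np.arange(nsteps + 1)) * (a[0] + dt * prefix)
print(np.all(a <= bound + 1e-12))   # the gronwall bound dominates the sequence
```

the nonlinear lemma is used in the same spirit in the error analysis below .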
in this section we present and analyze two fully discrete mip - dg methods for the cahn - hilliard problem . the primary goal of this section is to derive error estimates for the dg solutions that depend on @xmath0 only in low polynomial orders , instead of exponential orders . as in the finite element case ( cf .
@xcite ) , the crux is to establish a discrete spectrum estimate for the linearized cahn - hilliard operator on the dg space .
let @xmath71 be a quasi - uniform triangulation of @xmath72 parameterized by @xmath73 .
for any triangle / tetrahedron @xmath74 , we define @xmath75 to be the diameter of @xmath76 , and @xmath77 .
the standard broken sobolev space is defined as @xmath78 for any @xmath74 , @xmath79 denotes the set of all polynomials of degree at most @xmath80 on the element @xmath76 , and the dg finite element space @xmath81 is defined as @xmath82 let @xmath83 denote the set of functions in @xmath28 with zero mean , and let @xmath84 .
we also define @xmath85 to be the set of all interior edges / faces of @xmath86 , @xmath87 to be the set of all boundary edges / faces of @xmath86 on @xmath88 , and @xmath89 .
let @xmath90 be an interior edge shared by two elements @xmath91 and @xmath92 .
for a scalar function @xmath93 , define @xmath94 = v|_k - v|_{k^\prime } , \quad \text{on } e\in { \mathcal{e}}_h^i,\ ] ] where @xmath76 is @xmath91 or @xmath92 , whichever has the larger global label , and @xmath95 is the other .
the @xmath96-inner product for piecewise functions over the mesh @xmath86 is naturally defined by @xmath97 let @xmath98 be a partition of the interval @xmath99 $ ] with time step @xmath100 .
our fully discrete mip - dg methods are defined as follows : for any @xmath101 , @xmath102 are given by @xmath103 where @xmath104\,ds\\ & -\sum_{e\in \mathcal{e}_{h}^i}\int_{e}\{\nabla v\cdot \mathbf{n}_{e}\}[u]\,ds + \sum_{e\in \mathcal{e}_{h}^i}\int_{e}\frac{\sigma_{e}^{0}}{h_e}[u][v]\,ds , \nonumber\end{aligned}\ ] ] and @xmath105 is the penalty parameter .
there are two choices of @xmath106 considered in this paper , namely @xmath107 which lead to the energy - splitting scheme and the fully implicit scheme , respectively .
@xmath108 is the ( backward ) difference operator defined by @xmath109 , and @xmath110 ( or @xmath111 ) is the starting value , with the finite element @xmath112 ( or @xmath96 ) projection @xmath113 ( or @xmath114 ) to be defined below .
we refer to @xcite for a discussion of why a continuous projection is needed for the initial condition .
we remark that only the fully implicit case was considered in @xcite for the mixed finite element method . in order to analyze the stability of
, we need some preparations . first , we introduce three projection operators that will be needed to derive the error estimates in section [ sec-3.4 ] .
@xmath115 denotes the elliptic projection operator defined by @xmath116 which has the following approximation properties ( see @xcite ) : @xmath117 here @xmath118 .
let @xmath119 denote the standard continuous finite element elliptic projection , which is the counterpart of projection @xmath120 .
it has the following well - known property @xcite : @xmath121 next , for any dg function @xmath122 , we define its continuous finite element projection @xmath123 by @xmath124 where @xmath125 and @xmath126 is a parameter that will be specified later in section [ sec-3.3 ] . a mesh - dependent @xmath30 norm will also be needed . to the end
, we introduce the inverse discrete laplace operator @xmath127 as follows : given @xmath128 , let @xmath129 such that @xmath130 we note that @xmath131 is well defined provided that @xmath132 for some positive number @xmath133 and for all @xmath134 because this condition ensures the coercivity of the dg bilinear form @xmath135 .
we then define `` -1 '' inner product by @xmath136 and the induced mesh - dependent @xmath30 norm is given by @xmath137 where @xmath138 .
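the mesh - dependent @xmath30 norm just defined can be mimicked in a simple periodic setting . in the sketch below a spectral inverse laplacian stands in for the dg operator @xmath127 ( an assumption made for brevity ) ; for a laplacian eigenfunction the norm reduces to its l2 norm divided by the square root of the eigenvalue :

```python
import numpy as np

def h_minus1_norm(v, h):
    """discrete H^{-1} norm of a mean-zero periodic grid function:
    solve -laplacian(psi) = v spectrally, then ||v||_{-1}^2 = (v, psi)_h."""
    n = len(v)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)   # fourier wavenumbers
    vhat = np.fft.fft(v)
    lam = k**2
    lam[0] = 1.0                               # zero mode: v is mean-free
    psihat = vhat / lam
    psihat[0] = 0.0
    psi = np.real(np.fft.ifft(psihat))
    return np.sqrt(h * np.sum(v * psi))

n = 256
h = 1.0 / n
x = h * np.arange(n)
v = np.sin(2.0 * np.pi * x)                    # mean-zero laplacian eigenfunction
print(h_minus1_norm(v, h))
```

here sin(2 pi x ) has eigenvalue ( 2 pi )^2 , so the printed value equals sqrt(1/2)/(2 pi ) up to round - off .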
the following properties can be easily verified ( cf .
@xcite ) : @xmath139 and , if @xmath140 is quasi - uniform , then @xmath141

in this subsection we first establish a discrete energy law , which mimics the differential energy law , for both fully discrete mip - dg methods defined in . based on this discrete energy law , we prove the existence and uniqueness of solutions to the mip - dg methods by recasting the schemes as convex minimization problems at each time step .
it turns out that the energy - splitting scheme is unconditionally stable but the fully implicit scheme is only conditionally stable . [ lem3.1 ] let @xmath142 be a solution to scheme .
the following energy law holds for any @xmath143 : @xmath144 for all @xmath145 , where @xmath146 . note that the sign
`` @xmath147 '' in takes
the value `` @xmath148 '' when @xmath149 and `` @xmath150 '' when @xmath151 .
the proof of the above theorem follows from taking @xmath152 in and @xmath153 in , adding the resulting two equations and combining like terms .
we leave the detailed calculations to the interested reader .
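the unconditional energy decay expressed by the discrete energy law can be watched directly in a minimal experiment . the sketch below is a 1d periodic spectral discretization , not the paper's mixed dg scheme ; it implements the energy - splitting choice ( cubic term implicit , linear term explicit ) and solves the implicit step by picard iteration ( all parameter values are ad hoc ) :

```python
import numpy as np

# energy-splitting step in mixed form (sketch):
#   (u^{n+1} - u^n)/dt = lap(w),  w = -eps^2 lap(u^{n+1}) + (u^{n+1})^3 - u^n
n, eps, dt, steps = 64, 0.18, 1.0e-4, 200
h = 1.0 / n
k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
lap = -k**2                                   # spectral laplacian symbol

def energy(u):
    uhat = np.fft.fft(u)
    dirich = np.sum(k**2 * np.abs(uhat)**2) / n   # sum_x |grad u|^2 (spectral)
    return h * (0.5 * eps**2 * dirich + np.sum(0.25 * (u**2 - 1.0)**2))

rng = np.random.default_rng(1)
u = 0.05 * rng.standard_normal(n)
E = [energy(u)]
for _ in range(steps):
    un, v = u, u.copy()
    for _ in range(200):                      # picard iterations for implicit u^3
        rhs = np.fft.fft(un) + dt * lap * np.fft.fft(v**3 - un)
        v_new = np.real(np.fft.ifft(rhs / (1.0 + dt * eps**2 * lap**2)))
        if np.max(np.abs(v_new - v)) < 1e-13:
            v = v_new
            break
        v = v_new
    u = v
    E.append(energy(u))
E = np.array(E)
print(E[0], E[-1], np.max(np.diff(E)))   # recorded energies never increase
```

the recorded energies form a nonincreasing sequence , illustrating the unconditional stability of the splitting ; the fully implicit variant is only conditionally stable , as stated next .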
let @xmath154 be a sufficiently large constant .
suppose that @xmath155 for all @xmath134 .
then scheme is stable for all @xmath156 when @xmath149 , and is stable for @xmath157 and @xmath158 when @xmath151 .
the first case follows trivially from . in the second case
, the `` bad term '' @xmath159 can be controlled by the `` good terms '' @xmath160 and @xmath161 by using the norm interpolation inequality , provided that @xmath158 .
[ existence ] suppose that @xmath155 for all @xmath134 .
then scheme has a unique solution @xmath162 at each time step , for all @xmath156 in the case @xmath149 , and for @xmath157 and @xmath158 in the case @xmath151 . setting @xmath163 in we get @xmath164 adding the above equation to yields @xmath165 hence , @xmath166 satisfies @xmath167 in the case @xmath149 it is easy to check that can be recast as a convex minimization problem ( cf .
@xcite ) whose well - posedness holds for all @xmath156 .
hence , in this case there is a unique solution @xmath166 to . on the other hand , when @xmath151 , an extra term @xmath168 comes out of the nonlinear term in .
this extra term contributes a `` bad term '' @xmath169 to the functional of the minimization problem .
again , this term can be controlled by the `` good terms '' @xmath160 and @xmath161 in the functional by using the norm interpolation inequality , provided that @xmath158 .
hence , in the case @xmath151 , there is a unique solution @xmath166 to for all @xmath157 and @xmath158 .
the proof is complete .

in this subsection , we shall establish a discrete spectrum estimate for the linearized cahn - hilliard operator on the dg space , which plays a vital role in our error estimates . to this end
, we first state a slightly modified version of a discrete spectrum estimate for the linearized cahn - hilliard operator on the continuous finite element space , first proved in @xcite . due to the close similarity , we omit the proof of this modified version and refer the interested reader to @xcite .
[ lem3.5 ] suppose the assumptions of lemma [ lem3.4 ] hold , and @xmath55 is the same as in .
let @xmath170 and @xmath171 be defined by @xmath172 then there exists @xmath173 such that , for any @xmath174 , there holds @xmath175 provided that @xmath23 satisfies @xmath176 here @xmath177 denotes the inverse laplace operator .
we are now ready to state the discrete spectrum estimate on the dg space .
[ prop2.3 ] suppose the assumptions of lemma [ lem3.4 ] hold .
let @xmath9 be the solution of and @xmath178 denote its dg elliptic projection .
assume @xmath179 for a constant @xmath180 , then there exists @xmath181 and an @xmath13-independent and @xmath23-independent constant @xmath182 , such that for any @xmath183 , there holds @xmath184 provided that @xmath23 satisfies the constraints @xmath185 where @xmath170 and @xmath171 are same as in lemma [ lem3.5 ] , @xmath186 and @xmath187 are defined by @xmath188 by proposition 2 in @xcite , under the mesh constraint , we have @xmath189 similarly , under the mesh condition , we can show that for any @xmath190 , there holds @xmath191 it follows from and that @xmath192 therefore , @xmath193 next , we derive a lower bound for each of the first two terms on the right - hand side of . notice that the first term can be rewritten as @xmath194 to bound @xmath195 from above , we consider the following auxiliary problem : @xmath196 for @xmath197 for all @xmath134 , the above problem has a unique solution @xmath198 for @xmath199 such that @xmath200.\ ] ] by the definition of @xmath201 , we immediately get the following galerkin orthogonality : @xmath202 it follows from the duality argument ( cf .
theorem 2.14 of @xcite ) that @xmath203 for all @xmath23 satisfying @xmath204 , we get @xmath205 now the last term on the right - hand side of can be bounded as follows : @xmath206 the second term on the right - hand side of can be bounded by @xmath207 here we have used the facts that @xmath208 where @xmath209 and @xmath23 is chosen small enough such that @xmath210 . the term @xmath211 can be bounded by @xmath212 for any constant @xmath213 .
adding the fifth term on the right - hand side of , the last term on the right - hand side of and that of , we get for all @xmath23 satisfying @xmath214 @xmath215 combining , , and with , we have @xmath216 applying the spectrum estimate , we get @xmath217 which together with implies that @xmath218 by the stability of @xmath219 , we have @xmath220 which together with the triangle inequality yields @xmath221 similarly , since @xmath222 is the elliptic projection of @xmath223 , there holds @xmath224 therefore , choosing @xmath225 , can be further reduced into @xmath226 for some @xmath227
. this proves , and the proof is complete .

in this subsection , we shall derive some optimal error estimates for the proposed mip - dg schemes , in which the constants in the error bounds depend on @xmath228 only in low polynomial orders , instead of exponential orders .
the key to obtaining such refined error bounds is to use the discrete spectrum estimate .
in addition , the nonlinear gronwall inequality presented in lemma [ lem2.3 ] also plays an important role in the proof . to ease the presentation ,
we set @xmath229 in this subsection and in section [ sec-4 ] ; the generalization to @xmath230 can be proved similarly .
the main results of this subsection are stated in the following theorem .
[ thm3.1 ] let @xmath231 be the solution of scheme ( [ eq3.8])([eq3.9 ] ) with @xmath229 .
suppose that ( ga ) holds and @xmath197 for all @xmath134 , and define @xmath232 then , under the following mesh and starting value conditions : @xmath233 there hold the error estimates @xmath234 moreover , if the starting value @xmath235 satisfies @xmath236 then there hold @xmath237 furthermore , suppose that the starting value @xmath238 satisfies @xmath239 for some @xmath240 , and that there exists a constant @xmath241 such that @xmath242 then we have @xmath243

in the following we only give a proof for the convex splitting scheme corresponding to @xmath244 in , because the proof for the fully implicit scheme with @xmath245 is almost the same . since the proof is long , we divide it into four steps .
it is obvious that equations imply that @xmath246 define the error functions @xmath247 and @xmath248 .
subtracting from and from yield the following error equations : @xmath249 where @xmath250 it follows from that @xmath251 introduce the error decompositions @xmath252 where @xmath253 using the definition of the operator @xmath120 in , can be rewritten as @xmath254 setting @xmath255 in and @xmath256 in , adding the resulting equations and summing over @xmath257 from @xmath258 to @xmath259 , we get @xmath260 for @xmath261 for all @xmath134 , the first long term on the right - hand side of can be bounded as follows @xmath262 where we have used and the following facts @xcite : @xmath263 we now bound the last term on the left - hand side of . by the definition of @xmath106
, we have @xmath264 by the discrete energy law , and , we obtain for any @xmath265 @xmath266 substituting and into we get @xmath267 to control the second term on the right - hand side of , we appeal to the following gagliardo - nirenberg inequality @xcite : @xmath268 thus we get @xmath269 the third item on the right - hand side of can be bounded by @xmath270 again , here we have used . finally ,
for the third term on the left - hand side of , we utilize the discrete spectrum estimate to bound it from below as follows : @xmath271 by the stability of @xmath219 and , we also have @xmath272 substituting , , , into , we get @xmath273 by discrete energy law , general assumption , @xmath112 stability of elliptic projection , @xmath274 stability(or @xmath274 error estimate and triangle inequality ) of elliptic projection , we can get for any @xmath275 @xmath276 since the projection of @xmath9 is bounded , then for any @xmath275 @xmath277 we point out that the exponent for @xmath278 is @xmath279 , which is bigger than @xmath280 for @xmath47 . by we
have @xmath281 using the schwarz and young s inequalities , we have @xmath282 therefore , becomes @xmath283 on noting that @xmath166 can be written as @xmath284 then by and , we get @xmath285 using the boundedness of the projection , we have @xmath286 also , can be written in the following equivalent form @xmath287 where @xmath288 it is easy to check that @xmath289 under this restriction , we have @xmath290 define the slack variable @xmath291 such that @xmath292 we also define @xmath293 by @xmath294 and equation shows that @xmath295 then @xmath296 applying lemma [ lem2.3 ] to @xmath297 defined above , we obtain @xmath298 , @xmath299 provided that @xmath300 we note that @xmath301 are all bounded as @xmath302 , therefore , holds under the mesh constraint stated in the theorem .
it follows from and that @xmath303 then follows from the triangle inequality on @xmath304 .
is obtained by taking the test function @xmath305 in and @xmath256 in , and is a consequence of the poincaré inequality .
now setting @xmath305 in and @xmath306 in , and adding the resulting equations yield @xmath307 the last three terms on the right - hand side of can be bounded in the same way as in , and the first term can be controlled as @xmath308 multiplying both sides of by @xmath24 and summing over @xmath257 from @xmath258 to @xmath309 yield the desired estimate .
estimate follows from an application of the following inverse inequality : @xmath310 and the following @xmath274 estimate for the elliptic projection : @xmath311 finally , it is well known that there holds the following estimate for the elliptic projection operator : @xmath312 using the identity @xmath313 we get @xmath314 the first term on the right - hand side of can be absorbed by the second term on the left - hand side of .
the second term on the right - hand side of has been obtained in .
estimate for @xmath315 then follows from and
. follows from an application of the triangle inequality , the inverse inequality , and .
this completes the proof .
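for the reader 's convenience , one standard two - dimensional instance of the gagliardo - nirenberg inequality of the type invoked in the proof above is recorded below ; the exact form used in the paper is not reproduced in this extraction and may differ .

```latex
% Ladyzhenskaya's inequality, a special case of the Gagliardo--Nirenberg
% interpolation inequality in two dimensions; the instance actually used
% in the proof above may differ from this one.
\[
  \|v\|_{L^4(\Omega)}^2 \;\le\; C\,\|v\|_{L^2(\Omega)}\,\|v\|_{H^1(\Omega)},
  \qquad v \in H^1(\Omega),\quad \Omega \subset \mathbb{R}^2 .
\]
```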
in this section , we prove that the numerical interface defined as the zero level set of the finite element interpolation of the solution @xmath166 converges to the moving interface of the hele - shaw problem under the assumption that the hele - shaw problem has a unique global ( in time ) classical solution . to this end ,
we first cite the following pde convergence result proved in @xcite .
[ thm4.1 ] let @xmath72 be a given smooth domain and @xmath316 be a smooth closed hypersurface in @xmath72 .
suppose that the hele - shaw problem starting from @xmath316 has a unique smooth solution @xmath317 in the time interval @xmath318 $ ] such that @xmath319 for all @xmath21 $ ] .
then there exists a family of smooth functions @xmath320 which are uniformly bounded in @xmath321 $ ] and @xmath322 , such that if @xmath323 solves the cahn - hilliard problem , then * @xmath324 , where @xmath325 and @xmath326 stand for the " inside " and " outside " of @xmath327 ; * @xmath328 uniformly on @xmath329 .
we note that since @xmath166 is multi - valued on the edges of the mesh @xmath86 , its zero - level set is not well defined . to avoid this technicality
, we use a continuous finite element interpolation of @xmath166 to define the numerical interface .
let @xmath330 denote the finite element approximation of @xmath166 which is defined using the averaged degrees of freedom of @xmath166 as the degrees of freedom for determining @xmath331 ( cf .
the following approximation results were proved in theorem 2.1 of @xcite .
[ lem4.1 ] let @xmath140 be a conforming mesh consisting of triangles when @xmath332 , and tetrahedra when @xmath333 .
for @xmath334 , let @xmath335 be the finite element approximation of @xmath336 as defined above .
then for any @xmath334 and @xmath337 there holds @xmath338\|_{l^2(e)}^2,\end{aligned}\ ] ] where @xmath339 is a constant independent of @xmath23 and @xmath336 but may depend on @xmath340 and the minimal angle @xmath341 of the triangles in @xmath140 . by the construction , @xmath331
is expected to be very close to @xmath166 , hence , @xmath331 should also be very close to @xmath342 .
this is indeed the case as stated in the following theorem , which says that theorem [ thm3.1 ] also holds for @xmath331 .
[ lem4.2 ] let @xmath343 denote the solution of scheme and @xmath344 denote its finite element approximation as defined above . then under the assumptions of theorem [ thm3.1 ] the error estimates for @xmath166 given in theorem [ thm3.1 ] are still valid for @xmath344 , in particular , there holds @xmath345 we omit the proof to save space and refer the reader to @xcite to see a proof of the same nature for the related allen - cahn problem .
we are now ready to state the first main theorem of this section .
[ thm4.2 ] let @xmath346 denote the zero level set of the hele - shaw problem and @xmath347 denote the piecewise linear interpolation in time of the finite element interpolation @xmath348 of the dg solution @xmath349 , namely , @xmath350 for @xmath351 and @xmath101 .
then , under the mesh and starting value constraints of theorem [ thm3.1 ] and @xmath352 with @xmath353 , we have * @xmath354 uniformly on compact subset of @xmath326 , * @xmath355 uniformly on compact subset of @xmath325 .
* moreover , in the case that dimension @xmath332 , when @xmath356 , suppose that @xmath238 satisfies @xmath357 for some @xmath358 , then we have @xmath359 uniformly on @xmath329 . for any compact set @xmath360 and for any @xmath361 , we have @xmath362 equation of theorem [ thm3.1 ] implies that there exists a constant @xmath363 such that @xmath364 when @xmath365 ( note that @xmath366 , too ) .
the second term converges uniformly to @xmath367 on the compact set @xmath368 , which is ensured by ( i ) of theorem [ thm4.1 ] .
hence , the assertion ( i ) holds .
to show ( ii ) , we only need to replace @xmath326 by @xmath325 and @xmath258 by @xmath369 in the above proof . to prove ( iii ) , under the assumptions @xmath356 , in theorem [ thm3.1 ] implies that there exists a positive constant @xmath370 such that @xmath371 then by the triangle inequality we obtain for any @xmath322 , @xmath372 the first term on the right - hand side of tends to @xmath367 when @xmath365 ( note that @xmath366 , too ) .
the second term converges uniformly to @xmath367 in @xmath329 , which is ensured by ( ii ) of theorem [ thm4.1 ] .
thus the assertion ( iii ) is proved .
the proof is complete . the second main theorem of this section which is given below addresses the convergence of numerical interfaces .
[ thm4.3 ] let @xmath373 be the zero level set of @xmath374 , then under the assumptions of theorem [ thm4.2 ] , we have @xmath375$}.\ ] ] for any @xmath376 , define the open tubular neighborhood @xmath377 of width @xmath378 of @xmath19 as @xmath379 let @xmath368 and @xmath380 denote the complements of the neighborhood @xmath377 in @xmath326 and @xmath325 , respectively , i.e. @xmath381 note that @xmath368 is a compact subset outside @xmath19 and @xmath380 is a compact subset inside @xmath19 , then there exists @xmath382 , which only depends on @xmath383 , such that for any @xmath384 @xmath385 now for any @xmath21 $ ] and @xmath386 , from @xmath387 we have @xmath388 and imply that @xmath389 is not in @xmath368 , and and imply that @xmath389 is not in @xmath380 , then @xmath389 must lie in the tubular neighborhood @xmath377 . therefore , for any @xmath390 , @xmath391$}.\ ] ] the proof is complete .
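the zero - level - set construction used above can be illustrated with a small sketch . this is not code from the paper ; it only shows how the zero set of a piecewise - linear nodal interpolant is located edge by edge on a single triangle , using the linear - interpolation crossing formula x0 = x1 + u1 / ( u1 - u2 ) * ( x2 - x1 ) .

```python
# Illustrative sketch (not from the paper): locating the zero level set of
# a piecewise-linear nodal interpolant on a single triangle.  On each edge
# whose endpoint values change sign, linearity pins down the crossing point.

def edge_zero(p1, u1, p2, u2):
    """Zero crossing of the linear interpolant on edge p1-p2, or None."""
    if u1 == 0.0:
        return p1
    if u2 == 0.0:
        return p2
    if u1 * u2 > 0.0:
        return None            # no sign change on this edge
    t = u1 / (u1 - u2)         # lies in (0, 1) because of the sign change
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

def triangle_zero_segment(verts, vals):
    """Intersections of the zero level set with the three triangle edges."""
    pts = []
    for i in range(3):
        j = (i + 1) % 3
        q = edge_zero(verts[i], vals[i], verts[j], vals[j])
        if q is not None and q not in pts:
            pts.append(q)
    return pts

# the interpolant u(x, y) = x - 0.5 on the unit reference triangle:
# its zero set is the vertical segment x = 0.5
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(triangle_zero_segment(tri, [-0.5, 0.5, -0.5]))
```

collecting these per - triangle segments over the whole mesh yields the polygonal numerical interface whose convergence is asserted in the theorem .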
in this section , we present three two - dimensional numerical tests to gauge the performance of the proposed fully discrete mip - dg methods using the linear element ( i.e. , @xmath229 ) . the square domain @xmath392 ^ 2 $ ] is used in all three tests and the initial condition is chosen to have the form @xmath393 , where @xmath63 denotes the signed distance from @xmath394 to the initial interface @xmath50 .
our first test uses a smooth initial condition to satisfy the requirement for @xmath32 , consequently , the theoretical results established in this paper apply to this test problem . on the other hand ,
non - smooth initial conditions are used in the second and third tests , hence , the theoretical results of this paper may not apply .
but we still use our mip - dg methods to compute the error order , energy decay and the evolution of the numerical interfaces .
our numerical results suggest that the proposed dg schemes work well , even though a convergence theory is missing for them .
@xmath395 consider the cahn - hilliard problem ( [ eq1.1])-([eq1.4 ] ) with the following initial condition : @xmath396 where @xmath397 , and @xmath63 represents the signed distance function to the ellipse : @xmath398 hence , @xmath32 has the desired form as stated in proposition [ prop2.3 ] .
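a sketch of this kind of initial datum is given below . it is an illustration only : the standard cahn - hilliard profile tanh ( d0 / ( sqrt(2) * eps ) ) is assumed , the semi - axes and eps values are hypothetical rather than the paper 's , and since the exact signed distance to an ellipse has no closed form , the ellipse level - set value is used as a crude signed - distance surrogate .

```python
# Illustrative sketch (hypothetical parameters, not the paper's values).
# Builds a tanh diffuse-interface profile across an ellipse, negative
# inside and positive outside, as is standard for Cahn-Hilliard data.
import math

def u0(x, y, a=0.6, b=0.3, eps=0.1):
    """tanh profile across the ellipse x^2/a^2 + y^2/b^2 = 1.

    The level-set value sqrt((x/a)^2 + (y/b)^2) - 1 stands in for the
    signed distance d0, which has no closed form for an ellipse.
    """
    d0 = math.sqrt((x / a) ** 2 + (y / b) ** 2) - 1.0
    return math.tanh(d0 / (math.sqrt(2.0) * eps))

# sample the profile on a coarse grid over [-1, 1]^2
grid = [[u0(-1 + 0.5 * i, -1 + 0.5 * j) for i in range(5)] for j in range(5)]
```

the profile is close to -1 well inside the ellipse , close to +1 well outside , and transitions through 0 on the interface over a layer of width o ( eps ) .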
table [ tab3 ] shows the spatial @xmath96 and @xmath112-norm errors and convergence rates , which are consistent with what is proved for the linear element in the convergence theorem .
@xmath399 is used to generate the table .
figure [ figure5678 ] displays four snapshots at four fixed time points of the numerical interface with four different @xmath13 .
once again , we observe that at each time point the numerical interface converges to the sharp interface @xmath19 of the hele - shaw flow as @xmath13 tends to zero ; the interface evolves faster in time for larger @xmath13 , and the mass conservation property is preserved .
the total mass approximates a constant 2.989 .
a. c. aristotelous , o. a. karakashian and s. m. wise , _a mixed discontinuous galerkin , convex splitting scheme for a modified cahn - hilliard equation_ , disc . series b. , 18(9 ) , 2211 - 2238 ( 2013 ) .
t. dupont , _some @xmath96 error estimates for parabolic galerkin methods_ , in the mathematical foundations of the finite element method with applications to partial differential equations ( proc . maryland , baltimore , md . , 1972 ) , 491 - 504 , academic press , new york , 1972 .
x. feng and y. li , _analysis of symmetric interior penalty discontinuous galerkin methods for the allen - cahn equation and the mean curvature flow_ , ima j. numer . , doi : 10.1093/imanum/dru058 , ( 2014 ) .
o. karakashian and f. pascal , _adaptive discontinuous galerkin approximations of second order elliptic problems_ , proceedings of european congress on computational methods in applied sciences and engineering , 2004 .

this paper proposes and analyzes two fully discrete mixed interior penalty discontinuous galerkin ( dg ) methods for the fourth order nonlinear cahn - hilliard equation .
both methods use the backward euler method for time discretization and interior penalty discontinuous galerkin methods for spatial discretization .
they differ from each other on how the nonlinear term is treated , one of them is based on fully implicit time - stepping and the other uses the energy - splitting time - stepping . the primary goal of the paper is to prove the convergence of the numerical interfaces of the dg methods to the interface of the hele - shaw flow .
this is achieved by establishing error estimates that depend on @xmath0 only in some low polynomial orders , instead of exponential orders .
similar to @xcite , the crux is to prove a discrete spectrum estimate in the discontinuous galerkin finite element space .
however , the validity of such a result is not obvious because the dg space is not a subspace of the ( energy ) space @xmath1 and it is larger than the finite element space .
this difficulty is overcome by a delicate perturbation argument which relies on the discrete spectrum estimate in the finite element space proved in @xcite .
numerical experiment results are also presented to gauge the theoretical results and the performance of the proposed fully discrete mixed dg methods .
keywords : cahn - hilliard equation , hele - shaw problem , phase transition , discontinuous galerkin method , discrete spectral estimate , convergence of numerical interface . ams subject classifications : 65n12 , 65n15 , 65n30
in a young country like iran , in which 35% of total population are between 10 to 24 years old ( 1 ) , attention to sexual and reproductive health is of paramount importance .
modernity and westernization have been recently widespread among large cities in iran with an obvious influence on society and culture ( 2 ) .
access to information technologies such as satellite and internet has had a crucial role on social and tradition changes .
pre- and extra - marital sex among young people is one of the clear outcomes , which is rising among young iranian people especially in large cities ( 2 ) . on the other hand
, the mean age at first marriage is rising , particularly among females in the country ( 3 ) . due to lack of comprehensive education and services on sexual health
, risky behaviors such as unprotected sexual contact and multiple - partnership are increasing among the iranian youth ( 4 , 5 ) .
unprotected and extramarital sex enhances the risk of sexual transmitted infections ( stis ) and hiv / aids .
data shows that the incidence of stis among young iranian people is growing ( 6 ) .
although the prevalence of hiv infection in iran is not currently so high , as estimated up to 100000 cases in 2012 ( 7 ) , the trend of hiv infection is noticeable as reports show that above two - thirds of hiv / aids cases are detected in the last six years ( 6 ) .
globally , about two - thirds of people suffering from stis , are less than 25 years old ( 8) , and about one - half of the new hiv infected cases are 15 - 24 years old ( 9 ) .
people in this age may get involved in high - risk behaviors and do not care about the consequences seriously . as most of the college students are in this age group and are potentially at risk , it is necessary to estimate sexual activities of college students carefully in order to prevent any possible epidemic in this population .
iran is reported to be entering the third wave of its hiv epidemic , in which sexual contact is considered the main route of hiv transmission ( 10 ) .
the incidence of unprotected sexual contacts among university students in different countries is relatively high ( 1114 ) . in iran ,
studies on this subject are remarkably scarce . in a study on iranian young single males , more than one - fourth had a history of sexual contact ( 5 ) .
mashhad , the center of razavi khorasan province , in the northeast of iran , with about 2.5 million population is known as the second largest city of the country after tehran , the capital city of iran . as the second largest holy city of the world , mashhad attracts more than 20 million tourists and pilgrims every year .
this city has been one of the primary destinations for emigrants from afghanistan ( 15 ) . according to health center of razavi khorasan province ,
476 hiv positive cases were reported from 1986 to march 2011 in the region , of which 34.7% were detected in the last five years ( 16 ) . to our knowledge , there is no survey on students sexual activities in this region .
therefore , this study aimed to evaluate the prevalence of sexual and reproductive behaviors among young students of a great public non - medical university in mashhad , iran with more than 20000 students .
twelve faculties with 12645 undergraduate students including 8398 females ( 66.4% ) and 4247 males ( 33.6% ) were stratified as humanities , psychology , agriculture , engineering , and basic sciences .
the most populous faculty was selected from each stratum . in each faculty , the students were classified based on four different admission years and one study field was randomly chosen from each admission year .
data were collected in may and june 2008 using an anonymous self - administered questionnaire including age , gender , marital status , shift of study , lifetime and current history of sexual contacts , age at first sex , number of partners , using condoms during the last sexual contact and history of aids education at the university during the last year . for confirming the confidentiality , students were not supposed to write their personal information on the questionnaires and they had a choice of not answering the questions if did not feel comfortable about a question .
furthermore , for persuading the students to answer questions more accurately , the questionnaires were put in a box for participants assurance .
the study was approved by research and technology deputy of iranian academic center for education , culture & research ( acecr ) regarding methodological and ethical issues .
the data was described and analyzed by spss 16.0 . for each question , percent of answers
were calculated according to the number of responders instead of the total participants . due to considerable presence of censored data
-which were truly the cases without a positive history - kaplan - meier survival statistic was used to calculate the mean initiation age of the sexual contact .
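the kaplan - meier computation described above can be sketched as follows . this is an illustrative reconstruction , not the authors ' code ( the study used spss ) : students with no history of sex are treated as right - censored at their current age , and the mean initiation age is the area under the survival curve . the ages and event indicators below are hypothetical toy values .

```python
# Illustrative sketch (hypothetical data, not the study's): a minimal
# Kaplan-Meier product-limit estimator for right-censored "age at first
# sex" data, plus the restricted mean that SPSS reports as the mean.

def kaplan_meier(times, events):
    """Return (event_times, survival_probs) for right-censored data.

    times  -- observed age for each subject
    events -- 1 if first sex occurred at that age, 0 if censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out_t, out_s = 1.0, [], []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)      # events at time t
        n_t = sum(1 for tt, _ in data if tt == t)    # subjects leaving at t
        if d > 0:
            surv *= (1 - d / n_at_risk)
            out_t.append(t)
            out_s.append(surv)
        n_at_risk -= n_t
        i += n_t
    return out_t, out_s

def restricted_mean(times, events):
    """Mean survival time: area under the KM curve up to the last event."""
    out_t, out_s = kaplan_meier(times, events)
    mean, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(out_t, out_s):
        mean += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    return mean

# hypothetical toy sample: four events, two censored observations
ages   = [15, 17, 18, 20, 21, 22]
events = [1,  1,  0,  1,  0,  1]
print(restricted_mean(ages, events))   # ≈ 19.56 years for this toy sample
```

treating the never - exposed students as censored rather than discarding them is what pushes the kaplan - meier mean ( 23.7 years ) above the naive mean among the sexually experienced only ( 17.6 years ) .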
the average age of participants was 20.8 ± 1.5 ( range 18 - 25 ) years ; 71.4% were female and 85.3% were single .
most of the students were studying in day - shift ( complimentary ) courses and the study fields were as follows : humanities ( 30.1% ) , engineering ( 22.6% ) , agriculture ( 18.4% ) , basic sciences ( 18.0% ) , and psychology ( 10.9% ) . from
the 572 respondents , 103 ( 18% ) attended in hiv / aids education programs in their faculties during the last 12 months .
lifetime prevalence of sexual contact was defined as a history of vaginal , anal or oral sex contact with a same- or opposite - sex partner at least once in any point of life for single students and before marriage in case of married individuals .
the lifetime prevalence was 15.1% ( 84/557 ) , and 35.3% ( 24/68 ) of single students with a history of sexual activity reported to have sex in the last three months .
total current prevalence of sexual relation - having any type of sex during 3 months preceding the study among single students - was 5.2% ( 24/466 ) . using kaplan - meier survival statistics ,
the mean age of first sexual experience was 23.7 years old ( 95% ci , 23.4 - 24.0 ) ; in males 22.1 ( 95% ci , 21.4 - 22.8 ) and in females 24.4 ( 95% ci , 24.2 - 24.6 ; p < 0.001 ) .
the mean age at first sex was 17.6 ± 3.3 years among 78 sexually experienced youth ( range = 13 - 22 , median = 18.5 ) .
twenty - four percent of students with a positive history started sexual contact at age below 15 years old and 50% initiated between 15 - 19 years old .
as shown in table 1 , the lifetime prevalence of sexual relationship in males was significantly higher than females ( 32.9% vs. 7.6% , p < 0.001 ) .
furthermore , the students with a history of sexual contact were older than other ones ( 21.7 ± 1.5 vs. 20.8 ± 1.5 years ; p < 0.001 )
. on the other hand , no significant relation between sexual contact and marital status and study fields was found ( p= 0.715 , p = 0.101 , respectively ) .
in addition , attendance in an hiv / aids education program during the last year was not associated with having sexual contact ( p = 0.696 ) .
table 1 . factors related to having premarital sex among university students in mashhad , iran ( % was calculated according to the number of responders instead of the total participants ) . in 67 single students with a sexual contact history , 38.8% had one lifetime partner and 34.3% had three or more partners .
only 26/64 ( 40.6% ) of students stated using condom in their last sexual relation . moreover
, 35% ( 28/80 ) of the students including 21 males and 7 females declared to have a same - sex experience . among married students , 12/87 ( 13.8% ) had a history of premarital sex and one person stated an extramarital sex .
this study on university students in mashhad , iran showed a 15% prevalence of premarital sex in any time of their life ; 33% in males and 8% in females , and current prevalence of sexual relationships during last 3 months was 5% among single students .
a study in qazvin , a city in center of iran , reported even a lower prevalence of sexual contact before marriage in university students ; 16% in males and 0.6% in females ( 4 ) . on the other hand , a study by farahani et al
. showed a greater prevalence of some type of sexual relationship as 23% in female undergraduate students from universities of tehran ( 17 ) .
it seems that the prevalence of premarital sexual relationship among iranian students is significantly lower than those are reported from other countries .
forty - four percent of students in armenia ( 77% in males and 7% in females ) ( 11 ) , 69% of male and 59% of female japanese students ( 14 ) , 80% of males and 72% of female students from west america ( 18 ) , and 53% of university students in south ethiopia ( 19 ) have had some type of sexual contact . on the other hand , prevalence of premarital sex in chinese female students ( 8.6% ) was similar to our study , although male students had obviously experienced a lower sexual contact ( 17.6% ) ( 20 ) .
the difference in the prevalence of sexual contact among students between countries might be due to the socio - cultural and religious conditions that govern the country ( 21 , 22 ) .
stigmatization of pre- and extramarital sexual contacts is thought to be an important factor for reducing the rate of this behavior in a society ( 21 ) . in a muslim country like iran , premarital and extramarital sexual relationships are religiously forbidden .
furthermore , these types of sexual relationships are also socio - culturally unacceptable in this country ( 22 ) . in a study in tehran , the capital city of iran
, 46% of respondents stated that sexual intercourse before marriage is wrong and 54% noted that premarital sex brings a bad reputation for a girl ( 23 ) .
one of the parameters relevant to the risk of sexual behaviors is the age at which youth initiate sexual activity .
certainly , it differs between countries as it is influenced by ethical , religious and legal affairs ( 24 ) .
in addition , some biologic factors such as the age of menarche , and social factors like freedom in the sexual relationships , educational status and peer pressure might be important in the age of the first sexual experience ( 25 ) . in our study ,
the mean and median age at first sex among sexually experienced students was 17.6 and 18.5 years , respectively , and a fourth reported a history of sexual activity before 15 years old .
however , the average initiation age is considerably lower in western countries , for instance in the united states , it was as low as 15.7 for males and 16.1 for females ( 26 ) .
another study in new zealand showed that the median age at first intercourse was 17 years for males and 16 years for females ( 27 ) .
moreover , in united kingdom , over the past 30 years , the median age at first intercourse has declined nearly two years and has reached to 14 years for females and 13 years for males ( 25 ) . however , in eastern countries like china the age of first sex experience is above 20 years old ( 28 ) .
the important point related to the age of sexual initiation is that people who initiate sexual activities at younger ages would be at the higher risk for stis ( 27 , 29 , 30 ) .
a study from new zealand revealed that the prevalence of stis was more in people who had sexual activities at younger ages ( 27 ) .
thus , the higher age of first sex experience in iran compared to other countries can be assumed as a preventive factor regarding hiv / aids epidemics unless the increasing age of marriage leads to a change in the pattern of sexual activity and a rise in the rate of premarital sexual activity .
this relationship is banned and unacceptable in a muslim country like iran , which would place partners in higher risk of stis .
the consequences would be impairment of effective education - based hiv / stis prevention or any other preventive , diagnostic and therapeutic measurement .
this alarming issue needs to be assessed further in future investigations . in our study , male students experienced sexual relations four times more often than female students .
in addition , the age of first sexual experience was lower in the males . in all of the abovementioned surveys , similar findings have been shown regarding the influence of gender on the sexual pattern .
gender in addition to ethnicity was the predictive factor for the adolescents risk of starting sex in a study in the united states ( 31 ) .
moreover , young males are more engaged in dangerous sexual behaviors such as drinking or casual intercourse ( 3235 ) .
while drinking is not related to lower use of condom ( 29 ) , there is an increased likelihood of having probably unsafe sex after drinking ( 32 ) .
condoms are now the most important contraceptive method that young people use in their sexual experiences , owing to people 's greater knowledge about the preventive role of condoms against stis and hiv infection . in our study , only 35% of the students declared using a condom during their last sexual contact , which is lower than in other countries , though in another study on iranian students a higher rate of condom use ( 48% ) was reported ( 4 ) . not using a condom in sexual contact increases the risk of hiv and other sti transmission . in japan , 75% of students who had experienced sexual contact used a condom ( 14 ) , and in armenia , 74% of students did so ( 11 ) . thus , condom use in our sample was lower than that reported in other countries .
although iran is one of a few countries in the middle east and north africa region that provides stis / aids education for youth ( 36 , 37 ) , it seems that these education services are not efficient enough to change sexual practices .
this study clearly shows that considerable lifetime and current prevalence of sexual contact among university students , needs a tactful strategy to shift high - risk behaviors into more sensible and healthier behaviors , in order to prevent the spread of sexually - transmitted disease among the university students .
although the results of this study demonstrate that the first sexual experience occurred at higher ages , special attention must be paid to risky sexual patterns such as low condom use .
this study was the first survey on sexual and reproductive behaviors among university students in northeastern iran and revealed a noticeable prevalence of premarital sexual relationship with a low rate of condom use .
nevertheless , the proportion of various types of sexual experiences such as vaginal , penetrative , etc . was not determined .
another limitation of this study was that the reasons for not using condom and type of partners were not asked .
moreover , it is possible that students who were absent on the time of survey in comparison to students in the class might have a different sexual behavior which should be considered in generalizing the study 's results .
the authors declare that there are no conflicts of interest that could be perceived as prejudicing the impartiality of the research reported .

background : the incidence of sexually transmitted infections ( stis ) and hiv / aids is globally higher in young people .
this study evaluated the prevalence of sexual and reproductive behaviors among undergraduate students in mashhad , iran . methods : the study was conducted on 605 students in twelve non - medical faculties of a great university of mashhad .
a self - administered questionnaire was completed on demographic information , sexual contact in the lifetime and during the last three months , and age of first sex .
kaplan - meier statistic was used to calculate the mean age of initiation of sex .
a p < 0.05 was considered statistically significant . results : after exclusion of individuals over 25 years of age , among 590 students with a mean age of 20.8 ± 1.5 years included in the analysis , 71.4% were female and 85.3% were single .
prevalence of at least one sexual contact in life was 15.1% and 35.3% of single sexually experienced students reported to have sex in the last three months .
the lifetime prevalence of sexual relationship in males was significantly higher than females ( 32.9% vs. 7.6% , p < 0.001 ) .
the mean age of first sexual experience was 23.7 years with a significant difference between both sexes ( p < 0.001 ) .
in single sexually experienced students , the mean age at first sex was 17.6 ± 3.3 years , 24% started sexual activity at < 15 years , 34.3% had at least 3 partners and only 40.6% stated using condom in their last sex . conclusion : although a very small proportion of females reported premarital sex , a significant minority of male students experienced sexual and risky behaviors .
therefore , the use of educational programs on related issues to reduce the risk of stis / hiv among youth including university students seems to be a necessity . |
Democratic Senate hopeful Jason Kander is fighting back against criticism of his loyalty to the Second Amendment with a new ad depicting him assembling a rifle blindfolded.
"Sen. Blunt has been attacking me on guns. Well, in the Army, I learned how to use and respect my rifle. In Afghanistan, I volunteered to be an extra gun in a convoy of unarmored SUVs," Kander says.
"And in the State Legislature, I supported Second Amendment rights. I also believe in background checks, so the terrorists can't get their hands on one of these," he adds, holding the just-assembled rifle.
The Democrat closes with a flourish on the generic campaign ad tagline: "I approve this message, because I would like to see Sen. Blunt do this," he says, displaying the rifle and removing his blindfold.
The National Rifle Association cast Kander as someone who would vote against Second Amendment rights in an ad last week that depicted a home invasion.

A new ad released by Missouri Democrat Jason Kander shows the senatorial candidate piecing together a firearm and then challenging Sen. Roy Blunt (R-Mo.) to do the same. (Jason Kander)
Politicians have long liked to hold guns in campaign spots. They really like to fire them. Now, a candidate has introduced a new caliber of gun ad.
Democrat Jason Kander, who is challenging Sen. Roy Blunt (R-Mo.) for his seat, appears in an ad blindfolded and standing at a wooden table in a darkly lit warehouse. As he begins to speak, Kander starts piecing together a rifle. The metallic clank of each part falling into place accentuates his argument, and the beginning of the ad establishes his credentials as someone who knows how to use guns, and respects them, too. But despite his proclaiming of support for the Second Amendment, it's not an ad that's likely to make the National Rifle Association happy.
"I also believe in background checks, so that terrorists can't get their hands on one of these," Kander pivots.
Second Amendment advocates argue that background checks would limit a constitutionally guaranteed right, while background-check advocates say it's a simple matter of keeping weapons out of the hands of dangerous individuals.
Recent efforts at passing background-check legislation haven't gone very far in Congress. But the American public apparently isn't as split as the legislature; The Post reported last fall that about 85 percent of gun owners support universal background checks for purchasers.
Kander actually has a chance to take Blunt's seat; despite its GOP lean at the federal level, Missouri is ranked the 10th most likely to flip parties in The Fix's most recent Senate rankings, in part because Democrats consider Kander, a 35-year-old secretary of state and Afghanistan war veteran, a strong recruit.
If his newest ad resonates with voters, Kander will be that much closer.

How gun crazy have political ads gotten? Missouri's supposedly anti-gun candidate for US Senate just released one in which he assembles an AR-15 semi-automatic rifle while blindfolded and bragging that he'd "like to see Sen. Blunt do this." The ad, from Democrat Jason Kander, was created in response to an NRA ad endorsing his opponent, Roy Blunt, and claiming he'd vote against Second Amendment rights, the St. Louis Post-Dispatch reports. The Washington Post calls it a "new caliber of gun ad" that "may be the best ad of the election so far." While assembling the rifle, Kander—a former Army captain who served in Afghanistan—voices his support for gun ownership while saying he "believes in background checks so that terrorists can't get their hands on one of these." The ad was praised by Americans for Responsible Solutions, the gun control group founded by Gabby Giffords, the Hill reports. The group's executive director says Kander is fighting back against the "Washington gun lobby" while fighting for "the rights of responsible gun owners." A recent poll showed 85% of gun owners support universal background checks. The NRA's major beef with Kander is that he once voted against allowing concealed guns on college campuses.
pulsar radio signals probe fluctuations in the local interstellar medium @xcite .
the broad electron density fluctuation spectrum @xcite is commonly interpreted as a turbulent inertial range .
the pulsar signal width yields information about fluctuation statistics @xcite .
the width scales as @xmath1 ( @xmath2 is the distance to the source ) , a result that is incompatible with gaussian statistics @xcite .
the latter would produce a scaling of @xmath3 , while @xmath1 is recovered for levy statistics @xcite .
a levy - distributed random walk typically consists of a series of small random steps , punctuated by occasional levy flights in which there is a single large jump to a new locale . in the context of a pulsar radio signal propagating through a levy distribution of electron density fluctuations , a sea of low intensity density fluctuations would scatter the signal through a series of small angles . intermittently , as the signal traversed an intense , localized density fluctuation , it would scatter through a much larger angle .
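the contrast between gaussian steps and levy flights can be made concrete with a short monte carlo sketch ( an illustrative aside , not part of the original analysis ; the step distributions and sample sizes are arbitrary choices ) :

```python
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps = 2000, 500

# gaussian random walk: many comparable small steps
gauss_steps = rng.normal(0.0, 1.0, (n_walkers, n_steps))
gauss_disp = gauss_steps.sum(axis=1)

# levy-like walk: cauchy-distributed steps -- mostly small,
# punctuated by occasional huge "flights"
levy_steps = rng.standard_cauchy((n_walkers, n_steps))
levy_disp = levy_steps.sum(axis=1)

def kurtosis(x):
    """fourth standardized moment; 3 for a gaussian."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2

print(kurtosis(gauss_disp))  # close to 3
print(kurtosis(levy_disp))   # far above 3: heavy-tailed displacement pdf
```

the displacement pdf of the cauchy walk is dominated by the rare large jumps , which is the qualitative behavior invoked above for signal scattering through intense localized density fluctuations .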
the assertion that pulsar signals are dispersed by levy - distributed fluctuations is a statistical ansatz validated to some degree by observation .
this ansatz does not address the difficult and important question of what processes or conditions produce the statistics .
it has been suggested that levy statistics can emerge from radio signal trajectories grazing the surface of molecular clouds @xcite .
here we examine a different mechanism rooted in the turbulent cascade implied by the broad fluctuation spectrum .
the mechanism is intrinsic spatial intermittency , a process known to create non gaussian tails in the probability distribution function ( pdf ) . in navier - stokes turbulence , intrinsic intermittency takes the form of randomly dispersed , localized vortex strands , surrounded by regions of relative inactivity @xcite .
intermittency is most pronounced at small scales .
intermittency also occurs in mhd turbulence @xcite .
however , the statistical properties of electron density fluctuations in magnetic turbulence are not known . in this paper we address the fundamental and nontrivial question of whether electron density can become intermittent in the magnetic turbulence of the interstellar medium . the effect on pulse - width scaling requires that additional issues be addressed , and will be taken up later .
the question of intermittency in pulsar scintillation is twofold .
first , can intermittent electron density fluctuations in interstellar turbulence achieve the requisite intensity to change the pdf ?
to some extent this question has been answered by studies that show that passive advection and the limitations it places on electron density excitation ( as indicated , for example , by mixing length arguments ) applies only to scales larger than tens of gyroradii . at smaller scales
the electron density becomes active through kinetic alfvn wave ( kaw ) interactions with magnetic fluctuations , exciting the internal energy to equipartition with the magnetic energy @xcite .
evidence for a transition to kaw dynamics near the gyroradius scale has recently been inferred from solar wind observations @xcite .
since scintillation is dominated by small scales , the regime of kinetic alfvn interactions is appropriate for studying the intermittency potentially associated with the scaling of the pulsar signal width .
the second aspect of intermittency in the context of pulsar signals deals with how isolated structures can form against the homogenizing influence of turbulent mixing in a type of turbulence that does not involve flow . virtually all mechanisms proposed for intermittency involve flow or momentum , yet , the flow of ions in magnetic turbulence decouples from small - scale kinetic alfvn waves , with the interaction of magnetic field and density taking place against a background of unresponsive ions .
while intermittency has been widely studied in hydrodynamic turbulence @xcite and mhd @xcite , historically the emphasis has been on structure and statistics , not mechanisms .
structure studies have included efforts to visualize intermittent structures @xcite .
quantitatively , measurements of structure function scalings have been made to gauge how intermittency changes with scale @xcite .
statistical characterizations of intermittency generally postulate a non gaussian statistical ansatz , and the resultant properties are compared with measurements to determine if the ansatz is reasonable .
these approaches do not address the mechanisms that endow certain fluctuation structures either with individual longevity or collective prominence , in a statistical sense , relative to other regions in which such structures are not present @xcite .
the mechanistic approach is nascent but has already yielded significant insights into the long - standing problem of subcritical instability in plane poiseuille flow @xcite . a starting point for our considerations
are simulations of decaying kaw turbulence that showed the emergence of coherent , longlived current filaments under collisional dissipation of density @xcite . in these simulations
finite amplitude fluctuations in density and magnetic field decayed from initial gaussian distributions .
( the current , as curl of the magnetic field , was also gaussian initially . )
the distribution of current became highly non gaussian as certain current fluctuations persisted in the decay long past the nominal turbulent correlation time .
the longevity of these filaments enhanced the tail of the pdf , steadily increasing the value of the fourth order moment ( kurtosis ) significantly above its gaussian value .
while the pdf was affected by mutual interactions of filaments later in the simulation , initially the tail enhancement was dominated by the interaction of filaments with surrounding turbulence , and the lack of mixing of those filaments relative to the rapidly - decaying surrounding turbulence .
intermittency was not reported when resistivity dominated the dissipation .
while these simulations showed intermittency in kaw turbulence , non gaussian statistics was demonstrated for current fluctuations , not density .
the turbulence decayed via collisional dissipation of density ; the current had no direct damping .
it is not clear what effect this had on density structure formation within the constraints of the resolution of the simulations .
the question of intermittency in density therefore remains open .
no mechanism for the intermittency was proposed . in this paper
we will examine analytically the dynamics of structures in density and current and determine how one relates to the other .
we will use analysis tools and results developed to understand the emergence of long lived vortices in decaying 2d navier - stokes turbulence @xcite . for that problem
, two time - scale analysis showed that the vortices are coherent and long lived because strong shear flow in the outer part of the vortex suppresses ambient mixing by turbulence @xcite .
the ambient mixing would otherwise destroy the vortex in a turnover time .
this mechanism for maintaining the coherent vortex in decaying turbulence correctly predicts the observed distribution of gaussian curvature of the flow field @xcite .
we use two - time - scale analysis to describe coherent structure formation in decaying kaw turbulence .
the following are obtained .
1 ) we identify the mechanism that allows certain current filaments to escape the turbulent mixing that otherwise typifies the turbulence .
current and density are mixed by the random interaction of kinetic alfvn waves .
this process is disrupted in current filaments whose azimuthal field has an unusually large amount of transverse shear .
this creates a strong refraction of turbulent kinetic alfvn waves that localizes them to the periphery of the filament and restricts their ability to mix current and density .
2 ) we derive a shear threshold criterion based on this mechanism .
it identifies which current filaments escape mixing and become coherent , or long lived .
the criterion relates to the gaussian curvature of the magnetic field , providing a topological construct that maps the intermittency in a way analogous to the flow gaussian curvature of decaying 2d navier - stokes turbulence .
3 ) we trace the relative effects of the refractive shear mechanism on current , magnetic field , density , and flux .
the magnetic field and density have long - lived , localized fluctuation structures that coexist spatially with localized current filaments .
however , the magnetic field extends beyond the localized current . like the magnetic field of a line
current , it falls off as @xmath4 . because the density is equipartitioned with the magnetic field in kaw turbulence , a similar mantle is expected for the density .
this mantle tends to prevent the density kurtosis from rising to values greatly above 3 ; however , it is responsible for giving the pdf of density gradient a levy distribution .
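a toy superposition of filament profiles illustrates this point : a field with a @xmath4 mantle keeps a modest kurtosis , while its gradient , concentrated near the cores , develops a strongly non gaussian pdf ( the profile shape , box size , and filament count below are arbitrary illustrative choices , not taken from the simulations ) :

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)

# superpose a few localized structures: flat core of radius a,
# surrounded by a slowly decaying 1/r mantle
field = np.zeros((N, N))
a = 0.05
for _ in range(20):
    x0, y0 = rng.uniform(-0.8, 0.8, 2)
    r = np.hypot(X - x0, Y - y0)
    field += np.where(r < a, 1.0, a / np.maximum(r, 1e-9))

def kurtosis(x):
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2

gx, gy = np.gradient(field)
print(kurtosis(field))  # modest: the mantles fill the space between cores
print(kurtosis(gx))     # heavy-tailed: gradients concentrate at the cores
```

the space - filling mantles keep the field pdf close to gaussian , while the gradient , which scales as the much more localized @xmath4 -squared falloff , produces the fat - tailed statistics described in the text .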
this paper is organized as follows .
section ii presents the kinetic alfvn wave model used in this paper .
the two - time - scale analysis is introduced in sec . iii .
section iv derives the condition for strong refraction , and the resultant refractive boundary - layer structure for turbulent kaw activity in and around the coherent filament .
the turbulent mixing stresses are determined in sec . v , from which the filament and density lifetimes can be derived .
section vi discusses the gaussian curvature and spatial properties of the current and density structures .
the latter are used to infer heuristic pdfs .
conclusions are given in sec . vii .
the shear - alfvn and kinetic - alfvn physics described in the introduction is intrinsic to models of mhd augmented by electron continuity .
when there is a strong mean field , the nonlinear mhd dynamics can be represented by a reduced description @xcite , given by @xmath5 @xmath6 @xmath7 where @xmath8 and @xmath9 . in the reduced description the perturbed magnetic field is perpendicular to the mean field and can be written as @xmath10 , where @xmath11 is the direction of the mean field , and @xmath12 is the normalized parallel component of the vector potential .
the flow has zero mean and is also perpendicular to the mean field @xmath13 .
it can be expressed in terms of the electrostatic potential as @xmath14 where @xmath15 is the normalized electrostatic potential .
the normalized density fluctuation is @xmath16 , where @xmath17 is the mean density , and @xmath18 is the normalized resistivity , where @xmath19 is the spitzer resistivity .
spatial scales are normalized to @xmath20 , time is normalized to the alfvn time @xmath21 , @xmath22 is the ion acoustic velocity , @xmath23 is the alfvn velocity , and @xmath24 is the ion gyrofrequency . within their limitations ( isothermal , incompressible fluctuations ) , eqs .
( 1)-(4 ) are valid for scales both large and small compared to the gyroradius , as well as the intermediate region .
equation ( 3 ) is the electron continuity equation .
the advective nonlinearity , @xmath25 , couples electron density fluctuations to the flow .
if there is a nonuniform mean density , advection drives weak density fluctuations of amplitude @xmath26 , where @xmath27 is the scale of density fluctuations and @xmath28 is the mean density scale length .
the continuity equation also contains a compressible nonlinearity , @xmath29 , whereby compressible electron motion along magnetic field perturbations provides coupling to the magnetic field .
electrons act on the magnetic field through parallel electron pressure in ohm s law , expressed as @xmath30 in eq . ( 1 ) . the couplings of magnetic field and density are weak at scales appreciably larger than the ion gyroradius . on those scales
the advection of electron density is passive to a good approximation , and governs electron density evolution . in the region around @xmath31 , the two nonlinearities in each of eqs .
( 1)-(3 ) become comparable @xcite . for @xmath32
, @xmath30 begins to dominate @xmath33 in eq .
( 1 ) , and @xmath29 begins to dominate @xmath25 in eq . ( 3 ) .
this is a very different regime from incompressible mhd , where the magnetic field and flow actively exchange energy through shear alfvn waves . in a turbulent cascade
approaching the ion gyroradius scale from larger scales , the energy exchanged between flow and magnetic field in shear alfvn interactions diminishes relative to the energy exchanged between the electron density and the magnetic field through the compressible coupling .
consequently flow decouples from the magnetic field , increasingly evolving as a go - it - alone kolmogorov cascade , while electron density and magnetic field , interacting compressively through kinetic alfvn waves , supplant the shear alfvn waves .
once the kinetic alfvn wave coupling reaches prominence , the internal and magnetic energies become equipartitioned , @xmath34 , even if the internal energy is only a fraction of the magnetic energy at larger scales .
if there is no significant damping at the ion gyroradius scale , the large - scale shear alfvn cascade continues to gyroradius scales and beyond though kinetic alfvn waves .
the gyroradius scale at which kaw dynamics is active is order @xmath35 cm in the warm ism .
this is small relative to the scale of intermittent flow structures in molecular gas clouds , recently reported to be order @xmath36 cm @xcite .
this scale difference is crudely consistent with the high magnetic prandtl number of the warm ism .
the value pr @xmath37 allows very small scales in the ionized medium , before dissipation becomes important , relative to scales of viscous dissipation in the clouds .
the gyroradius scale of intermittent kaw structures makes direct visualization in the ism difficult . in the kaw regime
, the model can be further simplified by dropping the flow evolution .
this leaves a kaw model in which electron density and magnetic field interact against a neutralizing background of unresponsive ions , @xmath38 @xmath39 solutions of this model closely approximate those of eqs .
( 1)-(3 ) when the scales are near the gyroradius or smaller @xcite .
this model assumes isothermal fluctuations , consistent with strong parallel thermal conductivity .
equations ( 5 ) and ( 6 ) are fluid equations , hence landau - resonant @xcite and gyro - resonant dissipation , which may be important in the ism , are not modeled .
ohm s law has resistive dissipation , and density evolution has collisional diffusion .
depending on the ratio @xmath40 , either of these dissipation mechanisms can damp the energy in decaying turbulence , however , the damping occurs at small dissipative scales .
we will focus on inertial behavior at larger scales .
we assume that mean density is nearly uniform , and neglect the last term of eq .
the dispersion relation for ideal kinetic alfvn waves is determined by linearizing eqs .
( 5 ) and ( 6 ) , neglecting resistive dissipation @xmath41 , and introducing a fourier transform in space and time .
the result is @xmath42 , where @xmath43 , or @xmath44 .
if dimensional frequency and wavenumbers @xmath45 , @xmath46 , and @xmath47 are reintroduced , the dispersion relation is @xmath48 . the wave is seen to combine alfvnic propagation with perpendicular motion associated with the gyroradius scale .
the kaw eigenvector yields equal amplitudes of magnetic field and the density , @xmath49 , with a phase difference of @xmath50 . in magnetic turbulence with its hierarchy of scales ,
kinetic alfvn waves also propagate along components of the turbulent magnetic field . in the reduced description
the turbulent field is perpendicular to the mean field , hence the dispersion relation of these kinetic alfvn waves carries no @xmath51 dependence .
to illustrate , we isolate such a fluctuation from the mean - field kinetic alfvn wave by setting @xmath52 ; with this wavenumber zero , we drop the subscript from @xmath53 ; we consider a turbulent magnetic field component @xmath54 at wavenumber @xmath55 that dominates the low-@xmath56 fluctuation spectrum ; and we look at the dispersion for smaller scale fluctuations satisfying @xmath57 .
the latter conditions linearize the problem , yielding a dispersion relation for kinetic alfvn waves propagating along the turbulent field @xmath58 according to @xmath59 . reintroducing dimensions , @xmath60 .
we see that the dispersion is alfvnic , but with respect to a perturbed field component that is perpendicular to the mean field .
hence the frequency goes like @xmath61 instead of @xmath62 .
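the limiting behaviors of the mean - field dispersion relation can be checked numerically . the inline formula ( @xmath48 ) did not survive extraction ; the form used below is the standard isothermal kaw dispersion relation , which we assume matches it , since it reproduces the two limits described in the text :

```python
import numpy as np

# assumed standard isothermal kaw dispersion relation:
#   omega = k_par * v_a * sqrt(1 + (k_perp * rho_s)**2)
# alfvenic propagation at large scales, gyroradius-scale
# dispersion at small scales
def omega(k_par, k_perp, v_a=1.0, rho_s=1.0):
    return k_par * v_a * np.sqrt(1.0 + (k_perp * rho_s) ** 2)

k_par = 0.1
k_perp = np.logspace(-2, 2, 5)
print(omega(k_par, k_perp) / (k_par * 1.0))
# k_perp*rho_s << 1: shear-alfven limit, omega -> k_par * v_a
# k_perp*rho_s >> 1: dispersive limit, omega -> k_par * v_a * k_perp * rho_s
```

in the long - wavelength limit the frequency reduces to the shear - alfvn result , while for @xmath31 - scale perpendicular wavelengths the perpendicular wavenumber enters the frequency , consistent with the eigenvector discussion above .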
to understand and quantify the conditions under which a coherent current fluctuation persists for long times relative to typical fluctuations , we examine the interaction of the coherent structure with surrounding turbulence and derive its lifetime under turbulent mixing .
the interaction is described using a two - time - scale analysis , allowing evolution on disparate time scales to be tracked @xcite . the coherent structure , a current filament with accompanying magnetic field and electron density fluctuations , evolves on the slow time scale under the rapid - scale - averaged effect of turbulence . on the rapid scale
the filament is essentially stationary , creating an inhomogeneous background for the rapidly evolving turbulence .
identifying conditions that support longevity justifies the two time scale approximation a posteriori .
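the logic of the fast - time average can be illustrated with a generic two - time - scale signal ( purely schematic ; the signal below stands in for any slow structure embedded in rapid fluctuations , and all amplitudes and scales are arbitrary ) :

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 100.0, 20000)

slow = np.exp(-t / 100.0)  # coherent structure: slow decay
fast = 0.5 * np.sin(40.0 * t) + 0.1 * rng.normal(size=t.size)  # turbulence

signal = slow + fast

# average over a window long compared to the fast scale (~2*pi/40)
# but short compared to the slow scale (~100)
w = 400  # window of ~2 time units
kernel = np.ones(w) / w
recovered = np.convolve(signal, kernel, mode="same")

interior = slice(w, -w)
err = np.max(np.abs(recovered[interior] - slow[interior]))
print(err)  # small: the fast-scale average isolates the slow field
```

averaging over the rapid scale removes the fluctuating part and recovers the slowly evolving field , which is exactly the role of the laplace - transform average applied to the structure equations below .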
simulations suggest the filament is roughly circular .
if coordinates are chosen with the origin at the center of the filament , a circular filament is azimuthally symmetric , while the turbulence breaks that symmetry .
the filament current is localized , hence its current density becomes zero at some distance from the origin .
the localized current profile necessarily creates a magnetic field that is strongly inhomogeneous . on the rapid time scale over which the turbulence evolves
, this field , which is part of the coherent structure , is essentially stationary .
it acts as a secondary equilibrium field in addition to the primary equilibrium field ( which is homogeneous and directed along the @xmath11-axis ) .
turbulence , in the form of random kinetic alfvn waves , propagates along both the primary and secondary fields . because the primary field is homogeneous its effect on the turbulence is uninteresting .
however , the secondary field is strongly sheared because of the local inhomogeneity created by the structure .
strong shear refracts the turbulent kinetic alfvn waves . in the subsequent analysis
we will ignore the primary kaw propagation , which we can do by setting @xmath52 , and focus on the refraction of kaw propagation by the secondary magnetic field shear .
strong refraction will be shown to localize kinetic alfvn waves away from the heart of the filament , allowing it to escape mixing and thereby acquire the longevity to make it coherent .
with @xmath63 , we apply the separation of long and short time scales to fluctuations in @xmath64 and @xmath65 as follows : @xmath66 where the symbol @xmath67 represents either @xmath68 or @xmath69 , with @xmath70 and @xmath17 the flux function and density of the coherent structure , and @xmath71 and @xmath72 the turbulent fields of flux and density . the variables for slowly and rapidly evolving time are @xmath73 and @xmath74 .
the origin of a polar coordinate system with radial and angle variables @xmath75 and @xmath76 is placed at the center of the structure .
the structure is assumed to be azimuthally symmetric .
the turbulence evolves in the presence of the structure , hence it is necessary to specify the radial profile of @xmath70 , or more explicitly , the profile of the secondary , structure field @xmath77 . as a generic profile for localized current
we adopt a reference profile that peaks at the origin and falls monotonically to zero over a finite radius @xmath78 . for simplicity
we take the variation as quadratic , giving @xmath79 where @xmath80 . integrating the current we obtain @xmath81 for r < a , and b_{\theta}(r ) = j_0(0)a^2/(4r ) for r \geq a . these profiles are for reference .
shortly we will introduce a more general description for a filament whose current peaks at the origin and decays monotonically .
the current of the coherent filament is wholly localized within @xmath82 .
however , the magnetic field is not localized , but slowly decays as @xmath4 outside the filament .
the quantities in eqs .
( 8)-(10 ) all evolve on the slow time scale @xmath73 .
the dependence on @xmath73 is not notated because when @xmath83 appears in the turbulence equations , it is a quasi equilibrium quantity on the rapid time scale . to describe the rapid time scale evolution and its azimuthal variations we introduce a fourier - laplace transform , @xmath84 , where @xmath85 is the shift of the complex integration path of the inverse laplace transform .
the radial variation of @xmath86 creates an inhomogeneous background field for the turbulence , making fourier transformation unsuitable for the radial variable .
the laplace transform is appropriate for turbulence that decays from an initial state . to obtain equations for the slowly evolving fields @xmath70 and @xmath17 , we average the full equations over the rapid time scale @xmath74 .
this is accomplished by applying the laplace transform to the equations , and integrating over @xmath74 .
the integral selects @xmath87 as the time average , i.e. , @xmath88 .
applying this procedure , the evolution equations for the slowly evolving fields are given by eqs . ( 12 ) and ( 13 ) , @xmath89 and @xmath90 , where @xmath91 , @xmath92 , @xmath93 , and @xmath94 are understood to depend on the radial variable @xmath75 ; @xmath95 ; and @xmath96 .
the correlations @xmath97 , @xmath98 , @xmath99 , and @xmath100 , which appear in eqs .
( 12 ) and ( 13 ) , are turbulent stresses associated with random kinetic alfvn wave refraction .
their fast time averages govern the mixing ( transport ) of the coherent fields .
these stresses must be evaluated from solutions of the fast time scale equations to find the lifetime of the structure .
the evolution equations for the rapidly evolving turbulent fluctuations are given by eqs . ( 14 ) and ( 15 ) , @xmath101 and @xmath102 . we have not shown the dissipative terms in accordance with our focus on inertial scales .
the sources contain gradients of @xmath103 and @xmath104 .
these are the density and current of the coherent structure , but , unlike @xmath105 and @xmath106 , are not evaluated in the laplace transform domain .
three terms drive the evolution of @xmath107 and @xmath108 in each of these equations .
the first term describes linear kinetic alfvn wave propagation along the inhomogeneous secondary magnetic field @xmath83 of the coherent structure .
the second term is the nonlinearity , and describes turbulence of random kinetic alfvn waves .
the third term is proportional to mean - field gradients .
it is a fluctuation source via the magnetic analog of advection ( @xmath109 @xmath110 @xmath111 ) .
it yields quasilinear diffusivities for the turbulent mixing process .
for example , if the kinetic alfvn wave and nonlinear terms of eq .
( 15 ) are dropped , the solution is @xmath112 . the superscript @xmath113 indicates that , for deriving diffusivities , this density is to be substituted iteratively into the correlations of the turbulent stresses . from eq .
( 12 ) these correlations are @xmath114 and @xmath115 .
substitution of eq .
( 16 ) yields mean turbulent diffusivities for @xmath70 .
similarly , if eq .
( 14 ) is solved by dropping its kinetic alfvn wave and nonlinear terms , we obtain @xmath116 substituting this solution into the correlations @xmath117 and @xmath118 of eq .
( 13 ) , mean turbulent diffusivities are obtained for @xmath17 . off - diagonal transport ( relaxation of @xmath70 by gradient of @xmath17 )
can also be obtained by substituting eq .
( 17 ) into @xmath119 and @xmath120 .
the role of the nonlinear and kinetic alfvn wave terms omitted from eqs .
( 16 ) and ( 17 ) is to modify the time scale @xmath121 and couple the sources .
this is calculated in the next section .
the inverse of @xmath121 represents the lifetime of the correlations @xmath119 , @xmath120 , @xmath122 , and @xmath123 .
generally the nonlinear terms enhance decorrelation , increasing the effective value of @xmath121 .
if the shear in @xmath83 is strong , the kinetic alfvn wave term increases @xmath121 even further .
the role of shear in the kinetic alfvn wave terms is not explicit but should be , so that it can be varied independently of the field amplitude @xmath124 at some radial location @xmath125 . in explicitly displaying the role of shear
we note that if @xmath126 , as would be true if the current density @xmath127 were uniform , the kinetic alfvn wave term is independent of @xmath75 .
in this situation the phase fronts of kinetic alfvn waves propagating along @xmath83 are straight - line rays extending from the origin . shear in @xmath83 , occurring through nonuniformity of @xmath127 , distorts the phase fronts , as shown in fig .
1 . distortion occurs if @xmath83 has a variation that is not linear . from eq .
( 9 ) we note that the variation of @xmath83 for our chosen structure profile is linear for @xmath128 , with variations developing as @xmath129 .
therefore , it makes sense to quantify the shear by expanding @xmath130 in a taylor series about some point of interest .
obviously , the shear is zero at the origin , and becomes sizable as @xmath129 .
expanding about a reference point @xmath125 away from the origin , @xmath131 if @xmath86 varies smoothly , as is the case for a monotonically decreasing current profile , we can truncate the expansion as indicated in eq .
( 18 ) and use that expression as a general current profile .
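the reference profile can be checked numerically . assuming the interior current is the quadratic profile j_0(0)(1 - r^2/a^2 ) implied by the text , integration gives an interior field b_theta = j_0(0)(r/2 - r^3/(4a^2 ) ) that matches the quoted exterior field j_0(0)a^2/(4r ) at r = a ; the interior formula is our reconstruction , since only the exterior expression survived extraction :

```python
import numpy as np

j0, a = 1.0, 1.0

def b_theta(r):
    # reconstructed interior field from j(r) = j0*(1 - r**2/a**2), r < a;
    # exterior field quoted in the text: j0*a**2/(4*r), r >= a
    return np.where(r < a,
                    j0 * (r / 2.0 - r**3 / (4.0 * a**2)),
                    j0 * a**2 / (4.0 * r))

r = np.linspace(1e-3, 2.0, 4000)

# continuity of b_theta at the filament edge r = a
print(b_theta(np.array([a - 1e-9]))[0], b_theta(np.array([a + 1e-9]))[0])

# shear of the angular propagation rate b_theta/r: zero at the origin,
# growing in magnitude toward r = a (analytically -j0*r/(2*a**2) inside)
shear = np.gradient(b_theta(r) / r, r)
print(shear[0])
print(shear[np.searchsorted(r, 0.9 * a)])
```

the numerical shear of b_theta/r vanishes at the origin and grows linearly toward the filament edge , consistent with the statement that the shear is zero at the origin and becomes sizable as r approaches a .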
looking at the kinetic alfvn terms of eqs .
( 14 ) and ( 15 ) , the first term will produce a uniform frequency that doppler shifts @xmath121 by the amount @xmath132 .
the second term will describe kaw propagation in an inhomogeneous medium with its attendant refraction .
we rewrite eq . ( 15 ) , substituting the expansion of eq . ( 18 ) , yielding @xmath133 , where @xmath134 and @xmath135 . when @xmath136 , evaluated at @xmath125 , is large , the shear in @xmath83 refracts turbulent kaw activity .
the process can be described using asymptotic analysis . in the limit
that @xmath136|_{r_0}$ ] becomes large asymptotically , the higher derivative nonlinear term is unable to remain in the dominant asymptotic balance unless the solution develops a small scale boundary layer structure .
the layer is a singular structure .
its width must become smaller as @xmath136|_{r_0}$ ] becomes larger , otherwise the highest order derivative drops out of the balance and the equation changes order .
this is the only viable asymptotic balance for @xmath136|_{r_0}\rightarrow\infty$ ] .
the boundary layer width @xmath137 is readily estimated from dimensional analysis by noting that @xmath138 , @xmath139 , and treating @xmath136|_{r_0}\equiv j'$ ] as the diverging asymptotic parameter .
the asymptotic balance is @xmath140 yielding @xmath141 the length @xmath137 is the scale of fluctuation variation within the coherent current filament . in the simulations ,
the filaments were identified as regions of strong , localized , symmetric current surrounded by turbulent fluctuations .
consequently , @xmath137 represents a fluctuation penetration depth into the structure .
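the scaling of the penetration depth with the shear strength can be illustrated with a linear model problem in which the refractive frequency balances a generic highest - derivative term with coefficient d ( both the scalar model equation and d are stand - ins for the full nonlinear balance , not the closure itself ) :

```python
import numpy as np

# model boundary-layer problem: schematically  i*Jp*u = d*u'' , i.e. the
# refractive frequency balances the highest-derivative term.  the decaying
# solution is u = exp(-sqrt(i*Jp/d)*x), whose amplitude e-folds over a
# depth sqrt(2*d/Jp), so the layer narrows as Jp grows.
def penetration_depth(Jp, d=1.0, L=10.0, npts=200001):
    x = np.linspace(0.0, L, npts)
    k = np.sqrt(1j * Jp / d)          # principal root, Re(k) > 0
    u = np.exp(-k * x)                # decaying, oscillating layer solution
    return x[np.argmax(np.abs(u) < np.exp(-1.0))]  # e-folding depth

delta_1 = penetration_depth(1.0)      # ~ sqrt(2) for d = 1
delta_4 = penetration_depth(4.0)      # quadrupling Jp halves the depth
```

the measured depths scale as the inverse square root of the shear parameter , the same scaling obtained from the dimensional balance above .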
we derived the layer width @xmath137 from linear and nonlinear kinetic alfvn wave terms operating on flux in the density equation .
identical operators apply to @xmath69 in the flux equation .
hence @xmath137 is the width of a single layer pertaining to both the density and current fluctuations of refracted kaw turbulence .
this structure is shown schematically in fig . 2 .
the above analysis indicates a single layer width and gives its value .
it does not give the functional variation of current and density fluctuations within the layer , either relative or absolute .
in the simpler case of intermittency in decaying 2d navier - stokes turbulence , statistical closure theory was used to derive spatial functions describing the inhomogeneity of turbulence in the presence of a coherent vortex @xcite . there , coherent vortices suppress turbulent penetration via strong shear flow , analogous to the role of refraction here . for the kaw system the closure equations are much more complicated and not amenable to the wkb analysis that gave the functional form of the boundary layer in the navier - stokes case .
however , the closure remains useful .
it provides a mathematical platform from which to calculate all aspects of the interaction of filament and turbulence , including the accelerated decay of turbulence within the boundary layer , the spatial characteristics of the layer , and the amplitudes of @xmath69 and @xmath68 .
these are necessary for calculating turbulent mixing rates of the filament current and density .
closures can be applied to intermittent turbulence even though they rely on gaussian statistics .
the filaments , which make the system non gaussian as a whole , are quasi stationary on the short time scale of turbulent evolution .
therefore , on that scale their only effect is to make the turbulence inhomogeneous .
the fast time scale statistics are a property of fast time scale nonlinearity , and remain gaussian .
the closure equations are @xmath142 @xmath143 & -&d_{nn}^{(1)}(m,\gamma)\frac{\partial^2}{\partial r^2}\nabla^2\tilde{n}_{m,\gamma}-d_{nn}^{(2)}(m,\gamma)\frac{\partial^2}{\partial r^2}\tilde{n}_{m,\gamma}=\frac{im}{r}\tilde{\psi}_{m,\gamma}\frac{d}{dr}j_0(r).\end{aligned}\ ] ] this system is complex .
the six diffusivities all contribute to the lowest order as @xmath144 .
( the diffusion coefficients and derivatives are not of the same order , but their product is . )
moreover there is varied dependence on fluctuation correlations , and there are complex turbulent decorrelation functions . for example @xmath145 where @xmath146k_1(m,\gamma ) \nonumber \\ [ 2 mm ] & + & \hat{\gamma}-d_{nn}^{(1)}(m,\gamma)\frac{\partial^2}{\partial r^2}\nabla_{m}^2-d_{nn}^{(2)}(m,\gamma)\frac{\partial^2}{\partial r^2}\bigg\},\end{aligned}\ ] ] @xmath147 and @xmath148 is the decorrelation rate for fluctuations at @xmath149 driving @xmath121 .
expressions for the other diffusivities are given in the appendix .
we now calculate turbulence properties from the closure equations .
the time scale of turbulent evolution in the filament is given by @xmath150 .
as @xmath144 this is dominated by the refraction .
hence the first terms of eq .
( 23 ) and eq .
( 24 ) must balance the second terms , which in turn , must balance the remaining nonlinear terms . if eqs .
( 23 ) and ( 24 ) are solved jointly retaining the first two terms , @xmath151 as @xmath144 . because @xmath152 and @xmath153 , @xmath154 this time scale is purely imaginary , _ i.e. _ , oscillatory , when derived from a balance with only the linear alfvn terms .
when the diffusivities are included , it is complex .
this rapid decay suppresses turbulence in the filament relative to levels outside the filament .
the width @xmath137 , as derived previously , comes from independent balances in the equations for @xmath69 and @xmath68 , and does not account for the kinetic alfvn wave dynamics that links @xmath69 and @xmath68 . to do that , eqs .
( 23 ) and ( 24 ) are combined into a single equation by operating on eq .
( 24 ) with @xmath155 and substituting from eq . ( 23 ) .
the resulting equation is eighth order in the radial derivative , and unsuitable for wkb analysis .
however , we can determine the radial scale as @xmath144 by dimensional analysis , taking @xmath156 and solving algebraically .
this is the same procedure used to obtain eq . ( 22 ) .
formally treating @xmath137 as a small parameter , we account for the fact that the diffusion coefficients have different scalings with respect to @xmath137 , based on different numbers of radial derivatives operating on quantities within the coefficients .
arbitrarily taking @xmath157 as a reference diffusion coefficient , the definitions in the appendix show that if we define @xmath158 , @xmath159 , @xmath160 , @xmath161 , @xmath162 , and @xmath163 , then the lower case diffusivities @xmath164 , @xmath165 , @xmath166 , @xmath167 , @xmath168 , and @xmath169 are all of the same order .
we formally order the large parameter @xmath170 by taking @xmath171 and @xmath172 , where the controlling asymptotic limit becomes @xmath173 .
the relationship between @xmath174 and @xmath137 will be derived by requiring that the asymptotic balance be consistent .
after all leading order expressions are derived , @xmath174 is set equal to 1 . substituting these relations into eqs .
( 23 ) and ( 24 ) and solving , we obtain : @xmath175&-&\frac{\epsilon^2}{\delta r^2}\big[imj'\big(d_{n \psi}^{(1)}+d_{n \psi}^{(2)}+d_{\psi n}\big)-\gamma\big(d_{nn}^{(1)}+d_{nn}^{(2 ) } \nonumber \\ [ 2 mm ] -d_{\psi\psi}\big)\big]+m^2j'^2+\hat{\gamma}^2\bigg\}\tilde{\psi}_{m,\gamma}^{(i)}=\bigg[\hat{\gamma}&-&\frac{d_{nn}^{(1)}+d_{nn}^{(2)}}{\delta r^2}\bigg]s_{\psi}+\big[imj'+d_{\psi n}\big]\delta r s_n,\end{aligned}\ ] ] where @xmath176 are the turbulence sources described in the previous section .
the left hand side is a dimensional representation of a green function operator that governs the response to the sources .
the spatial response decays inward from the edge of the filament where both the sources and the shear in @xmath83 are strong .
consequently , the field @xmath177 appearing in the sources @xmath178 and @xmath179 is understood to be characteristic of the filament edge , and therefore ambient turbulence , while @xmath180 is a response accounting for the refractive decay inside the filament .
the scale length of the response @xmath137 is found by solving the homogeneous problem , _
i.e. _ , by setting the left hand side equal to zero and solving for @xmath137 . in the limit @xmath173
, turbulence remains in the dynamics and contributes to @xmath137 only if @xmath181 .
otherwise , the dynamics is laminar .
the solution for @xmath137 is @xmath182^{1/2}\ ] ] where @xmath183 , @xmath184 , and @xmath185 .
this is the alfvnic generalization of eq . ( 22 ) .
it is more complicated but gives identical scaling .
recalling that all the lower case diffusivities have the same scaling and replacing them with a generic @xmath186 , the solution scales as @xmath187 . setting @xmath188 , @xmath189 the generic diffusivity @xmath186 can be evaluated from the definitions given for specific diffusivities in the appendix .
if the turbulent decorrelation functions are evaluated in a strong turbulence regime ( turbulence time scales @xmath190 linear time scales ) , @xmath191 , reproducing eq .
( 22 ) .
although the structure function has not been solved ( just its radial scale ) , its form in simpler cases illustrates the rapid decay of turbulence across the boundary layer , from the edge inward .
where wkb analysis is possible , the leading order spatial green function has the form @xmath192 where @xmath193 is a complex constant with positive real part , @xmath194 ( @xmath195 ) is the smaller ( larger ) of @xmath75 and @xmath196 , and @xmath197 is a positive constant determined by the order of the homogeneous operator . here
our dimensional solution of the problem , carried out by inverting eq .
( 29 ) , captures the radial integral over a structure function like that of eq .
( 34 ) . solving eq .
( 29 ) we obtain @xmath198s_{\psi}+\big[imj'+d_{\psi n}\big]\delta r s_n \bigg\ } \sim \hat{\gamma}^{-1}\frac{m}{a}\hat{\psi}(r_0)[n_0'+\delta
r j_0'].\ ] ] the temporal and spatial response to turbulent sources @xmath179 and @xmath178 at a point @xmath125 in the filament edge appears here as a structure factor of magnitude @xmath199 multiplying the source .
the product of source and response yields the value of @xmath200 inside the boundary layer . beyond @xmath137
the response decays with an envelope like that of eq . ( 34 ) .
the part of the source proportional to @xmath201 is essentially larger than the part proportional to @xmath202 by o(@xmath203 ) .
however if @xmath200 is substituted into the correlations of the equation for @xmath70 [ eq .
( 12 ) ] , the @xmath202-part yields the diagonal terms .
the density is given by the dimensional representation of eq .
( 23 ) , @xmath204^{-1}\big[s_{\psi}+\frac{d_{\psi\psi}}{\partial r^2}\tilde{\psi}_{m,\gamma}-\hat{\gamma}\tilde{\psi}_{m,\gamma}\big]\sim\frac{\tilde{\psi}_{m,\gamma}(r_0)\big[n_0'+\delta r j_0'\big]}{aj'\delta r}.\ ] ] the layer width @xmath137 is both the embodiment of the strong refraction of turbulent kaw activity in the filament by the large magnetic field shear @xmath170 , and a condition for the refraction to be sufficiently strong to modify the scales of turbulence in the filament relative to those outside it . with @xmath78 the scale of typical fluctuations of interest , the refraction is strong when @xmath205 , or @xmath206 as a condition for strong refraction it makes sense to use values for @xmath186 or @xmath68 that are typical of the turbulence in regions @xmath207 where there are no intense filaments . inside a strong filament the reduction of turbulent kaw activity represented by the structure factor @xmath199 makes @xmath208 .
accordingly , the boundary layer width @xmath209 is smaller than @xmath210^{1/2}$ ] .
the long time evolution of the filament fields @xmath211 and @xmath212 is governed by the mixing stresses of eqs .
( 12 ) and ( 13 ) .
these can now be evaluated using the boundary layer responses @xmath180 and @xmath213 derived in the previous section . because these fields are confined to the layer , the time scale @xmath73 is a mixing time across the boundary layer . for the diagonal stress components ,
the mixing is diffusive .
the asymptotic behavior of the boundary layer yields the following dimensional equivalents : @xmath138 and @xmath214 , as before ; @xmath215 ; @xmath216 ; @xmath217 ; and @xmath218 .
( the latter two expressions are inverse laplace transform relations . ) with these conventions @xmath219 \nonumber \\ & \approx&\sum_m\frac{1}{a^2j'^2}\big[\langle\tilde{b}_{\theta -m}\tilde{n}_m\rangle |_{t=0}+\langle\tilde{b}_{\theta m}^2\rangle|_{t=0}\big]\big[n_0'+\delta rj_0'\big],\end{aligned}\ ] ] where @xmath220 .
the factor @xmath221 in the right - most form makes @xmath222 large , _
i.e. _ , mixing across the boundary layer is impeded by refraction .
the turbulent fields in these expressions are filament edge fields , _
i.e. _ , they are characteristic of ambient turbulence . the mixing time for current can be obtained by operating with @xmath223 on both sides of eq .
( 38 ) . on the left hand side @xmath224 , while on the right hand side , @xmath225 .
consequently , @xmath226\big[n_0'+\delta rj_0'\big ] .
\label{current0it}\ ] ] this time scale is much shorter because current , as a second derivative of @xmath68 , has finer scale structure . if the filament is alfvnic , _
i.e. _ , @xmath227 , the mixing time is dominated by the part of eq .
( 39 ) that is proportional to @xmath201 .
this represents off - diagonal transport of current driven by density gradient .
the diagonal transport ( driven by @xmath202 ) is current diffusion , and is slower by a factor @xmath203 . in the discussions that follow we will deal with the current diffusion time scale ,
although similar behavior will hold for the off diagonal transport .
the mixing time for density is @xmath228 here the dominant component ( proportional to @xmath201 ) is diffusive .
we evaluate these boundary layer mixing times relative to the two turbulent time scales of the system .
these are @xmath229 , the turbulent decay time in the layer , and @xmath230 , a turbulent alfvn time outside the filament . to simplify expressions
we note that alfvnic equipartition implies that @xmath231 .
we also note that @xmath232 is in the laplace transform domain , whereas @xmath201 and @xmath202 are in the time domain . under the inverse laplace transform , @xmath233 .
the scale of the filament is @xmath78 , so @xmath234 .
similarly , @xmath235 , @xmath236 , and @xmath237 .
we assume the filament is alfvnic , making @xmath227 . with these relations ,
@xmath238 the last two equalities make use of eq .
( 37 ) , and the fact that @xmath209 and the mixing fluctuations are referenced to ambient turbulence levels for which @xmath239 . equation ( 41 ) indicates that turbulent diffusion times across the mixing layer @xmath137 for both @xmath17 and @xmath127 are comparable , and are much longer than the decay times of turbulence in the layer .
the strong shear limit , previously indicated by @xmath240 , is here replaced by @xmath241 , because with a fixed radius @xmath78 , strong shear means large @xmath83 . in terms of @xmath242 , @xmath243 indicating that these diffusion times are longer than the alfvnic time of the ambient turbulence .
either of the above expressions indicates that the actual lifetime of a filament ( as opposed to the turbulent diffusion time across the edge layer ) is virtually unbounded , provided direct damping due to resistivity or collisional diffusion is negligible . during a filament lifetime turbulence
must diffuse across the scale @xmath78 , many @xmath137 - layer widths from the filament edge to its center .
however , in just a layer time @xmath244 or @xmath245 , the turbulence is reduced by many factors of @xmath246 , while the filament density or current inside of the layer remains untouched .
consequently the width of the mixing layer at the edge of the filament continuously decreases even as the time to mix across it increases .
the result is that mixing never extends to the filament core .
this analysis shows that structures identified in the simulations as current filaments correlate spatially with a coherent density field , provided the density component is not destroyed by strong collisional diffusion .
the above analysis treats the current of the filament as localized .
the current is maximum at @xmath247 and becomes zero at @xmath82 .
this makes the shear of the filament magnetic field largest in the filament edge , and zero in the center .
if it is true that the shear of this field refracts turbulent kaw activity as described above , turbulence is suppressed where the shear is large .
these properties are incorporated in the spatial variation of a single quantity known as the gaussian curvature @xcite .
the gaussian curvature is a property of vector fields that quantifies the difference between shear stresses and rotational behavior . in rectilinear coordinates the gaussian curvature @xmath248 of a vector field * a*@xmath249 is @xcite @xmath250 ^ 2+\big[\frac{\partial a_y}{\partial x}+\frac{\partial a_x}{\partial y}\big]^2-\big[\frac{\partial a_y}{\partial x}-\frac{\partial a_x}{\partial y}\big]^2.\ ] ] for the total magnetic field in our cylindrical system this can be written @xmath251 ^ 2+\big[r\frac{d}{dr}\big(\frac{b_{\theta}+\tilde{b}_{\theta}}{r}\big)+\frac{1}{r}\frac{\partial}{\partial \theta}\tilde{b}_r\big]^2-\big[j_0+\tilde{j}\big]^2.\ ] ] inside the filament , turbulence is suppressed , and @xmath248 is dominated by the filament field components @xmath83 and @xmath127 . near the center , @xmath127 is maximum and @xmath252 vanishes , making @xmath248 negative . toward the filament edge ,
@xmath170 becomes maximum as @xmath127 goes to zero , making @xmath248 positive . outside the filament @xmath248
is governed by @xmath253 , @xmath254 , and @xmath255 .
these components must be roughly in balance
. if they are not , the conditions for forming a coherent filament are repeated , and a structure should be present .
therefore , in regions where there are coherent filaments , the gaussian curvature should have a strongly negative core surrounded by a strongly positive edge .
where there are no coherent structures the gaussian curvature should be small . if this property is observed in simulations , it confirms the hypothesis that shear in the filament field refracts turbulent kaw activity in such a way as to suppress turbulent mixing of the structure .
we note that the negative - core / positive - edge structure is predicted for current filaments of either sign , positive or negative
. this type of gaussian curvature structure has been observed in recent simulations @xcite .
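the predicted sign structure can be checked by evaluating the rectilinear formula of eq . ( 43 ) for a model filament field ; the gaussian current profile below is an illustrative assumption :

```python
import numpy as np

# evaluate the gaussian curvature of eq. (43) for the in-plane field of an
# isolated filament with illustrative current j0(r) = 2*exp(-r**2), so that
# b_theta(r) = (1 - exp(-r**2))/r.
def b_theta(r):
    return np.where(r > 1e-12, (1.0 - np.exp(-r**2)) / np.maximum(r, 1e-12), 0.0)

def curvature(x, y, h=1e-4):
    # field components: a_x = -b_theta*y/r, a_y = b_theta*x/r
    def ax(x, y):
        r = np.hypot(x, y)
        return -b_theta(r) * y / np.maximum(r, 1e-12)
    def ay(x, y):
        r = np.hypot(x, y)
        return b_theta(r) * x / np.maximum(r, 1e-12)
    # centered finite differences for the partial derivatives in eq. (43)
    dax_dx = (ax(x + h, y) - ax(x - h, y)) / (2 * h)
    dax_dy = (ax(x, y + h) - ax(x, y - h)) / (2 * h)
    day_dx = (ay(x + h, y) - ay(x - h, y)) / (2 * h)
    day_dy = (ay(x, y + h) - ay(x, y - h)) / (2 * h)
    return ((dax_dx - day_dy)**2 + (day_dx + dax_dy)**2
            - (day_dx - dax_dy)**2)

k_core = curvature(0.3, 0.0)   # near the current maximum: rotation dominates
k_edge = curvature(2.0, 0.0)   # in the sheared edge: shear stress dominates
```

the computed curvature is negative in the core and positive in the edge , the signature expected of a coherent filament .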
if the current filaments are well separated , their slow evolution relative to the decaying turbulence that surrounds them leads to a highly non gaussian pdf . assuming an initial pdf that is gaussian with variance @xmath256 , @xmath257,\ ] ] it is possible to model subsequent evolution on the basis of the time scales derived previously and the condition for strong refraction , eq . ( 37 ) .
this condition stipulates that structures form where refraction is large , _
i.e. _ , where @xmath258 .
since @xmath259 and @xmath260 , structures occur for @xmath261 , where @xmath193 is the smallest numerical factor above unity to guarantee strong refraction and mixing suppression .
given the latter , filaments reside on the tail of the pdf with high @xmath262 and low probability @xmath263 .
this probability is equal to the filament packing fraction , _
i.e. _ , the fraction of 2d space occupied by current filaments .
if , for simplicity , we assume that all filaments are of radius @xmath78 , the fraction of 2d space they occupy is @xmath264 , where @xmath265 is the mean distance between filaments .
therefore , @xmath266 where we assume that @xmath267 is an even function .
this expression gives the packing fraction as a function of the critical current @xmath268 for filament formation .
it is now straightforward to construct a heuristic model for the evolution from the initial distribution .
the model applies for times that are larger than the turbulent alfvn time , but shorter than the mean time between filament mergers .
( once filaments begin merging , their number and probability begin decreasing . )
prior to that time the filament part of the distribution with @xmath269 is essentially unchanged , apart from the minor effects of slow erosion at the edge of the filaments .
the probability that a fluctuation is not a filament also remains fixed , but these fluctuations decay in time .
this means that the variance decreases while the probability remains fixed .
the rate of decay is the turbulent alfvn time @xmath242 .
therefore the distribution can be written @xmath270}\bigg ] & \hspace{1.0 cm } & \textrm { for } j
< j_c \nonumber \\ [ 0.2 in ] p(j , t)=\frac{1}{\sqrt{2\pi}\langle j_{\sigma}^2\rangle ^{1/2}}\exp\bigg[\frac{-j^2}{2\langle j_{\sigma}^2\rangle}\bigg ] & \hspace{1.8 cm } & \textrm { for } j\geq j_c\end{aligned}\ ] ] where @xmath271 remains the initial variance , and @xmath272 is a time - dependent normalization constant that maintains @xmath273 at its initial value , _
i.e. _ , @xmath274}{\int_0^{j_c}dj\exp\big[-j^2\exp[t/\tau_a]/2\langle j_{\sigma}^2 \rangle\big]}.\ ] ] the distribution @xmath275 becomes highly non gaussian as @xmath276 because one part of the distribution ( for @xmath277 ) collapses onto the @xmath278 axis and becomes a delta function @xmath279 , while the other part remains fixed .
a simple measure of the deviation from a gaussian distribution is the kurtosis ,
@xmath280 ^ 2}.\ ] ] the evolving kurtosis can be calculated directly from eq . ( 47 ) .
while the exact expression is not difficult to obtain , its asymptote is more revealing .
the kurtosis diverges from the initial gaussian value of 3 as the contribution from turbulent kinetic alfvn waves ( @xmath277 ) decays and collapses to @xmath279 .
after a few alfvn times the kurtosis is dominated by the part with @xmath269 , which , because it is stationary , represents the time asymptotic value for @xmath281 .
the time @xmath282 is the mean time to the first filament mergers .
the time asymptotic kurtosis is @xmath283 ^ 2}=\frac{3}{2}\big(\frac{l}{a}\big)^2\big[1+\frac{\langle j_{\sigma}^2\rangle}{j_c^2}+\textrm{o}\big(\frac{\langle j_{\sigma}^2\rangle^{3/2}}{j_c^3}\big)\big].\ ] ] in writing this expression , the left hand side of eq .
( 46 ) has been expanded for @xmath284 to yield @xmath285 $ ] .
the time - asymptotic kurtosis is much greater than the initial gaussian value of 3 and is characterized by the initial value of the inverse packing fraction .
once filament mergers begin , the inverse packing fraction increases above the initial value @xmath286 . if @xmath287 ^ 2 $ ] is the inverse packing fraction for @xmath288 , the above analysis suggests that the kurtosis will continue increasing as @xmath289 ^ 2 $ ] for late times .
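the late - time kurtosis of the model distribution can be checked numerically : the sub - critical part is taken as fully collapsed to a delta function at j = 0 , and the frozen gaussian tail beyond j_c supplies all the moments ( which are exact here , by integration by parts ) . the o(1) prefactor relating kurtosis and inverse packing fraction is not asserted by this sketch :

```python
import math

# time-asymptotic form of the heuristic pdf: the turbulent part (|j| < j_c)
# has collapsed to a delta at j = 0, while the filament tail |j| >= j_c
# keeps its initial gaussian shape with variance sigma**2.
def tail_moments(j_c, sigma=1.0):
    g = math.exp(-j_c**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)
    t0 = 0.5 * math.erfc(j_c / (sigma * math.sqrt(2)))     # int_{j_c}^inf g dj
    t2 = sigma**2 * (j_c * g + t0)                         # int j**2 g  (by parts)
    t4 = sigma**2 * j_c**3 * g + 3 * sigma**2 * t2         # int j**4 g  (by parts)
    return 2 * t0, 2 * t2, 2 * t4                          # two-sided moments

pf, m2, m4 = tail_moments(3.0)   # packing fraction and moments for j_c = 3 sigma
kurtosis = m4 / m2**2            # far above the gaussian value of 3
```

for j_c = 3 sigma the product kurtosis * pf is close to unity , i.e. the kurtosis scales as the inverse packing fraction , consistent with the trend of eq . ( 50 ) up to an o(1) factor .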
we now consider the distribution of density .
as shown in the previous section , the density present in the current filament also has suppressed mixing and is therefore coherent , or long lived .
however , it is not spatially intermittent to the same degree as the current .
alfvnic dynamics indicate that @xmath290 , while ampere s law stipulates that the magnetic field of the filament extends into the region @xmath207 , falling off as @xmath4 .
hence the density associated with filaments also is expected to fall off as @xmath4 for @xmath207 .
this spatially extended structure makes density less isolated than current .
it produces higher probabilities for low values of density than those of decaying turbulence .
this will yield a kurtosis closer to the gaussian value of 3 than the kurtosis of the current .
however , the distribution of low - level density associated with the structure likely will not be gaussian .
it is ultimately the distribution that matters for the scattering of _ rf_-pulsar signals . to construct the density pdf we seek the mapping of density onto the spatial area it occupies .
we obtain this mapping for the filament density , assuming that the density of turbulence is low , and has effectively collapsed onto @xmath291 after a few @xmath242 , just as the current .
the density for @xmath207 goes as @xmath292 , where @xmath17 is the value of the density at @xmath82 . as shown in fig .
3 , the area occupied for a given density is @xmath293 .
this area is the probability when properly normalized , hence , @xmath294 writing @xmath295 in terms of @xmath296 using @xmath292 , @xmath297 , where @xmath298 is the normalization constant chosen so that the probability integrated over the whole filament with its @xmath4-mantle equals the packing fraction , or probability of finding the filament in some sample area . with the long , slowly decaying tail of @xmath299
it is necessary to impose a cutoff to keep the pdf integrable .
the cutoff , which we will label @xmath300 , corresponds to the low level of decaying turbulence , but otherwise need not be specified .
consequently , the normalization is determined by @xmath301 2c_n\int_{n_c}^{n_0}\frac{dn}{n^3}&=&1 \hspace{1.7 cm } \textrm{for } r_c\geq l , \end{aligned}\ ] ] where @xmath302 , the radius at which @xmath303 , is @xmath304 .
the first of the two possibilities in eq .
( 52 ) allows for a cutoff radius that is smaller than the mean distance between structures ( of radius @xmath302 ) , yielding a probability that is less than unity .
if the cutoff radius is equal to or larger than the mean separation , then the structures are space filling and the probability is unity . solving for @xmath298 , the normalized density pdf is @xmath305 p(n)=\frac{n_0 ^ 2n_c^2}{(n_0 ^ 2-n_c^2)n^3 } & \hspace{1.2cm}&r_c\geq l\hspace{0.5cm}\textrm{or}\hspace{0.5cm}\frac{a}{l}\geq\frac{n_c}{n_0}\end{aligned}\ ] ] this distribution is defined for @xmath306 .
it captures only the contribution of filaments , and ignores the density inside @xmath82 , which makes a small contribution to the pdf .
this distribution is certainly non gaussian , because it has a tail that decays slowly as @xmath299 .
however , depending on the length of the tail , which is set by @xmath300 and @xmath17 , the distribution may or may not deviate from a gaussian in a significant way over @xmath306 .
this is quantified by the kurtosis , @xmath307 ^ 2}.\ ] ] substituting from eq .
( 53 ) , we find that @xmath308}{\big[\ln\big(n_0/n_c\big)\big]^2 } , & \hspace{1 cm } & r_c < l\hspace{.5 cm}\textrm{or}\hspace{.5 cm}\frac{a}{l}<\frac{n_c}{n_0 } , \\ [ 0.2 in ] \kappa(n_0,n_c)=\frac{3}{4}\frac{n_0 ^ 2}{n_c^2}\frac{\big[1 - 2n_c^2/n_0 ^ 2+n_c^4/n_0 ^ 4\big]}{\big[\ln\big(n_0/n_c\big)\big]^2 } , & \hspace{1 cm } & r_c\geq l\hspace{.5 cm}\textrm{or}\hspace{.5 cm}\frac{a}{l}\geq\frac{n_c}{n_0}.\end{aligned}\ ] ] these expressions are smaller than the current kurtosis by a factor @xmath309 ^ 2 $ ] .
unless @xmath310 is quite large , the kurtosis may not rise significantly above 3 .
this is particularly true in simulations with limited resolution where dissipation will affect the density , either directly through a collisional diffusion , or indirectly by resistive diffusion of current filaments .
kurtosis increases if @xmath300 decreases .
however , while @xmath300 is tied to the decreasing turbulence level , regeneration of the turbulence by the @xmath4 mantle may prevent @xmath300 from becoming very small .
nonetheless , mergers of filaments will decrease the packing fraction . even if the density is space filling initially and satisfies eq .
( 56 ) , the mean filament separation will increase above @xmath302 at some point , and the kurtosis will be given by eq . ( 55 ) .
then as the inverse packing fraction increases above @xmath311 , the kurtosis will rise .
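the growth of the density kurtosis with the tail length can be checked by monte carlo over the area mapping itself : positions are sampled uniformly over the @xmath4 - mantle of a single filament and mapped to density . the absolute prefactors from this single - filament normalization need not match eqs . ( 55 ) and ( 56 ) exactly ; the point is the growth with n0 / n_c and the modest values for short tails :

```python
import numpy as np

rng = np.random.default_rng(0)

def density_kurtosis(ratio, nsamp=2_000_000, a=1.0, n0=1.0):
    # sample positions uniformly over the 1/r mantle a <= r <= r_c,
    # where r_c = a*n0/n_c is set by the turbulence-level cutoff n_c.
    r_c = a * ratio                       # ratio = n0/n_c
    r = np.sqrt(rng.uniform(a**2, r_c**2, nsamp))  # uniform in area
    n = n0 * a / r                        # 1/r density mantle of the filament
    return np.mean(n**4) / np.mean(n**2)**2

k_small = density_kurtosis(3.0)     # short tail: kurtosis stays near gaussian
k_large = density_kurtosis(100.0)   # long tail: strongly non gaussian
```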
the @xmath299 falloff of the density pdf has intriguing implications for _ rf _ scattering of pulsar signals . noting that the scattering is produced by gradients of density , the extended density structure for @xmath207 yields @xmath312 .
we can construct the pdf for @xmath313 following the procedure used for the pdf of @xmath69 .
writing @xmath295 in terms of @xmath314 using @xmath315 , we recover @xmath316 where @xmath317 is a constant .
this is a levy distribution , the type of distribution inferred in the scaling of pulsar signals @xcite .
further exploration of the implications of these results to _ rf _ scattering of pulsar signals remains an important question for future work .
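the levy tail of the gradient distribution can also be checked by monte carlo over the same area mapping : uniform - in - area sampling of the @xmath4 - mantle , mapped to the gradient magnitude , should give a survival function falling off with log - log slope near -1 , i.e. a pdf tail near g^(-2) . the cutoff radius and amplitudes below are illustrative assumptions :

```python
import numpy as np

rng = np.random.default_rng(1)

# sample positions uniformly over the mantle a <= r <= r_c of a filament and
# map to the gradient magnitude g = |dn/dr| = n0*a/r**2 ; uniform-in-area
# sampling then gives p(g) ~ g**(-2), a levy-type tail.
a, r_c, n0, nsamp = 1.0, 1000.0, 1.0, 4_000_000
r = np.sqrt(rng.uniform(a**2, r_c**2, nsamp))
g = n0 * a / r**2

# empirical survival function, fit on a tail decade in log-log coordinates;
# p(g) ~ g**(-2) implies P(G > g) ~ g**(-1), i.e. slope ~ -1.
gs = np.sort(g)
surv = 1.0 - np.arange(1, nsamp + 1) / nsamp
mask = (gs > 1e-4) & (gs < 1e-2)          # well inside the tail
slope = np.polyfit(np.log(gs[mask]),
                   np.log(np.maximum(surv[mask], 1e-12)), 1)[0]
```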
we have examined the formation of coherent structures in decaying kinetic alfvn wave turbulence to determine if there is a dynamical mechanism in interstellar turbulence that leads to a non gaussian pdf in the electron density . such a pdf has been inferred from scalings in pulsar scintillation measurements .
we use a model for kinetic alfvn wave turbulence that is applicable when there is a strong mean magnetic field . the nonlinearities couple density and magnetic field in the plane
perpendicular to the mean field in a way that is analogous to the coupling of flow and magnetic field in reduced mhd .
the model applies at scales on the order of the ion gyroradius and smaller .
we show that the coherent current filaments previously observed to emerge from a gaussian distribution in simulations of this model @xcite result from strong refraction of turbulent kinetic alfvn waves .
the refraction occurs in the edge of intense , localized current fluctuations , and is caused by the strongly sheared magnetic field associated with the current .
this refraction localizes turbulent wave activity to the extreme edge of the filament , and impedes mixing ( turbulent diffusion ) of the filament current by the turbulence . from this analysis
we conclude that the turbulence suppression by sheared flows common in fusion plasmas @xcite has a magnetic analog in situations where there is no flow .
this leads to a further conclusion that intermittent turbulence , which is generally associated with flows , can occur in situations where there is no flow .
( by flow we mean ion motion .
electron motion is incorporated in the current . )
we have derived a condition for the strength of magnetic shear required to produce the strong refraction and suppress mixing .
we show that this condition yields a prediction for the gaussian curvature of the magnetic field .
this quantity is predicted to have large values inside the coherent current filaments , and small values everywhere else . inside filaments
the gaussian curvature is negative at the center , and positive at the edge .
the analysis shows that long - lived fluctuation structures form in the density and magnetic field , provided damping is negligible . like the current filaments , these structures
avoid mixing because of the refraction of turbulent kinetic alfvn wave activity .
hence they occur in the same physical location as the current filaments .
however the localized nature of the current filaments gives the long - lived magnetic field an extended region external to the current . in this region
the field falls off as @xmath4 , where @xmath75 is distance from the center of the filament .
because kinetic alfvn wave dynamics yields an equipartition of density and magnetic field fluctuations , we posit that the long - lived density has a similar extended structure . as a result
, the connection between coherent structure and localization that is true for the current , and makes it highly intermittent , does not apply to the density .
while there is coherent long - lived density , it need not be localized .
a similar situation holds for vorticity and flow in 2d navier - stokes turbulence @xcite . to explore this matter
we have used the physics of the coherent structure formation to derive heuristic probability distribution functions for the current and density . as the turbulence decays , leaving intense current fluctuations as coherent current filaments , the kurtosis of current increases to a value proportional to the packing fraction .
the kurtosis of density does not become as large , and could , under appropriate circumstances , remain close to the gaussian value of 3 .
however mergers of structures in a situation with very weak dissipation could increase the kurtosis well above 3 .
more importantly , however , the density pdf is non gaussian even when its kurtosis is not greatly different from 3 .
the @xmath4 structure external to the current gives the pdf a tail that goes as @xmath299 .
mapping the @xmath4 structure to a pdf in density gradient , the density - gradient pdf decays as @xmath318 , a levy distribution .
this suggests that the mechanism described here may play a role in the scaling of pulsar _ rf _ signals .
several aspects of this problem need additional study .
it is important to adapt these results to a steady state . generally speaking
, there is a dynamical link between decaying turbulence and turbulence in a stationary dissipation range .
hence these results , at least qualitatively , are relevant to the dissipation range .
dissipation begins to affect the spectrum at a scale that is somewhat larger ( order of magnitude ) than the nominal dissipation scale @xcite .
structures such as these would correspond to active , filamentary regions of dissipation analogous to those observed in neutral gas clouds , albeit at a much smaller scale and with no accompanying flow shear signature .
intermittent structures can extend into the stationary inertial range , but the analysis presented here must be modified . in the inertial range ,
turbulence is replenished , allowing the slow mixing of a coherent structure to continue until it is gone .
structures are also regenerated by the turbulence , and the statistics is ultimately set by a balance of mixing and regeneration rates .
the mixing rates calculated here are sufficiently slow in strong filaments , that coherent structure formation is expected even in a steady state .
there is also a possible link between structures in the larger scale range of shear alfvén excitations and kaw excitations .
these questions will be explored in future work .
while gyroradius - scale kaw turbulence may arise in astrophysical contexts other than the ism , the small scales make it unlikely that astrophysical observations will be available for testing this theory .
therefore , simulations should be used to check key conclusions from the theoretical work presented here .
these include the formation of density structures , which was not reported in @xcite , the structure of the gaussian curvature , which validates the refraction hypothesis , and the existence of the @xmath4 structure in the density and its effect on the pdf . the effect of this type of density field on _ rf _ scattering remains the underlying question , and modeling of the scattering with simulated fields should be pursued .
the authors acknowledge useful conversations with stanislav boldyrev , including his observation that the density pdf derived herein immediately leads to a levy distribution in the density gradient .
pwt also acknowledges the aspen center for physics , where part of this work was performed .
this work was supported by the national science foundation .
closures truncate the moment hierarchy that is generated when averages are taken of nonlinear equations .
the closure we have used is of the eddy damped quasi normal markovian variety , and follows the steps of the closure calculation described in @xcite .
the nonlinear decorrelation is calculated consistent with the statistical ansatz , not imposed ad hoc .
the closure equations are given in eqs . ( 23 ) and ( 24 ) .
the other diffusivities not given in eq .
( 25 ) are @xmath319 @xmath320 @xmath321 & -&\frac{(m - m')}{r}\big\langle\frac{\partial \tilde{\psi}_{m',\gamma'}}{\partial r}k_3(m - m',\gamma-\gamma')\delta w_{\gamma,\gamma'}\big(\frac{m}{r}\big)\frac{\partial \tilde{\psi}_{-m',-\gamma'}}{\partial r}\big\rangle \nonumber \\ [ 2 mm ] & -&\frac{m}{r}\big\langle k_3(m - m',\gamma-\gamma')\delta w_{\gamma,\gamma'}\frac{m'}{r}\tilde{\psi}_{m',\gamma'}\nabla^2\tilde{\psi}_{-m',-\gamma'}\big\rangle \bigg\},\end{aligned}\ ] ] @xmath322 @xmath323 \tilde{\psi}_{-m',-\gamma'}\big\rangle&-&\frac{(m - m')}{r}\big\langle\frac{\partial \tilde{\psi}_{m',\gamma'}}{\partial r}k_1(m - m',\gamma-\gamma')p_{m - m'}^{-1}\delta w_{\gamma,\gamma'}\big(\frac{m}{r}\big)\frac{\partial \tilde{\psi}_{-m',-\gamma'}}{\partial r}\big\rangle \nonumber \\ [ 2 mm ] & + & \frac{m}{r}\big\langle k_1(m - m',\gamma-\gamma')p_{m - m'}^{-1}\delta w_{\gamma,\gamma'}\frac{m'}{r}\tilde{\psi}_{-m',-\gamma'}\nabla^2\tilde{\psi}_{m',\gamma'}\big\rangle \nonumber \\ [ 2 mm ] & + & \frac{m'}{r}\big\langle \tilde{\psi}_{m',\gamma'}k_1(m - m',\gamma-\gamma')p_{m - m'}^{-1}\delta w_{\gamma,\gamma'}\frac{m'}{r}\nabla^2\tilde{\psi}_{-m',-\gamma'}\big\rangle \nonumber \\ [ 2 mm ] & + & \frac{m'}{r}\big\langle \tilde{\psi}_{m',\gamma'}k_3(m - m',\gamma-\gamma')\delta w_{\gamma,\gamma'}\frac{m'}{r}\nabla^2\tilde{n}_{-m',-\gamma'}\big\rangle \bigg\},\end{aligned}\ ] ] where @xmath324\big(imj'(r - r_0)+d_{\psi n}(m,\gamma)\frac{\partial^2}{\partial r^2}\big)^{-1},\ ] ] @xmath325 and @xmath326 and @xmath327 are given in eqs .
( 26 ) and ( 27 ) .
these expressions contain both linear wave terms and nonlinear diffusivities , and are valid in both weak and strong turbulence regimes .
outside filaments , where turbulence levels are evaluated to derive the strong refraction condition , eq . ( 37 ) , the turbulence is strong . the strong turbulence limit of the above expressions yields the diffusivity @xmath186 that appears in eq . ( 37 ) .
armstrong , j.w . , cordes , j.m . , & rickett , b.j . 1981 , , 291 , 561
armstrong , j.w . , rickett , b.j . , & spangler , s.r . 1995 , , 443 , 209
bale , s.d . , et al . 2005 , , 94 , 215002
bhat , n.d . ramesh , et al . 2004 , , 605 , 759
boldyrev , s. & gwinn , c.r . 2003a , , 91 , 131101
boldyrev , s. & gwinn , c.r . 2003b , , 584 , 791
boldyrev , s. & konigl , a. 2006 , , 640 , 344
craddock , g.g . , diamond , p.h . , & terry , p.w . 1991 , phys . fluids b , 3 , 304
frisch , u. 1995 , turbulence ( cambridge : cambridge university press )
grappin , r. , velli , m. , & mangeney , a. 1991 , ann . geophys . , 9 , 416
hazeltine , r.d . 1983 , phys . fluids , 26 , 3242
head , m.r . & bandyopadhyay , p. 1981 , j. fluid mech . , 107 , 297
hily - blant , p. , pety , p. , & falgarone , e. 2007 , arxiv : astro - ph/0701326v1
hof , b. , et al . 2004 , science , 305 , 1594
howes , g.g . 2006 , , 651 , 590
kerr , r. 1985 , j. fluid mech . , 153 , 31
mcwilliams , j. 1984 , j. fluid mech . , 146 , 21
politano , h. & pouquet , a. 1995 , , 52 , 636
she , z.s . & leveque , e. 1994 , , 72 , 336
smith , k.w . & terry , p.w . 2006 , bull . soc . , 51 , 251
sutton , j.m . 1971 , mnras , 155 , 51
terry , p.w . 1989 , physica d , 37 , 542
terry , p.w . , newman , d.e . , & mattor , n. 1992 , phys . fluids a , 4 , 927
terry , p.w . , fernandez , e. , & ware , a.s . 1998 , , 504 , 821
terry , p.w . 2000 , rev . , 72 , 109
terry , p.w . , mckay , c . , & fernandez , e. 2001 , phys . plasmas , 8 , 2707
waleffe , f. 1997 , phys .
fluids , 9 , 883 | spatial intermittency in decaying kinetic alfvn wave turbulence is investigated to determine if it produces non gaussian density fluctuations in the interstellar medium .
non gaussian density fluctuations have been inferred from pulsar scintillation scaling .
kinetic alfvén wave turbulence characterizes density evolution in magnetic turbulence at scales near the ion gyroradius . it is shown that intense localized current filaments in the tail of an initial gaussian probability distribution function possess a sheared magnetic field that strongly refracts the random kinetic alfvén waves responsible for turbulent decorrelation . the refraction localizes turbulence to the filament periphery , hence the filament avoids mixing by the turbulence .
as the turbulence decays these long - lived filaments create a non gaussian tail .
a condition related to the shear of the filament field determines which fluctuations become coherent and which decay as random fluctuations .
the refraction also creates coherent structures in electron density .
these structures are not localized .
their spatial envelope maps into a probability distribution that decays as density to the power @xmath0 . the spatial envelope of density yields a levy distribution in the density gradient . |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Rural Housing Tax Credit Act of
2003''.
SEC. 2. CREDIT FOR PURCHASE OF PRINCIPAL RESIDENCES BY FIRST-TIME RURAL
HOMEBUYERS.
(a) In General.--Subpart A of part IV of subchapter A of chapter 1
of the Internal Revenue Code of 1986 (relating to nonrefundable
personal credits) is amended by inserting after section 25B the
following:
``SEC. 25C. PURCHASE OF PRINCIPAL RESIDENCES BY FIRST-TIME RURAL
HOMEBUYERS.
``(a) Allowance of Credit.--In the case of an individual who is a
first-time homebuyer of a principal residence in a rural area during
any taxable year, there shall be allowed as a credit against the tax
imposed by this chapter for the taxable year an amount equal to the
lesser of--
``(1) 10 percent of the purchase price of the residence, or
``(2) $5,000.
``(b) Limitations.--
``(1) Limitation based on adjusted gross income.--
``(A) In general.--The amount allowed as a credit
under subsection (a) for any taxable year shall be
reduced (but not below zero) by the amount which bears
the same ratio to such amount as--
``(i) the excess of--
``(I) the taxpayer's modified
adjusted gross income for such taxable
year, over
``(II) $30,000 ($60,000 in the case
of a joint return), bears to
``(ii) $10,000 ($20,000 in the case of a
joint return).
``(B) Modified adjusted gross income.--For purposes
of subparagraph (A), the term `modified adjusted gross
income' means the adjusted gross income of the taxpayer
for the taxable year increased by any amount excluded
from gross income under section 911, 931, or 933.
``(2) Limitation based on amount of tax.--The credit
allowed under subsection (a) for any taxable year shall not
exceed the excess of--
``(A) the sum of the regular tax liability (as
defined in section 26(b)) plus the tax imposed by
section 55, over
``(B) the sum of the credits allowable under this
subpart (other than this section and section 23) and
section 27 for the taxable year.
``(3) Married individuals filing jointly.--In the case of a
husband and wife who file a joint return, the credit under this
section is allowable only if the residence is a qualified
residence with respect to both the husband and wife, and the
amount specified under subsection (a)(2) shall apply to the
joint return.
``(4) Married individuals filing separately.--In the case
of a married individual filing a separate return, subsection
(a)(2) shall be applied by substituting `$2,500' for `$5,000'.
``(5) Other taxpayers.--If 2 or more individuals who are
not married purchase a qualified residence, the amount of the
credit allowed under subsection (a) shall be allocated among
such individuals in such manner as the Secretary may prescribe,
except that the total amount of the credits allowed to all such
individuals shall not exceed $5,000.
``(c) Definitions.--For purposes of this section--
``(1) Rural area.--The term `rural area' has the meaning
given such term by section 520 of the Housing Act of 1949.
``(2) First-time homebuyer.--The term `first-time
homebuyer' has the meaning given such term by section
72(t)(8)(D)(i).
``(3) Principal residence.--The term `principal residence'
has the same meaning as when used in section 121.
``(4) Purchase and purchase price.--The terms `purchase'
and `purchase price' have the meanings provided by section
1400C(e).
``(d) Carryforward of Unused Credit.--If the credit allowable under
subsection (a) for any taxable year exceeds the limitation imposed by
subsection (b)(2) for such taxable year reduced by the sum of the
credits allowable under this subpart (other than this section and
section 23), such excess shall be carried to the succeeding taxable
year and added to the credit allowable under subsection (a) for such
taxable year.
``(e) Reporting.--If the Secretary requires information reporting
under section 6045 by a person described in subsection (e)(2) thereof
to verify the eligibility of taxpayers for the credit allowable by this
section, the exception provided by section 6045(e)(5) shall not apply.
``(f) Recapture of Credit in Case of Certain Sales.--
``(1) In general.--Except as provided in paragraph (5), if
the taxpayer--
``(A) fails to use a qualified residence as the
principal residence of the taxpayer, or
``(B) disposes of a qualified residence,
with respect to the purchase of which a credit was allowed
under subsection (a) at any time within 5 years after the date
the taxpayer acquired the property, then the tax imposed under
this chapter for the taxable year in which the disposition
occurs is increased by the credit recapture amount.
``(2) Credit recapture amount.--For purposes of paragraph
(1), the credit recapture amount is an amount equal to the sum
of--
``(A) the applicable recapture percentage of the
amount of the credit allowed to the taxpayer under this
section, plus
``(B) interest at the overpayment rate established
under section 6621 on the amount determined under
subparagraph (A) for each prior taxable year for the
period beginning on the due date for filing the return
for the prior taxable year involved.
No deduction shall be allowed under this chapter for interest
described in subparagraph (B).
``(3) Applicable recapture percentage.--
``(A) In general.--For purposes of this subsection,
the applicable recapture percentage shall be determined
from the following table:
        ``If the sale occurs in:            The applicable recapture percentage is:
Year 1............................... 100
Year 2............................... 80
Year 3............................... 60
Year 4............................... 40
Year 5............................... 20
Years 6 and thereafter............... 0.
``(B) Years.--For purposes of subparagraph (A),
year 1 shall begin on the first day of the taxable year
in which the purchase of the qualified residence
described in subsection (a) occurs.
``(4) No credits against tax.--Any increase in tax under
this subsection shall not be treated as a tax imposed by this
chapter for purposes of determining the amount of any credit
under this chapter or for purposes of section 55.
``(5) Death of owner; casualty loss; involuntary
conversion; etc.--The provisions of paragraph (1) do not apply
to--
``(A) a disposition of a qualified residence made
on account of the death of any individual having a
legal or equitable interest therein occurring during
the 5-year period to which reference is made under
paragraph (1),
``(B) a disposition of the old qualified residence
if it is substantially or completely destroyed by a
casualty described in section 165(c)(3) or compulsorily
or involuntarily converted (within the meaning of
section 1033(a)), or
``(C) a disposition pursuant to a settlement in a
divorce or legal separation proceeding where the
qualified residence is sold or the other spouse retains
such residence.
``(g) Basis Adjustment.--For purposes of this subtitle, if a credit
is allowed under this section with respect to the purchase of any
residence, the basis of such residence shall be reduced by the amount
of the credit so allowed.''.
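The subsection (b)(1) phase-out above is a pro-rata reduction of the subsection (a) credit. The following sketch of the arithmetic is illustrative only; the function name and example inputs are not part of the bill, and the separate-return substitution and tax-liability limit are not modeled:

```python
# Illustrative computation of the section 25C credit with the (b)(1)
# income phase-out. Function name and inputs are ours, not the bill's.
def rural_homebuyer_credit(purchase_price, magi, joint=False):
    base = min(0.10 * purchase_price, 5_000.0)        # subsection (a)
    threshold = 60_000.0 if joint else 30_000.0       # (b)(1)(A)(i)(II)
    phaseout_range = 20_000.0 if joint else 10_000.0  # (b)(1)(A)(ii)
    excess = max(magi - threshold, 0.0)
    reduction = base * min(excess / phaseout_range, 1.0)
    return max(base - reduction, 0.0)

print(rural_homebuyer_credit(80_000, 28_000))   # below threshold: 5000.0
print(rural_homebuyer_credit(80_000, 35_000))   # halfway through phase-out: 2500.0
print(rural_homebuyer_credit(80_000, 45_000))   # fully phased out: 0.0
```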
(b) Conforming Amendments.--
(1) Subsection (a) of section 1016 of such Code (relating
to general rule for adjustments to basis) is amended by
striking ``and'' at the end of paragraph (27), by striking the
period at the end of paragraph (28) and inserting ``, and'',
and by adding at the end the following new paragraph:
``(29) in the case of a residence with respect to which a
credit was allowed under section 25C, to the extent provided in
section 25C(g).''.
(2) Section 24(b)(3)(B), as added by the Economic Growth
and Tax Relief Reconciliation Act of 2001, is amended by
striking ``23 and 25B'' and inserting ``23, 25B, and 25C''.
(3) Section 25(e)(1)(C) is amended by striking ``23 and
1400C'' and by inserting ``23, 25C, and 1400C''.
(4) Section 25(e)(1)(C), as amended by the Economic Growth
and Tax Relief Reconciliation Act of 2001, is amended by
inserting ``25C,'' after ``25B,''.
(5) Section 25B, as added by the Economic Growth and Tax
Relief Reconciliation Act of 2001, is amended by striking
``section 23'' and inserting ``sections 23 and 25C''.
(6) Section 26(a)(1), as amended by the Economic Growth and
Tax Relief Reconciliation Act of 2001, is amended by striking
``and 25B'' and inserting ``25B, and 25C''.
(7) Section 1400C(d) is amended by inserting ``and section
25C'' after ``this section''.
(8) Section 1400C(d), as amended by the Economic Growth and
Tax Relief Reconciliation Act of 2001, is amended by striking
``and 25B'' and inserting ``25B, and 25C''.
(9) The table of sections for subpart A of part IV of
subchapter A of chapter 1 is amended by inserting before the
item relating to section 26 the following:
``Sec. 25C. Purchase of principal
residences by first-time rural
homebuyers.''.
(c) Effective Date.--
(1) In general.--The amendments made by subsections (a) and
(b)(9) shall apply to purchases after the date of the enactment
of this Act, in taxable years ending after such date.
(2) Temporary conforming amendments.--The amendments made
by paragraphs (1), (3), and (7) of subsection (b) shall apply
to taxable years ending before January 1, 2004.
(3) Permanent conforming amendments.--The amendments made
by paragraphs (2), (4), (5), (6), (7), and (8) of subsection
(b) shall apply to taxable years beginning after December 31,
2003.
| Rural Housing Tax Credit Act of 2003 - Amends the Internal Revenue Code to allow a credit (the lesser of ten percent of the purchase price or $5,000) for the purchase of a principal residence by a first-time rural homebuyer. Establishes credit limitations based upon: (1) adjusted gross income; and (2) tax. Provides for credit recapture in the event of: (1) certain sales; or (2) failure to use as a principal residence.
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Improving Teacher Diversity Act''.
SEC. 2. CENTERS OF EXCELLENCE.
Title II of the Higher Education Act of 1965 (20 U.S.C. 1021 et
seq.) is amended by adding at the end the following:
``PART C--CENTERS OF EXCELLENCE
``SEC. 231. DEFINITIONS.
``As used in this part:
``(1) Eligible institution.--The term `eligible
institution' means--
``(A) an institution of higher education that has a
teacher preparation program that meets the requirements
of section 203(b)(2) and that is--
``(i) a part B institution (as defined in
section 322);
``(ii) a Hispanic-serving institution (as
defined in section 502);
``(iii) a Tribal College or University (as
defined in section 316);
``(iv) an Alaska Native-serving institution
(as defined in section 317(b)); or
``(v) a Native Hawaiian-serving institution
(as defined in section 317(b));
``(B) a consortium of institutions described in
subparagraph (A); or
``(C) an institution described in subparagraph (A),
or a consortium described in subparagraph (B), in
partnership with any other institution of higher
education, but only if the center of excellence
established under section 232 is located at an
institution described in subparagraph (A).
``(2) Highly qualified.--The term `highly qualified' when
used with respect to an individual means that the individual is
highly qualified as determined under section 9101 of the
Elementary and Secondary Education Act of 1965 (20 U.S.C. 7801)
or section 602 of the Individuals with Disabilities Education
Act (20 U.S.C. 1401).
``(3) Scientifically based reading research.--The term
`scientifically based reading research' has the meaning given
such term in section 1208 of the Elementary and Secondary
Education Act of 1965 (20 U.S.C. 6368).
``(4) Scientifically based research.--The term
`scientifically based research' has the meaning given such term
in section 9101 of the Elementary and Secondary Education Act
of 1965 (20 U.S.C. 7801).
``SEC. 232. CENTERS OF EXCELLENCE.
``(a) Program Authorized.--From the amounts appropriated to carry
out this part, the Secretary is authorized to award competitive grants
to eligible institutions to establish centers of excellence.
``(b) Use of Funds.--Grants provided by the Secretary under this
part shall be used to ensure that current and future teachers are
highly qualified, by carrying out one or more of the following
activities:
``(1) Implementing reforms within teacher preparation
programs to ensure that such programs are preparing teachers
who are highly qualified, are able to understand scientifically
based research, and are able to use advanced technology
effectively in the classroom, including use for instructional
techniques to improve student academic achievement, by--
``(A) retraining faculty; and
``(B) designing (or redesigning) teacher
preparation programs that--
``(i) prepare teachers to close student
achievement gaps, are based on rigorous
academic content, scientifically based research
(including scientifically based reading
research), and challenging State student
academic content standards; and
``(ii) promote strong teaching skills.
``(2) Providing sustained and high-quality preservice
clinical experience, including the mentoring of prospective
teachers by exemplary teachers, substantially increasing
interaction between faculty at institutions of higher education
and new and experienced teachers, principals, and other
administrators at elementary schools or secondary schools, and
providing support, including preparation time, for such
interaction.
``(3) Developing and implementing initiatives to promote
retention of highly qualified teachers and principals,
including minority teachers and principals, including programs
that provide--
``(A) teacher or principal mentoring from exemplary
teachers or principals; or
``(B) induction and support for teachers and
principals during their first 3 years of employment as
teachers or principals, respectively.
``(4) Awarding scholarships based on financial need to help
students pay the costs of tuition, room, board, and other
expenses of completing a teacher preparation program.
``(5) Disseminating information on effective practices for
teacher preparation and successful teacher certification and
licensure assessment preparation strategies.
``(6) Activities authorized under sections 202, 203, and
204.
``(c) Application.--Any eligible institution desiring a grant under
this section shall submit an application to the Secretary at such a
time, in such a manner, and accompanied by such information as the
Secretary may require.
``(d) Minimum Grant Amount.--The minimum amount of each grant under
this part shall be $500,000.
``(e) Limitation on Administrative Expenses.--An eligible
institution that receives a grant under this part may not use more than
2 percent of the grant funds for purposes of administering the grant.
``(f) Regulations.--The Secretary shall prescribe such regulations
as may be necessary to carry out this part.
``SEC. 233. APPROPRIATIONS.
``There shall be available to the Secretary, from funds not
otherwise appropriated, $50,000,000 for the period beginning with
fiscal year 2008 and ending with fiscal year 2012, to carry out this
part beginning with academic year 2008-2009, which shall remain
available until expended. The authority to carry out this part shall
expire at the end of fiscal year 2012.''. | Improving Teacher Diversity Act - Amends title II (Teacher Quality Enhancement Grants for States and Partnerships) of the Higher Education Act of 1965 to authorize the Secretary of Education to award competitive grants to certain minority-serving institutions of higher education (IHEs), or partnerships between such IHEs and other IHEs, to establish centers of excellence for teacher education.
Requires the use of such grants to ensure that current and future teachers are highly qualified by: (1) reforming teacher preparation programs so that teachers are able to understand scientifically-based research and use advanced technology effectively in the classroom; (2) providing preservice clinical experience and mentoring to prospective teachers, and increased interaction between IHE faculty and new and experienced elementary and secondary school teachers and administrators; (3) implementing initiatives to promote the retention of highly qualified teachers and principals; (4) awarding need-based scholarships for students in teacher preparation programs; (5) disseminating information on effective teacher preparation practices; and (6) conducting certain other activities authorized under title II. |
Ohio Police say a woman was so upset by the unflattering photo detectives posted to Facebook that she called them and demanded that it be removed, leading to her arrest.
"This is a first for us," Denise Alex-Bouzounis, public information officer with the Columbus Police Department, told The Huffington Post. "She really didn't want her face out there for everyone to see."
According to Alex-Bouzounis, she posted 34-year-old Monica Hargrove's mug shot to the department's Facebook page on Sept. 10, as part of a weekly roundup called "Warrant Wednesday."
"It included her mug shot, her name and information about the crime," said Alex-Bouzounis.
The Facebook post read, in part:
"On August 30th Hargrove offered a female acquaintance a ride to a pharmacy on E. Main St. to pick up a prescription. After the acquaintance got the prescription and got back in the vehicle, Hargrove robbed the victim at gunpoint, leaving her on the side of the road."
According to The Columbus Dispatch, Hargrove had been indicted in the case for aggravated-robbery and kidnapping.
The Facebook post, which garnered 64 shares and some 54,000 page views, did not go unnoticed by Hargrove.
Police say the woman was so upset by the mug shot photo, which she considered unflattering, that she called within 48 hours of the post.
"She contacted the detective listed on the Facebook post and said, 'Hey, I want my picture down,'" Alex-Bouzounis said. "[The detective] said, 'Come on in and we'll talk about it.'"
And, police say, that is exactly what Hargrove did.
"She came in and he put her under arrest," said Alex-Bouzounis.
"Warrant Wednesday" has proven to be such a success for the police department that they plan to continue using Facebook to hunt down wanted individuals.
"We've had a lot of Facebook followers help turn people in," said Alex-Bouzounis. ||||| Don’t you just hate it when an unflattering photo of you ends up online?
So does Monica Hargrove, and she did what any of us might do: The 34-year-old called the person who had posted the offending image and asked that it be taken down.
But the person tagging her on Facebook wasn’t a friend. It was the Columbus police, who had a warrant for her arrest on an aggravated-robbery charge.
The division’s public-information team posted Hargrove’s mug shot on Sept. 10 on its Facebook page with a description of the charge: On Aug. 30, police said, she gave a friend a ride to a pharmacy to pick up a prescription and then robbed the friend.
Hargrove called police and said she wanted her picture taken off the page. The detective said sure, just come on down to headquarters. She did and was promptly locked up.
She since has been indicted on robbery, aggravated-robbery and kidnapping charges.
In the end, though, she did get her way: Her photo is gone from the Police Division’s Facebook page.
• • •
The American Bar Association has given lawyers the OK to Google the names of both potential jurors and those who are selected to sit on a case. So, if they want, attorneys in a criminal or civil case can look for jurors’ latest rants shared on Facebook, memes posted on Tumblr, selfies uploaded to Instagram or recipes pinned to Pinterest.
It’s not clear how many lawyers will take the association up on this. There’s just not much time to be Facebook-stalking potential jurors during voir dire, Columbus defense attorney Martin Midian said.
Typically, attorneys see jury questionnaires, with names and addresses, only as the potential jurors are filing into the courtroom.
“We might be allowed to do that,” Midian said, “but the opportunity, in reality, to do that doesn’t really exist.”
• • •
As most judges do these days, Franklin County Municipal Judge James E. Green gives a regular spiel to remind his courtroom that cellphones should be silenced, lest they be confiscated by the court.
“I am looking for an iPhone 6,” he added during a morning in Courtroom 4C last week, “so if you have one, please leave it on. I can always get the manual and wall charger from you later.”
[email protected]
@Theodore_Decker
[email protected]
@allymanning ||||| Suspect Meets Police to Demand they Take Mugshot off Facebook, Gets Arrested
“I think Ms. Hargrove’s case is the first time a suspect has called about a mug shot,” said Denise Alex, with the Columbus Police. Cops posted a mug shot of Hargrove from a previous arrest when a warrant was issued last month in connection with a robbery in East Columbus. Hargrove is accused of taking a friend to a pharmacy and then robbing the woman at gunpoint. “The woman went to pick up a prescription, Ms. Hargrove pulled a gun on her, and tossed her friend out of the car,” said Alex. After C.P.D. posted Hargrove’s photo on its Facebook page, she called police and demanded her mug shot be removed from the internet. “She said I want my photo off the CPD Facebook,” said Alex. Hargrove made arrangements to come to CPD and speak with Detectives about removing her unflattering Kodak moment; instead she was arrested and booked on robbery charges. “I am sure she was surprised, because she thought she was meeting with detectives, she should’ve known she had a warrant for her arrest,” said Alex. Hargrove was allowed to take a new mug shot, however the “unflattering” photo she demanded be pulled from cyberspace is still roaming free on the information superhighway. Hargrove was recently indicted by a County Grand Jury on kidnapping and robbery charges, and is now locked up in the Franklin County Jail. | – Ahh, vanity: It gets us every time. A 34-year-old Ohio woman was hit with robbery, aggravated-robbery, and kidnapping charges all thanks, in part, to an unflattering Facebook photo. Monica Hargrove drove a friend to pick up a prescription on Aug. 30, only to allegedly pull a gun on her, rob her, and leave her on the side of the road. A warrant was issued for her arrest, and Columbus Police posted a mugshot of Hargrove—one taken in connection with a previous arrest—on its Facebook page as part of its Sept. 10 "Warrant Wednesday" post.
A rep for the Columbus Police tells the Huffington Post that Hargrove called police within 48 hours of the posting, insisting that her photo be taken down. She apparently considered it an unattractive shot. Police invited Hargrove to the station to talk about it; she actually showed up and was promptly arrested. Fox 28 notes Hargrove got the chance to take a new mugshot, and the Columbus Dispatch reports that she ultimately "got her way": The offending photo, which racked up more than 50,000 views, was taken down from the department's feed. Kind of. You can still see it here, and the Columbus Division of Police page shared a link to the Huffington Post story 16 hours ago with the infamous picture featured as well. Reads its post: "Looks like CPD & suspect Monica Hargrove are getting some national attention. The [mugshot] she wanted taken off CPD Facebook is now seen around the US and beyond!" (This, however, may be the best mugshot of all time.) |
star clusters are fundamental astrophysical calibrators , providing information that can be used to constrain the evolution of stars , stellar systems , and galaxies . from a technical perspective
, they are also ideal targets for characterizing the performance of adaptive optics ( ao ) systems , as the image quality and its variation with location across the science field can be assessed in a straight - forward manner from images of richly populated stellar fields . in this paper
we investigate the stellar contents of two star clusters at low galactic latitude and demonstrate the performance of the raven multi - object ao system .
glimpse c01 ( gc01 ) is a massive ( log(m@xmath10 ) cluster that was identified as part of the glimpse ( galactic legacy infrared mid - plane survey extraordinaire ; benjamin et al .
2003 ) survey .
the radial velocity of gc01 is consistent with it belonging to the galactic disk , although there is a 10% probability that a halo object would have a similar radial velocity ( davies et al . ) .
dust and contamination from non - cluster sources are major obstacles for efforts to probe the stellar content of gc01 .
near - infrared ( nir ) images reveal dust lanes in and around gc01 ( ivanov et al .
2005 ) , and a bright emission feature cuts across [ 5.8 ] and [ 8.0 ] spitzer images of the cluster ( kobulnicky et al . 2005 ) .
the location of stars in the @xmath11 two - color diagram ( tcd ) indicates that a@xmath12 ranges between 12 and 18 , with no systematic dependence on location ( kobulnicky et al . ) .
such a non - uniform dust distribution will smear features in color - magnitude diagrams ( cmds ) and luminosity functions ( lfs ) .
previous studies of gc01 have found a wide range of possible ages . using a mix of glimpse survey and shallow ground - based nir images , kobulnicky et al .
( 2005 ) conclude that gc01 is an old , massive globular cluster , located at a distance of 3.1 - 5.2 kpc .
they note that an old age is consistent with a lack of radio emission .
ivanov et al .
( 2005 ) construct a shallow cmd of sources in the central 20 arcsec of gc01 .
they identify a giant branch and a red clump ( rc ) , and conclude that if the former sequence is populated by old red giant branch ( rgb ) stars then [ fe / h ] @xmath13 .
ivanov et al .
( 2005 ) estimate a distance of @xmath14 kpc from the brightnesses of the rc and the tip of the giant branch .
davies et al .
( 2011 ) measure a central mass density for gc01 that exceeds that in globular clusters , but is consistent with that in dynamically unevolved young clusters , such as the arches ( e.g. espinoza , selman , & melnick 2009 ) . based on the high central density and other lines of evidence , davies et al .
( 2011 ) suggest that gc01 has an age between 400 and 800 myr , but also state that ages up to 2 gyr are not ruled out .
there are hints that gc01 may be experiencing significant evolution at the present day , making it a potentially important laboratory for studies of cluster evolution , while also providing additional clues into its age .
mirabel ( 2010 ) discusses an x - ray source that is located along the mir emission feature that slices through the cluster , and suggests that it is either the result of a pulsar wind nebula possibly associated with a cluster member or emission from a bow shock that forms as interstellar gas associated with gc01 is stripped from the cluster .
the location of the source and its energy output are consistent with the latter mechanism ( mirabal 2010 ) .
if there is an interstellar medium ( ism ) in gc01 then it opens the possibility that a young population might be present .
gc01 has a mass @xmath15 m@xmath16 ( davies et al . ) .
there are young clusters with comparable masses in the present - day galaxy , such as westerlund 1 and 2 ( @xmath17 and @xmath18 solar ; portegies zwart , mcmillan , & gieles 2010 ; hur et al .
2015 ) , and the arches ( @xmath19 solar ; espinoza et al .
2009 ) . given the likelihood that gc01 has lost mass due to tidal effects and internal evolution , and so was more massive in the past , the existing age estimates suggest that it could be one of the most massive clusters to have formed in the galaxy during the past few gyr .
gc01 is thus of potential importance for studies of the evolution of the galactic disk .
glimpse c02 ( gc02 ) has not been as extensively studied as gc01 , likely because it is the more heavily obscured of the two , with a@xmath20 ( kurtev et al .
2008 ) . the distribution of points on the @xmath11 tcd shown in figure 3 of kurtev et al .
( 2008 ) indicates that there is substantial field star contamination within 60 arcsec of the cluster center , further complicating efforts to determine cluster properties .
still , the cmd presented by kurtev et al .
( 2008 ) includes stars as faint as @xmath21 , from which they estimate a distance of @xmath22 kpc based on the brightnesses of the rc and the tip of the red sequence , which they assume to be populated by stars evolving on the rgb .
spectroscopy of candidate cluster members and the slope of the red sequence suggest that [ fe / h ] @xmath23 , raising the prospect that gc02 may be one of the most metal - rich globular clusters in the galaxy .
the existing studies of gc01 and gc02 do not sample the main sequence turn - off ( msto ) . while reaching the mstos of these clusters will be difficult ( and it is noted later in this paper that measuring the brightness of the msto in gc01 without spectroscopic information may prove to be problematic due to differential reddening ) , it is still possible to gain additional information about their ages based on the properties of evolved cluster members , such as those that are undergoing core helium burning .
the fractional contamination from non - cluster stars is lowest in the central regions of gc01 and gc02 , although the high stellar density introduces complications due to crowding .
efforts to isolate stars in the crowded central regions of clusters require good angular resolution , and in the present paper we discuss observations that cover the @xmath24 m wavelength interval with angular resolutions between 0.1 and 0.25 arcsec fwhm of fields near the centers of both clusters .
the data were obtained with the infrared camera and spectrograph ( ircs ) on the subaru telescope , with the wavefront corrected for atmospheric distortion by the raven ao science demonstrator .
these observations demonstrate that angular resolutions close to the telescope diffraction limit can be obtained with a multi - object ao ( moao ) system at wavelengths near @xmath25 m .
the subaru observations are supplemented with archival spitzer images of both clusters .
the paper is structured as follows .
details of the observations and the steps used to process the images are presented in section 2 .
the cmds , luminosity functions ( lfs ) , and tcds that were extracted from the nir images are discussed in section 3 . a photometric analysis of cluster stars at wavelengths longward of @xmath25 m that utilizes the narrow - band ircs images and archival [ 3.6 ] and [ 4.5 ] spitzer images follows in section 4 .
the paper closes with a summary and discussion of the results in section 5 .
the data were recorded at the subaru telescope during parts of three nights in june and july 2015 .
distortions in the wavefront were corrected using the raven moao science demonstrator ( andersen et al .
2012 ; lardire et al .
2014 ) , with the corrected signal directed to the imaging arm of the ircs ( tokunaga et al .
1998 ) . the ircs @xmath26 alladin iii detector can be sampled with pixel scales of either 0.02 arcsec / pixel or 0.052 arcsec / pixel , and both modes were employed here .
core observational elements of raven include ( 1 ) three natural guide star ( ngs ) wavefront sensors ( wfss ) that can be deployed over a 3.5 arcmin diameter field , and ( 2 ) two science pick - offs , each of which contains an @xmath27 element deformable mirror ( dm ) that corrects the wavefront at that location using information gleaned from the wfss .
there is also a wfs designed for use with a laser beacon , but this was not used for these observations .
the light from the science pick - offs can feed the imaging and spectroscopic modes of the ircs .
each pick - off samples a 5.5 arcsec radius field , although vignetting and a slight overlap of the science fields when projected onto the ircs detector limits the useable field to @xmath28 arcsec .
raven was built as a pathway science demonstrator with a limited budget .
future moao systems will likely include more ngss and science pick - offs to increase the field of view and the order of correction , thereby better exploiting the multiplex advantage that can be realised with moao .
raven has three operating modes : moao , ground - layer ao ( glao ) , and classical single conjugate ao ( scao ) .
wavefront corrections for the moao and glao modes are applied with the system operating open loop , i.e. the control of the dms is based solely on the signal obtained from the wfss at that moment , with no feedback from previous corrections .
the scao system runs closed - loop , in which information from past corrections is used to control the dms .
the observations discussed here were recorded in glao mode . an observing log that lists filters , central wavelengths , total exposure times , pixel sampling , fwhm , and the dates of observation
is shown in table 1 .
the total exposure time entries in this table are the number of detector co - adds ( i.e. the number of detector reads @xmath29 the number of co - adds per read ) @xmath29 the integration time per co - add .
additional information about the filters can be found on the subaru telescope website .
sample completeness was estimated by running artificial star experiments .
artificial stars were assigned colors that fall along the sequences in the cluster cmds , and an artificial star was only considered to be recovered if it was detected in at least two filters with a maximum matching radius of one half the fwhm .
the dispersions in the recovered magnitudes and completeness fractions were computed after applying an iterative @xmath30 rejection filter to the mean difference between input and measured brightnesses in 0.5 magnitude intervals . with the exception of the @xmath31 observations of gc02 , the magnitude at which incompleteness sets in is defined by crowding in the nir data , rather than photon statistics .
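the completeness procedure described above can be sketched in a few lines of python . this is an illustrative reconstruction , not code from the raven pipeline ; the function names , the bin width , and the clipping parameters are our own choices :

```python
import numpy as np

def sigma_clipped_stats(dm, nsig=3.0, max_iter=5):
    """Iteratively reject outliers in the input-minus-recovered
    magnitudes of artificial stars, then return mean and dispersion."""
    dm = np.asarray(dm, dtype=float)
    keep = np.ones(dm.size, dtype=bool)
    for _ in range(max_iter):
        mean, std = dm[keep].mean(), dm[keep].std()
        new_keep = np.abs(dm - mean) < nsig * std
        if new_keep.sum() == keep.sum():
            break
        keep = new_keep
    return dm[keep].mean(), dm[keep].std()

def completeness(mag_in, recovered, bin_width=0.5):
    """Fraction of artificial stars recovered per magnitude bin."""
    edges = np.arange(mag_in.min(), mag_in.max() + bin_width, bin_width)
    frac = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (mag_in >= lo) & (mag_in < hi)
        frac.append(recovered[in_bin].mean() if in_bin.any() else np.nan)
    return edges, np.array(frac)
```

the magnitude at which the completeness fraction drops below 50% then sets the faint limit plotted in the cmds .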
the photometric faint limits are thus much brighter than what would otherwise be expected from images recorded with an 8 meter telescope . the @xmath32 and @xmath33 cmds of gc01 are shown in figure 2 .
the stars plotted in these cmds were matched in filter pairs ( i.e. @xmath31 and @xmath6 or @xmath34 and @xmath6 , depending on the cmd ) , rather than requiring a match in all three filters .
a maximum matching radius of one - half of the fwhm of the wider of the two psfs was adopted
sources in one filter that did not have a match within this radius are not included in the cmds .
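the matching criterion can be illustrated with a simple nearest - neighbour sketch ( a minimal , non - unique matcher written for illustration only ; production cross - matching would enforce one - to - one pairing ) :

```python
import numpy as np

def match_catalogs(x1, y1, x2, y2, fwhm1, fwhm2):
    """Pair sources detected in two filters; a pair is kept only if
    the separation is below one half the wider of the two PSF FWHMs."""
    r_max = 0.5 * max(fwhm1, fwhm2)
    matches = []
    for i in range(len(x1)):
        d = np.hypot(x2 - x1[i], y2 - y1[i])  # distances to all stars in catalog 2
        j = int(np.argmin(d))
        if d[j] <= r_max:
            matches.append((i, j))
    return matches
```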
the 50% completeness levels determined from the artificial star experiments are indicated .
the error bars in figure 2 show the @xmath35 uncertainties calculated from the artificial star experiments .
the scatter near the faint end of the cmds more - or - less matches the error bars .
however , the scatter near @xmath36 in the @xmath32 cmd of gc01 exceeds that expected from random photometric uncertainties , and we attribute this to differential reddening .
a reddening vector , with an amplitude corresponding to @xmath37a@xmath38 magnitudes , is shown in each panel of figure 2 , and it can be seen that @xmath37a@xmath5 of a few tenths of a magnitude can explain the scatter in the @xmath32 cmd near @xmath39 .
this scatter prevents us from measuring the slope of the giant branch , which might otherwise be used to estimate metallicity . the fiducial giant branch sequence from the middle panel of figure 4 of ivanov et al .
( 2005 ) is shown as a solid green line in figure 2 .
the ivanov et al .
( 2005 ) photometry is in the @xmath40 filter system , and so their measurements may differ from those in @xmath6 by up to a few hundredths of a magnitude ( e.g. table 2 of persson et al . ) .
the ivanov et al .
( 2005 ) fiducial skirts the blue edge of the ircs@xmath41raven cmd .
assuming that the location of the ivanov et al .
( 2005 ) fiducial indicates a lower mean extinction in the outer regions of gc01 then based on the nishiyama et al .
( 2009 ) reddening law
the typical a@xmath5 towards the center of gc01 is @xmath42 magnitudes higher than at larger radii .
this higher reddening is perhaps not surprising given the broad range in a@xmath5 that is found throughout gc01 , coupled with the warm dust lane that cuts through the cluster center ( kobulnicky et al . 2005 ) .
ivanov et al .
( 2005 ) identify a concentration of stars near @xmath36 in their cmd of objects within 20 arcsec of the cluster center that they suggest is the red clump ( rc ) .
the rc in the left hand panel of their figure 4 forms a tilted sequence , likely due to differential reddening , and the locus of this sequence is shown in figure 2 .
there is not an obvious corresponding sequence in the ircs@xmath41raven @xmath32 cmd .
nevertheless , a concentration of stars due to the rc appears in the cmd after correcting for differential reddening ( see below ) .
the nir sed of stars near the center of gc01 is examined in figure 3 , where the @xmath11 two - color diagram ( tcd ) is shown .
the dotted line is the locus of points in figure 5 of ivanov et al .
( 2005 ) , and there is good agreement with the raven measurements .
fiducial sequences for red giants and iab supergiants from bessell & brett ( 1988 ) are also shown in figure 3 , as is a reddening vector that tracks the nishiyama et al .
( 2009 ) reddening law .
differences between various reddening laws become significant for highly obscured objects like gc01 . extrapolating the gc01 observations along the nishiyama et al .
( 2009 ) relation comes closer to intersecting the area of the tcd that contains the unreddened colors of giants than extrapolating along vectors defined by the rieke & lebofsky ( 1985 ) and the r@xmath43 cardelli et al .
( 1989 ) reddening laws .
the nishiyama et al .
( 2009 ) reddening law is thus adopted for the remainder of the paper . under ideal circumstances , reddenings for individual stars
could be estimated by projecting each point on the tcd back to the unreddened giant sequence .
however , there are large ( when compared with color differences between stars with different spectral - types ) random uncertainties in the photometric measurements .
photometric variability contributes further to the smearing , although adelman ( 2001 ) finds that variability is likely not a factor among rc stars , at least over time scales of a few years .
as scatter in the observations impedes efforts to identify an intrinsic color for individual stars , reddenings are estimated here by assuming that the stars in the gc01 cmds have a common intrinsic color .
this assumption is reasonable as our cmds sample giants with m@xmath5 between 0 and -3 , and solar metallicity isochrones generated from the marigo et al .
( 2008 ) models indicate that a narrow range of spectral - types ( k2iii to k5iii ) is expected in this m@xmath5 interval in old systems .
if the stars in the cmds are assumed to have the intrinsic @xmath44 and @xmath45 colors of a k3 iii star then the mean reddening towards the center of gc01 is a@xmath46 based on the nishiyama et al .
( 2009 ) reddening law .
the uncertainty is the standard error of the mean .
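the calculation amounts to converting a color excess into an extinction with an assumed reddening law . the sketch below uses a@xmath31/a@xmath6 @xmath41 1.73 , which is approximately the ratio of the nishiyama et al . ( 2009 ) law ; that value and the function are illustrative assumptions and should be checked against the original paper :

```python
import numpy as np

# approximate H-to-Ks extinction ratio for the nishiyama et al. (2009)
# law; this value is an assumption here, not taken from the present paper
A_H_OVER_A_KS = 1.73

def a_ks_from_hk(h, ks, hk_intrinsic):
    """Mean K-band extinction, assuming every star shares a single
    intrinsic h - ks color (e.g. that of a K3 III giant)."""
    e_hk = (np.asarray(h) - np.asarray(ks)) - hk_intrinsic  # color excess
    a_ks = e_hk / (A_H_OVER_A_KS - 1.0)
    mean = a_ks.mean()
    sem = a_ks.std(ddof=1) / np.sqrt(a_ks.size)  # standard error of the mean
    return mean, sem
```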
the reddening depends on the adopted intrinsic colors , and if it is assumed that the intrinsic nir colors of each star match those of , say , an m1 iii giant , which falls near the middle of the giant sequence in figure 3 , then the mean extinction would be a@xmath47 .
the @xmath3 dispersion in a@xmath5 computed from the tcd is @xmath48 magnitudes .
given the scatter due to photometric errors and the assumption that all stars have the same intrinsic color then this is likely an upper limit to the smearing caused by differential reddening .
sources with a@xmath5 between 1 and 1.5 are well - mixed throughout the raven@xmath41ircs field , suggesting that significant variations in line - of - sight reddening towards stars in gc01 occur over sub - arcsec angular scales .
an arcsec corresponds to a spatial scale of @xmath49 parsecs at the distance of gc01 .
given that the obscuring material is either at the distance of gc01 or is in the foreground then the ism towards the center of gc01 contains structure over spatial scales of no more than @xmath50 au .
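the angular - to - physical conversion used here follows from the small - angle approximation : by definition , 1 au subtends 1 arcsec at a distance of 1 pc . a minimal sketch ( illustrative only ) :

```python
AU_PER_PC = 206265.0  # one parsec expressed in astronomical units

def physical_scale(theta_arcsec, distance_pc):
    """Projected size of an angle at a given distance (small-angle
    approximation): the size in au is simply theta * distance in pc."""
    size_au = theta_arcsec * distance_pc
    return size_au, size_au / AU_PER_PC
```

at a distance of 5.2 kpc , 1 arcsec thus projects to 5200 au , or roughly 0.025 pc .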
the cmds constructed from the unreddened photometric measurements are shown in the top panels of figure 4 .
the vertical sequences in these cmds are the direct result of adopting a single intrinsic color when computing reddenings .
the number of sources in the cmds in figures 2 and 4 are not the same , as only stars that were detected in all three nir filters have been de - reddened .
hence , the cmds in figure 4 contain fewer stars than those in figure 2 .
while the assumption of a common nir sed for all stars suppresses color - related information in the reddening - corrected photometry , the reddening - corrected cmds still contain useful information .
there is a local peak in the number of stars near k@xmath51 in figure 4 , which we identify as the rc .
these stars have @xmath52 if @xmath53 , which agrees with the rc magnitude found by ivanov et al .
( 2005 ) at larger radii .
there is also a drop in the number of stars in the @xmath54 magnitude interval fainter than the rc .
the artificial star experiments indicate that the data are complete to k@xmath55 , and so the drop in number counts immediately below the rc in the cmd is not due to sample incompleteness .
the change in number counts near k@xmath56 is examined in the lower panel of figure 4 , where cumulative number counts in 0.2 magnitude intervals are shown .
the green dashed line is a least squares fit to the cumulative counts with @xmath57 .
the rate of growth in
number counts changes significantly near k@xmath56 . models of stellar evolution predict such a change to occur at magnitudes below the rc ( see below ) .
van helshoecht & groenewegen ( 2007 ) examine the brightness of the rc in clusters that span a range of metallicities and ages .
they conclude that m@xmath5(rc)@xmath58 in systems that have metallicities within a few tenths of a dex of solar and ages between 0.3 and 8 gyr . assuming that gc01 falls within this age range and has a near - solar metallicity ,
then it has a distance modulus @xmath59 , corresponding to a distance of 5.2 kpc .
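the distance follows directly from the de - reddened rc magnitude via the standard distance modulus relation . in the sketch below the default m@xmath5(rc ) = -1.57 is our reading of the van helshoecht & groenewegen ( 2007 ) calibration and should be verified against that paper :

```python
def distance_from_rc(k0_rc, m_k_rc=-1.57):
    """Distance modulus and distance (pc) from the de-reddened
    apparent K magnitude of the red clump; the default m_k_rc is an
    assumed calibration value."""
    mu = k0_rc - m_k_rc                # distance modulus mu = m - M
    d_pc = 10.0 ** (1.0 + mu / 5.0)    # d = 10**(1 + mu/5) parsecs
    return mu, d_pc
```

a de - reddened rc magnitude of k@xmath41 12.0 then yields a distance modulus near 13.6 and a distance near 5.2 kpc , as found above .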
the @xmath6 lf of gc01 , constructed from the de - reddened @xmath33 cmd , is shown in figure 5 .
the range of magnitudes has been restricted to those where artificial star experiments predict that the data are complete .
a 0.5 magnitude bin width was adopted as it is the smallest that would produce meaningful numbers of stars per bin in this magnitude range .
the discussion that follows will not change significantly if a different starting point for binning is adopted .
the models that are compared with the gc01 lf ( see below ) were constructed using the same binning parameters as the observations in an effort to further mitigate against binning errors .
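binning the observed and model samples with identical parameters can be done by constructing explicit bin edges once and re - using them ; a minimal sketch ( not the authors' code ) :

```python
import numpy as np

def binned_lf(mags, bin_start, bin_width=0.5, n_bins=None):
    """Luminosity function with explicit bin edges, so that model and
    observed LFs are built with identical binning parameters."""
    mags = np.asarray(mags, dtype=float)
    if n_bins is None:
        n_bins = int(np.ceil((mags.max() - bin_start) / bin_width))
    edges = bin_start + bin_width * np.arange(n_bins + 1)
    counts, _ = np.histogram(mags, bins=edges)
    return edges, counts
```

passing the same `bin_start` and `bin_width` to both the data and the model samples removes binning offsets from the comparison .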
the shape of the lf in figure 5 provides clues to the age of gc01 , although features in lfs can be affected by stellar variability and uncertainties in the line - of - sight extinction , both of which cause smearing along the magnitude axis .
davidge ( 2000 ) discusses the @xmath6 lf of the metal - rich globular cluster ngc 6528 , and those data provide an empirical point of comparison for gc01 .
the lf of ngc 6528 shown in figure 3 of davidge ( 2000 ) climbs towards fainter magnitudes with the rc forming a pronounced peak .
a smaller peak due to the rgb - bump ( iben 1968 ) is seen @xmath60 magnitude fainter than the rc .
finally , there is a marked jump in the ngc 6528 lf @xmath61 magnitudes fainter than the rc that is due to the onset of the sub - giant branch ( sgb ) .
there are similarities and differences between the gc01 and ngc 6528 lfs .
the amplitude of the rc with respect to fainter stars in gc01 is comparable to that in ngc 6528 .
however , when considered over a wide range of magnitudes the lf of gc01 is more - or - less flat , and the ratio of stars that are brighter than the rc to those that are fainter is higher in gc01 than in ngc 6528 .
unfortunately , the gc01 data do not go faint enough to sample the magnitude where the onset of the sgb occurs in ngc 6528 .
model lfs of simple stellar systems ( ssps ) constructed from the marigo et al .
( 2008 ) isochrones are compared with the gc01 lf in figure 5 .
the distance modulus applied to each model was set to match the brightness of the rc predicted by that model .
the models have been scaled along the vertical axis to match the observations between k@xmath62 and 12 .
this magnitude interval contains the majority of detected stars and is where the sample is statistically complete .
the amplitude of the rc in the lf contains information about age .
the models in figure 5 demonstrate that the amplitude of the drop to the faintward side of the rc is age - sensitive .
similar behaviour can be seen in the compilation of open cluster cmds examined by van helshoecht & groenewegen ( 2007 ) that were used to establish their m@xmath5 calibration of the rc .
the 9 clusters in their figure 8 that show few if any stars faintward of the rc ( ic4651 , ngc 2090 , ngc 2380 , ngc 2477 , ngc 2527 , ngc 3680 , ngc 3960 , ngc 5822 , and ngc 7789 ) have a mean age log(t@xmath63 ) @xmath64 , where the uncertainty is the formal error of the mean and the ages are taken from table 3 of van helshoecht & groenewegen ( 2007 ) . in contrast , the mean age of the clusters that have a well - defined sequence faintward of the rc ( be 39 , mell 66 , ngc 188 , ngc 1817 , ngc 2243 , ngc 2506 , ngc 2582 , ngc 6633 , ngc 6791 , ngc 6819 ) is log(t@xmath63 ) @xmath65 .
the difference in mean age between these two groups is significant at the @xmath66 level .
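the significance quoted here is the usual one for a difference of two means with independent uncertainties , with the standard errors added in quadrature ( an illustrative sketch , not the authors' script ) :

```python
import math

def mean_difference_significance(mean1, sem1, mean2, sem2):
    """Number of sigma separating two means, given their standard
    errors of the mean (errors combined in quadrature)."""
    return abs(mean1 - mean2) / math.hypot(sem1, sem2)
```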
the lfs of the log(t@xmath63)=8 and log(t@xmath63)=9 populations greatly over - estimate the amplitude of the drop in the lf faintward of @xmath67 , although the former model matches the lf shape and number of stars when k@xmath68 . in both cases the difference between the k@xmath56 and 12.5 bins is exceeded by the models at more than the @xmath69 level .
the log(t@xmath63 ) = 9 model also predicts a steep rise in the number counts at magnitudes brighter than the rc and a large increase in number counts at k@xmath70 due to the onset of the main sequence .
corresponding features are not seen in the observations .
the log(t@xmath63)=9.9 model matches best the entire lf , although the agreement is far from ideal as the model does not reproduce the overall flat nature of the gc01 lf .
in fact , the bright portions of the gc01 lf show similarities to the log(t@xmath63)=8 model , while the fainter portions show properties that are consistent with the log(t@xmath63 ) = 9.9 model . while none of the models provide an ideal match with the observations , the comparisons in figure 5 suggest that gc01 might contain a sizeable population of stars with log(t@xmath63 ) @xmath71 based on the drop in number counts faintward of the rc .
the @xmath32 and @xmath33 cmds of gc02 are shown in figure 6 .
the cmds were restricted to sources imaged in pick - off # 1 , as only a single exposure per filter was recorded with the cluster centered in pick - off # 2 , thereby preventing the suppression of bad pixels .
the error bars show the @xmath72 dispersion predicted from the artificial star experiments .
the @xmath32 cmd contains only a modest number of objects , due to the high line - of - sight extinction towards gc02 , which limits the depth of the @xmath31 observations .
the solid line on the @xmath32 cmd is the fiducial sequence from figure 2 of kurtev et al .
this relation passes through the points in our cmds , suggesting that the reddening towards the center of gc02 is similar to that at larger radii .
this agreement also suggests that unlike gc01 there is probably not substantial differential reddening near the center of gc02 .
the @xmath33 cmd of gc02 is more richly populated than the @xmath32 cmd , owing to the lower line - of - sight extinction in @xmath34 when compared with @xmath31 . if a@xmath73 mag ( see below ) then the total extinction in @xmath31 is @xmath74 magnitudes higher than in @xmath34 , and this accounts for the difference in 50% completeness levels between the two cmds in figure 6 .
the scatter in the @xmath33 cmd is comparable to that in the gc01 cmd in figure 2 , although the gc02 @xmath33 cmd goes 2 - 3 magnitudes deeper .
the difference in photometric depth is due to the lower density of sources at a given @xmath6 in gc02 , with the result that crowding sets in at a fainter magnitude than in gc01 . in the appendix
it is shown that the [ 3.6 ] surface brightness near the center of gc02 is @xmath74 magnitudes / arcsec@xmath75 lower than in gc01 . if it is assumed that the two clusters have the same distances and that their lfs have the same shape but are scaled according to surface brightness , then this lower surface brightness can account for much of the difference in depths between the gc01 and gc02 observations . the locations of points in gc02 on the @xmath11 tcd are shown in figure 3 . as was the case with gc01 , the nishiyama et al .
( 2009 ) reddening vector links the fiducial and observed sequences . applying the procedure discussed in section 3.1 ,
the mean reddening based on the tcd is @xmath76 if the stars have a k3iii spectral - type .
this is based on only a handful of measurements , and an estimate that involves more points can be obtained using the mean color in the @xmath33 cmd . assuming a k3iii spectral - type then e(h - k ) = 1.26 , so that a@xmath77 , with an estimated uncertainty of @xmath78 magnitude .
this reddening is adopted for the remainder of the paper given the larger number of points involved in its calculation . if the rc occurs near @xmath79 ( kurtev et al .
2008 ) then the distance modulus of gc02 is @xmath80 , corresponding to a distance of @xmath81 kpc . as with gc01
, the rc magnitude calibration from van helshoecht & groenewegen ( 2007 ) has been adopted , and the @xmath82 magnitude uncertainty in the m@xmath5 of the rc has been included when calculating the uncertainty in the distance .
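folding the calibration uncertainty into the distance amounts to propagating the distance modulus error through d = 10**(1 + mu/5 ) ; a short sketch under that standard propagation formula ( illustrative only ) :

```python
import math

def distance_and_error(mu, sigma_mu):
    """Distance (pc) and its uncertainty from a distance modulus,
    propagating sigma_mu through d = 10**(1 + mu/5):
    sigma_d = d * ln(10)/5 * sigma_mu."""
    d = 10.0 ** (1.0 + mu / 5.0)
    sigma_d = d * math.log(10.0) / 5.0 * sigma_mu
    return d, sigma_d
```

a distance modulus of 13.7 with a 0.3 magnitude uncertainty , for example , corresponds to roughly 5.5 @xmath41 0.8 kpc .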
the majority of stars in the cmds are fainter than the rc , indicating that ircs@xmath41raven may have detected stars in gc02 that are evolving on the lower giant branch .
this opens the possibility of estimating an age based on the presence / absence of the sgb .
the @xmath6 lf of gc02 , with number counts taken from the @xmath33 cmd , is shown in figure 7 .
there is an increase in number counts between @xmath83 and 17 , at which point the lf levels off .
there are no stars detected near @xmath84 , likely due to the modest density of stars in this field .
the lf of gc02 thus differs from that of gc01 in figure 5 , which is flat in the 1.5 magnitude interval fainter than the rc .
kurtev et al .
( 2008 ) suggest that gc02 is an old metal - rich globular cluster . as such
, it should have a lf that is similar to those of the old , metal - rich clusters ngc 6528 and liller 1 .
however , the onset of the sgb that occurs @xmath85 magnitudes in @xmath6 fainter than the rc in the lfs of the metal - rich globular clusters ngc 6528 and liller 1 that are shown in figure 3 of davidge ( 2000 ) is not evident in the gc02 lf .
the absence of a sgb could indicate that the distance modulus of gc02 is in error , although in section 4.2.2 it is shown that the brightness of the rgb - tip in gc02 measured from spitzer images is consistent with that found by kurtev et al .
( 2008 ) .
another possibility is that gc02 may have an age that is very different from that of ngc 6528 and liller 1 .
model lfs constructed from solar - metallicity isochrones from marigo et al .
( 2008 ) are compared with the observations in figure 7 .
the log(t@xmath63 ) = 9.2 and 9.9 models are shown for a distance modulus of 13.7 , and these predict that the rc occurs in the @xmath86 to 14.5 interval , as observed
. however , the rc in the log(t@xmath63 ) = 9.0 model occurs @xmath54 magnitudes fainter than found by kurtev et al .
( 2008 ) if the distance modulus is 13.7 .
a distance modulus of 13.2 was thus assumed for this model to force agreement with the observed magnitude of the rc .
there are sizeable error bars at all magnitudes in figure 7 , and the modest number of stars means that the amplitude of the rc with respect to stars in adjacent magnitude bins , which was used to explore the age of gc01 , can not be used to constrain the age of gc02 . neither the log(t@xmath63 ) = 9.2 or 9.9 models match the overall shape of the lf .
while the log(t@xmath63 ) = 9.9 model agrees with the number counts between @xmath83 and 17 , the model counts climb when @xmath87 due to the onset of the sgb , and this is not seen in the observations .
a similar disagreement is also seen near the faint end of the log(t@xmath63 ) = 9.2 model .
observations at wavelengths @xmath88 m provide information about the properties of late - type cluster members .
the line - of - sight extinction at these wavelengths is lower than at shorter wavelengths , and when compared with the nir and visible regions there is also improved contrast between the reddest stars and the ( bluer ) main body of the cluster .
this raises the possibility that bright red stars in the dense central cluster regions might be resolved with only minimal contamination from bluer , intrinsically fainter cluster members .
finally , the seds of stars at these wavelengths provide checks on the reddenings measured at shorter wavelengths .
two datasets are used in this section to examine the photometric properties of stars in gc01 and gc02 at wavelengths longward of @xmath89 m .
one dataset consists of the narrow - band images that were recorded with raven@xmath41ircs and were described in section 2 .
while having a modest science field , these images have angular resolutions that approach the diffraction limit of an 8 meter telescope , and so provide checks on crowding among bright red stars in datasets that have poorer angular resolutions .
these data are also used to extend the seds of bright stars in both clusters to wavelengths longward of @xmath89 m , and are used to check reddening .
the other dataset consists of [ 3.6 ] and [ 4.5 ] images that were recorded as part of the glimpse survey .
the spitzer observations cover a large area on the sky , allowing a comprehensive census of the brightest stars in and around the clusters .
details of the glimpse survey are discussed by benjamin et al .
the survey was conducted in all four irac bands with an exposure time per 1.2 arcsec pixel of 2 seconds .
the images used here were extracted from post - basic calibrated data ( pbcd ) mosaics that have been re - sampled to 0.6 arcsec pixels .
@xmath90 degree sections of the pbcd mosaics that are centered on both clusters were downloaded from the nasa / ipac infrared science archive .
the angular resolution of the [ 3.6 ] and [ 4.5 ] observations is @xmath91 arcsec fwhm ( fazio et al .
2004 ) , potentially complicating efforts to resolve individual stars near the cluster centers .
a [ 3.6 ] image of each cluster is shown in figure 8 .
photometric measurements of the spitzer images of both clusters and of the raven gc01 observations were made with allstar ( stetson & harris 1988 ) , with psfs constructed using the procedures described in section 3 . because of the low stellar density , stellar brightnesses in the gc02 raven h@xmath92o data were measured with the phot routine in daophot ( stetson 1987 ) .
the spitzer photometry was calibrated using the zeropoints listed in table 7 of reach et al .
the calibration of the narrow - band measurements is based on observations of gliese 748 ( gl748 ) .
the sed of gl748 in the @xmath93 m interval is assumed to follow that of gl273 , which has the same spectral - type as gl748 ( m3.5v ) and has flux densities tabulated by rayner et al .
the magnitudes measured from the narrow - band observations are in the ab system ( oke & gunn 1983 ) .
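the ab system ties magnitudes directly to flux density , with the zeropoint corresponding to 3631 jy ; the conversion can be sketched as follows ( illustrative , not the calibration script used here ) :

```python
import math

def ab_magnitude(flux_jy):
    """AB magnitude (oke & gunn 1983) from a flux density in jansky;
    the AB zeropoint corresponds to 3631 Jy, i.e.
    m_AB = -2.5 * log10(f_nu / 3631 Jy)."""
    return -2.5 * math.log10(flux_jy / 3631.0)
```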
gl748 is a binary system , with a component separation of 0.1 - 0.2 arcsec , that is listed as a photometric standard by elias et al ( 1982 ) and leggett et al .
however , after the raven@xmath41ircs data were obtained , we became aware of the work of franz et al .
( 1998 ) , who found that the difference in @xmath94 magnitude between the two components differed by up to 0.24 magnitudes over a 2.5 year period .
the standard deviation of the magnitude difference obtained over 14 different epochs is @xmath95 magnitudes .
while this is a source of concern , the narrow - band filters sample the tail end of the sed of both components of gl748 , and so the sed shape and hence color of gl748 at wavelengths @xmath96
m likely does not vary significantly with time .
the uncertainty in the photometric calibration of the narrow band measurements is estimated to be @xmath78 magnitudes , and the zeropoints are shown in table 2 .
this calibration of the narrow - band photometry indicates that the overall throughput of raven@xmath41ircs drops considerably with wavelength when @xmath97 m . we note that the raven optics were not designed to work at these wavelengths , and so the poor throughput is not a surprise .
the @xmath98 and @xmath99 cmds of gc01 are shown in figure 9 .
the reddening vector has a near - vertical trajectory in both cmds , and so differential reddening mainly blurs features in the cmds along the magnitude axis .
the smearing is expected to be @xmath95 magnitudes along the pah axis based on the dispersion in the extinction found from the nir tcd .
smearing along the color axis is modest , and the giant branch of gc01 is clearly seen in both cmds .
the narrow - band measurements can be used to check the reddening estimated from the tcd in the previous section .
the mean sed of gc01 stars in the @xmath24 m interval , normalized to the signal in @xmath6 , are shown in the top panel of figure 10 .
also plotted in figure 10 is the sed of the k3iii star hr8925 based on the @xmath100 magnitudes and the flux density measurements given by rayner et al .
the sed of hr8925 has been reddened by applying the nishiyama et al .
( 2009 ) reddening law for a@xmath101 and 1.4 , which is the @xmath72 range in a@xmath5 found for gc01 in section 3.1 .
the mean gc01 seds between 1 and @xmath102 m match that of the reddened hr8925 sed ; the long wavelength measurements are thus consistent with the extinction found at shorter wavelengths .
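The reddening comparison hinges on applying a power-law NIR extinction curve to a template SED. A minimal sketch of that step follows, assuming a slope of alpha ~ 2.0 as a rough stand-in for the Nishiyama et al. (2009) law and a Ks-band normalization at 2.16 micron; both are assumptions for illustration, not the paper's exact curve.

```python
import numpy as np

def redden_sed(wavelength_um, flux, a_ks, alpha=2.0):
    # power-law nir extinction curve: a_lambda = a_ks * (lambda / 2.16)^-alpha,
    # normalized at the ks band (2.16 micron); alpha ~ 2.0 is an assumption that
    # roughly approximates the nishiyama et al. (2009) slope
    a_lambda = a_ks * (np.asarray(wavelength_um, dtype=float) / 2.16) ** (-alpha)
    # dim the flux by the wavelength-dependent extinction in magnitudes
    return np.asarray(flux, dtype=float) * 10.0 ** (-0.4 * a_lambda)

# extinction weakens toward longer wavelengths, so mir points move less than nir ones
nir = redden_sed(2.16, 1.0, a_ks=1.0)  # flux scaled by 10^-0.4 at the ks band
mir = redden_sed(4.32, 1.0, a_ks=1.0)  # a_lambda is only 0.25 mag here
```

Because the curve falls steeply with wavelength, the long-wavelength end of a template SED is nearly unaffected by even a few magnitudes of A_Ks, which is why the MIR points provide a useful anchor for the reddening estimate.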
the pah observations of gc01 have an angular resolution that is @xmath103 finer than the nir measurements , and so can be used to assess crowding in the nir data .
the pah lf constructed from sources in the @xmath98 cmds is shown in figure 11 .
the onset of the pah lf in figure 11 occurs near @xmath104 . also shown in figure 11
is the @xmath6 lf of gc01 , where only sources that are in the same area that was observed through the pah filter have been counted .
the pah measurements have not been corrected for differential reddening , and so for consistency the @xmath6 number counts in figure 11 were taken from the @xmath33 cmd , which also was not corrected for differential extinction .
the @xmath6 lf in figure 11 has also been shifted along the magnitude axis by an amount equal to the @xmath105 color of a k giant that is viewed through a@xmath106 magnitudes of extinction .
if sources blend together they will appear as a single object that is brighter than the individual components .
if the frequency of blending is high among the most luminous members of a system then a population of objects that is brighter than the individual brightest stars will be seen , and the overall effect of blending on the lf will be to shift it along the magnitude axis to brighter values .
the two lfs in figure 11 agree over a @xmath61 magnitude interval at the bright end , suggesting that crowding is not an issue among the brightest stars in the @xmath6 observations .
the @xmath107,[3.6]-[4.5])$ ] cmds of sources in different annuli centered on gc01 are shown in figure 12 .
the vertical plume in @xmath107 , [ 3.6]-[4.5])$ ] cmds can be populated by a diverse mix of stars with a wide range of effective temperatures .
unlike at shorter wavelengths , and with the exception of the coolest sources , there is only a small dispersion in the intrinsic [ 3.6][4.5 ] colors of stars , as the [ 3.6 ] and [ 4.5 ] filters sample the descending red edge of the sed . comparisons between star counts made from the spitzer observations at various distances from the cluster center indicate that cluster members dominate the number counts out to 30 48 arcsec from the center of gc01 . of the 80 stars with [ 4.5 ] between 8 and 12 in the 30 48 arcsec cmd in figure 12 , source counts in the @xmath108 arcsec cmd suggest that only 15 of these are field stars if non - cluster stars are assumed to be uniformly distributed .
the number of field stars is actually slightly lower than this as there is modest contamination from the outer regions of gc01 at radii @xmath109 arcsec ( see the appendix ) .
the @xmath107 , [ 3.6]-[4.5])$ ] cmd of objects between 18 and 48 arcsec from the center of gc01 is compared with isochrones from marigo et al .
( 2008 ) in figure 13 .
a distance modulus of 13.6 and a@xmath106 has been assumed , with extinction applied according to the nishiyama et al .
( 2009 ) reddening law .
the 60% silicate and 40% alox mix for circumstellar dust from groenewegen ( 2006 ) has been adopted , although the models are not sensitive to the chemistry of the circumstellar envelope at these wavelengths ( e.g. davidge 2014 ) .
models with solar and half - solar metallicities are shown .
boyer et al .
( 2015 ) define extreme agb stars to have unreddened [ 3.6][4.5 ] colors @xmath110 .
if such objects are present in gc01 then they will form a population of objects with [ 4.5 ] @xmath111 that will also have [ 3.6]-[4.5 ] colors that exceed those of the gc01 locus as defined when [ 4.5 ] @xmath111 .
while we can not distinguish between agb stars that do not have warm circumstellar dust envelopes and rgb stars based solely on [ 3.6][4.5 ] colors , cluster members that are brighter than the rgb - tip should be evolving on the agb .
the peak observed stellar brightness in a system depends not only on its age and metallicity , but also on the overall mass of the system , as there is a low probability of occupation in the portions of cmds that sample rapid phases of evolution .
the isochrones predict that the agb may extend to [ 4.8 ] @xmath112 at the distance of gc01 .
there are objects as bright as [ 4.5 ] = 6 in the innermost annulus .
given that all stars in the middle two panels have [ 4.5 ] @xmath113 , if the stars with [ 4.5 ] = 6 are blends then they must be unresolved asterisms made up of multiple stars .
such blending is feasible given the density of moderately bright objects in our field ( e.g. figure 11 ) .
the cmds of objects located between 18 and 48 arcsec from the cluster center indicates that there is a drop in star counts when [ 4.5 ] @xmath114 .
this is not due to saturation in the cores of stellar images , as numerous stars that have [ 4.5 ] between 8 and 6 magnitudes are seen in the right hand panel of figure 12 .
we note that [ 4.5 ] = 8 is more - or - less consistent with the peak brightness found by ivanov et al .
( 2005 ) in @xmath6 if it is assumed that the stars do not have excess thermal emission , as expected given their [ 3.6][4.5 ] colors .
the expected location of the rgb - tip is indicated for each model , and unless gc01 has an age @xmath115 gyr , the vast majority of stars detected in the spitzer images within 60 arcsec of the center of gc01 are evolving on the rgb . if the distance modulus and models are correct and the rgb - tip brightness occurs near [ 4.5 ] = 8 , then the models suggest that gc01 has an age between 1 and 2.5 gyr .
the [ 4.5 ] lf of gc01 stars in the 18 48 arcsec interval is shown in figure 14 .
the lf is restricted to [ 4.5 ] @xmath116 , as this is where the artificial star experiments suggest that sample completeness exceeds 50% . a statistical correction for non - cluster stars
was made by subtracting number counts of sources that are more than 60 arcsec from the cluster center after scaling to account for differences in area .
while diffuse cluster light can be traced to radii in excess of 60 arcsec ( see the appendix ) , the density of cluster stars at these radii is much lower than in the inner regions of the cluster , and the star counts are dominated by field stars .
in fact , the correction for field stars produces only a modest change in number counts in the gc01 lf .
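The area-scaled statistical correction for field stars can be sketched as follows; the magnitudes, bin edges, source counts, and areas below are illustrative stand-ins drawn at random, not the actual GC01 photometry.

```python
import numpy as np

def field_corrected_lf(cluster_mags, field_mags, cluster_area, field_area, bins):
    # counts in the cluster aperture minus field counts scaled by the area ratio
    n_cluster, _ = np.histogram(cluster_mags, bins=bins)
    n_field, _ = np.histogram(field_mags, bins=bins)
    return n_cluster - (cluster_area / field_area) * n_field

# illustrative magnitudes drawn at random -- not real photometry
rng = np.random.default_rng(0)
bins = np.linspace(8.0, 13.0, 11)
cluster = rng.uniform(8.0, 13.0, 500)  # inner-region sources (cluster + field)
field = rng.uniform(8.0, 13.0, 300)    # outer-region sources (assumed field only)
lf = field_corrected_lf(cluster, field, cluster_area=1.0, field_area=3.0, bins=bins)
```

The correction assumes the field population is uniformly distributed on the sky, so any residual cluster light in the outer comparison region (as noted in the text) leads to a slight over-subtraction.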
artificial star experiments indicate that the uncertainties in [ 4.5 ] are @xmath117 magnitude in the magnitude range shown , and so bin - to - bin blurring is modest
. comparisons are made with model lfs in figure 14 .
the model lfs do not vary greatly with metallicity in the range of magnitudes considered , and so only solar metallicity models are shown .
the models have been shifted along the vertical axis to match the number counts between [ 4.5 ] = 9 and 11 to avoid magnitudes where the sample is not complete .
the overall shape of the lf is consistent with that predicted by models of stellar evolution .
this is a robust result that is largely independent of the assumed age . given that only a modest number of stars were detected in the h@xmath92o observations of gc02 , we do not examine the @xmath118 cmd of this cluster .
the mean sed of bright stars near the center of gc02 in the @xmath119 m interval is examined in the lower panel of figure 10 .
it should be recalled that gc02 was not observed through the pah and h3 + filters because of the inherent faintness of the member stars ( section 2 ) .
the lack of pah and h3 + measurements notwithstanding , the h@xmath92o observations extend the mean sed well past @xmath120 m .
the dashed red lines show the sed of the k3iii star hr8925 reddened by @xmath78 magnitude about the a@xmath5 found from the @xmath33 cmd .
the seds of bright stars observed near the center of gc02 are consistent with them being highly reddened late - type giants .
the ( [ 4.5 ] , [ 3.6]-[4.5 ] ) cmds of objects in four radial intervals centered on gc02 are shown in figure 15 .
a vertical plume of cluster members is evident at small radii .
there are no objects near the top of the gc02 cmd with [ 3.6][4.5 ] colors that fall redward of the main locus of points , which would be candidate highly evolved agb stars belonging to gc02 .
the brightest stars in the 24 36 arcsec cmd have magnitudes that are comparable to those in the 0 18 arcsec cmd , suggesting that the brightest objects detected near the center of gc02 in the [ 3.6 ] and [ 4.5 ] images may not be blends .
the cmd of gc02 is compared with isochrones from marigo et al .
( 2008 ) in figure 16 .
there is a @xmath121 magnitude offset along the [ 3.6][4.5 ] axis between the gc02 sequence and the models .
a similar offset is not seen in the gc01 photometry ( e.g. figure 13 ) .
we have compared our psf - based photometric measurements for a sample of isolated objects with [ 3.6 ] between 9 and 10 with those in published glimpse source catalogs made from aperture measurements , and find agreement to within a few hundredths of a magnitude .
thus , the offset in [ 3.6][4.5 ] is not the result of errors in our photometric measurements .
we are unsure as to the origin of the offset in [ 3.6][4.5 ] color , although there is a tendency in the @xmath122 arcsec cmd for objects with [ 4.5 ] @xmath123 to have larger [ 3.6][4.5 ] colors than those with [ 4.5 ] @xmath124 .
there is heavy extinction towards gc02 , and a correlation between magnitude and color will occur if the brightest stars , many of which are presumably nearby if they are not cluster members , are subject to lower levels of extinction than the fainter objects , which presumably tend to be more distant , and so have a greater chance of being more heavily obscured .
however , the size of the offset in [ 3.6][4.5 ] is hard to explain with reddening .
uncertainties in a@xmath5 of a few magnitudes have only a minor impact on the position of the models at these wavelengths , and the extinction towards gc02 would have to be a@xmath125 to produce the color difference . as for the possibility of an abnormal reddening law towards gc02 , the variations in line of sight extinction that are seen at visible wavelengths are much reduced at wavelengths longward of @xmath126 m ( e.g. indebetouw et al . 2005 ) .
given the unexplained red [ 3.6][4.5 ] colors we caution that the [ 3.6 ] and [ 4.5 ] photometry of gc02 may have uncertainties of a few tenths of a magnitude .
uncertainties in the photometry on the scale of 10 - 20% notwithstanding , the models predict that stars evolving on the agb in gc02 will depart from a near - vertical trend @xmath127 magnitudes above [ 4.5]=9.5 , which is the brightness that we assign to the rgb - tip .
the uncertainties in the calibration of the spitzer data near gc02 indicate that the rgb - tip brightness is uncertain by a few tenths of a magnitude .
still , this [ 4.5 ] magnitude for the rgb - tip corresponds to @xmath128 , which is the magnitude of the brightest star along the cluster ridgeline drawn in the middle panel of figure 2 of kurtev et al .
the isochrones suggest that the majority of stars detected in the spitzer images are evolving on the rgb , although there is one object with [ 4.5 ] = 7.3 that may be on the agb if it is a cluster member .
the intrinsic brightness of the rgb - tip in gc02 is similar to that in gc01 , and if the rgb - tip occurs near [ 4.5]=9.5 then the isochrones predict an age between 1 and 2.5 gyr . the [ 4.5 ] lf of stars between 0 and 36 arcsec in gc02 is shown in figure 17 .
the entries have been corrected for non - cluster sources by subtracting the lf of objects with @xmath122 arcsec after adjusting for differences in areal coverage . as was the case for gc01 ,
while stars that belong to gc02 are present at radii @xmath109 arcsec , their number density is low when compared with those at smaller radii ( e.g. the appendix ) , and stars in the field dominate the number counts .
the fractional contamination by non - cluster stars becomes significant when [ 4.5 ] @xmath123 , and so only this part of the lf is shown .
the [ 4.5 ] lf is compared with solar metallicity model lfs in figure 17 .
the statistical significance of any difference between the observations and the various models in figure 17 is low .
the shallow nature of the glimpse survey prevents the rc from being sampled in gc02 , and this severely limits conclusions that might otherwise be drawn from comparisons with model lfs .
ao - corrected images that span the @xmath24 m wavelength interval have been used to probe the stellar contents of the star clusters gc01 and gc02 .
these clusters are heavily reddened and are subject to significant contamination from non - cluster stars owing to their location at low galactic latitudes . nir and mir imaging of their central regions , where the fractional contamination from foreground and background stars is lowest , offers a promising means of determining their age and distance .
@xmath100 and narrow - band images , with the latter sampling the @xmath129 m wavelength interval , were recorded with the ircs and the raven ao science demonstrator on the subaru telescope .
stars in the narrow - band images have a fwhm that is within a few hundredths of an arcsec of the telescope diffraction limit , demonstrating that good image quality can be delivered by moao systems that work in open - loop . while the narrow - band images are shallower than the nir images , they provide a means of checking if crowding has affected photometric measurements obtained from images that have poorer angular resolutions
. the narrow - band images also allow the seds of cluster stars to be extended into the mir , providing additional wavelength leverage for checking reddening estimates .
the seds obtained here cover the @xmath24 m wavelength interval , and are consistent with the brightest objects in each field being heavily reddened k giants .
the combined nir and mir seds are consistent with the mean reddenings obtained from the nir photometry , which are a@xmath130 for gc01 and a@xmath131 for gc02 .
archival [ 3.6 ] and [ 4.5 ] spitzer images that were recorded for the glimpse survey have also been examined . while having an angular resolution that is almost an order of magnitude larger than that of the ircs @xmath41 raven images , the angular coverage of the glimpse survey allows a statistical assessment of non - cluster sources to be made .
the cmds constructed from the spitzer data provide information about the luminosity and spatial distribution of bright stars in each cluster .
the rgb - tip measurements obtained from the spitzer data are consistent with those made previously in the nir .
we preface our discussion of these clusters with a cautionary note .
the bright stellar content in the central regions of some dynamically evolved clusters is not representative of what is seen outside of their cores ( e.g. davidge 1995 ) , and this may bias efforts to probe stellar content .
number counts made from the spitzer images suggest that stars in gc01 can be detected in statistically significant numbers with respect to foreground / background objects out to at least 30 arcsec from the cluster center , and the same holds for gc02 .
assuming a distance of 5.2 kpc for gc01 , then 30 arcsec corresponds to @xmath132 parsec , which is comparable to the core radius of a typical globular cluster , and is an order of magnitude smaller than the typical half light radius ( e.g. van den bergh & mackey 2004 ) . in the appendix
we show that light from gc01 and gc02 can be traced out to radii of at least 100 arcsec .
thus , a rich population of cluster members awaits discovery in the large areas of gc01 and gc02 that have not been explored to date , and these stars will undoubtedly provide additional clues about the distance and age of these clusters . contamination from non - cluster stars presents a daunting obstacle for efforts to identify cluster stars at large radii .
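As a consistency check on the numbers quoted above, the distance modulus of 13.6 adopted for gc01 together with the small-angle formula reproduces both the ~5.2 kpc distance and the sub-parsec physical scale corresponding to 30 arcsec; a minimal sketch:

```python
def modulus_to_distance_pc(mu):
    # invert the distance modulus relation: mu = 5 * log10(d_pc) - 5
    return 10.0 ** ((mu + 5.0) / 5.0)

def arcsec_to_parsec(theta_arcsec, distance_pc):
    # small-angle approximation; one radian = 206265 arcsec
    return theta_arcsec / 206265.0 * distance_pc

d_pc = modulus_to_distance_pc(13.6)     # ~5.2 kpc, the modulus adopted for gc01
core_pc = arcsec_to_parsec(30.0, d_pc)  # ~0.76 pc for a 30 arcsec radius
```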
gc01 is of interest for studies of the evolution of the galactic disk because it is one of the most massive clusters that may have formed during intermediate epochs ( davies et al . ) .
the formation of large , compact clusters is often associated with interactions and/or starburst events ( e.g. ashman & zepf 2001 ) .
does the age of gc01 coincide with a past event that may have influenced galactic evolution ?
davies et al .
( 2011 ) conclude that gc01 is not an old globular cluster , and assign it an age between 0.3 and 2 gyr , with the most probable age between 0.4 and 0.8 gyr .
davies et al .
( 2011 ) further suggest that the formation of gc01 may be linked to a past encounter between the galactic disk and the magellanic clouds .
in fact , rezaei kh .
et al . ( 2014 ) find peaks in the sfrs of the lmc and smc @xmath133 gyr in the past , which they suggest may be linked to an interaction with the galaxy .
the observations discussed here do not support the formation of gc01 within the past gyr , although the amplitude of the rc in the @xmath6 lf is consistent with the older end of the davies et al .
( 2011 ) age range . while our data suggest that gc01 may be too old to have formed as part of the most recent interaction with the magellanic clouds , it does not rule out its formation during previous interactions .
this being said , proper motion measurements suggest that the magellanic clouds may either be on their first approach to the galaxy or that their orbital period about the galaxy is much longer than once thought ( besla et al .
2007 ) . in any event
, the presence of young clusters with masses @xmath134m@xmath16 like westerlund 1 and the arches suggests that clusters with masses approaching that of gc01 may form naturally throughout the lifetime of the galaxy , without the need of an external trigger .
uncertainties in the origins of gc01 notwithstanding , its study may provide clues into the evolution of compact intermediate age clusters
. there are hints that star formation is occurring along the gc01 line of sight .
spitzer images of gc01 show a prominent dust lane projected against the cluster ( kobulnicky et al .
2005 ) , and the star - to - star differences in mean extinction found in section 3 indicate that the dust distribution is clumpy , with a characteristic size that is consistent with that of individual stellar systems ( e.g. larson 1995 ) .
there are also candidate young stellar objects ( ysos ) seen near gc01 on the sky ( kobulnicky et al . ) .
it is not known if the dust clumps and candidate ysos are physically associated with the cluster , or are chance superpositions .
goudfrooij et al .
( 2014 ) present evidence for multiple periods of star - forming activity in massive lmc clusters , and investigate the characteristics of clusters where such activity might occur .
mechanisms other than multiple episodes of star formation have been proposed to explain the properties of these clusters ( e.g. brandt & huang 2015 ; niederhofer et al .
2015 , and references therein ) .
the estimated mass of gc01 falls within the range of lmc clusters where it has been suggested that multiple episodes of star formation have occurred , and it is intriguing that the @xmath6 lf of gc01 shows characteristics at the bright end that are consistent with young and intermediate age populations , while the faint end of the lf is more consistent with that of an old population .
the age estimate gleaned here from evolved stars can be checked by measuring the brightness of the msto . however , differential reddening smears the photometric measurements , thereby complicating this task .
one strategy to reduce the impact of differential reddening would be to use deep ao - corrected integral field unit spectroscopy in the nir to identify candidate msto stars .
if spectral types can be established then intrinsic colors can be assigned , making it possible to construct a de - reddened cmd that samples the msto .
while gc01 is viewed through a@xmath135 magnitudes of extinction , spectra of its integrated light at optical wavelengths will also provide insights into its stellar content and metallicity .
the detection of deep balmer absorption lines would be one signature of an intermediate age population .
the metallicity of gc01 could also be measured from the strengths of various atomic and molecular features in the integrated spectrum .
given that gc01 falls within the solar circle then a solar or supersolar metallicity would argue that it formed _ in situ_. a metallicity that is one - half solar would be consistent with it having formed from material that likely originated outside of the solar circle if its age is less than a few gyr .
gc02 is a challenging target for stellar content studies as it is heavily extincted , although the absorbing material appears to be uniformly distributed .
kurtev et al .
( 2008 ) consider gc02 to be an old , metal - rich globular cluster .
however , the @xmath6 lf constructed from the raven@xmath41ircs observations does not show the onset of the sgb that is expected if the cluster is old .
tighter constraints on the age of gc02 could be obtained using deeper , diffraction - limited nir images .
for example , if gc02 is old but is more distant than assumed here then the sgb should show up with deeper images . if gc02 has an intermediate age and
there is no large - scale differential reddening then it should be possible to detect the msto in the @xmath32 cmd of gc02 .
given the distance and mean reddening of gc02 then the msto should occur near @xmath136 and @xmath137 if stars as young as 1 gyr are present .
while the @xmath33 cmd of gc02 in figure 6 does reach the required depth in @xmath6 , the expected separation in color between the giant branch and 1 gyr main sequence stars on the @xmath33 cmd at this brightness is only @xmath121 magnitudes , which is comparable to the uncertainties in the photometry at this magnitude .
the integrated spectrum of gc02 at visible - red wavelengths should also contain deep hydrogen lines if it has an age @xmath138 gyr .
published light profiles of gc01 ( kobulnicky et al . 2005 ; ivanov et al .
2005 ; davies et al . 2011 ) and gc02 ( kurtev et al .
2008 ) are based on star counts and/or isophotal measurements .
these are restricted to the central few tens of arcsec in each cluster , as this is where the number density of cluster members exceeds that of field stars .
the light profiles of gc01 and gc02 can be extracted out to much larger radii if the brightest resolved field stars are suppressed and the cluster light profiles are azimuthally smoothed to boost the signal - to - noise ratio .
we demonstrate this using glimpse [ 3.6 ] images .
the [ 3.6 ] image of each cluster was rotated about the cluster center in 15@xmath139 increments and the rotated images were then combined by taking the median flux at each rotated pixel location .
this process multiplexes the faint signal from the cluster light profiles at large radii and suppresses signal from individual objects , the majority of which will be field stars at large radii .
some artifacts of individual stars survive the median combination procedure , and these were suppressed by applying a @xmath140 arcsec ( i.e. @xmath141 the fwhm of the [ 3.6 ] psf ) running top - hat filter .
this procedure tacitly assumes circular isophotes , and obliterates information over the angular scale of the smoothing filter , with the result that the light distribution near the cluster center can not be tracked .
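The rotate-and-median procedure described above can be sketched as below; the synthetic image and the bright "field star" are made up for illustration, and the final running top-hat smoothing step is omitted.

```python
import numpy as np
from scipy.ndimage import rotate

def azimuthal_median(image, step_deg=15):
    # rotate the image about its center in fixed increments and take the
    # per-pixel median: this multiplexes azimuthally symmetric cluster light
    # while suppressing individual (mostly field) stars
    angles = np.arange(0, 360, step_deg)
    stack = [rotate(image, a, reshape=False, order=1) for a in angles]
    return np.median(stack, axis=0)

# synthetic test image: a smooth circular profile plus one bright point source
y, x = np.mgrid[-50:51, -50:51]
r = np.hypot(x, y)
img = np.exp(-r / 20.0)
img[10, 40] += 5.0  # a made-up "field star" far from the center
smoothed = azimuthal_median(img)
```

Because the point source occupies a different pixel in each rotated copy, the median at its original location reverts to the underlying profile value, while the circularly symmetric cluster light is preserved.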
the [ 3.6 ] surface brightness profile of each cluster is shown in figure a1 , and light can be traced out to distances of at least 100 arcsec from the cluster centers .
gc01 is much more centrally concentrated than gc02 , and the light profile of the former may be truncated at radii @xmath142 arcsec .
signal from gc02 can be traced out to at least 200 arcsec .

we discuss images of the star clusters glimpse c01 ( gc01 ) and glimpse c02 ( gc02 ) that were recorded with the subaru ircs .
distortions in the wavefront were corrected with the raven adaptive optics ( ao ) science demonstrator , allowing individual stars in the central regions of both clusters where the fractional contamination from non - cluster objects is lowest to be imaged .
in addition to @xmath0 and @xmath1 images , both clusters were observed through a narrow - band filter centered near 3.05@xmath2 m ; gc01 was also observed through two other narrow - band filters that sample longer wavelengths .
stars in the narrow - band images have a fwhm that is close to the telescope diffraction limit , demonstrating that open loop ao systems like raven can deliver exceptional image quality .
the near - infrared color magnitude diagram of gc01 is smeared by non - uniform extinction with a @xmath3 dispersion @xmath4 magnitudes .
spatial variations in a@xmath5 are not related in a systematic way to location in the field .
the red clump is identified in the @xmath6 luminosity function ( lf ) of gc01 , and a distance modulus of 13.6 is found .
the @xmath6 lf of gc01 is consistent with a system that is dominated by stars with an age @xmath7 gyr . as for gc02 ,
the @xmath6 lf is flat for @xmath8 , and the absence of a sub - giant branch argues against an old age if the cluster is at a distance of @xmath9 kpc .
archival spitzer [ 3.6 ] and [ 4.5 ] images of the clusters are also examined , and the red giant branch - tip is identified .
it is demonstrated in the appendix that the [ 3.6 ] surface brightness profiles of both clusters can be traced out to radii of at least 100 arcsec .
the deceleration parameter , @xmath0 , is poorly known at present .
this is why many parameterizations of this key quantity , such as @xmath1 , @xmath2 , @xmath3 , @xmath4 , @xmath5 , and more complex than these , have been proposed to reconstruct @xmath6 from observational data ( see e.g. refs .
- ) . however , the first parameterization is appropriate for @xmath7 only , and the others diverge in the far future ( as @xmath8 ) .
here we propose three model independent parameterizations of @xmath6 with two free parameters only , valid from matter domination ( @xmath9 ) onwards ( i.e. , up to @xmath10 ) , based on practical and theoretical reasons and independent of any cosmological model .
they obey by construction the asymptotic conditions , @xmath11 , @xmath12 , and a further condition , @xmath13 , which is valid at least when @xmath14 .
the first one arises because at sufficiently high redshift the universe was matter dominated .
the other conditions are based on the second law of thermodynamics when account is made of the entropy of the causal horizon .
the latter dominates over all other entropy sources@xcite and is proportional to the horizon area , @xmath15 .
then , the second law of thermodynamics@xcite imposes @xmath16 at all times , and @xmath17 at least at late times ( derivatives are taken with respect to the scale factor ) .
this translates into @xmath18 ( at any redshift ) , and that @xmath14 and @xmath19 as @xmath20 ( see ref . for details ) .
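As an illustration of the two fixed-point conditions, the flat LCDM deceleration parameter happens to satisfy both asymptotes (q -> 1/2 in the matter era, q -> -1 as z -> -1); the matter density value below is illustrative, and this model is used here only as a numerical check, not as one of the proposed parameterizations.

```python
def q_lcdm(z, omega_m=0.3):
    # flat lcdm deceleration parameter, used purely as an illustration:
    # q(z) = -1 + (3/2) * om*(1+z)^3 / (om*(1+z)^3 + 1 - om)
    m = omega_m * (1.0 + z) ** 3
    return -1.0 + 1.5 * m / (m + 1.0 - omega_m)

# the two fixed points: matter domination (q -> 1/2) and the far future at
# z = -1 (q -> -1), as required by the second-law constraints in the text
early = q_lcdm(1.0e6)
late = q_lcdm(-1.0)
```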
usually one parameterizes a function in a given interval by interpolating it between the two end points of the interval ( provided the value taken by the function at these two points is known ) . in actual fact , the parameterizations of @xmath6 proposed so far have just one fixed point : the asymptotic value at high redshift ( @xmath21 must converge to @xmath22 when @xmath9 ) . the other , @xmath23 , is not in reality a fixed point , because the value of the deceleration parameter at @xmath24 is not well known and is therefore left free . the parameterizations proposed here have two fixed points , one in the far past ( @xmath25 ) and the other in the far future ( @xmath26 ) .
the second fixed point conforms to the thermodynamical constraints imposed by the second law .
we believe this means a clear advantage over previous parameterizations of @xmath6 , with just one fixed point .
while parameterizations that also fix @xmath21 at @xmath10 can be found in the literature , they do so arbitrarily , i.e. , without grounding in sound physics .
we propose three parameterizations of @xmath6 , namely : @xmath27 @xmath28 and @xmath29 all of them satisfy the conditions stated above .
their two free parameters , @xmath30 and @xmath31 , were constrained using data from sn ia ( 557 data points ) , bao combined with cmb ( 7 data points ) and the history of the hubble factor ( 24 data points ) .
table [ aba : tbl1 ] shows their best fit values and their 1@xmath32 confidence levels .
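The constraints on the two free parameters come from chi-square fits to the combined data sets. A generic sketch of such a two-parameter fit is given below, using a hypothetical stand-in model and noiseless synthetic data rather than the actual SN Ia, BAO/CMB, and H(z) compilations.

```python
import numpy as np
from scipy.optimize import minimize

def chi2(params, z, obs, err, model):
    # total chi-square of a two-parameter model against data with gaussian errors
    return np.sum(((obs - model(z, *params)) / err) ** 2)

# hypothetical stand-in model with two free parameters: q(z) = q0 + q1 * z / (1 + z)
toy_model = lambda z, q0, q1: q0 + q1 * z / (1.0 + z)

z = np.linspace(0.01, 2.0, 24)
obs = toy_model(z, -0.6, 0.9)   # noiseless synthetic "data" with known parameters
err = np.full_like(z, 0.1)
fit = minimize(chi2, x0=[0.0, 0.0], args=(z, obs, err, toy_model))
```

In practice each data set contributes its own chi-square term and the confidence levels quoted in the table follow from the curvature of the total chi-square surface around the minimum.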
likewise , figure [ aba : fig1 ] depicts the corresponding @xmath6 graphs and that of the @xmath33cdm model fitted to the same sets of data .
details can be found in ref . .
[ aba : tbl1 ] table [ aba : tbl2 ] gives hubble s constant , @xmath34 ( in km / s / mpc ) , the age of the universe , @xmath35 ( in gyr ) , the deceleration parameter , @xmath23 , and the redshift , @xmath36 , of the transition deceleration - acceleration predicted for the three parameterizations , and the flat @xmath33cdm model .
the latter is included for comparison .
[ aba : tbl2 ]
the three parameterizations proposed here rest solely on the assumptions that the universe is homogeneous and isotropic at large scales and on the second law of thermodynamics .
they agree very well with each other ( especially the first and third ) and with the @xmath33cdm model .
likewise , they also concord with the measurements reported by daly _ et al . _ , in the redshift interval @xmath37 .
it is worthy of mention that , as argued in ref . , our restriction to spatially flat models ( @xmath38 ) is , in reality , very light and well justified .
this work was partially supported by the chilean grant fondecyt n@xmath39 1110230 .
elgarøy and t. multamäki , _ jcap _ 09(2006)002 .
c.a . shapiro and m.s . turner , _ astrophys . j. _ * 649 * , 563 ( 2006 ) .
y. gong and a. wang , _ phys . rev . _ * d 75 * , 043520 ( 2007 ) .
cunha and j.a.s . lima , _ mon . notices r. astron . soc . _ * 390 * , 210 ( 2008 ) .
l. xu and j. lu , _ mod . phys . lett . _ * a 24 * , 369 ( 2009 ) .
b. santos , j.c . carvalho , and j.s . alcaniz , arxiv:1009.2733 .
r. nair , s. jhingan , and d. jain , _ jcap _ 01(2012)018 .
c.a . egan and c.h . lineweaver , _ astrophys . j. _ * 710 * , 1825 ( 2010 ) .
h.b . callen , _ thermodynamics _ ( j. wiley , new york , 1960 ) .
s. del campo , i. duran , r. herrera , and d. pavón , _ phys . rev . _ * d 86 * , 083509 ( 2012 ) .
daly _ et al . _ , _ astrophys . j. _ * 677 * , 1 ( 2008 ) .

we propose and constrain with the latest observational data three parameterizations of the deceleration parameter , valid from the matter era to the far future .
they are well behaved and do not diverge at any redshift . on the other hand , they are model independent in the sense that in constructing them the only assumption made was that the universe is homogeneous and isotropic at large scales . |
French ex-President Nicolas Sarkozy remains in custody for questioning over alleged influence peddling.
Mr Sarkozy was detained near Paris on Tuesday morning in an unprecedented step against a former president.
He is being questioned about whether he sought inside information from a judge concerning an investigation into campaign funding.
In another development, his lawyer, Thierry Herzog, has been placed under formal investigation in the same case.
Mr Sarkozy is hoping to challenge again for the presidency in 2017 and the inquiry is seen as a blow to his hopes of returning to office.
Investigators are trying to find out whether Mr Sarkozy, 59, who was president from 2007 to 2012, had promised a prestigious role in Monaco to a high-ranking judge, Gilbert Azibert, in exchange for information about an investigation into alleged illegal campaign funding.
They are looking into claims that Mr Sarkozy was warned his phone was being bugged as part of the funding probe.
Mr Azibert, one of the most senior judges at the court of appeal, was called in for questioning on Monday as was another judge, Patrick Sassoust.
Analysis: The BBC's Hugh Schofield in Paris
The drip-drip of allegations about Mr Sarkozy, money-raising and misuse of influence, continue to disrupt his much-touted return to frontline politics. Over the past two years the French have become used to regular stories in the press raising awkward questions about their former president's ethics.
Worried by the prying of investigators into claims of illegal party fund-raising, it is alleged that Mr Sarkozy used a judge as point-man in the High Court of Appeal to tell him how proceedings against him were progressing. More serious is whether this judge tried to influence decisions in Mr Sarkozy's favour.
Mr Sarkozy's supporters accuse the investigators of themselves being politically influenced - by the ruling left. How come, they ask, that every time Mr Sarkozy makes a move back towards political life, the media are fed a new twist in the investigations? One side says it is dogged police work. The other says it is harassment.
Sarkozy and France's investigators
This is thought to be the first time a former French head of state has been held in police custody.
His predecessor, Jacques Chirac, was given a suspended prison sentence in 2011 for embezzlement and breach of trust while he was mayor of Paris. But he was never questioned in custody.
Investigators will be able to hold Mr Sarkozy for an initial period of 24 hours but can extend custody for another day. He is being held in Nanterre.
Government spokesman Stephane Le Foll denied any political pressure had been placed on the judicial system to prosecute Mr Sarkozy.
"The justice system is investigating and will follow this through to the end. Nicolas Sarkozy can face justice just like anyone else," Mr Le Foll said.
Mr Sarkozy's allies rallied to support him.
Mayor of Nice, Christian Estrosi, tweeted: "Never has any former president been the victim of such treatment, such an outburst of hatred."
Diaries
An investigation was launched in February into whether Mr Sarkozy had sought inside information about the inquiry into his 2007 election campaign funding.
It has been claimed that late Libyan leader Muammar Gaddafi helped fund the campaign.
It is alleged that Mr Sarkozy was kept informed of proceedings against him while a decision was made over whether his work diaries - seized as part of the funding inquiry - should be kept in the hands of the justice system.
The Court of Cassation ruled in March 2014 that the diaries should not be returned.
Investigators believe the former president was tipped off that his phone was being bugged as part of the inquiry.
Mr Sarkozy insists the allegations against him are politically motivated.
But the BBC's Hugh Schofield in Paris says it is clear they represent another obstacle in the way of his planned return to frontline French politics.
The former president is seeking to regain the leadership of the centre-right UMP party later this year. |||||
Former French president Nicolas Sarkozy's political ambitions have been overshadowed by investigations since he left the Elysee Palace in 2012.
In the latest development, he is to face trial on charges of corruption and abuse of power for allegedly seeking to influence a judge who was looking into suspected illegal financing of his election campaign.
In separate investigations he has also been accused of receiving campaign funding from the late Libyan leader Muammar Gaddafi and is to face trial for allegedly overspending campaign limits in 2012. He denies any wrongdoing.
What is the latest development?
The case for which Mr Sarkozy now faces trial centres around wiretapped phone calls in which he allegedly discussed the idea of offering a prestigious role in Monaco to a high-ranking judge in exchange for information on a financing investigation.
Prosecutors say Mr Sarkozy's lawyer, Thierry Herzog, tried to obtain information from the judge, Gilbert Azibert, about the investigation centred on alleged illicit payments from L'Oreal heiress Liliane Bettencourt to help Mr Sarkozy win the 2007 election. Mr Sarkozy was later cleared of taking any such funds.
The case surfaced in 2014 and Mr Sarkozy became the first former French head of state to be held in police custody before being formally placed under investigation.
He is expected to stand trial along with Mr Herzog and Mr Azibert. They all deny any wrongdoing.
What is the Gaddafi case about?
In March 2018, Mr Sarkozy was questioned in police custody over long-standing claims that he received illicit funding from Col Gaddafi for his 2007 election campaign.
French-Lebanese businessman Ziad Takieddine has previously told the French news website Mediapart that in 2006-07 he handed over three suitcases stuffed with cash to Mr Sarkozy and Claude Guéant, who was his chief of staff.
Mr Takieddine alleged that the cash came from Gaddafi and totalled €5m (£4.4m; $6.2m).
Mr Guéant, who was managing Mr Sarkozy's presidential campaign at the time, told the franceinfo website that he had "never seen a penny of Libyan financing".
Mr Sarkozy also denies any wrongdoing and says some former Gaddafi regime officials want revenge for his decision to send French warplanes during the 2011 Libyan uprising.
What else is Mr Sarkozy accused of?
The other case for which Mr Sarkozy has been told to stand trial is known as the Bygmalion affair and centres on claims that Mr Sarkozy's party, then known as the UMP, worked with a friendly PR company to hide the true cost of his 2012 presidential election campaign.
France sets a €22.5m (£19m; $24m) limit on campaign spending, and it is alleged the firm Bygmalion provided a series of false invoices for €18m to Mr Sarkozy's party rather than the campaign. Investigators say that the false accounting enabled the party to spend well over the limit.
Mr Sarkozy himself is accused of knowingly exceeding the spending limit by setting up campaign rallies even though he had been warned of the risk. He is appealing against the order to stand trial.
Employees at Bygmalion have admitted knowledge of the ruse and Mr Sarkozy is among 14 people caught up in the affair to face trial. The other suspects include ex-UMP colleague Eric Cesari, campaign heads Guillaume Lambert and Jerome Lavrilleux as well as Bygmalion staff.
As well as illegal campaign financing, the accusations involve forgery, abuse of trust, fraud, and complicity in illegal financing.
Mr Sarkozy lost the 2012 race and failed in his bid to run again in the 2017 presidential election.
What has Mr Sarkozy said?
Mr Sarkozy wrote about the Bygmalion scandal in a book published in 2015.
"It will no doubt be hard to believe, but I swear it is the strict truth: I knew nothing about this company until the scandal broke," he said.
Regarding claims of influence peddling, he has firmly denied doing anything "contrary to the values of the republic or the rule of law".
He has spoken of "political interference", suggesting that the judges who had ordered that he be questioned in custody had an "intention to humiliate". | – For what appears to be the first time in history, a French ex-head of state has been detained by police, the BBC reports: Nicolas Sarkozy is being held amid reports he traded influence for information. Officials are investigating whether the former—and would-be future—president offered a high-profile job to a judge in order to get information about a campaign funding investigation. The detention could be bad news for Sarkozy's bid to win the presidency again in 2017; his party is currently "rudderless," notes the BBC in a separate piece, and facing an investigation into allegations that it faked some $13.6 million in invoices in 2012. Sarkozy was allegedly told his phone was tapped in the earlier probe, which addressed funding for his 2007 campaign; he was said to be taking donations from Moammar Gadhafi, the Guardian reports. The phone-tapping revealed what officials called a "traffic of influence." The new claims suggest Sarkozy had judge Gilbert Azibert keep him updated on how the funding probe was progressing—and perhaps asked Azibert to influence the case on his behalf. But Sarkozy's supporters, the BBC notes, argue that the current government is deliberately targeting Sarkozy out of political motivations. Sarkozy's lawyer, Azibert, and another judge have also been detained. |
the first measurement of radiative muon capture ( rmc ) on hydrogen , $ \mu^- + p \to \nu_\mu + n + \gamma $ , has been reported by a triumf group @xcite , and the value of the induced pseudoscalar constant @xmath0 was deduced to be about 1.5 times larger than that predicted by the partially conserved axial current ( pcac ) or that obtained from one - loop order heavy - baryon chiral perturbation theory ( hbchpt ) calculations @xcite . in ref .
@xcite the photon spectrum from rmc on a proton was obtained within the context of hbchpt up to next - to - next - to leading order ( nnlo ) , i.e. , to one loop order .
the results simply confirm a next - to - leading order ( nlo ) hbchpt calculation @xcite and the earlier theoretical predictions @xcite based on a phenomenological tree - level feynman graph approach .
furthermore , the results of ref .
@xcite indicated that the chiral series converges rapidly , and thus suggest that the discrepancy between experiment and theory observed for rmc on a proton can not be explained by higher order corrections within hbchpt .
since then , many analyses incorporating a variety of new elements and suggestions have been reported , but all have essentially confirmed the earlier results and concluded that the existing discrepancy remains unexplained @xcite . since a nnlo calculation which includes all diagrams through one - loop order appears to converge sufficiently , the only possibilities for significant improvement would seem to come from effects outside the context of hbchpt , or perhaps from terms originating in the wess - zumino lagrangian .
these wess - zumino terms turn out to be negligible however , as shown in ref .
furthermore , all possible expressions in the amplitude which can be composed of the characteristic operators involved in the reaction , namely the polarization vectors of the photon and the lepton current , the three - momenta of the outgoing photon and of the exchanged weak vector boson , and the spin operator of the nucleon , emerge already in the one - loop order .
therefore , higher order contributions in the hbchpt perturbation series will give corrections only to the coefficients of these operator expressions and should be small , in view of the rapid convergence of the chiral series in this reaction .
this led us to the conclusion that something other than the ingredients of the hadronic vertices may in fact be the source of the problem .
for example , there may be difficulties in our understanding of the atomic and molecular states of the muonic atom in hydrogen .
in particular the dependence of the photon spectrum on the initial muonic atom states is non - negligible , so that it is important to try to find a quantity which is less sensitive to the atomic and molecular states but , at the same time , is sensitive to the pseudoscalar constant . quite recently ,
some alternative scenarios for possible resolution of the `` @xmath0 puzzle '' have been suggested by two groups . in ref .
@xcite , the photon spectrum corresponding to the experiment of ref .
@xcite was fitted by adjusting a parameter @xmath1 , with @xmath2 giving the fraction of spin @xmath3 ortho @xmath4-@xmath5-@xmath4 molecular state in liquid hydrogen .
a value @xmath6 was obtained , which is smaller than the theoretical prediction @xmath7 @xcite and would correspond to a 10 to 20 % component of the spin 3/2 state . [ that result was obtained however using a formula relating the liquid hydrogen and ortho molecular rates which did not correspond to the experimental conditions of the omc experiment @xcite . using an appropriate formula @xcite one finds that @xmath7 results in a value which is in good agreement with the omc data . however , if one considers the uncertainties in the data and in some of the parameters , one finds that values of @xmath1 as small as @xmath8 are possible , which is consistent , but only marginally so , with the result found for rmc . ] in ref .
@xcite , on the other hand , the authors speculate that the `` @xmath0 puzzle '' can be explained by accumulation of small effects and variations of parameters , or perhaps by an isospin breaking effect . as we have observed , the present situation viewed from the context of hbchpt can be summarized as follows .
all symmetries of qcd are respected order by order in this theory and the chiral expansion is rapidly converging .
the rapid convergence is fortunate , since to improve the theory by calculating higher orders would require including all of the many possible diagrams of the chiral order under consideration and would normally introduce a large number of new low energy constants which would have to be constrained by experiments .
furthermore , the hbchpt results agree fairly well with those obtained from the standard diagram approach , so that all theoretical approaches are reasonably consistent , and unable to explain the rmc data with the predicted value of @xmath0 .
it is probably important to remeasure the photon spectrum in rmc , or to measure more precisely the rate for ordinary muon capture ( omc ) , @xmath9 , as has been proposed @xcite .
alternatively , one could consider performing a rather more sophisticated experiment which would be sensitive to some different combination of the ingredients of the problem .
in that vein , we want to propose here to measure the polarization of the outgoing photon .
measuring the photon polarization enables us to choose the most important graphs which involve pion poles and therefore to enhance the dependence of the result on the pseudoscalar coupling constant @xmath0 . in the usual transverse gauge by far
the most important diagram for rmc is the one where the photon is emitted from the leptonic current .
the pseudoscalar coupling constant is an important contributor to this diagram , since @xmath0 is so much larger than @xmath10 or @xmath11 , but its importance is not enhanced by the pion pole because the momentum transfer in this diagram is always spacelike .
therefore , to concentrate on the pseudoscalar constant , we would like to find the channel where this diagram is blocked .
the polarization experiment blocks this channel .
the rationale is simple and transparent . since the neutrino is left - handed , the photon emitted from the leptonic current is right - handed .
this was shown for v and a couplings in ref .
@xcite and generalized to include the induced couplings as well in ref . @xcite .
a measurement of a left - handed photon filters out the photon from the leptonic current , and is thus sensitive to radiation from the hadronic current .
the sensitivity to @xmath0 comes from the fact that some parts of the hadronic current , and in particular some parts containing pion poles , are of leading order by the power counting rules of hbchpt .
the photon circular polarization in rmc ( to be defined explicitly below ) has been considered before in the context of a phenomenological treatment of the weak nucleon current parameterized by form factors @xcite .
there it was shown that the circular polarization ( and also the photon asymmetry relative to the muon spin ) could be written as @xmath12 where @xmath13 is the nucleon mass and where the coefficient of the @xmath14 term involves the various coupling constants .
we will discuss below the expansion scheme in powers of @xmath15 corresponding to this theorem and its connection to the power counting scheme of hbchpt .
the feynman graphs contributing to rmc on a proton can be classified into the two classes shown in fig . 1 : ( a ) the first corresponds to those graphs where the muon radiates , and ( b ) the second to the graphs where the hadron radiates .
the amplitude of the process can then be written as the sum of the two classes of diagrams , $$ m_{fi } = \frac{ e \, g_f \, v_{ud } }{ \sqrt{2 } } \, \epsilon^{*}_{\mu } \left ( t^{\mu\nu } j_{\nu } + t^{\nu } m^{\mu}{}_{\nu } \right ) \ ; , \qquad \mbox{[ eq;amplitude ]} $$ where @xmath16 is the electric charge , @xmath17 is the fermi constant , @xmath18 is a kobayashi - maskawa matrix element , and @xmath19 is the polarization vector of the photon .
the hadron matrix elements with three and four legs are denoted by @xmath20 and @xmath21 .
their properties have been studied in ref .
@xcite , and are briefly discussed in the next section .
the lepton matrix elements with three and four legs , @xmath22 and @xmath23 , are given by $$ j_{\nu } = \bar{u}_{\nu } \, \gamma_{\nu } ( 1 - \gamma_5 ) \, u_{\mu } \ ; , \qquad \mbox{[ eq;jl ]} $$ $$ m_{\mu\nu } = \bar{u}_{\nu } \, \gamma_{\nu } ( 1 - \gamma_5 ) \, \frac{ \not{p } - \not{k } + m_{\mu } }{ - 2 \, p \cdot k } \, \gamma_{\mu } \, u_{\mu } \ ; , \qquad \mbox{[ eq;ml ]} $$ where @xmath5 ( @xmath24 ) is the four - momentum of the muon ( photon ) , @xmath25 is the muon mass , and @xmath26 ( @xmath27 ) is the dirac spinor for the muon ( neutrino ) .
first , we study the lepton matrix elements involving a polarized photon . in the laboratory frame
we assume that the @xmath28-axis of our coordinate system coincides with the neutrino direction and the @xmath29-@xmath28 plane includes the photon trajectory .
thus we have = ( 0,0,1 ) , = ( sin,0,cos ) , where @xmath30 ( @xmath31 ) is the unit vector of the neutrino ( photon ) momentum and @xmath32 is the angle between neutrino and photon , @xmath33 . in the transverse ( coulomb ) gauge the polarization vectors of the photon are given by ^*_l=(-cos ,-
i , sin ) , ^*_r=(cos ,- i ,- sin ) , where subscripts @xmath34 and @xmath35 stand for the left- and right - handed polarization state , respectively . in this frame
we can rewrite eqs .
( [ eq;jl ] ) and ( [ eq;ml ] ) in terms of the components of the four - vectors for each spin state :
$$ j^{(+ ) } \equiv j^{\nu}_{t } = 2 \, ( 0 , -1 , -i , 0 ) \ ; , \qquad \mbox{[ eq;jp ]} $$
$$ j^{(- ) } \equiv j^{\nu}_{l } = 2 \, ( 1 , 0 , 0 , 1 ) \ ; , \qquad \mbox{[ eq;jm ]} $$
$$ m^{(+,r ) } = 2 \, ( 1 + \cos\theta , \ \sin\theta , \ i \sin\theta , \ 1 + \cos\theta ) \ ; , \qquad \mbox{[ eq;mpr ]} $$
$$ m^{(-,r ) } = 2 \, ( \sin\theta , \ 1 - \cos\theta , \ i \, ( 1 - \cos\theta ) , \ \sin\theta ) \ ; , \qquad \mbox{[ eq;mmr ]} $$
$$ m^{(\pm , l ) } = 0 \ ; , \qquad \mbox{[ eq;zero ]} $$
where @xmath36 . the signs ( @xmath37 ) and @xmath38 = ( @xmath35 , @xmath34 ) in the parentheses on the l.h.s . of the equations denote , respectively , the up and down muon spin states along the @xmath28-axis , and the right- and left - handed photon polarization states . [ and @xmath39 , as depicted in fig . [ fig;amplitude ] , via eqs . ( [ eq;jp ] ) and ( [ eq;jm ] ) . ] eqs .
( [ eq;mpr ] ) , ( [ eq;mmr ] ) , and ( [ eq;zero ] ) show that the photons radiated from the muon line are totally right - handed polarized @xcite . if one measures the left - handed photons , the amplitude of eq .
( [ eq;amplitude ] ) is reduced to $$ m^{(l)}_{fi } = \frac{ e \, g_f \, v_{ud } }{ \sqrt{2 } } \, \epsilon^{*}_{l \, \mu } \, t^{\mu\nu}_{(l ) } \, j_{\nu } \ ; , \qquad \mbox{[ eq;right ]} $$ where @xmath40 is the part of @xmath41 producing only left - handed photons ( the spin indices of the proton and neutron are suppressed ) .
therefore we can investigate the part of the hadron four - point matrix element @xmath42 which produces left - handed photons , without the interference of the lepton radiating diagram containing the weak nucleon current @xmath20 , by measuring the left circularly polarized photons . the circular polarization @xmath43 , which is defined by @xmath46 [ where @xmath44 ( @xmath45 ) is the spectrum of right - handed ( left - handed ) photons ] , has the property that @xmath47 for the muon radiating diagram of fig . [ fig;amplitude ] ( a ) @xcite . therefore , for @xmath48 , the deviation from one , @xmath49 , should come entirely from the contribution of @xmath50 .
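The geometry used above, with photon direction (sin θ, 0, cos θ) and circular polarization vectors ε_L, ε_R, can be checked numerically; a minimal sketch, assuming the conventional 1/√2 normalization of the circular polarization vectors:

```python
# Numerical check that the circular polarization vectors are transverse
# to the photon direction, unit-normalized, and mutually orthogonal.
# The 1/sqrt(2) normalization is the standard convention, assumed here.
import numpy as np

theta = 0.7  # arbitrary photon angle relative to the neutrino axis
k_hat = np.array([np.sin(theta), 0.0, np.cos(theta)])
eps_L = np.array([-np.cos(theta), -1j, np.sin(theta)]) / np.sqrt(2)
eps_R = np.array([np.cos(theta), -1j, -np.sin(theta)]) / np.sqrt(2)

for eps in (eps_L, eps_R):
    assert abs(np.dot(eps, k_hat)) < 1e-12       # transverse to k
    assert abs(np.vdot(eps, eps) - 1.0) < 1e-12  # unit norm
assert abs(np.vdot(eps_L, eps_R)) < 1e-12        # orthogonal helicities
print("ok")
```

Note that `np.vdot` conjugates its first argument, which is the correct inner product for these complex vectors.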
hbchpt @xcite is a low energy effective field theory of qcd , which has a systematic expansion scheme in terms of @xmath51 , where @xmath52 is a typical four - momentum scale characterizing the process in question , @xmath53 is the chiral scale with @xmath54 1 gev , and @xmath55 is the pion decay constant .
@xmath52 must be small , typically of the order of the pion mass @xmath56 .
a typical scale @xmath52 in muon capture ( both omc and rmc ) is the muon mass @xmath57 mev , and hence @xmath58 0.1 .
one therefore expects a rapid convergence of relevant chiral perturbation series for muon capture and the explicit hbchpt calculations are consistent with this expectation @xcite .
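The size of the expansion parameter quoted here can be checked directly; a one-line evaluation with standard values of the muon mass and pion decay constant (assumed inputs):

```python
# Chiral expansion parameter for muon capture: Q / (4 pi f_pi) with Q ~ m_mu.
# Standard values in MeV, assumed here for illustration.
import math

m_mu = 105.658   # muon mass
f_pi = 92.4      # pion decay constant
eps = m_mu / (4.0 * math.pi * f_pi)
print(round(eps, 3))  # ~0.09, i.e. "about 0.1" as stated in the text
```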
the effective lagrangian is expanded as = _
| = l_0 + l_1 + l_2 + , where the subscript @xmath59 denotes the order of terms , @xmath60 , with @xmath61 the number of nucleon lines and @xmath62 the number of derivatives or powers of @xmath56 involved in a vertex .
@xmath63 , @xmath64 , and @xmath65 are the leading order ( lo ) , next - to leading order ( nlo ) , and next - to - next - to leading order ( nnlo ) parts of the lagrangian , respectively , and their explicit form has been given in ref .
@xcite . in passing
, we should note that the @xmath64 includes the terms of @xmath66 which are corrections to the leading order lagrangian . in the nnlo lagrangian we have seven unknown constants , the so - called _ low energy constants _ ( lec s ) , which are not determined by symmetry but must be fixed by experiments .
three of the seven lec s appear in the three point vertex functions of @xmath20 , and they are fixed by the vector and axial vector radius and the goldberger - treiman discrepancy @xcite .
one of the remaining four constants is fixed via a rare pion decay @xcite , and the remaining three constants are estimated using the @xmath67(1232 ) and @xmath68 saturation method @xcite .
therefore there are no undetermined parameters in the calculation .
let us look at the diagrams involving the hadron matrix elements @xmath20 and @xmath21 in fig .
[ fig;rmc ] .
( see the caption of the figure for more details . ) the lo , nlo , and nnlo diagrams are drawn in the first line , the second line , and the third and fourth lines in fig .
[ fig;rmc ] , respectively .
since , as noted earlier @xcite , the series converges well , we expect those diagrams in the first line to be the most important .
both left- and right - handed photons are emitted from the hadron matrix element @xmath21 , and all the leading order diagrams of @xmath21 ( m0a , m0b , m0c ) contain a pion pole .
observe that two different momentum transfers appear in the pion poles in the m0 diagrams .
for m0c and the lower pole of m0b , the momentum transfer @xmath69 is relevant .
@xmath70 is always spacelike , has no significant @xmath71 dependence , and is generally @xmath72 . on the other hand for the m0a diagram and the upper pole of m0b
the relevant momentum transfer is @xmath73 .
this depends on @xmath71 via @xmath74 and becomes @xmath75 near the upper end of the photon spectrum .
thus one is much closer to the pion pole for these diagrams .
this means that , other factors being equal , these diagrams will be enhanced relative to those involving @xmath76 . now let us discuss the theorem of ref .
@xcite and the connection between the standard feynman diagram approach to rmc and the hbchpt approach described here . in hbchpt the most important diagram contributing to the hadronic pieces of fig .
[ fig;rmc ] is the seagull diagram , m0a .
this is just the standard kroll - ruderman term , which however is not explicitly seen in the diagrams of the relativistic phenomenological model ( fig . 1 of ref .
@xcite ) , since that model used a pseudoscalar pion - nucleon coupling . had pseudovector coupling been used it would have appeared explicitly .
it can however be directly identified as part of the diagram @xmath77 in fig . 1 ( b ) of ref .
@xcite where the photon radiates from proton , the proton propagates , and interacts with the lepton current , where the vertex of the weak nucleon current is described by the weak form factors .
the m0a diagram is included in the contribution from the negative energy propagation of the proton in the @xmath77 diagram .
( m0b and m0c can be also identified as parts of ( d ) and ( e ) in fig . 1 of ref
. @xcite , respectively . ) in the phenomenological model the amplitude @xmath77 can be expanded in terms of @xmath78 as $$ m_b = \chi^{\dagger}_{n } \left\ { g_v \left [ \ , \cdots \ , \right ] + g_a \left [ \ , \cdots \ , \right ] + g_p ( q_w ) \left [ \ , \cdots \ , \right ] \epsilon^{ * } \right\ } \chi_p + o ( 1/m_n^2 ) \ ; , \qquad \mbox{[ eq;one_over_m ]} $$ where the nucleon weak form factors are denoted by @xmath10 for the vector , @xmath11 for the axial vector , and @xmath79 for the pseudoscalar form factor .
@xmath80 is the proton anomalous moment .
we confirm the result of the theorem @xcite that all the terms in eq .
( [ eq;one_over_m ] ) are @xmath78 corrections . in this approach
the form factors are phenomenological parameters .
the @xmath0 dependent term is formally of order @xmath78 , but the form factor @xmath0 happens to be numerically large .
the connection to the hbchpt approach can be made via the goldberger - treiman relation which tells us that the pseudoscalar form factor has the structure due to pion propagation , i.e. a pion pole , and is given explicitly by @xmath81 .
in hbchpt this expression , rather than @xmath0 , will appear in all the pion pole terms , and the @xmath13 in the numerator will cancel the @xmath13 appearing in the denominator , thus pushing this term to one lower order in the expansion than it is in the expansion of the phenomenological relativistic model @xcite .
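The pion-pole structure just described fixes the size of the induced pseudoscalar coupling at the muon-capture kinematic point; a sketch of the pole-only (Goldberger-Treiman) estimate, assuming standard input values and the conventional OMC momentum transfer q² = −0.88 m_μ²:

```python
# Pole-dominance (Goldberger-Treiman) estimate of the induced
# pseudoscalar coupling at the OMC kinematic point:
#   g_P(q^2) ~ 2 m_mu M_N g_A / (m_pi^2 - q^2).
# All inputs are standard values, assumed here for illustration.
import math

m_mu = 105.658            # muon mass, MeV
m_n  = 938.92             # nucleon mass, MeV
m_pi = 139.57             # charged pion mass, MeV
g_a  = 1.267              # axial coupling
q2_omc = -0.88 * m_mu**2  # momentum transfer squared at OMC, MeV^2

g_p = 2.0 * m_mu * m_n * g_a / (m_pi**2 - q2_omc)
print(round(g_p, 2))  # ~8.6; the pole term only
```

The full one-loop ChPT prediction includes a small non-pole correction on top of this, so the pole-only number is close to, but not identical with, the value the text compares against the RMC measurement.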
we are now in a position to discuss what is known regarding the polarization observables of the muon capture .
as mentioned before , a general theorem tells us that @xmath82 is formally @xmath83@xcite .
using a phenomenological treatment of the weak nucleon current parameterized by the form factors one can show that hadron matrix elements are of order @xmath84 and @xmath85 in the @xmath78 expansion @xcite .
hence , @xmath86 in this model .
however , @xmath82 is not particularly small , as also noted in @xcite , because it contains a term proportional to @xmath87 , and @xmath0 is large , as is explained in the previous paragraph .
so to summarize , one can understand the connection between the theorem derived by expansion of the relativistic phenomenological model in ref .
@xcite and the corresponding hbchpt expansion by noting that there is a one to one correspondence between the @xmath88 , @xmath78 , and @xmath15 terms in the expansion of the model and the lo , nlo , and nnlo terms of hbchpt , except for the pion pole terms which appear at one lower order in hbchpt because the @xmath13 in the numerator of @xmath0 has been explicitly extracted .
in figs . [ fig;right - pol ] , [ fig;circ - pol ] , [ fig;spectrum - left ] and [ fig;spectrum - right ] we plot various of our results for the spectrum and circular polarization of photons , all calculated in hbchpt up to nnlo .
there are two major issues to discuss .
first , what is the sensitivity to @xmath0 of the spectrum of left - handed photons and the circular polarization and , second , how sensitive are these results to uncertainties in our knowledge of the muon atomic states .
let us first study the sensitivity of the polarization observables to the value of @xmath0 . in figs [ fig;right - pol ] and [ fig;circ - pol ]
we plot the spectrum of left - handed photons and the photon circular polarization , respectively , in the `` experimental state '' ( 6.1 % atomic hyperfine singlet state , 85.4 % ortho @xmath4-@xmath5-@xmath4 state , and 8.5 % para @xmath4-@xmath5-@xmath4 state ) reported in ref .
@xcite for the photon energy @xmath89 60 mev to 100 mev .
we plot three lines which are obtained by using the hbchpt up to nnlo and the relativistic phenomenological model @xcite with two @xmath0 values , @xmath90 and 1.5 , where @xmath91 is the goldberger - treiman prediction for @xmath0 at the momentum transfer corresponding to omc in hydrogen .
one finds that the results are quite sensitive to the value of @xmath0 as expected .
the results of hbchpt and the model with @xmath0=@xmath92 are in good agreement in the both figures which confirms that the same basic ingredients are in both models and that the other higher order corrections in hbchpt and terms not included in the relativistic model are in fact small .
the case of @xmath93 gives a photon spectrum larger by about a factor of three than the case of @xmath90 .
therefore our result shows the strong sensitivity of the polarized photon spectrum to the different values of the pseudoscalar coupling over the experimentally accessible photon energy region .
this is in contrast to the unpolarized photon spectrum where the difference of photon spectra with the two different values of @xmath0 is only of the order of 30 - 40% in the measurable region .
the circular polarization is also sensitive and differs for the two values of @xmath0 by a more or less constant amount 0.2 over the whole relevant region of photon energy .
consider now the question of the sensitivity of the results to aspects of the muon s atomic or molecular state .
the photon spectrum can always be represented by a linear combination of the spectrum of singlet and that of triplet state capture .
the coefficient of each state is determined by the particular target , liquid or gas , by the amount of delay between the muon stop and the beginning of counting , and by the formulas incorporating the various atomic and molecular transition rates which describe the transitions from capture , through singlet , ortho @xmath4-@xmath5-@xmath4 and para @xmath4-@xmath5-@xmath4 molecular states .
it is known that there are some ambiguities in the parameters of these formulas , particularly with regard to the ortho - para transition rate @xcite and to the possible inclusion of a spin @xmath3 component in the ortho molecule @xcite . in figs .
[ fig;spectrum - left ] and [ fig;spectrum - right ] we plot our results for the spectra of left- and right - handed photons , respectively , for each spin state . the solid , long - dashed , short - dashed , and dotted lines correspond to singlet , triplet , statistical , and ortho states , respectively .
from these figures one can see immediately some general features .
the spectrum of right - handed photons , which is also essentially the spectrum of unpolarized photons , is much larger than that for left - handed photons . specifically by comparing the two figures we find that the rate for right - handed photons is about 2.5 times larger than that for left - handed photons for the singlet state and 17.3 times larger for the triplet state , when the spectra are integrated over the photon energy @xmath94 60 to 99 mev . under the experimental conditions of the triumf experiment @xcite
, the ortho molecular state is dominant , so that in these conditions one would have about one - tenth as many left - handed photons as right - handed ones .
presumably this enhancement of right - handed photons is due to the strong enhancement of the triplet state and to the fact that the muon radiating diagram dominates and , as was noted above , produces purely right - handed photons .
more specifically , with regard to the question of sensitivity to the atomic and molecular states , we note that if the spectra of singlet and triplet states were the same , the relative amounts would not matter and there would be no sensitivity . from the figures we see that , while this is not the case , the singlet is in fact much more important , and closer to the triplet , for the left - handed photon case than for the right - handed one .
numerically the ratio of the singlet to triplet state spectra , when integrated over the photon energy , is 0.34 for left - handed photons and 0.05 for right - handed photons .
this means that the left - handed photon case will depend less strongly on the relative amounts of singlet and triplet than the right - handed case .
but one should also take into account the result above that the left - handed spectrum is much more sensitive to @xmath0 than the right - handed ( or unpolarized ) spectrum .
thus one concludes that a measurement of the spectrum of left - handed photons , or equivalently the circular polarization of the photons , as we propose here , should be significantly less sensitive to the atomic and molecular ambiguities per unit of sensitivity to @xmath0 than is the right - handed or unpolarized spectrum .
we have discussed rmc on the proton in the case when the measured photon is polarized and have shown that the spectrum involving left - handed photons and the photon circular polarization are quite sensitive to the pseudoscalar coupling constant @xmath0 .
they are somewhat less sensitive than the unpolarized case to the atomic and molecular spin state as well .
this is because the dominant diagram with radiation from the muon vanishes when only left - handed photons are considered and because the chiral counting rules of hbchpt select only the pion poles in the leading order contribution from the other diagrams .
thus these observables include the various ingredients of the problem in a somewhat different way than does the unpolarized spectrum and so their measurement may help resolve the current disagreement between theory and experiment based just on the unpolarized spectrum .
the measurement of polarized photons in rmc on the proton is technically extremely challenging .
the spectrum of left - handed photons is only one order of magnitude smaller than that of the unpolarized photons .
however to measure the polarization of the photon one needs an additional scattering through an electromagnetic interaction or , alternatively , one needs to measure the angular distributions of the electron - positron pairs produced when the photon is stopped . hence to obtain the same order of precision as an unpolarized rmc experiment , the polarization experiment must accumulate as many as four orders of magnitude more events than the unpolarized one .
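the four - orders - of - magnitude estimate follows from simple counting statistics ; a minimal sketch , assuming a combined analyzing power and acceptance of order 0.01 ( our illustrative number , not a measured value ) :

```python
# counting-statistics sketch (our illustrative numbers): measuring the photon
# polarization needs a second scattering, so the effective analyzing power
# times acceptance is small. for a factor of ~0.01 the event sample must grow
# by ~1/0.01**2 = 1e4 to keep the same relative precision, which is the
# "four orders of magnitude" scale quoted in the text.

def required_events(n_unpolarized, analyzing_factor):
    """events needed so the polarized measurement matches the unpolarized precision."""
    return n_unpolarized / analyzing_factor ** 2

growth = required_events(1.0, 0.01)   # event-sample growth factor, ~1e4
```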
such measurement is probably impossible with current muon beams and techniques , but may become feasible with the very intense muon beams which are now being discussed .
one should also note that there is an alternative quantity which could be measured , namely the angular asymmetry of the photon relative to the muon spin . by virtue of the general theorem of ref .
@xcite this quantity has generally the same features and sensitivities as does @xmath43 .
it is much easier to measure , since one does not need to rescatter the photon , and in fact has been measured in nuclei @xcite .
however in the case of the proton , the muon loses almost all of its initial polarization as it is captured into atomic orbit .
hence the suppression factor , due now to the low residual polarization of the muon , may be just as large as for the polarized photon observables we have considered here . on the other hand , in nuclei the capture rate for rmc increases proportional to @xmath95 , where @xmath96 is the number of protons in the nucleus .
this makes measurements of the unpolarized rate in nuclei relatively easy @xcite .
so it may be feasible to measure the polarized photon observables in rmc on heavy nuclei .
indeed , the pion pole still gives the leading contribution and the general features remain the same , although there are not insignificant complications in both calculation and interpretation introduced by the nuclear structure .
we would like to thank m. rho for his comments and helpful discussions .
sa thanks t .- s . park , f. myhrer , and k. kubodera for comments and discussions .
dpm is very grateful to v. vento for his warm hospitality during his stay at university of valencia .
this work is supported in part by korea bk21 program , by kosef 1999 - 2 - 111 - 005 - 5 and krf grant no .
2000 - 015-dp0072 , by nsf grant no .
phy-9900756 and int-9730847 , and by the natural sciences and engineering research council of canada .
r. cutkosky , phys . rev . * 107 * ( 1957 ) 330 ; k. huang , c. n. yang , and t. d. lee , phys . rev .
* 108 * ( 1957 ) 1340 ; j. bernstein , phys . rev .
* 115 * ( 1959 ) 694 ; g. k. manacher and l. wolfenstein , phys . rev .
* 116 * ( 1959 ) 782 .

we discuss the measurement of polarized photons arising from radiative muon capture .
the spectrum of left circularly polarized photons or equivalently the circular polarization of the photons emitted in radiative muon capture on hydrogen is quite sensitive to the strength of the induced pseudoscalar coupling constant @xmath0 .
a measurement of either of these quantities , although very difficult , might be sufficient to resolve the present puzzle resulting from the disagreement between the theoretical prediction for @xmath0 and the results of a recent experiment .
this sensitivity results from the absence of left - handed radiation from the muon line and from the fact that the leading parts of the radiation from the hadronic lines , as determined from the chiral power counting rules of heavy - baryon chiral perturbation theory , all contain pion poles . |
the measurement of chemical abundances from stellar spectra relies on a series of assumptions about the physical properties of the stellar atmosphere .
ideally a model atmosphere should be recoverable from the observed electromagnetic spectrum but , due to observational limitations , models are either constructed from a few physical principles that lead to a closed system of differential equations and boundary conditions , or modeled from some observed spectral features constrained by theoretical considerations .
such models are here referred to as theoretical and empirical ( or semi - empirical ) model atmospheres , respectively .
the comparison between model atmospheres derived by different methods can be used to test our actual knowledge on the structure of the stellar atmosphere .
good agreement exists between theoretical and empirical models for the temperature stratification of the solar photosphere . unlike the solar case , where the high quality of the spectroscopic observations has motivated both the empirical modeling and theoretical studies of its atmosphere ,
the analyses of more distant late - type stars are commonly carried out using relatively simple theoretical models for their photospheres . as an example
, it is rare to find studies in the literature analyzing in detail the likely errors that occur when interpreting the stellar spectra with model atmospheres based on approximations such as local thermodynamical equilibrium ( lte ) .
previous efforts to model empirically the photospheres and chromospheres of cool stars other than the sun , such as those by mäckle et al .
( 1975 ) for arcturus , ruland et al .
( 1980 ) for pollux , magain ( 1985 ) for the metal poor sub - giant hd140283 , or thatcher et al .
( 1991 ) for @xmath0 eri were severely limited by the quality of the spectroscopic observations .
technical advances in astronomical instrumentation have made it possible to acquire data more comparable to that for the sun .
extremely high resolving power and signal - to - noise ratios are feasible for many stars , at least down to seventh magnitude . in this environment
, we have reconsidered the possibility of semi - empirical modeling the photospheres of cool stars by developing an inversion code of stellar spectra .
the method has been previously tested with the sun ( see allende prieto et al .
1998 ) , demonstrating that it is able to recover the depth - stratification of the solar photosphere from * normalized * spectral line profiles .
the procedure involves the assumption that the stellar photosphere is plane - parallel , in lte , in steady state , and in hydrostatic equilibrium .
the star is assumed to rotate as a solid body .
magnetic fields are neglected .
we have selected two well - known nearby stars for this study : the metal - poor g8 dwarf gmb1830 ( hd103095 ; hr4550 ; [ fe / h ] = @xmath2 @xmath4 -1.3 , where @xmath3(m ) is the number density of the nuclei of the element m and `` h '' refers to hydrogen ) and the solar - like metallicity k2v dwarf @xmath0 eri ( hd22049 ; hr1084 ) .
gmb1830 is the brightest star ( @xmath5 ) that is significantly metal deficient .
it has been studied widely making use of theoretical model atmospheres and high resolution spectroscopic observations , e.g. smith , lambert , & ruck ( 1992 ) and balachandran & carney ( 1996 ) .
the star was reported to show radial velocity variations ( beardsley , gatewood , & kamper 1974 ) , but subsequent detailed studies ( griffin 1984 ; heintz 1984 ) did not confirm the variations .
the star shows a periodic variation of the emission in the ca ii h and k lines , likely reflecting a solar - like activity cycle with a period of about 7 years ( wilson 1978 ; radick et al . ) .
@xmath0 eri is a young and active dwarf surrounded by a ring of dust at a distance of 60 au ( greaves et al . 1998 ) .
its line bisectors , magnetic activity , and temperature have been observed to vary by gray & baliunas ( 1995 ) .
we have obtained high - quality spectroscopic data for these stars and followed the inversion procedure previously applied to the sun .
the next section describes the observations and the database employed in the study . section 3 describes the details of the inversion procedure , section 4 presents the retrieved model atmospheres and their comparison with observations , and section 5 discusses and summarizes the main conclusions .
optical spectroscopic observations were carried out in 1996 february using the higher resolution camera of the _
2dcoudé _ echelle spectrograph ( tull et al . 1995 ) coupled to the harlan j. smith telescope at mcdonald observatory ( mt .
locke , texas ) .
the cross - disperser and the availability of a @xmath6 pixel ccd detector made it possible to gather up to 300 å of spectrum in a single exposure .
the set - up provided a resolving power of @xmath7 200000 .
as many 1/2 hour exposures were acquired as were needed to reach a signal - to - noise ratio ( snr ) of @xmath4 300 - 800 .
table 1 describes the observational program .
a very careful data reduction was applied using the iraf software package , and consisted of : overscan ( bias ) and scattered light subtraction , flatfielding , extraction of one - dimensional spectra , wavelength calibration and continuum normalization .
wavelength calibration was performed for each individual image using @xmath4 300 th - ar lines spread over the detector . the possibility of acquiring daylight spectra with the same spectrograph allowed us to perform a few interesting tests .
comparison of the wavelengths of 60 lines in a single daylight spectrum ( snr @xmath4 400 - 600 , depending on the spectral order ) with the high accuracy wavelengths measured in the solar flux spectrum by allende prieto & garcía lópez ( 1998 ) showed that the rms difference was at the level of 58 m s@xmath8 ( @xmath9 pixel ) . before co - adding the individual one - dimensional spectra , they were first cross - correlated to correct for the change in doppler shifts and instrumental drifts .
more details are given in allende prieto et al .
( 1999a ) .
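the cross - correlation alignment step can be sketched as follows ( an integer - pixel toy version , not the actual reduction pipeline ) :

```python
# toy version of the alignment step (not the actual reduction pipeline):
# each exposure is cross-correlated against a reference spectrum and the
# integer-pixel lag maximizing the correlation is used to register the
# spectra before co-adding.

def best_lag(reference, spectrum, max_lag=5):
    """return the lag l maximizing sum_i reference[i] * spectrum[i - l]."""
    best, best_c = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        lo = max(0, lag)
        hi = min(len(reference), len(spectrum) + lag)
        c = sum(reference[i] * spectrum[i - lag] for i in range(lo, hi))
        if c > best_c:
            best, best_c = lag, c
    return best

# a spectrum whose feature sits one pixel redward of the reference gives lag = -1
ref = [0.0, 0.0, 1.0, 0.0, 0.0]
shifted = [0.0, 0.0, 0.0, 1.0, 0.0]
```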
measurements of the optical continuum flux are available for the two stars , although the fluxes are not on an absolute scale .
the breger ( 1976 ) catalogue includes both stars , and gmb1830 was also observed by peterson & carney ( 1979 ) and carney ( 1983 ) .
the iue satellite observed both stars , and their uv fluxes are on an absolute scale , providing complementary information , but as @xmath0 eri is a chromospherically active star , its uv spectrum , rich in emission lines , is not adequate for studying the star s photosphere .
the available spectra of gmb1830 ( table 2 ) covering wavelengths redder than 2000 å were critically compared and averaged .
the velocity shifts between individual spectra were found to be smaller than @xmath4 1 , and so unimportant for the analysis of the continuum .
interstellar extinction was considered negligible .
the inversion code assumes hydrostatic equilibrium .
therefore , gravity must be known to attempt the inversion .
the chemical abundances of the elements responsible for the atomic lines we use as input data are derived in the inversion process , but an initial appraisal of the overall chemical composition is necessary . following a procedure strictly identical to that applied by allende prieto et al .
( 1999b ) to more than two hundred cool stars of different metallicities , we have derived the _ trigonometric _ gravities for the two nearby stars studied here from the _ hipparcos _ parallaxes , finding @xmath10 dex for gmb1830 and @xmath11 dex for @xmath0 eri .
spectroscopic studies assign to gmb1830 metallicities in the range -1.2 to -1.4 ( see smith et al . 1992 ) .
analyses of @xmath0 eri point to slightly lower than solar metallicities , typically within -0.2 @xmath12 [ fe / h ] @xmath12 0.0 ( drake & smith 1993 )
. the line profiles entering the inversion code miss ( multi - line inversion of stellar spectra ) were carefully selected following the same criteria employed for the solar case ( allende prieto et al .
1998 ) : they should be included in the compilation of solar lines by meylan et al .
( 1993 ) , their transition probabilities should have been measured by blackwell and collaborators in oxford ( e.g. blackwell & shallis 1979 ) , and they should be weaker than 80 må in equivalent width ( w@xmath13 ) , to minimize both departures from lte and the underestimate of the line damping incurred by using the unsöld approximation ( see , e.g. , ryan 1998 ) .
these criteria provided 13 lines of iron , calcium , titanium , and chromium in the spectral range covered for gmb1830 , and 10 of them were also useful for @xmath0 eri ( the restriction that the lines w@xmath13 be smaller than 80 må was slightly relaxed for @xmath0 eri ) .
this is a significantly smaller number of lines than the set employed for the solar inversion but , as we shall demonstrate below , the atmospheric structure can still be derived with confidence .
the wings of the ca i 6162 line were included , as discussed for the solar inversion by allende prieto et al .
( 1998 ) , using the theoretical estimates of spielfiedel et al .
( 1991 ) for the damping due to collisions with hydrogen atoms .
the line data is listed in table 3 : all lines were used for the modeling of gmb1830 , and those employed for @xmath0 eri are identified with an asterisk .
the inversion proceeds analogously to the solar case , starting from an isothermal model photosphere ( @xmath14= 5000 k ) , increasing progressively the number of nodes until either no significant improvement in the fit of the line profiles is achieved or the temperature structure shows wiggles , which are evidence that the degree of the chosen polynomial is too high .
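the node - refinement loop just described can be sketched schematically ( our paraphrase of the procedure , not the miss code itself ; the stopping thresholds are placeholders ) :

```python
# schematic of the node-refinement loop (our paraphrase, not the miss code):
# starting from an isothermal guess, raise the number of temperature nodes
# until the chi^2 gain is marginal or the stratification develops wiggles
# (i.e. the second differences of the temperature run change sign).

def has_wiggles(temps, tol=0.0):
    """true if the temperature run changes curvature sign (oscillates)."""
    second = [temps[i - 1] - 2.0 * temps[i] + temps[i + 1]
              for i in range(1, len(temps) - 1)]
    signs = [s for s in second if abs(s) > tol]
    return any(a * b < 0.0 for a, b in zip(signs, signs[1:]))

def refine(fit_with_nodes, max_nodes=8, min_gain=1e-3):
    """increase nodes until chi^2 stops improving or wiggles appear."""
    chi2_prev, model = float("inf"), None
    for n in range(1, max_nodes + 1):      # n = 1 is the isothermal start
        chi2, temps = fit_with_nodes(n)    # caller supplies the fit routine
        if chi2_prev - chi2 < min_gain or has_wiggles(temps):
            break
        chi2_prev, model = chi2, temps
    return model
```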
the solar abundances ( anders & grevesse 1989 ) were taken as the starting point for @xmath0 eri , while the abundances of the elements heavier than helium were scaled by a factor 0.0316 ( [ m / h ] = -1.5 ) for gmb1830 .
the abundances assumed at the beginning do not determine the final result .
fig . 1 shows the evolution of the iron abundance through the inversion procedure for gmb1830 , assuming different initial guesses .
an initial iron abundance more than @xmath4 0.3 dex away from [ fe / h ] = -1.5 provided a significantly different final abundance and a model photosphere that did not fit adequately the observed line profiles .
the rotational velocity and the gaussian macroturbulence were allowed to vary , while the microturbulence was assumed to be solar ( @xmath4 0.6 km s@xmath8 ) , negligible , or larger than 1 km s@xmath8 , finally keeping the best value from the point of view of the @xmath15 criterion between observed and synthetic spectra .
a gaussian profile was used to represent the instrumental profile .
uncertainties have been estimated following the algorithm described by snchez almeida ( 1997 ) : @xmath16 where @xmath17 represents the standard covariance matrix , and @xmath3 is the number of free parameters .
that is , errors are evaluated as the standard least - squares estimate ( see , for instance , press et al . 1988 ) , augmented by the square root of the ratio between the number of data points and the number of free parameters . with this correction , uncertainties remain reliable even in case the minimum of @xmath15 has not been reached .
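the error recipe above , as we read it , amounts to scaling the standard least - squares uncertainties by the square root of the ratio of data points to free parameters ; a minimal sketch with placeholder covariance values :

```python
import math

# sketch of the augmented error recipe (placeholder covariance values):
# take the usual least-squares sigma from the diagonal of the covariance
# matrix and scale it by sqrt(n_data / n_free).

def augmented_errors(cov_diagonal, n_data, n_free):
    """least-squares errors scaled by sqrt(n_data / n_free)."""
    scale = math.sqrt(n_data / n_free)
    return [math.sqrt(c) * scale for c in cov_diagonal]

errs = augmented_errors([0.04, 0.09], n_data=100, n_free=4)  # ~[1.0, 1.5]
```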
we can make use of the previously studied solar case to estimate the boundaries of the depth coverage reached with the current set of spectral lines .
the solar inversion model was derived from the solar line spectrum in the fts atlas of kurucz et al .
( 1984 ) , with a higher signal - to - noise ratio and spectral resolution than our stellar spectra .
the mcdonald day - light spectra were acquired with the same spectrograph and similar signal to noise as the stellar spectra , and provide the possibility to carry out a test of the results obtained with the mcdonald setup .
thus , we have repeated the application of the miss code to the solar spectrum , using the 10 lines selected in common for gmb1830 and @xmath0 eri extracted from the mcdonald day - light spectra .
fig . 2 shows the retrieved model ( solid line ; error bars are shown ) compared with the solar model obtained from the inversion of 40 lines in the kurucz et al .
( 1984 ) atlas ( dashed line ) , as described in allende prieto et al .
. the smaller sample of lines narrows the photospheric region covered .
nonetheless , the agreement is reasonable for @xmath18 , with differences smaller than 200 k , suggesting that the ten selected spectral lines map approximately this part of the solar photosphere . the derived model stellar photospheres are shown in fig . 3 , and compared with theoretical model photospheres from the grid by kurucz ( 1992 ) and with the exact solution to the gray atmosphere ( @xmath19 ) at the assigned effective temperatures ( see below ) .
the quality of the final fit to the observed line spectra is shown in fig . 4 for the two stars .
there are marked differences between the temperature stratification of the semi - empirical models and their purely theoretical counterparts . while the star s gravity is determined with high accuracy from the trigonometric parallaxes measured by hipparcos ( nissen , høg & schuster 1997 ; allende prieto et al .
1999b ) and some additional hypotheses , determinations of the effective temperature typically give discrepant values when derived from different methods such as photometry , balmer lines , excitation equilibrium , the spectral energy distribution , or temperature - sensitive line ratios .
we treat the effective temperature of the appropriate theoretical model as an unknown parameter .
metallicity only weakly influences the spectral features considered here .
the derived rotational velocities are 2.5 and 2.1 km s@xmath8 for gmb1830 and @xmath0 eri , respectively .
they compare well with those empirically measured by fekel ( 1997 ) : 2.2 and 2.0 km s@xmath8 .
mayor and collaborators derived 1.5 km s@xmath8 for @xmath0 eri from the coravel measurements ( benz & mayor 1984 ) , while gray ( 1984 ) gives 2.2 km s@xmath8 .
smith et al .
( 1992 ) derived 1.8 km s@xmath8 for the combination of the different broadening mechanisms : instrumental , macroturbulence and rotation .
however , our rotational velocities should be taken with caution .
the inversion code is not able to cleanly unravel the gaussian macroturbulence from the rotational broadening profile .
moreover , the use of the van der waals approximation for the collisional broadening with neutral hydrogen is expected to underestimate , systematically , the collisional broadening , and should produce larger - than - real estimates for the rotation - macroturbulence broadening .
the derived gaussian macroturbulence is 0.0 and 1.5 km s@xmath8 for gmb1830 and @xmath0 eri , respectively , and in both cases the preferred microturbulence was 0.6 km s@xmath8 .
obviously , the abundances obtained directly from the inversion are only those of the elements whose lines are represented in the sample selected as input data .
these are : calcium , titanium , chromium , and iron .
the results appear in table 4 .
the derived ratio of iron to calcium abundances for gmb1830 agrees very well with that found by smith et al .
( 1992 ) making use of marcs model atmospheres ( gustafsson et al .
1975 ) , and balachandran & carney ( 1996 ) making use of those of kurucz ( 1992 ) .
but the iron abundance with respect to the sun derived by both groups is @xmath4 0.1 dex higher than ours .
the comparison of the abundances of these elements in @xmath0 eri with the determination by drake & smith ( 1993 ) shows a discrepancy for calcium of @xmath21 dex ( a(ca ) = 6.26 vs. 6.39 ) and a difference of 0.28 dex for iron : our result is [ fe / h ] = @xmath22 dex , and theirs was [ fe / h ] = @xmath23 .
these and other inconsistencies found for @xmath0 eri ( see drake & smith ( 1993 ) and section 5 of this paper ) might be partly related to the magnetic activity of the star .
we have made use of several spectroscopic indicators to test the depth stratification of the stellar photosphere : the optical spectral energy distribution , weak metal lines spanning a wide range in excitation potential , and collisionally enhanced wings of strong metallic lines .
the optical continuum and the excitation balance of weak metal lines are highly sensitive to temperature .
the wings of the very few strong metal lines for which detailed theoretical calculations or laboratory measurements of the damping constants are available ( lambert 1993 ) are reliable estimators of the pressure in the line forming region ( see , e.g. , edvardsson 1988 , anstee , omara & ross 1997 ) .
other tools are available , such as the wings of the balmer lines ( fuhrmann , axer & gehren 1993 ) , but they have not been included here because they are more complicated to interpret .
the reader is referred to fuhrmann et al .
( 1993 ) for an extensive discussion on the analysis of hydrogen lines .
while most of the spectrophotometric measurements in the literature do not provide an estimate of their accuracy , the availability of different * independent * determinations allows us to derive empirically their precision for the case of gmb1830 .
figure 5a compares the observed optical fluxes with the models predictions normalized at 7500 å ( @xmath24 1.33 ) , the reddest wavelength where all the different observational sources have data .
independently observed fluxes are represented by filled circles ( breger 1976 ) , open circles ( peterson & carney 1979 ) and asterisks ( carney 1983 ) .
the true continuum ( no line blanketing ) , given the low metallicity of the star , is expected to fall very close to the observed continuum , except in the blue part of the spectrum , where it should be higher , consistent with the presence of many absorption lines .
the prediction of the miss model ( solid line ) shows this behavior .
it is shown in the figure that , for the fixed gravity and metallicity ( @xmath25 = 4.68 ; [ fe / h ] = -1.3 ) , a theoretical model atmosphere with an effective temperature @xmath26 5050 k reproduces the observations .
this was already pointed out by balachandran & carney ( 1996 ) .
the fluxes for the theoretical ( kurucz 1992 ) models take into account the presence of lines ( unlike the miss continuum ) and are therefore directly comparable with the observations . the effective temperature derived from the optical continuum is consistent , as expected , with that recently derived by alonso , arribas , & martínez roger ( 1996 ) making use of the infrared flux method ( irfm ; blackwell et al . 1990 ) : 5029 k. the absolutely calibrated uv spectra of gmb1830 in the iue final archive ( iuefa ) offer us the possibility to carry out an independent test . combining the apparent brightness of the star in the johnson v band , v = 6.42 , and the _ hipparcos _ parallax , @xmath27 = 0.109 arcsec , we arrive at an absolute magnitude for this star @xmath28 . using this value and the star s metallicity to choose an isochrone from the @xmath29-element enhanced models of bergbusch & vandenberg ( 1992 ) , quite independently of the assumed age since the star has not evolved off the main sequence , we find the stellar mass to be m = 0.64 @xmath30 0.05 m@xmath31 , in agreement with the older estimate by smith et al .
( 1992 ) , and the stellar radius r = 0.61 @xmath30 0.05 r@xmath32 . the radius and the parallax
directly provide the dilution factor of the flux as the light travels from the star to earth , making it possible to compare models surface fluxes and the iue observations .
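the arithmetic behind this comparison can be checked directly : the absolute magnitude follows from v and the parallax , and the dilution factor is ( r / d)^2 . the sketch below uses standard values for the solar radius and the parsec ( our numbers , not taken from the paper ) :

```python
import math

# arithmetic check of the steps above (standard constants, our numbers):
# absolute magnitude from v and the parallax, and the dilution factor
# (r / d)**2 that scales the model surface flux to the flux at earth.

R_SUN_M = 6.957e8      # solar radius in meters
PARSEC_M = 3.0857e16   # one parsec in meters

def abs_magnitude(v, parallax_arcsec):
    """m_v = v + 5 + 5 log10(parallax in arcsec)."""
    return v + 5.0 + 5.0 * math.log10(parallax_arcsec)

def dilution_factor(radius_rsun, parallax_arcsec):
    """(stellar radius / distance)**2 in consistent units."""
    distance_m = PARSEC_M / parallax_arcsec
    return (radius_rsun * R_SUN_M / distance_m) ** 2

mv = abs_magnitude(6.42, 0.109)      # ~6.6 for gmb1830
dil = dilution_factor(0.61, 0.109)   # ~2e-18
```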
fig . 5b reinforces the conclusion previously obtained from the slope of the optical continuum , that the @xmath33 of the theoretical model atmosphere is close to 5050 k. unfortunately , at the present stage we cannot carry out a detailed spectral synthesis , including the many lines present in this spectral range , with the miss model . however , it is unclear whether the lines used here for the modeling are able to constrain the layers of the photosphere where the uv continuum is forming .
the optical continuum of @xmath0 eri , as appears in breger s catalog , has been represented in fig . 6 . again
, the miss model ( solid line ) predicts a slope compatible with the observations .
the effective temperature for a theoretical model that fits the continuum slope is somewhat hotter than @xmath26 4850 k ( dashed line ) , but cooler than 5200 k. alonso et al .
( 1996 ) derived 5076 k , and this is the temperature that we assign to the theoretical models shown in fig .
we recall that the chromospheric activity of @xmath0 eri dominates the uv spectrum of the star , excluding the possibility of studying the photosphere from this spectral region . using the isochrones of bergbusch & vandenberg ( 1992 )
we find that @xmath0 eri s mass is m = 0.76 @xmath34 m@xmath32 , and its radius r = 0.55 @xmath30 0.05 r@xmath32 .
the highly accurate determinations of the transition probabilities for a large sample of neutral iron lines by obrian et al .
( 1991 ) provide an independent test of the semi - empirical model .
we have identified 12 iron lines in obrian et al.s list within our spectral range , covering a significant range in excitation potential and equivalent width to explore the excitation equilibrium of neutral iron for the considered model atmospheres .
the lines are listed in table 5 , with their measured equivalent widths .
the miss model for gmb1830 , with the derived solar - like microturbulence , does not exhibit a significant dependence of the derived iron abundance on the equivalent width .
the upper panel of fig .
7 shows the differences between the abundances observed and predicted by miss , as derived from the differences between observed and predicted equivalent widths .
the slope of the linear ( least - squares ) model is @xmath35 .
the lower panel shows that the excitation equilibrium is satisfied : the slope of the derived abundance against excitation potential is @xmath36 .
the fe i lines in the obrian et al s list identified in the spectrum of @xmath0 eri are the same ones observed in gmb1830 , except for @xmath37 5321 .
of course , given the higher metallicity , the lines are stronger in this case .
figure 8 shows that the microturbulence retrieved in the modeling process for @xmath0 eri induces no significant gradient in the abundance as derived from lines of different strength .
the excitation equilibrium is satisfied for this set of lines as well : the slope of the abundance differences as a function of the excitation potential is @xmath38 .
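the slope tests quoted above are ordinary least - squares fits ; a minimal sketch ( the excitation potentials and residuals below are placeholders , not our measured values ) :

```python
# minimal least-squares slope, as used for the excitation-equilibrium tests
# above; the excitation potentials and abundance residuals are placeholders.

def ls_slope(x, y):
    """slope of the ordinary least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

chi = [0.0, 1.0, 2.2, 3.6, 4.4]          # excitation potentials (ev)
dab = [0.02, -0.01, 0.00, 0.01, -0.02]   # abundance residuals (dex)
slope = ls_slope(chi, dab)               # consistent with zero
```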
the wings of the ca i @xmath37 6162 line were used as input for the semi - empirical modeling , and the spectral region close to the line is very useful as weak calcium lines are present , allowing a test of the retrieved model and calcium abundance . in fig . 9 ( upper panel )
the observed spectrum is shown ( dots ) , and compared with the synthesis using the miss structure ( solid line ) .
the miss model reproduces nicely not only the observed wings of the strong line , as imposed in the modeling process , but also the surrounding calcium and iron lines , with the derived calcium abundance : a(ca ) = 5.27 , or [ ca / h ] = -1.09 , which is 0.3 dex higher than the derived iron abundance .
the result , that fully agrees with the analyses using marcs model atmospheres by smith et al .
( 1992 ) , reflects the well - known over - abundance of @xmath29-elements in metal - poor stars .
departures from lte are expected in the core of the @xmath37 6162 line .
the lower panel of fig .
9 shows the same spectral region for @xmath0 eri . the observed spectrum ( dots )
is nicely reproduced by the miss model with [ ca / h ] = @xmath39 .
the oscillator strengths were extracted from the vienna atomic line database ( vald ) and have been tested against the solar spectrum ( allende prieto et al . 1998 ) .
we have applied an inversion method to normalized line profiles in the optical spectra of the metal - poor dwarf gmb1830 and the solar - metallicity dwarf @xmath0 eridani .
this demonstrates the viability of the empirical modeling to stars other than the sun , the only star to which the inversion had been previously applied ( allende prieto et al . 1998 ) .
the semi - empirical models reproduce very well weak - to - moderate lines of neutral atoms , and satisfy the excitation equilibrium of iron .
the models also fit the wings of strong lines , and the slope of the optical continuum .
the derived model atmospheres are slightly different from the theoretical models of a similar effective temperature , showing a steeper temperature gradient . these differences must correspond to missing ingredients in the theoretical modeling . in our view , a likely candidate is stellar granulation .
the semi - empirical models are also one - dimensional , static , and time - independent , but flux - constancy is no longer imposed .
this flexibility provides room for missing physics in the complex dynamical interplay between matter and radiation .
therefore studying and analyzing the differences between theoretical and semi - empirical structures may help us to recognize which physical effects are lacking .
the mean temperature structure derived from numerical simulations of solar granulation ( stein & nordlund 1998 ) has shown a steeper gradient for the layers outwards of @xmath40 , resembling the behavior of the semi - empirical models presented here .
this effect was also apparent in the semi - empirical model for the solar photosphere we derived using the same technique .
the differences between the semi - empirical model and the flux - constant models for gmb1830 do not affect significantly the abundances previously published for this star .
it is of interest that the absolute abundance of li measured by deliyannis et al .
( 1994 ) in gmb1830 , namely a(li ) = 0.27 dex , does not change by more than 0.01 dex when using the semi - empirical model for this star .
if departures from lte are significant , the semi - empirical models would adapt themselves to reproduce the line profiles under lte .
this effect has been named * nlte masking * and has been invoked to explain the differences between the holweger & müller ( 1974 ) empirical solar model and solar nlte models by rutten & kostik ( 1982 ) . quantifying the importance of departures from lte
should be performed through detailed calculation of model atmospheres .
hauschildt , allard & baron ( 1999 ) have already taken steps in this direction , computing models for the sun and vega , but these studies need to be extended to a wide range of physical parameters . at this point , we can get a glimpse of the consistency of the results provided by the inversion procedure by checking the iron abundance that comes out of the analysis of ionized iron lines . in late - type dwarfs , such as those analyzed here , most of the iron is in the form of fe@xmath41 ions and , therefore , departures from lte ionization equilibrium are unlikely to disturb the abundances derived from lines of this species .
smith et al .
( 1992 ) and drake & smith ( 1993 ) analyzed four and three fe ii lines in the spectra of gmb1830 and @xmath0 eri , respectively . using their atomic data , we synthesized the lines with the semi - empirical models and the abundances retrieved from neutral lines , finding that the agreement between observed and predicted equivalent widths for gmb1830 is excellent , always better than 1 mÅ .
conversely , the equivalent widths predicted for the fe ii lines of @xmath0 eri are systematically smaller than the observations , leading to an iron abundance 0.2 dex higher than that from the fe i lines , which may be an indicator of departures from lte ( we recall that this star exhibits magnetic activity ) .
it is worthwhile to note that feltzing & gustafsson ( 1998 ) found further evidence of overionization ( compared to lte predictions ) for several k dwarfs .
socas - navarro , ruiz cobo , & trujillo bueno ( 1998 ) have developed an nlte inversion procedure oriented to the study of the solar chromosphere .
the implementation of the method to stars is highly desirable , and its application to @xmath0 eri may bring into agreement the abundances of neutral and ionized lines .
understanding of the atmospheric structure and the line formation in metal - poor stars is of particular relevance .
detailed abundance analyses on these stars provide precious information on the chemical evolution of the galaxy , how metals are synthesized in stellar interiors , or even the yields of the primordial nucleosynthesis . very recently ,
asplund et al . (
1999 ) have computed the first hydrodynamical simulations of surface convection for metal - poor stars , similar to those of stein & nordlund ( 1998 ) for the sun .
the mean temperature structures they derive for hd140283 ( [ fe / h ] @xmath42 ) and hd84937 ( [ fe / h ] @xmath43 ) show again a steeper gradient in the layers of @xmath1 than the flux - constant stratification of the corresponding one - dimensional models .
this turns out to have important consequences for the derived lithium abundance , indicating that lithium abundances could have been overestimated by @xmath44 dex in metal - poor stars using one - dimensional model atmospheres .
data of similar quality to those presented in this paper , and even wider spectral coverage have been collected for hd140283 during the past few years ( allende prieto et al .
1999a ) , and should provide an alternative semi - empirical model for this star in the near future .
we thank martin asplund , luis ramón bellot rubio , manolo collados , klaus fuhrmann , bengt gustafsson , and nataliya shchukina for fruitful discussions .
suchitra balachandran and bruce carney have kindly provided measurements of the optical continuum of gmb1830 , and benjamín montesinos helped with the iue data .
we are grateful to the staff at mcdonald observatory for their professional support .
this research has been partially supported by the nsf ( grant ast-9618414 ) , the spanish dges ( projects pb92 - 0434-c02 - 01 and pb95 - 1132-c02 - 01 ) , and the robert a. welch foundation of houston , texas .
nso / kitt peak fts data used here were produced by nsf / noao .
we have made use of vald , the iue final archive , data from the _ hipparcos _ astrometric mission of the esa , and the simbad database , operated at cds ( strasbourg , france ) .
[ table 1 : atomic line list ( species & wavelength & excitation potential & oscillator - strength parameter ) ]
ca i & 6166.445@xmath45 & 2.52 & 1.142
ca i & 6499.642@xmath45 & 2.52 & 0.818
ca i & 6162.166@xmath45 & 1.89 & 0.097
ti i & 5490.165 & 1.46 & 0.877
ti i & 6258.101@xmath45 & 1.44 & 0.299
cr i & 5312.872@xmath45 & 3.45 & 0.562
cr i & 5300.743@xmath45 & 0.98 & 2.129
fe i & 5225.524@xmath45 & 0.11 & 4.790
fe i & 5956.711@xmath45 & 0.86 & 4.610
fe i & 6151.614@xmath45 & 2.18 & 3.300
fe i & 6173.352 & 2.22 & 2.880
fe i & 6750.149@xmath45 & 2.42 & 2.620

[ table 2 : derived abundances ( star & element & number of lines & abundance & abundance ratio ) ]
gmb1830 & fe & 7 & 6.09 & @xmath46
gmb1830 & ca & 3 & 5.27 & @xmath47
gmb1830 & ti & 3 & 3.78 & @xmath47
gmb1830 & cr & 2 & 4.26 & @xmath48
@xmath0 eri & fe & 6 & 7.67 & @xmath22
@xmath0 eri & ca & 3 & 6.39 & @xmath39
@xmath0 eri & ti & 2 & 4.80 & @xmath49
@xmath0 eri & cr & 2 & 5.73 & @xmath50

[ table 3 : individual lines ( wavelength & excitation potential & oscillator strength & equivalent widths for the two stars ) ]
5223.18 & 3.63 & @xmath51 & 7.6 & 40.5
5225.52 & 0.11 & @xmath52 & 60.7 & 110.5
5321.11 & 4.43 & @xmath47 & 10.6 &
5856.08 & 4.29 & @xmath53 & 6.2 & 43.1
5956.71 & 0.86 & @xmath54 & 32.1 & 83.8
6165.35 & 4.14 & @xmath55 & 10.6 & 58.2
5288.53 & 3.69 & @xmath56 & 18.9 & 67.4
5379.58 & 3.69 & @xmath56 & 21.2 & 73.6
5464.28 & 4.14 & @xmath57 & 10.4 & 46.8
6151.61 & 2.18 & @xmath58 & 23.4 & 72.8
6498.95 & 0.96 & @xmath59 & 25.8 & 78.1
6750.14 & 2.42 & @xmath60 & 44.6 & 103.1

| an inversion technique to recover lte one - dimensional model photospheres for late - type stars , which was previously applied to the sun ( allende prieto et al .
1998 ) , is now employed to reconstruct , semi - empirically , the photospheres of cooler dwarfs : the metal - poor groombridge 1830 and the active star of solar - metallicity @xmath0 eridani .
the model atmospheres we find reproduce satisfactorily all the considered weak - to - moderate neutral lines of metals , satisfying in detail the excitation equilibrium of iron , the wings of strong lines , and the slope of the optical continuum .
the retrieved models show a slightly steeper temperature gradient than flux - constant model atmospheres in the layers where @xmath1 .
we argue that these differences should reflect missing ingredients in the flux - constant models and point to granular - like inhomogeneities as the best candidate .
the iron ionization equilibrium is well satisfied by the model for gmb1830 , but not for @xmath0 eri , for which a discrepancy of 0.2 dex between the logarithmic iron abundance derived from neutral and singly ionized lines may signal departures from lte .
the chemical abundances of calcium , titanium , chromium , and iron derived with the empirical models from neutral lines do not differ much from previous analyses based on flux - constant atmospheric structures . |
previous reports have demonstrated that helicobacter pylori infection is significantly less prevalent in patients with gastroesophageal reflux disease ( gerd ) compared to control subjects without gerd , indicating that h. pylori infection has a potentially protective role in the development of gerd .
this protective role is cancelled by the successful eradication of h. pylori , which can lead to an increase in newly developing gerd at least in some areas such as asian countries [ 2 , 3 , 4 , 5 , 6 ] .
gerd is a well - known risk factor for complications such as barrett 's esophagus or esophageal adenocarcinoma , so there has been concern that the de novo development and the persistence of gerd after h. pylori eradication may result in an increased risk for esophageal adenocarcinoma . until now , the long - term course of newly developing gerd after h. pylori eradication has remained unknown , and there has been no report documenting a patient who developed esophageal adenocarcinoma after eradication therapy .
a 75-year - old man underwent endoscopic hemostatic therapy ( pure ethanol injection method ) for hemorrhagic gastric ulcer of the gastric body in september 2002 .
after healing of the gastric ulcer , the patient was diagnosed to be infected with h. pylori by serum antibody and urea breath test . h. pylori eradication therapy was performed using lansoprazole 30 mg , amoxicillin 750 mg and clarithromycin 200 mg twice daily for a week in february 2003 .
eradication was confirmed successful by urea breath test , and he was able to cease taking regular medication in august 2003 . in august 2007 , he underwent an examination for gastric cancer screening using x - ray .
an irregular tumor approximately 3 cm in diameter was detected in the lower esophagus , so he consulted our hospital for further examination and treatment .
he had neither had any upper abdominal symptoms nor acid - suppressive therapy since the previous treatment for gastric ulcer .
endoscopic examination showed that the tumor was located in the 1 o'clock position of the lower end of the esophagus ( fig .
1a ) , and the lower margin of the tumor almost coincided with the esophagogastric junction ( fig .
a short segment of columnar - lined esophagus with squamous islands ( short - segment barrett 's epithelium ) was observed in the 11 o'clock position of the esophagogastric junction near the tumor ( fig .
mild reflux esophagitis corresponding to los angeles grade a , minor hiatal hernia and scarring from the previously treated gastric ulcer were also present .
surgical resection of the lower esophagus and the proximal stomach was performed in november 2007 . in the resected samples ,
the tumor was localized in the lower end of the esophagus as observed by endoscopy ( fig .
pathological examination showed that the tumor invaded into the muscularis propria of the esophageal wall but had no evidence of lymph node metastasis .
intestinal metaplasia and submucosal esophageal glands were histologically confirmed in the coexistent short segment of barrett 's epithelium near the tumor ( fig .
retrospective assessment of previous endoscopic examination performed in december 2002 prior to eradication therapy did not show any apparent signs of tumor in the lower esophagus ( fig .
thus , the tumor was considered as a newly developed esophageal adenocarcinoma after the successful eradication of h. pylori .
in this case report , we describe a male patient with newly developed esophageal adenocarcinoma discovered 5 years after the eradication of h. pylori .
no evidence of this tumor was found in the previous endoscopic examination performed before and immediately after eradication therapy .
it has previously been reported , alarmingly , that reflux esophagitis newly developed in up to 26% of patients with duodenal ulcer after the clearance of h. pylori , whereas it was present in only 13% of those with persistent infection .
barrett 's esophagus and adenocarcinoma related to it have been recognized as a complication of gerd , so their report raised a special concern that h. pylori eradication therapy may be a potential risk factor for developing esophageal adenocarcinoma .
thus far , there has been no report documenting newly developed esophageal adenocarcinoma after eradication therapy .
although many studies have been conducted to address the relation between gerd and h. pylori eradication , whether or not gerd can be significantly increased or exacerbated after h. pylori eradication is still controversial , probably due to the demographic or geographic differences or varying follow - up periods in the previous studies . however , in asian countries such as japan or korea , it has been consistently reported that eradication therapy significantly increases the risk for newly developing erosive esophagitis [ 2 , 3 , 4 , 5 , 6 ] .
a similar course was described in a previous report of a patient who developed a 3-cm - long barrett 's epithelium through erosive esophagitis 5 years after spontaneous clearance of h. pylori . in the current case , when esophageal adenocarcinoma was discovered , mild erosive esophagitis and hiatal hernia were concurrently present , although those findings were secondarily induced by the development of the tumor .
in addition , coexistent short - segment barrett 's epithelium seen in the 11 o'clock position of the esophagogastric junction was likely to have increased its length with some squamous islands when the endoscopic views before and after eradication therapy were compared retrospectively .
these observations imply that gerd was exacerbated after the successful eradication of h. pylori in our patient .
gastric acid is one of the most critical causative factors in the refluxed material for esophageal mucosal injury in gerd . as we and other investigators have reported , the recovery of gastric acid secretion after eradication is the key pathogenic factor for newly developing erosive esophagitis , in addition to individual predisposition to gastroesophageal reflux such as hiatal hernia , male sex or increased body mass index [ 2 , 3 , 4 , 5 , 6 , 11 ] .
based on our previous report showing that gastric acid secretion significantly increases after cure of h. pylori infection in patients with gastric ulcer , the acidity of gastric acid was probably increased after eradication therapy as well in the present patient , who also suffered from gastric ulcer .
thus , it is conceivable that our patient would be the first case who newly developed esophageal adenocarcinoma through gerd after eradication therapy .
the involvement of refluxed gastric acid in the carcinogenesis of barrett 's esophagus has been supported by previous in vitro and ex vivo studies , which have demonstrated that abnormal esophageal acid exposure can increase proliferation or cause dna injury in barrett 's epithelium .
successful eradication of h. pylori has many advantages in preventing the relapse of gastroduodenal ulcer , leading mucosa - associated lymphoid tissue lymphoma to remission status , or reducing the risk of gastric cancer . on the other hand ,
newly developed gerd or erosive esophagitis have been reported as a potential disadvantage of h. pylori eradication therapy , but the majority of those conditions after eradication have been reported to be mild and not progressively exacerbated for several years .
therefore , a potentially increased risk of gerd after h. pylori eradication therapy should not preclude the practice of eradication therapy .
however , as suggested in this case report , further long - term follow - up will be needed in consideration of the emergence of gerd and its subsequent complications including esophageal cancer even in subjects with successful eradication of h. pylori .
| a 75-year - old man underwent endoscopic hemostatic therapy for hemorrhagic gastric ulcer in september 2002 .
after healing of the gastric ulcer , he underwent helicobacter pylori eradication therapy in february 2003 . in august 2007 ,
an irregular tumor was detected in the lower esophagus at annual checkup for gastric cancer screening using x - ray .
endoscopic examination showed that the lower margin of the tumor almost coincided with the esophagogastric junction and that a short segment of barrett 's epithelium existed near the tumor .
biopsies of the tumor showed moderately to poorly differentiated adenocarcinoma .
mild reflux esophagitis and minor hiatal hernia were also observed , and the previously treated gastric ulcer was not recurrent .
absence of h. pylori was confirmed by serum antibody and urea breath test .
surgical resection of the lower esophagus and proximal stomach was performed .
the tumor invaded into the muscularis propria of the esophageal wall but had no evidence of lymph node metastasis .
based on macroscopic and pathological findings , the tumor was recognized as esophageal adenocarcinoma .
previous endoscopic examination did not detect any apparent signs of tumor in the esophagogastric junction .
as far as we know , this is the first report documenting a newly developed esophageal adenocarcinoma after the successful eradication of h. pylori . |
non - linearity is ubiquitously present in nature , e.g. , fluid turbulence@xcite , extinction / survival of species in ecological systems@xcite , finance@xcite , the rings of saturn@xcite , and others .
consistently , the study of low - dimensional non - linear maps plays a significant role for a better understanding of complex problems , like the ones just stated . in a classical context ,
a main characterisation of the dynamical state of a non - linear system consists in the analysis of its sensitivity to initial conditions . from this standpoint ,
the concept of chaos emerged as tantamount to strong sensitivity to initial conditions@xcite .
in other words , a system is said to be chaotic if the distance between two close initial points increases _ exponentially _ in time .
the appropriate theoretical frame to study chaotic and regular behaviour of non - linear dynamical systems is , since long , well established .
it is not so for the region in between , _ edge of chaos _ , which has only recently started to be appropriately characterised , by means of the so - called nonextensive statistical mechanical concepts @xcite .
in this article we study the sensitivity to initial conditions at this intermediate region for the classical kicked top map and its dependence on the perturbation parameter @xmath5 . the sensitivity to initial conditions
is defined through @xmath9 , where @xmath10 represents the difference , at time @xmath11 , between two trajectories in phase space .
when the system is in a chaotic state , @xmath12 increases , as stated previously , through an exponential law , i.e. , @xmath13 where @xmath14 is the maximum lyapunov exponent ( the underscript @xmath15 will become transparent right a - head ) .
equation ( [ sens2 ] ) can also be regarded as the solution of the differential equation , @xmath16 .
in addition to this , the exponential form has a special relation with the boltzmann - gibbs entropy @xmath17 .
indeed , the optimization of @xmath18 under a fixed mean value of some variable @xmath19 yields @xmath20 , where @xmath21 is the corresponding lagrange parameter . except for some pioneering work@xcite , for many years the references to the sensitivity at the edge of chaos were restricted to mentioning that it corresponds to @xmath22 , with trajectories diverging as a power law of time@xcite . with the emergence of nonextensive statistical mechanics@xcite ,
a new interest on that subject has appeared , and it has been possible to show details , first numerically @xcite@xcite@xcite@xcite@xcite and then analytically ( for one - dimensional dissipative unimodal maps)@xcite .
although @xmath14 vanishes , it is possible to express the divergence between trajectories with a form which conveniently generalizes the exponential function , namely the @xmath3-exponential form @xmath23 where @xmath24 ^{1/\left ( 1-q\right ) } \ ; ( \exp _ { 1}\,x = e^{x})$ ] , and @xmath25 represents the generalised lyapunov coefficient ( the subscript @xmath26 stands for _ sensitivity_)@xcite . equation ( [ q - sens1 ] ) can be regarded as the solution of @xmath27 .
analogously to strong chaos ( i.e. , @xmath28 ) , if we optimize the entropy @xcite @xmath29 under the same type of constraint as before , we obtain @xmath30 , where @xmath31 generalizes @xmath32 .
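the @xmath3-exponential defined above is simple to evaluate directly ; a minimal sketch in plain python ( the cutoff convention used when the bracket goes negative is the usual one , stated here as an assumption ) :

```python
import math

def exp_q(x, q):
    """q-exponential: [1 + (1-q)*x]**(1/(1-q)), with exp_1(x) = exp(x)."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)          # q -> 1 limit: ordinary exponential
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0                  # usual cutoff when the bracket is negative
    return base ** (1.0 / (1.0 - q))

# q = 1 recovers the exponential divergence of strong chaos
print(exp_q(1.0, 1.0))   # -> 2.718281828459045
# q = 2 gives 1/(1 - x): a power-law rather than exponential divergence
print(exp_q(0.5, 2.0))   # -> 2.0
```

for @xmath28 the exponential sensitivity of strong chaos is recovered , while other values of q describe the power - law - like ( weakly sensitive ) divergence characteristic of the edge of chaos .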
the classical kicked top corresponds to a map on the unit sphere @xmath33 , given by the following application @xmath34 where @xmath5 denotes the _ kick strength_. it is straightforward to verify that the determinant of the jacobian matrix of ( 3 ) equals one , meaning that this map is _ conservative_. it is therefore quite analogous to hamiltonian conservative systems , the phase space of which consists of a mixing of regular ( the famous _ _ kam - tori__@xcite ) and chaotic regions characterised , respectively , by a linear ( @xmath35 ) and exponential ( @xmath36 ) time evolution of @xmath12 @xcite .
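the unit - determinant property can be checked numerically . since the paper 's map ( 3 ) is masked in this extraction , the sketch below uses an assumed standard form of the classical kicked top ( a rotation by pi/2 about the y axis followed by a torsion of strength k about the z axis ) , which is likewise a composition of rotations and hence conservative :

```python
import math

def top(p, k):
    """assumed kicked-top step: pi/2 rotation about y, then torsion about z."""
    x, y, z = p
    return (z * math.cos(k * x) + y * math.sin(k * x),
            y * math.cos(k * x) - z * math.sin(k * x),
            -x)

def jacobian_det(f, p, h=1e-6):
    """3x3 jacobian determinant of f at p, via central finite differences."""
    cols = []
    for i in range(3):
        pp, pm = list(p), list(p)
        pp[i] += h
        pm[i] -= h
        fp, fm = f(tuple(pp)), f(tuple(pm))
        cols.append([(fp[j] - fm[j]) / (2.0 * h) for j in range(3)])
    a = [[cols[j][i] for j in range(3)] for i in range(3)]  # a[i][j] = df_i/dp_j
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
            - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
            + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

det = jacobian_det(lambda p: top(p, 3.0), (0.2, 0.5, 0.8))
print(det)  # close to 1: the map is volume-preserving (conservative)
```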
the region of separation presents a @xmath37-exponential law for its sensitivity . in fig .
1(a ) we exhibit a trajectory where the various regions can be seen . in figs . 1 ( b , c , d ) we see the time evolution of @xmath12 for the three possible stages : regular , edge of chaos and chaotic , respectively .

[ fig . 1 : ( a ) trajectory of the kicked top , where chaotic and regular regions are visible . the spherical phase space is projected onto the @xmath38 plane by multiplying the @xmath19 and @xmath39 coordinates of each point by @xmath40 , where @xmath41 and @xmath42 . ( b - d ) time dependence of the sensitivity @xmath43 to initial conditions ( with @xmath44 ) at ( b ) the regular region ( @xmath45 ; _ linear _ evolution ) , ( c ) the edge of chaos ( @xmath46 ; _ @xmath47-exponential _ evolution ) , and ( d ) the chaotic region ( @xmath48 ; _ exponential _ evolution ) . ]

it is worth mentioning at this point that the quantum version of this map constitutes a paradigmatic example of quantum chaos . at its threshold to chaos , a nonextensive behaviour has been verified ( for details see @xcite ) .
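the sensitivity measurement can be made concrete with a short simulation . again , because the paper 's map ( 3 ) is masked here , the sketch below uses an assumed standard form of the classical kicked top , and the kick strength k = 10 and the initial point are our choices , taken deep in the chaotic regime :

```python
import math

def kicked_top(x, y, z, k):
    """one kick of an assumed standard classical kicked top:
    rotation by pi/2 about the y axis, then a torsion of strength k
    about the z axis; a composition of rotations, so |s| = 1 is preserved."""
    xn = z * math.cos(k * x) + y * math.sin(k * x)
    yn = y * math.cos(k * x) - z * math.sin(k * x)
    zn = -x
    return xn, yn, zn

def sensitivity(p0, k, d0=1e-9, steps=60):
    """xi(t) = |delta(t)| / |delta(0)| for two initially close trajectories."""
    a = p0
    b = (p0[0] + d0, p0[1], p0[2])
    xi = []
    for _ in range(steps):
        a = kicked_top(*a, k)
        b = kicked_top(*b, k)
        xi.append(math.dist(a, b) / d0)
    return xi

# strong kick, chaotic regime: xi grows roughly exponentially
# until it saturates at the size of the phase space
p0 = (0.3, 0.4, math.sqrt(1.0 - 0.3**2 - 0.4**2))
xi = sensitivity(p0, k=10.0)
print(xi[0], xi[-1])  # final separation is many orders of magnitude larger
```

for weak kicks the same code yields only a slow , linear growth of the separation , the regular behaviour of fig . 1(b) .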
we analysed here the sensitivity to initial conditions on the verge of chaos of ( 3 ) , for several values of the kick strength @xmath49 , averaged over a set of ( typically 50 ) initial conditions for each value of @xmath5 .
more precisely , for fixed @xmath5 , aided by its typical orbits , we determined a set of points in the regular - chaos border and then , for these points , determined the average value of @xmath12 at time @xmath11 .
see typical results in figs . 2 and 3 .
[ figs . 2 - 3 : sensitivity @xmath43 to initial conditions , for typical values of @xmath5 . _ insets : _ same data but using a @xmath50 ordinate , where @xmath51 ( @xmath52 ) . with this _ @xmath3-logarithm _ ordinate , the slope of the straight line is simply @xmath25 . ]

we verify that the increase of @xmath5 induces a gradual approach of @xmath37 to @xmath15 .
this behaviour is in accordance with what was verified@xcite for a non - linear system composed of two symplectically coupled standard maps .
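the straight - line construction used in the insets of figs . 2 and 3 ( a @xmath3-logarithmic ordinate turns the @xmath3-exponential into a straight line whose slope is @xmath25 ) can be sketched as follows ; the values q = 0.5 and lam = 0.25 below are illustrative only , not fitted values from the paper :

```python
import math

def ln_q(x, q):
    """q-logarithm, inverse of the q-exponential: (x**(1-q) - 1) / (1-q)."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

# synthetic sensitivity data xi(t) = exp_q(lam * t), with known q and lam
q, lam = 0.5, 0.25
ts = list(range(1, 21))
xi = [(1.0 + (1.0 - q) * lam * t) ** (1.0 / (1.0 - q)) for t in ts]

# with a q-log ordinate the data fall on a straight line of slope lam;
# a plain least-squares fit recovers the generalized lyapunov coefficient
ys = [ln_q(v, q) for v in xi]
n = len(ts)
sx, sy = sum(ts), sum(ys)
sxx = sum(t * t for t in ts)
sxy = sum(t * y for t, y in zip(ts, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(round(slope, 6))  # -> 0.25
```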
summarising , we numerically analysed the sensitivity to initial conditions at the edge of chaos of the conservative classical kicked top , and found that its time evolution exhibits a @xmath37-exponential behaviour in all cases . for @xmath55 , the phase space is composed of a regular region , where the sensitivity depends linearly on time , hence @xmath35 .
as @xmath5 increases , the top is more perturbed , and chaotic regions emerge in phase space . above some critical value @xmath7 , the chaotic region fills the entire phase space .
consistently , the usual exponential dependence ( i.e. , @xmath36 ) is recovered .
these results can be useful to understand , within a nonextensive statistical mechanical framework , the everlasting _
metastable _ states that are known to exist in systems composed of many symplectically coupled maps@xcite , as well as in isolated many - body long - range - interacting classical hamiltonians@xcite .
l. borland , _ phys . rev . lett . _ * 89 * , 098701 ( 2002 ) ; j.d . farmer , _ toward agent - based models for investment _ , in _ developments in quantitative investment models _ , ed . darnell ( assn for investment management , 2001 ) .
p. grassberger and m. scheunert , * 26 * , 697 ( 1981 ) ; t. schneider , a. politi and d. wurtz , _ z. phys . b _ * 66 * , 469 ( 1987 ) ; g. anania and a. politi , _ europhys . lett . _ * 7 * , 119 ( 1988 ) ; h. hata , t. horita and h. mori , _ progr . phys . _ * 82 * , 897 ( 1989 ) .
c. tsallis , _ j. stat . phys . _ * 52 * , 479 ( 1988 ) .
y. weinstein , s. lloyd and c. tsallis , _ phys . rev . lett . _ * 89 * , 214101 ( 2002 ) ; also in _ decoherence and entropy in complex systems _ , ed . elze , lecture notes in physics ( springer , heidelberg , 2003 ) . | we focus on the frontier between the chaotic and regular regions for the classical version of the quantum kicked top .
we show that the sensitivity to the initial conditions is numerically well characterised by @xmath0 , where @xmath1^{\frac{1}{1-q } } \;(e_1^x = e^x)$ ] , and @xmath2 is the @xmath3-generalization of the lyapunov coefficient , a result that is consistent with nonextensive statistical mechanics , based on the entropy @xmath4 ) .
our analysis shows that @xmath3 monotonically increases from zero to unity when the kicked - top perturbation parameter @xmath5 increases from zero ( unperturbed top ) to @xmath6 , where @xmath7 .
the entropic index @xmath3 remains equal to unity for @xmath8 , parameter values for which the phase space is fully chaotic . |
Comey gets the 'United' treatment on 'New Yorker' cover
The latest cover of the 'New Yorker' merges two highly publicized involuntary removals. The artist puts the firing of James Comey into the controversial scene of a United Airlines passenger being dragged off a flight. USA TODAY
The New Yorker, long known for its comedic covers that flirt with controversy, is taking on the firing of FBI Director James Comey and in the process comparing it to the outrage-inspiring story of the man who was forcibly removed from a United Airlines flight.
The cover for the May 22 issue of the magazine shows a complacent Comey being dragged down the aisle of a plane by Attorney General Jeff Sessions who is wearing a police uniform. Looking over his shoulder is the plane's captain, President Trump.
"It’s probably a bit of a leap," said artist Barry Blitt about his drawing for the cover, which is titled Ejected. "James Comey is six feet eight — he probably would have been happy to give up his seat in a cramped cabin."
An early look at next week's cover, “Ejected,” by Barry Blitt: https://t.co/HJUOYaH8qk pic.twitter.com/kCAqsAOUn5 — The New Yorker (@NewYorker) May 11, 2017
The cover instantly evokes the April 9 incident in which David Dao, a Louisville, Ky., doctor, refused to give up his seat on a United flight after being involuntarily bumped. Police yanked Dao from his seat and dragged him off the plane, leaving him with a concussion and two lost teeth.
Both the United incident and the Comey firing sparked waves of public outrage. It remains to be seen if the outcry over Comey will prove as ephemeral as that which followed Dao's rough removal.
The cover for the May 22 edition of 'The New Yorker' depicts former FBI director James Comey being dragged off a flight by Attorney General Jeff Sessions. (Photo: Barry Blitt, The New Yorker)
Read or Share this story: https://www.usatoday.com/story/news/politics/onpolitics/2017/05/12/james-comey-new-yorker-cover/319035001/ | – A New Yorker cover generating a lot of buzz likens former FBI director James Comey to somebody else who suffered a very controversial removal: United Airlines passenger David Dao. The cover for the May 22 edition shows Comey being dragged off a plane by Attorney General Jeff Sessions in a police uniform, with President Trump as the pilot looking on, USA Today reports. "It's probably a bit of a leap," artist Barry Blitt says of his cover. At 6-foot-8, Blitt observes in brief comments to the New Yorker, Comey "probably would have been happy to give up his seat in a cramped cabin." Blitt has provided many New Yorker covers over the years, including the 2008 Obama fist bump one, and the "Ejected" cover is being praised as one of his finest. Plenty of sketches came in, but Blitt "sent this one where, with an easy glide of his pen, he outlined the heart of the issue, giving voice to the outrage we feel on both sides of the aisle," art director Francoise Mouly tells the Washington Post. Mouly calls the artwork an example of how "deep artists can go by embracing nonsense and illogic—a response on par with Trump’s actions." (Here's how late-night hosts reacted to Comey's firing.)
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Food Allergen Consumer Protection
Act''.
SEC. 2. FINDINGS.
The Congress finds as follows:
(1) Approximately 7,000,000 Americans suffer from food
allergies. Every year roughly 30,000 people receive emergency
room treatment due to the ingestion of allergenic foods, and an
estimated 150 Americans die from anaphylactic shock caused by a
food allergy.
(2) Eight major foods--milk, egg, fish, Crustacea, tree
nuts, wheat, peanuts, and soybeans--cause 90 percent of
allergic reactions. At present, there is no cure for food
allergies. A food allergic consumer depends on a product's
label to obtain accurate and reliable ingredient information so
as to avoid food allergens.
(3) Current Food and Drug Administration regulations exempt
spices, flavorings, and certain colorings and additives from
ingredient labeling requirements that would allow consumers to
avoid those to which they are allergic. Such unlabeled food
allergens may pose a serious health threat to those susceptible
to food allergies.
(4) A recent Food and Drug Administration study found that
25 percent of bakery products, ice creams, and candies that
were inspected failed to list peanuts and eggs, which can cause
potentially fatal allergic reactions. The mislabeling of foods
puts those with a food allergy at constant risk.
(5) In that study, the Food and Drug Administration found
that only slightly more than half of inspected manufacturers
checked their products to ensure that all ingredients were
accurately reflected on the labels. Furthermore, the number of
recalls because of unlabeled allergens rose to 121 in 2000 from
about 35 a decade earlier. In part, mislabeling occurs because
potentially fatal allergens are introduced into the
manufacturing process when production lines and cooking
utensils are shared or used to produce multiple products.
(6) Individuals who have food allergies may outgrow their
allergy if they strictly avoid consuming the allergen. However,
some scientists believe that because low levels of allergens
are unintentionally present in foods, those with an allergy are
unable to keep from being repeatedly exposed to the very foods
they are allergic to. Good manufacturing practices can minimize
the unintentional presence of food allergens. In addition, when
good manufacturing practices cannot eliminate the potential for
cross-contamination, an advisory label on the product can
provide additional consumer protection.
(7) The Food and Drug Administration is the Nation's
principal consumer protection agency, charged with protecting
and promoting public health through premarket and postmarket
regulation of food. The agency must have both the necessary
authority to ensure that foods are properly labeled and
produced using good manufacturing practices and the ability to
penalize manufacturers who violate our food safety laws.
(8) Americans deserve to have confidence in the safety and
labeling of the food on their tables.
SEC. 3. FOOD LABELING; REQUIREMENT OF INFORMATION REGARDING ALLERGENIC
SUBSTANCES.
(a) In General.--Section 403 of the Federal Food, Drug, and
Cosmetic Act (21 U.S.C. 343) is amended by adding at the end the
following:
``(t)(1) If it is not a raw agricultural commodity and it is, or it
intentionally bears or contains, a known food allergen, unless its
label bears, in bold face type, the common or usual name of the known
food allergen and the common or usual name of the food source described
in subparagraph (3)(A) from which the known food allergen is derived,
except that the name of the food source is not required when the common
or usual name of the known food allergen plainly identifies the food
source.
``(2) The information required under this paragraph may appear in
labeling other than the label only if the Secretary finds that such
other labeling is sufficient to protect the public health. A finding by
the Secretary under this subparagraph is effective upon publication in
the Federal Register as a notice (including any change in an earlier
finding under this subparagraph).
``(3) For purposes of this Act, the term `known food allergen'
means any of the following:
``(A) Milk, egg, fish, Crustacea, tree nuts, wheat,
peanuts, and soybeans.
``(B) A proteinaceous substance derived from a food
specified in clause (A), unless the Secretary determines that
the substance does not cause an allergic response that poses a
risk to human health.
``(C) Other grains containing gluten (rye, barley, oats,
and triticale).
``(D) In addition, any food that the Secretary by
regulation determines causes an allergic or other adverse
response that poses a risk to human health.
``(4) Notwithstanding paragraph (g), (i), or (k), or any other law,
the labeling requirement under this paragraph applies to spices,
flavorings, colorings, or incidental additives that are, or that bear
or contain, a known food allergen.
``(u) If it is a raw agricultural commodity that is, or bears or
contains, a known food allergen, unless it has a label or other
labeling that bears in bold face type the common or usual name of the
known food allergen and the Secretary has found that the label or other
labeling is sufficient to protect the public health. A finding by the
Secretary under this paragraph is effective upon publication in the
Federal Register as a notice (including any change in an earlier
finding under this paragraph).
``(w) If the labeling required under paragraphs (g), (i), (k), (t),
(u), or (v)--
``(1) does not use a single, easy-to-read type style that
is black on a white background, using upper and lower case
letters and with no letters touching;
``(2) does not use at least 8 point type with at least one
point leading (i.e., space between two lines of text), provided
the total surface area of the food package available to bear
labeling exceeds 12 square inches; or
``(3) does not comply with regulations issued by the
Secretary to make it easy for consumers to read and use such
labeling by requiring a format that is comparable to the format
required for the disclosure of nutrition information in the
food label under section 101.9(d)(1) of title 21, Code of
Federal Regulations.''.
(b) Civil Penalties.--Section 303(g)(2) of the Federal Food, Drug,
and Cosmetic Act (21 U.S.C. 333(g)(2)) is amended--
(1) in subparagraph (A), by striking ``section 402(a)(2)(B)
shall be subject'' and inserting the following: ``section
402(a)(2)(B) or regulations under this chapter to minimize the
unintended presence of allergens in food, or that is misbranded
within the meaning of section 403(t), 403(u), 403(v), or
403(w), shall be subject''; and
(2) in subparagraph (B), by inserting ``or misbranded''
after ``adulterated'' each place such term appears.
(c) Conforming Amendment.--Section 201 of the Federal Food, Drug,
and Cosmetic Act (21 U.S.C. 321) is amended by adding at the end the
following:
``(ll) The term `known food allergen' has the meaning given such
term in section 403(t)(3).''.
(d) Effective Date.--The amendments made by this section take effect
upon the expiration of the 180-day period beginning on the date of the
enactment of this Act.
SEC. 4. UNINTENTIONAL PRESENCE OF KNOWN FOOD ALLERGENS.
(a) Food Labeling of Such Food Allergens.--Section 403 of the
Federal Food, Drug, and Cosmetic Act, as amended by section 3(a) of
this Act, is amended by inserting after paragraph (u) the following:
``(v) If the presence of a known food allergen in the food is
unintentional and its labeling bears a statement that the food may bear
or contain the known food allergen, or any similar statement, unless
the statement is made in compliance with regulations issued by the
Secretary to provide for advisory labeling of the known food
allergen.''.
(b) Effective Date.--The amendment made by subsection (a) takes
effect upon the expiration of the four-year period beginning on the
date of the enactment of this Act, except with respect to the authority
of the Secretary of Health and Human Services to engage in rulemaking
in accordance with section 5.
SEC. 5. REGULATIONS.
(a) In General.--
(1) Regulations.--Not later than one year after the date of
the enactment of this Act, the Secretary of Health and Human
Services (in this section referred to as the ``Secretary'')
shall issue a proposed rule under sections 402, 403, and 701(a)
of the Federal Food, Drug, and Cosmetic Act to implement the
amendments made by this Act. Not later than two years after
such date of enactment, the Secretary shall promulgate a final
rule under such sections.
(2) Effective date.--The final rule promulgated under
paragraph (1) takes effect upon the expiration of the four-year
period beginning on the date of the enactment of this Act. If a
final rule under such paragraph has not been promulgated as of
the expiration of such period, then upon such expiration the
proposed rule under such paragraph takes effect as if the
proposed rule were a final rule.
(b) Unintentional Presence of Known Food Allergens.--
(1) Good manufacturing practices; records.--Regulations
under subsection (a) shall require the use of good
manufacturing practices to minimize, to the extent practicable,
the unintentional presence of allergens in food. Such
regulations shall include appropriate record keeping and record
inspection requirements.
(2) Advisory labeling.--In the regulations under subsection
(a), the Secretary shall authorize the use of advisory labeling
for a known food allergen when the Secretary has determined
that good manufacturing practices required under the
regulations will not eliminate the unintentional presence of
the known food allergen and its presence in the food poses a
risk to human health, and the regulations shall otherwise
prohibit the use of such labeling.
(c) Ingredient Labeling Generally.--In regulations under subsection
(a), the Secretary shall prescribe a format for labeling, as provided
for under section 403(w)(3) of the Federal, Food, Drug, and Cosmetic
Act.
(d) Review by Office of Management and Budget.--If the Office of
Management and Budget (in this section referred to as ``OMB'') is to
review proposed or final rules under this Act, OMB shall complete its
review in 10 working days, after which the rule shall be published
immediately in the Federal Register. If OMB fails to complete its
review of either the proposed rule or the final rule in 10 working
days, the Secretary shall provide the rule to the Office of the Federal
Register, which shall publish the rule, and it shall have full effect
(subject to applicable effective dates specified in this Act) without
review by OMB. If the Secretary does not complete the proposed or final
rule so as to provide OMB with 10 working days to review the rule and
have it published in the Federal Register within the time frames for
publication of the rule specified in this section, the rule shall be
published without review by OMB.
SEC. 6. FOOD LABELING; INCLUSION OF TELEPHONE NUMBER.
(a) In General.--Section 403(e) of the Federal Food, Drug, and
Cosmetic Act (21 U.S.C. 343(e)) is amended--
(1) by striking ``and (2)'' and inserting the following:
``(2) in the case of a manufacturer, packer, or distributor
whose annual gross sales made or business done in sales to
consumers equals or exceeds $500,000, a toll-free telephone
number (staffed during reasonable business hours) for the
manufacturer, packer, or distributor (including one to
accommodate telecommunications devices for deaf persons,
commonly known as TDDs); or in the case of a manufacturer,
packer, or distributor whose annual gross sales made or
business done in sales are less than $500,000, the mailing
address or the address of the Internet site for the
manufacturer, packer, or distributor; and (3)''; and
(2) by striking ``clause (2)'' and inserting ``clause
(3)''.
(b) Effective Date.--The amendments made by subsection (a) take
effect upon the expiration of the 180-day period beginning on the date
of the enactment of this Act.
SEC. 7. DATA ON FOOD-RELATED ALLERGIC RESPONSES.
(a) In General.--Consistent with the findings of the study
conducted under subsection (b), the Secretary of Health and Human
Services (in this section referred to as the ``Secretary''), acting
through the Director of the Centers for Disease Control and Prevention
and in consultation with the Commissioner of Food and Drugs, shall
improve the collection of, and (beginning 18 months after the date of
the enactment of this Act) annually publish, national data on--
(1) the prevalence of food allergies, and
(2) the incidence of deaths, injuries, including
anaphylactic shock, hospitalizations, and physician visits, and
the utilization of drugs, associated with allergic responses to
foods.
(b) Study.--Not later than one year after the date of the enactment
of this Act, the Secretary, in consultation with consumers, providers,
State governments, and other relevant parties, shall complete a study
for the purposes of--
(1) determining whether existing systems for the reporting,
collection and analysis of national data accurately capture
information on the subjects specified in subsection (a); and
(2) identifying new or alternative systems, or enhancements
to existing systems, for the reporting collection and analysis
of national data necessary to fulfill the purpose of subsection
(a).
(c) Public and Provider Education.--The Secretary shall, directly
or through contracts with public or private entities, educate
physicians and other health providers to improve the reporting,
collection, and analysis of data on the subjects specified in
subsection (a).
(d) Child Fatality Review Teams.--Insofar as is practicable,
activities developed or expanded under this section shall include
utilization of child fatality review teams in identifying and assessing
child deaths associated with allergic responses to foods.
(e) Reports to Congress.--Not later than 18 months after the date
of the enactment of this Act, the Secretary shall submit to the
Congress a report on the progress made with respect to subsections (a)
through (d).
(f) Authorization of Appropriations.--For the purpose of carrying
out this section, there are authorized to be appropriated $10,000,000
for fiscal year 2003, and such sums as may be necessary for each
subsequent fiscal year.
(g) Effective Date.--This section takes effect on the date of the
enactment of this Act.
SEC. 8. FOOD ALLERGIES RESEARCH.
(a) In General.--The Secretary of Health and Human Services,
through the National Institutes of Health, shall convene a panel of
nationally recognized experts to review current basic and clinical
research efforts related to food allergies. The panel shall develop a
plan, including recommendations for expenditures, for expanding,
intensifying, and coordinating research activities concerning food
allergies.
(b) Report to Congress.--Not later than 180 days after the date of
the enactment of this Act, the Secretary of Health and Human Services
shall submit a plan under subsection (a) to the Committee on Energy and
Commerce in the House of Representatives and the Committee on Health,
Education, Labor, and Pensions in the Senate.
(c) Effective Date.--This section takes effect on the date of the
enactment of this Act.
SEC. 9. CERTAIN FEDERAL RECOMMENDATIONS REGARDING AVOIDING AND
RESPONDING TO FOOD-RELATED ALLERGIC RESPONSES.
The Secretary of Health and Human Services shall carry out the
following:
(1) Develop and appropriately disseminate recommendations
on--
(A) training emergency medical technicians with
respect to administering epinephrine auto-injector
devices; and
(B) the need for emergency vehicles to maintain
supplies of such devices.
(2) Activities to increase the awareness by the restaurant
industry of public or private guidelines and recommendations
for training in preparing allergen-free foods, including the
Food Allergy and Anaphylaxis Network and Food Allergy
Initiative's document entitled ``Food Allergy Training Guide
for Restaurants and Food Services''.
(3) With respect to food prepared for students by
elementary and secondary schools, develop and appropriately
disseminate recommendations for the preparation of allergen-
free foods, with priority given to the issue of life-
threatening food allergies.

Food Allergen Consumer Protection Act - Amends the Federal Food, Drug, and Cosmetic Act to require food labels to identify known food allergens contained therein or be deemed misbranded, without regard as to whether or not the presence of an allergen is intentional or unintentional. Defines "known food allergen" to include milk, eggs, fish, Crustacea, tree nuts, wheat, peanuts, soybeans, other grains containing gluten, and any food the Secretary of Health and Human Services determines to cause allergic or adverse responses which endanger human health. Includes spices, flavorings, colorings, or incidental additives that are or contain a known food allergen. Sets forth special requirements for raw agricultural commodities which are or contain a known food allergen. Sets forth criteria for labels, requiring a format comparable to that required for the disclosure of nutrition information. Requires certain manufacturers, packers, or distributors to include a toll-free telephone number on such label. Establishes civil penalties for violations of this Act. Requires the Secretary to issue rules which address the use of good manufacturing practices to minimize the unintentional presence of allergens in food and advisory labeling if such allergens may be unintentionally present. Requires the Secretary, acting through the Director of the Centers for Disease Control, to annually publish national data on the prevalence of food allergies and the incidence of deaths and injuries. Requires the Secretary to study the adequacy of existing data collection systems and possible alternative systems as well as educate health providers on improving data collection and analysis.
In the study of low-mass stellar objects, the presence or absence of the Li I 6708 Å resonance line has played an important role in ascertaining whether an object is a brown dwarf. However, the use of this so-called _lithium test_ @xcite to determine substellarity has some drawbacks. L dwarfs which lie just below the bottom, or at the edge, of the hydrogen-burning main sequence may pass through a period of lithium burning early in their evolution, depleting their lithium abundance, weakening the Li I resonance line, and thereby suggesting that they are main-sequence stars. Furthermore, the depletion of lithium is age dependent, which in turn can be used as a clock under the correct conditions (see, e.g., @xcite and references therein). The strength of the Li I resonance line can also be reduced in lower temperature objects (@xmath5 K), near the L/T dwarf interface, by the sequestering of lithium into molecular species such as LiCl, LiH, and LiOH @xcite. In either case, conclusions drawn from the lithium test alone (such as age determination, or substellarity in the case of L dwarfs) may be inaccurate.
Thermochemical equilibrium calculations for cool dwarf atmospheres @xcite suggest that LiCl is the dominant Li-bearing gas over an extended domain of the temperature-pressure diagram. LiCl has a large dipole moment in its ground electronic state, which may give rise to an intense rovibrational line spectrum in the mid-infrared near 15.8 @xmath4 m. As such, LiCl may produce a significant absorption feature in L and T dwarf spectra, as suggested by @xcite and @xcite. If the feature is observable, it could be used to estimate the total elemental lithium abundance in conjunction with optical Li I observations, to confirm the equilibrium lithium chemistry models, and to provide a better test of substellarity for cool objects.

In this work, we continue our long-term project to update and complete molecular opacity data @xcite. Here we present a complete line list (transition energies and oscillator strengths) of all allowed rovibrational transitions within the @xmath0 electronic ground state of @xmath1Li@xmath2Cl. The calculations were performed using an accurate hybrid potential and the dipole moment function of @xcite. The line list was incorporated into the stellar atmosphere code PHOENIX @xcite to compute spectra for a range of T dwarf models and to explore the possibility of observing LiCl.
For the present calculations, an accurate hybrid potential was constructed for the @xmath0 electronic state from the spectral inversion fit of @xcite and from the multi-reference single- and double-excitation configuration interaction (MRSDCI) calculations of @xcite. The fit to the effective potential energy proposed by @xcite consists of a sum of five radial functions that account empirically for vibrational adiabatic and nonadiabatic effects. The coefficients of this expansion were determined by direct spectral inversion from the frequencies of 2577 known transitions in the infrared and microwave spectral regions for the isotopic variants @xmath6Li@xmath2Cl, @xmath6Li@xmath7Cl, @xmath1Li@xmath2Cl, and @xmath1Li@xmath7Cl. The normalized standard deviation of the fit was 0.993 over the complete domain of definition of the radial functions, i.e., for internuclear distances from @xmath8 to @xmath9. A shift in energy of @xmath10 was applied to the Ogilvie fit, in addition to a shift of @xmath11 from its original equilibrium geometry, to obtain coincidence with the _ab initio_ energy minimum at @xmath12 determined by cubic spline interpolation from the MRSDCI data of @xcite. Beyond the range @xmath13, a spline fit to the _ab initio_ data was used, connecting smoothly with the shifted Ogilvie fit. For internuclear distances @xmath14, a fit to the multi-reference potential was performed using the usual van der Waals dispersion expansion to account for the long-range interaction. To our knowledge, no data have been reported for the van der Waals coefficients of the @xmath0 state of LiCl, so theoretical estimates were obtained by averaging values from several techniques, in a similar way as in @xcite.

In order to determine the spectroscopic constants of the @xmath0 potential, the vibrational wave functions, @xmath15, and energy eigenvalues, @xmath16, were calculated by solving with Numerov techniques @xcite the radial nuclear Schrödinger equation,

@xmath17 \chi_{v}(r) = 0,   ([re])

where @xmath4 is the reduced mass of the system, @xmath18 is the rotational quantum number corresponding to the angular momentum of nuclear rotation, and @xmath19 is the electronic potential energy. The reduced mass adopted for @xmath20Li@xmath2Cl was @xmath21 @xmath22 @xcite. Calculations were performed with a grid stepsize of @xmath23 for the integration, over a range of internuclear distances from @xmath24 to @xmath25. For this hybrid potential, the calculations yielded an energy difference @xmath26 and a dissociation energy @xmath27, slightly larger than the thermochemical value, @xmath28, of @xcite or the flame photometry measurement, @xmath29, of @xcite. The introduction of the spin-orbit interaction into our calculations would lower the dissociation asymptote of the @xmath0 state, bringing the theoretical dissociation energy into closer agreement with the experimental estimates. Our theoretical vibrational constants @xmath30, @xmath31, and @xmath32 are in excellent agreement with the accurate Dunham constants @xmath33, @xmath34, and @xmath35, respectively, derived from experiment by @xcite for @xmath20Li@xmath2Cl. The frequency of the first band, @xmath36, essentially reproduces the value of @xmath37 obtained from the Dunham terms given above.
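The eigenvalue calculation described above, Numerov integration of the radial equation on a uniform grid followed by extraction of vibrational constants from the computed levels, can be sketched in a few lines. This is a minimal illustration: the Morse potential and all numerical parameters below are arbitrary stand-ins, not the hybrid LiCl potential or the production code of this work, and only the rotationless (J = 0) case is treated.

```python
import math

def numerov_endpoint(E, r, V, mu):
    # Outward Numerov integration of psi'' = 2*mu*(V(r) - E)*psi
    # (atomic units, hbar = 1, J = 0). The value of psi at the outer
    # boundary changes sign each time E crosses a bound eigenvalue.
    h = r[1] - r[0]
    f = [1.0 + h * h * 2.0 * mu * (E - v) / 12.0 for v in V]
    psi_prev, psi = 0.0, 1e-6   # start deep in the inner forbidden region
    for n in range(1, len(r) - 1):
        psi_prev, psi = psi, ((12.0 - 10.0 * f[n]) * psi - f[n - 1] * psi_prev) / f[n + 1]
    return psi

def eigenvalues(r, V, mu, n_levels, e_max, n_scan=400):
    # Bracket eigenvalues by scanning the endpoint value for sign
    # changes, then refine each bracket by bisection (shooting method).
    es = [1e-6 + i * (e_max - 1e-6) / n_scan for i in range(n_scan + 1)]
    vals = [numerov_endpoint(e, r, V, mu) for e in es]
    levels = []
    for i in range(n_scan):
        if vals[i] * vals[i + 1] < 0.0:
            lo, hi, flo = es[i], es[i + 1], vals[i]
            for _ in range(50):
                mid = 0.5 * (lo + hi)
                fmid = numerov_endpoint(mid, r, V, mu)
                if flo * fmid < 0.0:
                    hi = mid
                else:
                    lo, flo = mid, fmid
            levels.append(0.5 * (lo + hi))
            if len(levels) == n_levels:
                break
    return levels

# Arbitrary Morse potential standing in for the hybrid LiCl potential:
mu, De, alpha, re = 5000.0, 0.1, 0.9, 2.0
npts = 2001
r = [0.8 + 7.2 * i / (npts - 1) for i in range(npts)]
V = [De * (1.0 - math.exp(-alpha * (x - re))) ** 2 for x in r]

E0, E1, E2 = eigenvalues(r, V, mu, n_levels=3, e_max=0.02)

# Vibrational constants from G(v) = we*(v + 1/2) - wexe*(v + 1/2)^2,
# the same kind of fit used to compare with Dunham constants.
wexe = (2.0 * E1 - E0 - E2) / 2.0
we = (E1 - E0) + 2.0 * wexe
print(we, wexe)
```

For the Morse potential these constants have closed forms in these units (we = alpha*sqrt(2*De/mu), wexe = alpha^2/(2*mu)), which makes the sketch easy to validate; the production calculation instead uses the spline-connected hybrid potential, a much denser grid, and all rotational quantum numbers.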
The line oscillator strengths, @xmath38, from rovibrational states @xmath39 to final states @xmath40 were computed using the @xmath0 dipole moment function of @xcite for all allowed absorption transitions between the 29,370 rovibrational levels that are solutions of eq. ([re]), giving a total of 3,357,811 lines. (The LiCl oscillator strength data are available online at the UGA Molecular Opacity Project database website, http://www.physast.uga.edu/ugamop/.) A standard expression for the line oscillator strength can be found, for example, in @xcite.
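For a singlet sigma state such as @xmath0, one standard expression of the kind referenced above builds the absorption line oscillator strength from the transition energy, a Hönl-London rotational factor, and the rovibrationally averaged dipole moment, all in atomic units. The sketch below uses invented numbers rather than entries from the actual line list; the dipole matrix element in particular is an arbitrary placeholder.

```python
def line_oscillator_strength(delta_e, dipole_me, j_lower, branch):
    # f = (2/3) * dE * (S / (2*J'' + 1)) * |<chi_v'J'|D(r)|chi_vJ''>|^2
    # in atomic units, with Honl-London factors for a 1Sigma-1Sigma
    # band: S = J'' + 1 for the R branch, S = J'' for the P branch.
    s = j_lower + 1.0 if branch == "R" else float(j_lower)
    return (2.0 / 3.0) * delta_e * (s / (2.0 * j_lower + 1.0)) * dipole_me ** 2

CM_PER_HARTREE = 219474.6314      # wavenumber equivalent of 1 hartree

# Hypothetical R(7) line near the fundamental band origin; the matrix
# element below is a made-up value, not one from the dipole function.
delta_e = 644.64 / CM_PER_HARTREE     # transition energy in hartree
d_vv = 0.05                           # assumed <chi'|D|chi> in a.u.
f_r7 = line_oscillator_strength(delta_e, d_vv, j_lower=7, branch="R")
print(f"{f_r7:.3e}")
```

Evaluating such an expression for every allowed pair of computed rovibrational levels, with matrix elements taken over the dipole moment function, is what yields a line list of this kind.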
Computed line oscillator strengths and transition energies are reported in Table [tbl1], along with the high-resolution measurements of @xcite, for the @xmath41-branch @xmath42 of the fundamental vibrational band @xmath43, for @xmath44 to 4. The agreement is excellent, with a maximum transition energy discrepancy of @xmath45 for the @xmath46 line of the @xmath47 band.

In Figure [fig1], representative LiCl opacities (absorption cross sections per molecule) are presented for pressures and temperatures appropriate to T dwarfs. The opacities are computed using eq. (6) of @xcite, with Einstein A-coefficients derived from the above line list, multiplied by a Lorentzian line profile. The full width at half maximum of each line is estimated by considering only collisional broadening and is typically @xmath48 0.1 @xmath49 at 100 atm. The rovibrational levels of LiCl are assumed to be in equilibrium, and a correction for stimulated emission is included. The fundamental and first two vibrational overtone bands, as well as a portion of the pure rotational band, are depicted. The fundamental band, with a band origin at 15.8 @xmath4 m, is the dominant LiCl opacity source in the mid-infrared. We note, however, that PHOENIX uses molecular line lists directly, rather than pre-computed opacity tables, as described below.
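The opacity construction just outlined (Boltzmann-weighted lower-level populations, a stimulated-emission correction, and a collisionally broadened Lorentzian applied to each line) can be sketched with a generic textbook cross-section formula. This is not eq. (6) of the cited work, and the two lines, partition function, and level data below are invented placeholders.

```python
import math

KB_CM = 0.6950348        # Boltzmann constant in cm^-1 per K (k_B/hc)
PI_E2_MEC2 = 8.8528e-13  # pi*e^2/(m_e*c^2) in cm, classical line-strength factor

def lorentzian(nu, nu0, hwhm):
    # Pressure-broadened (collisional) profile, normalized in wavenumber.
    return (hwhm / math.pi) / ((nu - nu0) ** 2 + hwhm ** 2)

def cross_section(nu, lines, T, Q, hwhm):
    # Absorption cross section per molecule (cm^2) at wavenumber nu,
    # summing Boltzmann-weighted lines in LTE with a stimulated-emission
    # correction. Each line is (nu0, f, E_lower, g_lower) in cm^-1 units.
    sigma = 0.0
    for nu0, f, e_low, g_low in lines:
        pop = g_low * math.exp(-e_low / (KB_CM * T)) / Q   # lower-level fraction
        stim = 1.0 - math.exp(-nu0 / (KB_CM * T))          # stimulated emission
        sigma += PI_E2_MEC2 * f * pop * stim * lorentzian(nu, nu0, hwhm)
    return sigma

# Invented two-line example near the 15.8 micron band origin; the
# f-values and energies are placeholders, not line-list entries.
lines = [(644.64, 1.26e-5, 120.0, 15.0), (648.35, 1.22e-5, 230.0, 21.0)]
Q = 500.0   # assumed partition function at this temperature
sigma = cross_section(644.64, lines, T=1200.0, Q=Q, hwhm=0.05)
print(f"{sigma:.3e} cm^2")
```

A production opacity code sums millions of such lines per wavenumber point, which is why PHOENIX works from the line list directly and applies the broadening at the local temperature and pressure of each atmospheric layer.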
The atmosphere models used for this work were calculated as described in @xcite. These models and their comparison to earlier versions were the subject of a previous publication @xcite, so we do not repeat the detailed description of the models here; we only briefly summarize their major physical properties. The models are based on the AMES H@xmath50O and TiO line lists of @xcite and @xcite and also include the line lists for FeH by @xcite and for VO and CrH by R. Freedman (NASA-Ames, private communication). We try as much as possible to add new opacities as they become available (see, for example, @xcite), and the new FeH and CrH opacities recently calculated in @xcite and @xcite will soon be added to our database. However, as can be seen from those references, the new line lists calculated for FeH and CrH have no features (for vibrational transitions) in the mid-IR region where the LiCl feature is located. Although the global opacity is expected to change overall with these new line lists, for the purpose and wavelength window of this paper the use of the line lists of @xcite and R. Freedman is appropriate. The models account for equilibrium formation of dust and condensates and include grain opacities for 40 species. In this paper we consider only the so-called "AMES-cond" models, in which the dust particles have sunk below the atmosphere from the layers in which they originally formed. As demonstrated in @xcite, this limiting case is appropriate for the T dwarfs discussed in this paper. We stress that large uncertainties persist in the water opacities for parts of the temperature range of this work @xcite.

In addition to the opacity sources listed above and in @xcite (and references therein), the new LiCl line list presented in this paper has been added to our opacity database. In order to assess the effects of the new LiCl line data, we compare spectra calculated with and without this opacity source. The models used in the following discussion were all iterated to convergence for the parameters indicated.
The high-resolution spectra, in which the individual opacity sources are selected, are calculated on top of the converged models. The LiCl line opacity turned out to be too weak to influence the temperature structure of the atmosphere. The models have solar abundances with the undepleted lithium abundance of log(n@xmath51) = 3.31. We calculated models with log(g) = 3.0, 4.0, and 5.0 and effective temperatures of 900 K, 1200 K, and 1500 K, which are typical parameters of old @xmath52 to young @xmath53 T dwarfs. This parameter region turned out to be the one showing the strongest LiCl features. As can be seen in Figure [fig2], the effect of LiCl is strongest for @xmath54 = 1200 K and log(g) = 3.0 in the IR around the fundamental vibrational band origin at 15.8 @xmath4 m, and the relative flux difference is typically less than 20% overall. The general strength of the LiCl absorption warrants its inclusion in model calculations, but the lack of a distinct feature will make it hard to detect in an observed spectrum dominated by water absorption. However, the model parameters @xmath54 = 1200 K and log(g) = 3.0 are particularly interesting, since these are typical of very young (and hence very bright) mid to early T dwarfs.
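The quantitative statement above, a relative flux difference below 20%, comes from differencing spectra computed with and without the LiCl opacity. A toy version of that comparison, with fabricated flux arrays standing in for PHOENIX output:

```python
def relative_flux_difference(flux_without, flux_with):
    # Pointwise relative difference (F_without - F_with) / F_without,
    # i.e. the fractional flux removed by the added opacity source.
    return [(fw - f) / fw for fw, f in zip(flux_without, flux_with)]

# Fabricated spectra on an arbitrary wavelength grid (microns):
wavelengths = [15.0, 15.4, 15.8, 16.2, 16.6]
flux_without = [1.00, 0.95, 0.90, 0.94, 0.99]   # no LiCl opacity
flux_with    = [0.99, 0.90, 0.75, 0.89, 0.98]   # LiCl opacity included

diff = relative_flux_difference(flux_without, flux_with)
peak = max(diff)
peak_wl = wavelengths[diff.index(peak)]
print(peak_wl, round(peak, 3))   # strongest depression at the band origin
```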
Using an accurate hybrid potential and fully quantum-mechanical techniques, we have constructed a comprehensive and complete theoretical line list of spectroscopic accuracy for the @xmath0 electronic ground state of @xmath1Li@xmath2Cl. Although LiCl appears to be a dominant Li-bearing gas over an extended domain of the @xmath55 diagram in cool dwarf atmospheres, synthetic spectra calculated with the stellar atmosphere code PHOENIX suggest that the flux differences resulting from the incorporation of this new line list are less than 20% for parameters typical of young to old T dwarfs. The strongest signature of LiCl, for @xmath54 = 1200 K and log(g) = 3.0, appears in the vicinity of the fundamental vibrational band origin at 15.8 @xmath4 m, where the spectrum is dominated by water absorption. The current results suggest that it will be difficult to measure the full inventory of elemental lithium in T dwarfs after it is reposited into molecular species.
This work was supported by NASA grants NAG5-8425, NAG5-9222, and NAG5-10551, as well as NASA/JPL grant 961582, and in part by NSF grants AST-9720704 and AST-0086246 (and a grant to the Institute for Theoretical Atomic, Molecular & Optical Physics, Harvard-Smithsonian CfA). Some of the calculations were performed on the IBM SP "Blue Horizon" of the San Diego Supercomputer Center, with support from the NSF, and on the IBM SP of NERSC, with support from the DOE. This work was also supported in part by the Pôle Scientifique de Modélisation Numérique at ENS-Lyon. P.F.W. acknowledges ITAMP at Harvard University and SAO for travel support.
[ table : computed vs. observed r(j ) line positions for five vibrational bands of licl ; residuals increase from about 0.04 to 0.32 cm^-1 toward high j . ] | we present a complete line list for the @xmath0 electronic ground state of @xmath1li@xmath2cl computed using fully quantum - mechanical techniques .
this list includes transition energies and oscillator strengths in the spectral region @xmath3 for all allowed rovibrational transitions in absorption within the electronic ground state .
the calculations were performed using an accurate hybrid potential constructed from a spectral inversion fit of experimental data and from recent multi - reference single- and double - excitation configuration interaction calculations .
the line list was incorporated into the stellar atmosphere code phoenix to compute spectra for a range of young to old t dwarf models .
the possibility of observing a signature of licl in absorption near 15.8 @xmath4 m is addressed and the proposal to use this feature to estimate the total lithium elemental abundance for these cool objects is discussed . |
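a line list of the kind described above pairs transition energies with oscillator strengths ; one common downstream use is summing the broadened lines into an absorption cross - section spectrum . a minimal sketch , assuming gaussian profiles with a single constant width ( the prefactor is the standard classical value pi e^2/(m_e c^2) in cgs units ; all inputs are illustrative , not the actual licl data ) :

```python
import math

def absorption_cross_section(lines, nu_grid, width):
    """Sum Gaussian-broadened lines into a cross-section spectrum.

    lines   : list of (nu0, f) pairs -- transition wavenumber (cm^-1)
              and dimensionless oscillator strength
    nu_grid : wavenumbers (cm^-1) at which to evaluate the spectrum
    width   : Gaussian standard deviation (cm^-1), an assumed constant

    Returns cross sections in cm^2, using the integrated strength
    pi*e^2/(m_e*c^2) * f, with the prefactor in cgs units.
    """
    prefactor = 8.853e-13  # pi*e^2/(m_e*c^2) in cm (cgs)
    norm = 1.0 / (width * math.sqrt(2.0 * math.pi))  # Gaussian area = 1
    spectrum = []
    for nu in nu_grid:
        total = 0.0
        for nu0, f in lines:
            profile = norm * math.exp(-0.5 * ((nu - nu0) / width) ** 2)
            total += prefactor * f * profile
        spectrum.append(total)
    return spectrum
```

in practice one would use a voigt profile and temperature - dependent level populations ; the gaussian single - width form above is only the simplest placeholder .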
SECTION 1. SHORT TITLE; TABLE OF CONTENTS; FINDINGS.
(a) Short Title.--This Act may be cited as the ``Real Solutions to
World Hunger Act of 2002''.
(b) Table of Contents.--The table of contents of this Act is as
follows:
Sec. 1. Short title; table of contents; findings.
Sec. 2. Definitions.
Sec. 3. Ensuring safety and mitigating ecological impacts of United
States exports of genetically engineered
animals, plants, and seeds.
Sec. 4. Promotion of international research regarding sustainable
agriculture to assist developing countries.
Sec. 5. Position of the United States in the international financial
institutions regarding genetically
engineered animals, plants, and seeds.
Sec. 6. Tax on biotech companies to help fund sustainable agriculture
research.
(c) Findings.--Congress finds the following:
        (1) The need for mandatory labeling, safety testing, and
    environmental reviews of genetically engineered foods does not
    constitute an obstacle to ending world hunger.
        (2) The dominant causes of world hunger are not
    technological in nature, but are rooted in basic socioeconomic
    failures.
(3) Technologies, like genetically engineered food, may
have a limited role, but economics remain the significant
barrier to a consistent food supply, and the development of
expensive genetically engineered foods may only exacerbate this
trend.
(4) Most genetically engineered food products and almost
all research funding for the development of genetically
engineered food target developed nation agriculture and
consumers. Developing countries cannot afford this technology
    and are therefore largely overlooked.
(5) Agroecological interventions have had significant
success in helping developing nations feed themselves with
higher yields and improved environmental practices, all within
reasonable costs for developing countries.
        (6) If the biotech industry believes it can help mitigate
hunger concerns, domestic or foreign, then requiring biotech
companies to make available the necessary resources for this
purpose is appropriate.
SEC. 2. DEFINITIONS.
In this Act:
(1) Genetically engineered animal.--The term ``genetically
engineered animal'' means an animal that contains a genetically
engineered material or was produced with a genetically
engineered material. An animal shall be considered to contain a
genetically engineered material or to have been produced with a
genetically engineered material if the animal has been injected
or otherwise treated with a genetically engineered material or
is the offspring of an animal that has been so injected or
treated.
(2) Genetically engineered plant.--The term ``genetically
engineered plant'' means a plant that contains a genetically
engineered material or was produced from a genetically
engineered seed. A plant shall be considered to contain a
genetically engineered material if the plant has been injected
or otherwise treated with a genetically engineered material
(except that the use of manure as a fertilizer for the plant
may not be construed to mean that the plant is produced with a
genetically engineered material).
(3) Genetically engineered seed.--The term ``genetically
engineered seed'' means a seed that contains a genetically
engineered material or was produced with a genetically
engineered material. A seed shall be considered to contain a
genetically engineered material or to have been produced with a
genetically engineered material if the seed (or the plant from
which the seed is derived) has been injected or otherwise
treated with a genetically engineered material (except that the
use of manure as a fertilizer for the plant may not be
construed to mean that any resulting seeds are produced with a
genetically engineered material).
(4) Genetically engineered material.--The term
``genetically engineered material'' means material that has
been altered at the molecular or cellular level by means that
are not possible under natural conditions or processes
(including recombinant DNA and RNA techniques, cell fusion,
microencapsulation, macroencapsulation, gene deletion and
doubling, introducing a foreign gene, and changing the
positions of genes), other than a means consisting exclusively
of breeding, conjugation, fermentation, hybridization, in vitro
fertilization or tissue culture or mutagenesis.
(5) Biotech company.--The term ``biotech company'' means a
person engaged in the business of creating genetically
engineered material and obtaining the patent rights to that
material for the purposes of commercial exploitation of that
material. The term does not include the employees of such
person.
SEC. 3. ENSURING SAFETY AND MITIGATING ECOLOGICAL IMPACTS OF UNITED
STATES EXPORTS OF GENETICALLY ENGINEERED ANIMALS, PLANTS,
AND SEEDS.
It shall be unlawful for any person to ship or offer for shipment,
or for any carrier or other person to transport or receive for
transportation, to any foreign country, any genetically engineered
animal, genetically engineered plant, or genetically engineered seed
that the person knows, or has reason to believe, will be used by the
ultimate purchaser to produce an agricultural commodity if--
(1) the genetically engineered animal, genetically
engineered plant, or genetically engineered seed--
(A) was denied a Federal approval necessary as a
condition for commercial marketing in the United
States; or
(B) was the subject of an application for such a
Federal approval that was withdrawn; or
(2) the government of the foreign country has not certified
that ecological impacts related to the importation of the
genetically engineered animal, genetically engineered plant, or
genetically engineered seed have been mitigated to the
satisfaction of the foreign government.
SEC. 4. PROMOTION OF INTERNATIONAL RESEARCH REGARDING SUSTAINABLE
AGRICULTURE TO ASSIST DEVELOPING COUNTRIES.
(a) Grants for International Research.--The Secretary of
Agriculture may make grants to designated international research
institutions for the purpose of promoting the development of
sustainable agriculture techniques that rely on minimum artificial
inputs to meet the food and fiber needs of developing countries.
Eligible sustainable agriculture techniques may not derive from any
genetically engineered material.
(b) Use of Grant Funds.--A grant recipient shall use the funds
provided under this section only in a manner consistent with the
purpose for which the grant is awarded.
(c) Designated Institutions.--The Secretary of Health and Human
Services shall designate the international research institutions
eligible to apply for a grant under this section. The designated
institutions shall include the United Nations Food and Agriculture
Organization and the Consultative Group on International Agricultural
Research.
(d) Competitive Basis.--Grants under this section shall be made on
a competitive basis.
(e) Funding Source.--The Secretary of Agriculture shall use the
Sustainable Agriculture Trust Fund, in such amounts as provided in
advance in appropriation Acts, to make grants under this section.
SEC. 5. POSITION OF THE UNITED STATES IN THE INTERNATIONAL FINANCIAL
INSTITUTIONS REGARDING GENETICALLY ENGINEERED ANIMALS,
PLANTS, AND SEEDS.
The Secretary of the Treasury shall instruct the United States
Executive Director at each international financial institution (as
defined in section 1701(c)(2) of the International Financial
Institutions Act) to make no effort to encourage the institution to
prohibit any country eligible for assistance under the Heavily Indebted
Poor Countries (HIPC) Initiative of the International Bank for
Reconstruction and Development from requiring compulsory licensing with
respect to any genetically engineered animal, genetically engineered
plant, or genetically engineered seed.
SEC. 6. TAX ON BIOTECH COMPANIES TO HELP FUND SUSTAINABLE AGRICULTURE
RESEARCH.
(a) Special Tax.--
(1) Tax imposed.--Subchapter A of chapter 1 of the Internal
Revenue Code of 1986 is amended by adding at the end the
following new part:
``PART VIII--TAX ON GENETIC ENGINEERING BUSINESSES
``Sec. 59B. Imposition of tax.
``SEC. 59B. IMPOSITION OF TAX.
``(a) Tax Imposed.--In the case of a corporation, there is hereby
imposed (in addition to any other tax imposed by this subtitle) a tax
equal to 1 percent of the gross income of such corporation for the taxable
year which is attributable (directly or indirectly) to--
``(1) the marketing in the United States of any genetically
engineered organism, or
    ``(2) the holding of a patent on any such organism.
``(b) Definition.--In this section, the term `genetically
engineered organism' means--
``(1) an organism that has been altered at the molecular or
cellular level by means that are not possible under natural
conditions or processes (including but not limited to
recombinant DNA and RNA techniques, cell fusion,
microencapsulation, macroencapsulation, gene deletion and
doubling, introducing a foreign gene, and changing the
positions of genes), other than a means consisting exclusively
of breeding, conjugation, fermentation, hybridization, in vitro
fertilization, tissue culture, or mutagenesis; and
``(2) an organism made through sexual or asexual
reproduction (or both) involving an organism described in
    paragraph (1), if possessing any of the altered molecular or
cellular characteristics of the organism so described.''
(2) Clerical amendment.--The table of parts for such
subchapter A is amended by adding at the end the following new
item:
``Part VIII. Tax on genetic engineering
businesses.''
(3) Effective Date.--The amendments made by this subsection
shall apply to taxable years beginning after the date of the
enactment of this Act.
(b) Sustainable Agriculture Trust Fund.--
(1) Creation and funding source.--Subchapter A of chapter
98 of the Internal Revenue Code of 1986 (relating to trust fund
code) is amended by adding at the end the following new
section:
``SEC. 9511. SUSTAINABLE AGRICULTURE TRUST FUND.
``(a) Creation of Trust Fund.--There is established in the Treasury
of the United States a trust fund to be known as the `Sustainable
Agriculture Trust Fund', consisting of such amounts as may be
appropriated or credited to the Sustainable Agriculture Trust Fund as
provided in this section or section 9602(b).
``(b) Transfer to Trust Fund of Certain Taxes.--There is hereby
appropriated to the Sustainable Agriculture Trust Fund amounts
equivalent to the taxes received in the Treasury under section 59B.
``(c) Expenditures From Trust Fund.--Amounts in the Sustainable
Agriculture Trust Fund shall be available, as provided in appropriation
Acts, only for grants under sections 3 and 4 of the Real Solutions to
World Hunger Act of 2002.''.
(2) Clerical amendment.--The table of sections for such
subchapter A is amended by adding at the end the following new
item:
``Sec. 9511. Sustainable Agriculture
Trust Fund.'' | Real Solutions to World Hunger Act of 2002 - Makes it unlawful for any person to ship, or offer to ship, or for any carrier or person to transport, or receive for transportation, to any foreign country, any genetically engineered animal, plant, or seed (as defined by this Act) if the person knows or has reason to believe that the engineered article will be used to produce an agricultural commodity if: (1) such article was denied Federal approval for U.S. marketing, or its application for approval was withdrawn; or (2) the foreign government has not certified that related ecological impacts of such article have been satisfactorily mitigated.Authorizes the Secretary of Agriculture to make grants to designated international research institutions to promote development of sustainable agricultural techniques (which may not derive any genetic engineered material) that rely on minimum artificial inputs to meet developing countries' food and fiber needs.Directs the Secretary of the Treasury to instruct the United States Executive Director at each international financial institution to make no effort to encourage the institution from prohibiting countries eligible for certain assistance from requiring compulsory licensing of genetically engineered animals, plants, or seeds.Amends the Internal Revenue Code to: (1) impose a tax on a corporation equal to one percent of the gross income that is attributable to the U.S. marketing of any genetically engineered organism (as defined by this Act), or the holding of a patent on any such organism; and (2) establish in the Treasury the Sustainable Agriculture Trust Fund. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Rescue and Emergency Services
Prepared for Our Nation's Defense Act''.
SEC. 2. FINDINGS.
The Congress finds the following:
        (1) Many nations currently possess weapons of mass
    destruction and related materials and technologies, and such
    weapons are increasingly available from a variety of sources
    through legitimate and illegitimate means.
(2) The proliferation of weapons of mass destruction is
growing, and will likely continue despite the best efforts of
the international community to limit their flow.
(3) The increased availability, relative affordability, and
ease of use of weapons of mass destruction may make the use of
such weapons an increasingly attractive option to potential
adversaries who are not otherwise capable of countering United
States military superiority.
(4) On November 12, 1997, President Clinton issued an
Executive Order stating that ``the proliferation of nuclear,
biological, and chemical weapons (``weapons of mass
destruction'') and the means of delivering such weapons
constitutes an unusual and extraordinary threat to the national
security, foreign policy, and economy of the United States''
and declaring a national emergency to deal with that threat.
(5) The Quadrennial Defense Review concluded that the
threat or use of weapons of mass destruction is a likely
condition of future warfare and poses a potential threat to the
United States.
(6) The United States lacks adequate preparedness at the
Federal, State, and local levels to respond to a potential
attack on the United States involving weapons of mass
destruction.
(7) The United States has initiated an effort to enhance
the capability of Federal, State, and local governments as well
as local emergency response personnel to prevent and respond to
a domestic terrorist incident involving weapons of mass
destruction.
(8) More than 40 Federal departments, agencies, and bureaus
are involved in combating terrorism, and many, including the
Department of Defense, the Department of Justice, the
Department of Energy, the Department of Health and Human
Services, and the Federal Emergency Management Agency, are
executing programs to provide civilian personnel at the
Federal, State, and local levels with training and assistance
to prevent and respond to incidents involving weapons of mass
destruction.
(9) The Secretary of Defense has called for the
establishment of 10 Rapid Assessment and Initial Detection
elements, composed of 22 National Guard personnel, to provide
timely regional assistance to local emergency responders during
an incident involving chemical or biological weapons of mass
destruction.
(10) The Department of Energy has established a Nuclear
Emergency Response Team which is available to respond to
incidents involving nuclear or radiological emergencies.
(11) The Department of Defense has begun to implement a
program to train local emergency responders in major cities
throughout the United States to prevent and respond to
incidents involving weapons of mass destruction.
(12) The Department of Justice has initiated a program to
direct and coordinate training and exercises to enhance local
emergency response to incidents involving weapons of mass
destruction, and may be establishing a National Center for
Domestic Preparedness.
(13) Federal agency initiatives to enhance domestic
preparedness to respond to an incident involving weapons of
mass destruction are hampered by incomplete interagency
coordination and overlapping jurisdiction of agency missions.
(14) The Federal Emergency Management Agency, originally
designated to lead the coordinated Federal effort to enhance
preparedness to respond to incidents involving weapons of mass
destruction, has withdrawn from that role, and a successor lead
agency has not yet been determined.
(15) In order to ensure effective local response
capabilities to incidents involving weapons of mass
destruction, the Federal Government, in addition to providing
training, must concurrently address the need for--
(A) compatible communications capabilities for all
Federal, State, and local emergency responders, which
often use different radio systems and operate on
different radio frequencies;
(B) adequate equipment necessary for response to an
incident involving weapons of mass destruction, and a
means to ensure that financially lacking localities
have access to such equipment;
(C) local and regional preplanning efforts to
ensure the effective execution of emergency response in
the event of an incident involving a weapon of mass
destruction; and
(D) increased planning and training to prepare for
emergency response capabilities in port areas and
littoral waters.
SEC. 3. ESTABLISHMENT OF COMMISSION.
(a) Establishment.--There is hereby established a commission to be
known as the ``Commission to Assess Weapons of Mass Destruction
Domestic Response Capabilities''.
(b) Composition.--The Commission shall be composed of 15 members,
appointed as follows:
(1) 4 members appointed by the Speaker of the House of
Representatives;
(2) 4 members appointed by the majority leader of the
Senate;
(3) 2 members appointed by the minority leader of the House
of Representatives;
        (4) 2 members appointed by the minority leader of the
    Senate; and
(5) 3 members appointed by the President.
(c) Qualifications.--Members shall be appointed from among
individuals with knowledge and expertise in emergency response matters.
(d) Deadline for Appointments.--Appointments shall be made not
later than the date that is 30 days after the date of the enactment of
this Act.
(e) Initial Meeting.--The Commission shall conduct its first
meeting not later than the date that is 30 days after the date that
appointments to the Commission have been made.
(f) Chairman.--A Chairman of the Commission shall be elected by a
majority of the members.
SEC. 4. DUTIES OF COMMISSION.
The Commission shall--
(1) assess Federal agency efforts to enhance domestic
preparedness for incidents involving weapons of mass
destruction;
(2) assess the progress of Federal training programs for
local emergency responses to incidents involving weapons of
mass destruction;
(3) assess deficiencies in training programs for responses
to incidents involving weapons of mass destruction, including a
review of unfunded communications, equipment, and preplanning
and maritime region needs;
(4) recommend strategies for ensuring effective
coordination with respect to Federal agency weapons of mass
destruction response efforts, and for ensuring fully effective
local response capabilities for weapons of mass destruction
incidents; and
(5) assess the appropriate role of State and local
governments in funding effective local response capabilities.
SEC. 5. REPORT.
Not later than the date that is 6 months after the date of the
first meeting of the Commission, the Commission shall submit a report
to Congress on its findings under section 4 and recommendations for
improving Federal, State, and local domestic emergency preparedness to
respond to incidents involving weapons of mass destruction.
SEC. 6. POWERS.
(a) Hearings.--The Commission or, at its direction, any panel or
member of the Commission, may, for the purpose of carrying out this
Act, hold such hearings, sit and act at times and places, take
testimony, receive evidence, and administer oaths to the extent that
the Commission or any panel member considers advisable.
(b) Information.--The Commission may secure directly from any
department or agency of the United States information that the
Commission considers necessary to enable the Commission to carry out
its responsibilities under this Act.
SEC. 7. COMMISSION PROCEDURES.
(a) Meetings.--The Commission shall meet at the call of a majority
of the members.
(b) Quorum.--Eight members of the Commission shall constitute a
quorum other than for the purpose of holding hearings.
    (c) Panels.--The Commission may establish panels composed of
less than the full membership of the Commission for the purpose of carrying
out the Commission's duties. The actions of each such panel shall be
subject to the review and control of the Commission. Any findings and
determinations made by such panel shall not be considered the findings
and determinations of the Commission unless approved by the Commission.
(d) Authority of Individuals To Act for Commission.--Any member or
agent of the Commission may, if authorized by the Commission, take any
action which the Commission is authorized to take by this Act.
SEC. 8. PERSONNEL MATTERS.
(a) Pay of Members.--Members of the Commission shall serve without
pay by reason of their work on the Commission.
(b) Travel Expenses.--The members of the Commission shall be
allowed travel expenses, including per diem in lieu of subsistence, at
rates authorized for employees of agencies under subchapter I of
chapter 57 of title 5, United States Code, while away from their homes
or regular places of business in the performance of services for the
Commission.
(c) Staff.--(1) The Commission may, without regard to the
provisions of title 5, United States Code, governing appointments in
the competitive service, appoint a staff director and such additional
personnel as may be necessary to enable the Commission to perform its
duties.
(2) The Commission may fix the pay of the staff director and other
personnel without regard to the provisions of chapter 51 and subchapter
III of chapter 53 of title 5, United States Code, relating to
classification of positions and General Schedule pay rates, except that
the rate of pay fixed under this paragraph for the staff director may
not exceed the rate payable for level V of the Executive Schedule under
section 5316 of such title and the rate of pay for other personnel may
not exceed the maximum rate payable for grade GS-15 of the General
Schedule.
(d) Detail of Government Employees.--Upon request of the
Commission, the head of any Federal department or agency may detail, on
a nonreimbursable basis, any personnel of that department or agency to
the Commission to assist it in carrying out its duties.
(e) Procurement of Temporary and Intermittent Services.--The
Commission may procure temporary and intermittent services under
section 3109(b) of title 5, United States Code, at rates for
individuals which do not exceed the daily equivalent of the annual rate
of pay payable for level V of the Executive Schedule under section 5316
of such title.
SEC. 9. MISCELLANEOUS ADMINISTRATIVE PROVISIONS.
(a) Postal and Printing Services.--The Commission may use the
United States mails and obtain printing and binding services in the
same manner and under the same conditions as other departments and
agencies of the United States.
(b) Miscellaneous Administrative and Support Services.--Upon the
request of the Commission, the Administrator of General Services shall
provide to the Commission, on a reimbursable basis, the administrative
support services necessary for the Commission to carry out its duties
under this Act.
(c) Experts and Consultants.--The Commission may procure temporary
and intermittent services under section 3109(b) of title 5, United
States Code.
SEC. 10. TERMINATION OF COMMISSION.
The Commission shall terminate not later than 60 days after the
date that the Commission submits its report under section 5. | Rescue and Emergency Services Prepared for Our Nation's Defense Act - Establishes the Commission to Assess Weapons of Mass Destruction Domestic Response Capabilities to: (1) assess Federal agency efforts to enhance domestic preparedness for incidents involving weapons of mass destruction and Federal training programs for local emergency responses to such incidents; (2) recommend strategies for the coordination of response efforts; (3) assess the appropriate role of State and local governments in funding local response capabilities; and (4) report to the Congress within six months after its first meeting. |
investigating the properties of the rubidium ( rb ) atom is of immense interest for a number of applications @xcite .
it is one of the most widely used atoms in quantum - computational schemes based on rydberg atoms , where the hyperfine states of the rb ground state serve as the qubits @xcite .
it is also used to study quantum phase transitions in mixed - species degenerate quantum gases @xcite .
there are several proposals to carry out precision studies in this atom , such as constructing ultra - precise atomic clocks @xcite , probing parity non - conservation effects @xcite , searching for its permanent electric dipole moment @xcite , etc .
also , a number of measurements and calculations of lifetimes for many low - lying states in rb have been performed over the past few decades @xcite .
inconsistencies have been found between the calculated and measured lifetimes of atomic states in this atom @xcite . in this context , further theoretical studies of this atom are necessary . because this atom has a single valence electron outside a closed core ,
it is well suited to advanced many - body methods that can calculate its properties precisely , and such calculations ultimately serve as benchmark tests for experimental measurements @xcite . in this paper
, we determine the polarizabilities of the ground @xmath0 and excited @xmath1 states and study the differential ac stark shifts between these two states . in the process
, we also analyse the reduced matrix elements and their accuracies , which are further used to estimate precisely the lifetimes of a few excited states in this atom . the aim of our present study is to analyse the differential ac stark shifts , from which we can deduce the magic wavelengths ( see below for the definition ) that are of great use in state - insensitive trapping of rb atoms .
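the reduced matrix elements mentioned above feed directly into lifetime estimates : each allowed e1 decay channel contributes a spontaneous - emission rate , and the lifetime of a level is the inverse of the summed rates . a minimal sketch of this bookkeeping ( the numerical constant is the standard e1 conversion factor relating the einstein a coefficient to the line strength ; any example values are hypothetical , not the rb data of this paper ) :

```python
def einstein_a(wavelength_angstrom, line_strength_au, g_upper):
    """E1 spontaneous-emission rate (s^-1) of a single decay channel.

    Uses the standard relation A = 2.02613e18 * S / (g_k * lambda^3),
    with lambda in Angstroms, line strength S = |<J||D||J'>|^2 in
    atomic units, and g_k = 2*J_k + 1 for the upper level.
    """
    return 2.02613e18 * line_strength_au / (g_upper * wavelength_angstrom ** 3)

def lifetime(decay_rates):
    """Lifetime (s) of a level: inverse of the total decay rate."""
    return 1.0 / sum(decay_rates)
```

for a level with a single decay channel the lifetime is simply 1/a ; for several channels the rates of all channels are summed before inverting .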
cold and ultracold rb atoms have been widely manipulated using optical traps @xcite . for a number of applications ( such as atomic clocks and quantum computing @xcite ) , it is often desirable to optically trap the neutral atoms without affecting the internal energy - level spacing for the atoms .
however , in an experimental setup , the interaction of an atom with the externally applied oscillating electric field of the trapping beam inevitably causes ac stark shifts of the atomic levels . for any two internal states of an atom , the stark shifts caused by the trap light are in general different , which affects the fidelity of the experiments @xcite .
@xcite proposed the idea of tuning the trapping laser to a magic wavelength , @xmath4 , at which the differential ac stark shift of the transition vanishes . using this approach , the magic wavelength for the @xmath5 transition in @xmath6sr
was determined with a high precision to be 813.42735(40 ) @xmath7 @xcite .
_ demonstrated the state - insensitive trapping of cs at @xmath8 935 @xmath7 while still maintaining a strong coupling with the @xmath9 transition @xcite .
@xcite calculated the magic wavelengths for the @xmath10 transitions for other alkali atoms ( from na to cs ) by calculating dynamic polarizabilities using a relativistic coupled - cluster ( rcc ) method .
theoretical values for these quantities were obtained at the wavelengths where the ac polarizabilities of the two states involved in the transition are equal . the data in ref .
@xcite provides a wide range of magic wavelengths for the alkali - metal atoms trapped in linearly polarized light by evaluating the electric dipole ( e1 ) matrix elements with a linearized rcc method . in this paper , we evaluate these matrix elements by considering all possible non - linear terms in the rcc method .
in addition , we would like to optimize the matrix elements using the precisely known experimental results of lifetimes and static polarizabilities for different atomic states and re - investigate the above reported magic wavelengths in the considered atom .
it is also reported in ref . @xcite that trapping rb atoms in linearly polarized light offers only a few suitable magic wavelengths for the state - insensitive scheme . this motivates us to look for more plausible cases for constructing state - insensitive traps of rb atoms using circularly polarized light .
using the circularly polarized light may be advantageous owing to the dominant role played by vector polarizabilities ( which are absent in the linearly polarized light ) in estimating the ac stark shifts .
moreover , these vector polarizabilities act as fictitious magnetic fields " , turning the ac stark shifts to the case analogous to the zeeman shifts @xcite .
this paper is organized as follows . in sections [ sec2 ] and [ sec3 ] , we briefly discuss the theory of the dipole polarizability and the method used for calculating it precisely . in section [ sec4 ] , we first discuss in detail the evaluation of the matrix elements used for the precise estimation of the polarizability , and then present our magic wavelengths , first for the linearly polarized light and then for the circularly polarized light . unless stated otherwise , we use the conventional system of atomic units ( au ) , in which @xmath11 , @xmath12 , 4@xmath13 and the reduced planck constant @xmath14 have the numerical value 1 throughout this paper .
the @xmath15 energy level of an atom placed in a static electric field @xmath16 can be expressed using a time - independent perturbation theory as @xcite @xmath17 where @xmath18s are the unperturbed energy levels in the absence of electric field , @xmath19 represent the intermediate states allowed by the dipole selection rules and @xmath20 is the interaction hamiltonian with @xmath21 as the electric - dipole operator .
since the first - order correction to the energy levels is zero in the present case , we can approximate the energy shift at the second - order level for a weak field @xmath16 and write it in terms of the dipole moments @xmath22 as @xmath23 where @xmath24 and @xmath25 is the e1 amplitude between the @xmath26 and @xmath27 states . a more traditional notation of the above equation is @xmath28 where @xmath29 is known as the static polarizability of the @xmath30 state which
is written as @xmath31 if the applied field is frequency - dependent ( ac field ) , then we can still express the change in energy as @xmath32 with @xmath29 as a function of frequency given by @xmath33 . \ \
\ \\end{aligned}\ ] ] since @xmath34 also depends on the angular momentum @xmath35 and @xmath36 values of the given atomic state , it is customary to express it in a form with @xmath36 - dependent and @xmath36 - independent factors . therefore , @xmath34 is further rewritten as @xcite @xmath37 where @xmath38 , @xmath39 and @xmath40 define the degree of circular polarization , the angle between the wave vector of the electric field and the @xmath41 - axis , and the angle between the direction of polarization and the @xmath41 - axis , respectively . here
@xmath42 for the linearly polarized light implying there is no vector component present in this case ; otherwise @xmath43 for the right - handed and @xmath44 for the left - handed circularly polarized light . in the absence of magnetic field ( or in weak magnetic field ) , we can choose @xmath45 . here @xmath36 independent factors @xmath46 , @xmath47 and @xmath48 are known as scalar , vector and tensor polarizabilities , respectively . in terms of the reduced matrix elements of dipole operator they are given by @xcite @xmath49 \label{eq - vector } \\
\alpha_v^{2}(\omega ) & = & -2 \sqrt { \frac{5j_v(2j_v-1 ) } { 6(j_v+1)(2j_v+1)(2j_v+3 ) } }
\nonumber \\ & & \sum_{j_k } \left \ { \begin{array}{ccc } j_v & 2 & j_v \\ 1 & j_k & 1 \end{array } \right \ } ( -1)^{j_v+j_{k}+1 } | \langle \psi_v \parallel d \parallel \psi_k \rangle|^{2 } \nonumber \\ & & \times \left [ \frac{1}{\delta e_{kv}+\omega}+\frac{1}{\delta e_{kv}-\omega } \right ] \label{eq - tensor}.\end{aligned}\ ] ] for @xmath50 , the results will correspond to the static polarizabilities which clearly suggests that @xmath47 is zero for the static case .
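the angular factors in these expressions involve wigner 6 - j symbols . as a sketch of how such a factor can be evaluated numerically , the snippet below implements the standard racah single - sum formula and applies it to an illustrative channel with valence angular momentum 3/2 and intermediate angular momentum 1/2 ; the quantum numbers and the final prefactor are chosen for illustration only and are not tied to the tables of this work .

```python
# Sketch: Wigner 6-j symbols weight each |<v||d||k>|^2 term in the
# vector/tensor polarizability sums.  The standard Racah single-sum
# formula is implemented below; the (j_v = 3/2, j_k = 1/2) channel at
# the end is an illustrative choice only.
from math import factorial, sqrt

def _tri(a, b, c):
    # triangle coefficient Delta(a, b, c); returns 0.0 if the triad is invalid
    for x in (a + b - c, a - b + c, -a + b + c):
        if x < -1e-9 or abs(x - round(x)) > 1e-9:
            return 0.0
    return sqrt(factorial(round(a + b - c)) * factorial(round(a - b + c)) *
                factorial(round(-a + b + c)) / factorial(round(a + b + c + 1)))

def six_j(a, b, c, d, e, f):
    pref = _tri(a, b, c) * _tri(a, e, f) * _tri(d, b, f) * _tri(d, e, c)
    if pref == 0.0:
        return 0.0
    t_min = round(max(a + b + c, a + e + f, d + b + f, d + e + c))
    t_max = round(min(a + b + d + e, b + c + e + f, a + c + d + f))
    total = 0.0
    for t in range(t_min, t_max + 1):
        den = (factorial(round(t - a - b - c)) * factorial(round(t - a - e - f)) *
               factorial(round(t - d - b - f)) * factorial(round(t - d - e - c)) *
               factorial(round(a + b + d + e - t)) * factorial(round(b + c + e + f - t)) *
               factorial(round(a + c + d + f - t)))
        total += (-1) ** t * factorial(t + 1) / den
    return pref * total

# {j_v 2 j_v; 1 j_k 1} for j_v = 3/2, j_k = 1/2 (illustrative channel)
w = six_j(1.5, 2, 1.5, 1, 0.5, 1)

# square-root prefactor of the tensor polarizability for j_v = 3/2
j = 1.5
pref = sqrt(5 * j * (2 * j - 1) / (6 * (j + 1) * (2 * j + 1) * (2 * j + 3)))
```

for j_v = 3/2 the prefactor evaluates to sqrt(1/24) and the 6 - j symbol of the chosen channel to -sqrt(6)/12 ; in a production calculation a library routine would normally be used instead of a hand - rolled formula .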
to calculate the wave functions of the rb atom , we first obtain the dirac - fock ( df ) wave function for the closed - shell configuration @xmath51 $ ] , which is given by @xmath52 . the df wave function for an atomic state with one valence configuration is then defined as @xmath53 , which represents the addition of the valence orbital , denoted by @xmath54 , to @xmath52 . the exact atomic wave function ( @xmath55 ) for such a configuration is determined , accounting for the correlation effects in the rcc framework , by expressing @xcite @xmath56 which in linear form is given by @xmath57 here the @xmath58 and @xmath59 operators account for excitations of the electrons from the core orbitals alone and from the valence orbital together with the core orbitals , respectively . in the present paper , we consider eq .
( [ cc1 ] ) instead of eq .
( [ cc2 ] ) as was taken before in our previous calculations @xcite .
we consider here only the single , double ( ccsd method ) and important triple excitations ( known as the ccsd(t ) method ) from @xmath52 and @xmath60 .
the excitation amplitudes for the @xmath58 operators are determined by solving @xmath61 where @xmath62 represents singly and doubly excited configurations from @xmath63 .
similarly , the excitation amplitudes for the @xmath59 operators are determined by solving @xmath64 taking @xmath65 as the singly and doubly excited configurations from @xmath60 .
the above equation is solved simultaneously with the calculation of the attachment energy @xmath66 of the valence electron @xmath54 using the expression @xmath67 the triples effect is incorporated through the calculation of @xmath66 by including the valence triple excitation amplitudes perturbatively ( e.g. see @xcite for a detailed discussion ) . to determine the polarizabilities , we divide the various correlation contributions into three parts as @xmath68 where @xmath69 and @xmath70 represent the scalar , vector and tensor polarizabilities , respectively , and the notations @xmath71 , @xmath72 and @xmath54 in the parentheses correspond to the core , core - valence and valence correlations , respectively .
the core contributions to vector and tensor polarizabilities are zero .
we determine the valence correlation contributions to the polarizability in the sum - over - states approach @xcite by evaluating their matrix elements by our ccsd(t ) method and using the experimental energies @xcite for the important intermediate states .
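a minimal numerical sketch of this sum - over - states evaluation is given below for the scalar part ; the reduced matrix elements and excitation energies are hypothetical placeholders ( in atomic units ) , not the optimized values of this work .

```python
# Sum-over-states sketch of a dynamic scalar polarizability (a.u.):
#   alpha0(omega) = 2/(3(2 j_v + 1)) * sum_k |<v||d||k>|^2 dE_kv / (dE_kv^2 - omega^2)
# The channel data below are illustrative placeholders only.

def alpha_scalar(omega, j_v, channels):
    """channels: iterable of (reduced E1 matrix element, excitation energy) pairs."""
    total = 0.0
    for d, de in channels:
        total += d * d * de / (de * de - omega * omega)
    return 2.0 / (3.0 * (2 * j_v + 1)) * total

# hypothetical two-channel model of an alkali ground state (j_v = 1/2)
channels = [(4.2, 0.0577), (5.9, 0.0579)]

static = alpha_scalar(0.0, 0.5, channels)   # omega -> 0 gives the static value
below = alpha_scalar(0.05, 0.5, channels)   # red-detuned from both resonances
above = alpha_scalar(0.06, 0.5, channels)   # blue-detuned from both resonances
```

below a resonance each contribution is positive and grows as the resonance is approached , while just above it the sign flips ; these sign changes are what produce the crossings exploited for the magic wavelengths discussed later .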
contributions from the higher excited states and continuum are accounted from the following expression @xmath73 where @xmath74 are the corresponding angular factors for different values of @xmath75 and @xmath76 is treated as the first order wave function to @xmath77 due to the dipole operator @xmath78 @xcite at the third order many - body perturbation ( mbpt(3 ) method ) level and given as @xmath79 .
also , contributions from the core and core - valence correlations are estimated using this procedure .
we calculate the reduced matrix elements of @xmath78 between the states @xmath80 and @xmath81 , to be used in the sum - over - states approach , from the following rcc expression @xmath82 where @xmath83 and @xmath84 involve two non - truncating series . the calculation procedures for these expressions are discussed in detail elsewhere @xcite .
our aim is to determine the magic wavelengths of the linearly and circularly polarized electric fields for the @xmath85 transitions in the rb atom . to determine these wavelengths precisely , we need accurate values of the polarizabilities , which depend upon the excitation energies and the e1 matrix elements of the corresponding states with the intermediate states . in this respect , we first present below the e1 matrix elements for the different transitions and discuss their accuracies . then we overview the current status of the polarizabilities reported in the literature and compare our results with them . these results are further used to determine the magic wavelengths for both the linearly and circularly polarized lights .
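as an illustration of how a magic wavelength is then located , the sketch below finds the zero of the differential dynamic polarizability of two model states between a pair of close - lying resonances of the lower state by bisection . every coupling and energy is a hypothetical placeholder , not an rb value from this work .

```python
# Locating a "magic" frequency: the root of the differential dynamic
# polarizability between two states.  Two-channel toy models are used;
# all numbers below are illustrative placeholders (atomic units).

def alpha(omega, channels):
    # sum-over-states form: sum_k d_k^2 * dE_k / (dE_k^2 - omega^2)
    return sum(d * d * de / (de * de - omega * omega) for d, de in channels)

ground = [(4.2, 0.0577), (5.9, 0.0579)]    # two close-lying resonances
excited = [(6.0, 0.0250), (3.0, 0.0900)]   # smooth inside the bracket below

def diff(w):
    return alpha(w, ground) - alpha(w, excited)

# bisect between the two ground-state resonances, where the lower-state
# polarizability sweeps from -inf to +inf and diff crosses zero once
lo, hi = 0.05775, 0.05785
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if diff(mid) < 0.0:
        lo = mid
    else:
        hi = mid
omega_magic = 0.5 * (lo + hi)
```

in the actual analysis the same root - finding is performed on the accurate @xmath34 curves of the two states of a transition , separately for each @xmath36 sublevel .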
[ table [ e1mat ] : absolute values of e1 matrix elements in the rb atom in @xmath86 using the dirac - fock ( df ) and ccsd(t ) methods ; uncertainties in the ccsd(t ) results are given in the parentheses . the table body could not be recovered from the source . ] however , the case for the @xmath87 transition is different owing to the presence of a non - zero tensor contribution of the @xmath88 state . as shown in fig .
[ figrb-2 ] , we get different magic wavelengths for the @xmath87 transition at @xmath89 and @xmath90 sub - levels of the @xmath88 state .
there are a few wavelengths in between the resonances where @xmath91 with the @xmath92 contribution is not the same as @xmath93 . this leads to a reduction in the number of magic wavelengths for this transition .
for example , we did not find any @xmath4 between the @xmath94 resonance ( at 1529 @xmath7 ) and the @xmath95 resonance ( at 1367 @xmath7 ) for the @xmath90 sublevels of the @xmath88 state . we have limited our search to magic wavelengths where the differential polarizabilities between the @xmath0 and @xmath96 states are less than 0.5% . based on all these data , we now list the @xmath4 ( in vacuum ) above 600 @xmath7 in table [ tabrb - linear ] for the @xmath97 and @xmath98 transitions in the rb atom and compare them with the previously known results .
the present results are improved slightly due to the optimized e1 matrix elements used here .
the uncertainties in our magic wavelength results are found as the maximum differences between the @xmath99 and @xmath100 contributions with their respective magnetic quantum numbers , where the @xmath101 are the uncertainties in the polarizabilities for their corresponding states .
the reason for not obtaining a sufficient number of magic wavelengths for the @xmath98 transition lies in the fact that the extra contribution from the tensor polarizability to the total @xmath88 polarizability is not compensated by a counterpart in the @xmath0 state . the idea of using the circularly polarized light to obtain magic wavelengths for the @xmath98 transition stems from the fact that this extra tensor - polarizability contribution to the @xmath88 state might be cancelled by the vector - polarizability contributions , or that the vector polarizabilities are so large that they play a dominant role in determining the differential polarizabilities .
this would be evident in the following subsection .
[ figure caption fragment : transition in rb using the left - handed circularly polarized light . ]
[ tables [ tabrb - circular1 ] and [ tabrb - circular2 ] : magic wavelengths ( in @xmath7 ) for the individual @xmath36 sublevels using the circularly polarized light , together with the corresponding polarizabilities and earlier results ; the flattened table layout could not be recovered from the source . ] as mentioned previously , polarizabilities for the circularly polarized light have an extra contribution from the vector component of the tensor product between the dipole operators .
this extra factor is expected to provide better results for state - insensitive trapping .
first , we present the scalar , vector and tensor dynamic polarizabilities of the @xmath0 , @xmath124 and @xmath88 states in tables [ rb0 ] , [ rb1 ] and [ rb2 ] , respectively , at @xmath125 to perceive their general behavior .
the choice of this wavelength is deliberate for being close to one of the magic wavelengths for the circularly polarized light ( e.g. see table ( [ tabrb - circular1 ] ) and ( [ tabrb - circular2 ] ) ) .
hereafter we shall consider the left - handed circularly polarized light for all practical purposes , as the results will show a similar trend for the right - handed circularly polarized light due to the linear dependence on the degree of polarization @xmath38 in eq . ( [ eq - pol ] ) . nevertheless , the choice of left - or right - handed polarization in the experimental set - up is just a matter of convention . for the sake of completeness of our study
, we also search for magic wavelengths in the @xmath126 transition in rb atoms using the circularly polarized light although a fairly large number of magic wavelengths for this transition is found using the linearly polarized light .
for this purpose , we plot net dynamic polarizability results of the @xmath0 and @xmath124 states in fig .
[ figrb-3 ] using the circularly polarized light against different values of wavelength .
the figure shows that the total polarizability of the @xmath0 state is very small for any value of @xmath75 , except at wavelengths close to the two primary resonances . due to the @xmath36 dependence of the vector polarizability coefficient in eq . ( [ eq - pol ] ) , the crossing occurs at a different wavelength for different values of @xmath36 in between the two @xmath124 resonances .
as shown in table [ tabrb - circular1 ] , we get a set of five magic wavelengths in between the seven @xmath124 resonances lying in the wavelength range 600 - 1400 @xmath7 . out of these five sets of magic wavelengths , three occur only for negative values of @xmath36 . thus , the number of convenient magic wavelengths for the above transition is smaller than the number obtained for the linearly polarized light . this advocates the use of linearly polarized light for this transition , though the circularly polarized light remains a workable choice .
the @xmath36 dependence of traps and the difficulties in building a viable experimental set up in the case of circularly polarized light could be the other major concern .
in this work , we also propose the use of `` switching trapping scheme '' ( described below ) which may solve the problem in cases where state - insensitive trapping is only supportive for the negative @xmath36 sublevels of @xmath1 states .
we observed that the same magic wavelength will support state - insensitive trapping of the negative @xmath36 sublevels if we switch the signs of @xmath38 and of @xmath36 for the @xmath0 state . in other words , changing the signs of @xmath38 and of the @xmath36 sublevel of the @xmath0 state leads to the same result for the positive @xmath36 sublevels of the @xmath1 states .
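this switching argument follows from the structure of the polarizability expression : the vector term is odd in both the degree of circular polarization and @xmath36 , whereas the scalar and tensor terms are even in @xmath36 . the sketch below checks this with placeholder values , assuming the standard @xmath36 dependence of the tensor term and taking the projection angle factor as 1 .

```python
# Total polarizability of sublevel m at circular-polarization degree A,
# with cos(theta_k) = 1 assumed: scalar + vector + tensor parts.
# a0, av and a2 are illustrative placeholder values, not Rb numbers.

def total_alpha(m, A, j, a0, av, a2):
    vector = A * (m / (2.0 * j)) * av                                    # odd in A and in m
    tensor = a2 * (3.0 * m * m - j * (j + 1.0)) / (j * (2.0 * j - 1.0))  # even in m
    return a0 + vector + tensor

a0, av, a2, j = 600.0, -150.0, 80.0, 1.5   # placeholder polarizability parts

# flipping the signs of both A and m leaves the total shift unchanged
left_m_neg = total_alpha(-1.5, -1, j, a0, av, a2)
right_m_pos = total_alpha(+1.5, +1, j, a0, av, a2)
```

so a trap that is magic only for the negative @xmath36 sublevels at one handedness becomes magic for the positive sublevels when the handedness is switched .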
[ figure caption fragment : transition in rb using the left - handed circularly polarized light . ] here we put more emphasis on finding additional magic wavelengths for the @xmath127 transition , which can be used in the state - insensitive trapping scheme for the rb atom . in table
[ tabrb - circular2 ] , we list a number of @xmath4 for the @xmath127 transition in the far - optical and near infrared wavelengths along with the uncertainties in the @xmath4 and the polarizabilities at the @xmath4 values .
we also list the @xmath128 values in the table which are the average of the magic wavelengths at different @xmath36 sublevels .
the error in the @xmath128 is calculated as the maximum difference between the magic wavelengths from different @xmath36 sublevels . for this transition
we get a set of six magic wavelengths in between seven @xmath88 resonances lying in the wavelength range 600 - 1400 @xmath7 ( i.e. @xmath129 resonance at 1529 @xmath7 , @xmath95 resonance at 1367 @xmath7 , @xmath98 resonance at 780 @xmath7 , @xmath130 resonance at 776 @xmath7 , @xmath131 resonance at 741 @xmath7 , @xmath132 resonance at 630 @xmath7 , and @xmath133 resonance at 616 @xmath7 ) .
five out of the six magic wavelengths support a blue - detuned trap ( indicated by the negative values of the dynamic polarizability ) . among these , the magic wavelengths at 628 @xmath7 and 742 @xmath7 are recommended for blue - detuned traps . the magic wavelength at 742 @xmath7 supports a stronger trap ( as shown by the larger value of the polarizability at this wavelength in fig . ( [ figrb-4 ] ) ) .
the magic wavelength at 775.8 @xmath7 is very close to the resonance and might not be useful for practical purposes .
the magic wavelength at 1382 @xmath7 supports a red detuned optical trap .
it can be observed from table [ tabrb - circular2 ] that @xmath134 sublevel does not support state - insensitive trapping at this wavelength .
however , using a switching trapping scheme as described in the previous paragraph will allow trapping this sublevel too .
the magic wavelength at 1382 @xmath7 is recommended owing to the fact that it is not close to any atomic resonance and supports a red - detuned trap which was not found in the linearly polarized trapping scheme .
in conclusion , we have employed the relativistic coupled cluster method in the singles , doubles and triples excitations approximation to determine the electric dipole matrix elements in rubidium atom .
some of the important matrix elements were further optimized using the experimental lifetimes of a few excited states and the static polarizabilities of the ground and @xmath135 excited states .
these optimized matrix elements were then used to improve the precision of the available lifetime results for some of the low - lying excited states in the considered atom .
using the above optimized matrix elements , we also observe a disagreement between our calculated dynamic polarizability and a measurement at the wavelength 1064 @xmath7 .
we have compared the static and dynamic polarizability results from various works and reported the improved values of the magic wavelengths for the @xmath136 transition using the linearly polarized light .
issues related to state - insensitive trapping of rubidium atoms for the @xmath137 transition with linearly polarized light are discussed and the use of circularly polarized light is emphasized . finally , we evaluate six sets of magic wavelengths for the @xmath137 transition which can be used for the above purpose , out of which we recommend two magic wavelengths at 628 @xmath7 and 742 @xmath7 for blue - detuned optical traps and one at 1382 @xmath7 for red - detuned optical traps .
we also proposed the use of a switching trapping scheme for the magic wavelengths at which the state - insensitive trapping is supported only for either positive or negative @xmath36 sublevels of @xmath1 states .
b.k.s . thanks d. nandy for his help in this work .
the work of b.a . was supported by the department of science and technology , india .
computations were carried out using 3tflop hpc cluster at physical research laboratory , ahmedabad .
a. godone , f. levi , s. micalizio , e. k. bertacco and c. e. calosso , ieee transactions on instrumentation and measurement * 56 * , 378 ( 2007 ) .
j. vanier and c. mandache , appl . phys . b * 87 * , 565 ( 2007 ) .
b. butscher , j. nipper , j. b. balewski , l. kukota , v. bendkowsky , r. low and t. pfau , nat . phys . * 6 * , 970 ( 2012 ) .
x. l. zhang , l. ishenhower , a. t. gill , t. g. walker and m. saffman , phys . rev . a * 82 * , 030306 ( 2010 ) .
y. o. dudin , a. g. radnaev , r. zhao , j. z. blumoff , t. a. b. kennedy and a. kuzmich , phys . rev . lett . * 105 * , 260502 ( 2010 ) .
s. tassy , n. nemitz , f. baumer , c. hohl , a. batar and a. gorlitz , j. phys . b * 43 * , 205309 ( 2010 ) .
j. guena , p. rosenbusch , p. laurent , m. abgrall , d. rovera , g. santarellu , m. e. tobar , s. bize and a. clairon , ieee trans . on ultrasonics , ferroelectrics , and frequency control * 57 * , 647 ( 2010 ) .
h. marion , f. p. d. santos , m. abgrall , s. zhang , y. sortais , s. bize , i. maksimovic , d. calonico , j. grunert , c. mandache et al . , phys . rev . lett . * 90 * , 15801 ( 2003 ) .
international committee for weights and measures , proceedings of the sessions of the 95th meeting ( october 2006 ) ; http://www.bipm.org/utils/en/pdf/cipm2006-en.pdf
d. sheng , l. a. orozco and e. gomez , j. phys . b * 43 * , 074004 ( 2010 ) .
h. s. nataraj , b. k. sahoo , b. p. das and d. mukherjee , phys . rev . lett . * 101 * , 033002 ( 2008 ) .
c. e. theodosiou , phys . rev . a * 30 * , 2881 ( 1984 ) .
w. a. van wijngaarden and j. sagle , phys . rev . a * 45 * , 1502 ( 1992 ) .
e. gomez , f. baumer , a. d. lange , g. d. sprouse and l. a. orozco , phys . rev . a * 72 * , 012502 ( 2005 ) .
d. sheng , a. p. galvan and l. a. orozco , phys . rev . a * 78 * , 062506 ( 2008 ) .
j. marek and p. munster , j. phys . b * 13 * , 1731 ( 1980 ) .
c. tai , w. happer and r. gupta , phys . rev . a * 12 * , 736 ( 1975 ) .
m. s. safronova and u. i. safronova , phys . rev . a * 83 * , 052508 ( 2011 ) .
j. walls , j. clarke , s. cauchi , g. karkas , h. chen and w. a. van wijngaarden , eur . phys . j. d * 14 * , 9 ( 2001 ) .
chui , m .- s . ko , y .- w . peng and h. ahn , opt . lett . * 30 * , 842 ( 2005 ) .
a. p. calvan , y. zhaoa , l. a. orozco , e. gomez , a. d. lange , f. baumer and g. d. sprouse , phys . lett . b * 655 * , 114 ( 2007 ) .
n. schlosser , g. reymond , i. protsenko and p. grangier , nature * 411 * , 1024 ( 2001 ) .
s. kuhr , w. alt , d. schrader , m. muller , v. gomer and d. meschede , science * 293 * , 278 ( 2001 ) .
h. katori , _ proceedings of the sixth symposium frequency standards and metrology _ , ed . by p. gill , world scientific singapore , p. 323 ( 2002 ) .
c. a. sackett , d. kielpinski , b. e. king , c. langer , v. meyer , c. j. myatt , m. rowe , q. a. turchette , w. m. itano , d. j. wineland and c. monroe , nature * 404 * , 256 ( 2006 ) .
m. s. safronova , c. j. williams and c. w. clark , phys . rev . a * 67 * , 040303(r ) ( 2003 ) .
m. takamoto and h. katori , phys . rev . lett . * 91 * , 223001 ( 2003 ) .
h. katori , t. ido and m. kuwata - gonokami , j. phys . soc . jpn * 68 * , 2479 ( 1999 ) .
a. d. ludlow et al . , science * 319 * , 1805 ( 2005 ) .
j. mckeever , j. r. buck , a. d. boozer , a. kuzmich , h .- c . nagerl , d. m. stamper - kurn and h. j. kimble , phys . rev . lett . * 90 * , 133602 ( 2003 ) .
bindiya arora , m. s. safronova and c. w. clark , phys . rev . a * 76 * , 052509 ( 2007 ) .
v. v. flambaum , v. a. dzuba and a. derevianko , phys . rev . lett . * 101 * , 220801 ( 2008 ) .
c. y. park , h. noh , c. m. lee and d. cho , phys . rev . a * 63 * , 032512 ( 2001 ) .
keith d. bonin and vitaly v. kresin , _ electric - dipole polarizabilities of atoms , molecules and clusters _ , world scientific publishing co. pte ltd ( 1997 ) .
n. l. manakov , v. d. ovsiannikov and l. p. rapoport , physics rep . * 141 * , 319 ( 1986 ) .
i. lindgren , int . j. quantum chem . * 12 * , 33 ( 1978 ) .
b. k. sahoo , b. p. das , r. k. chaudhuri and d. mukhrejee , j. comput . methods sci . * 7 * , 57 ( 2007 ) .
bindiya arora , d. nandy and b. k. sahoo , phys . rev . a * 85 * , 02506 ( 2012 ) .
c. e. moore , atomic energy levels , u.s . gpo , washington , d.c . , bur . stand . , u.s . govt . print . off . , v. 35 ( 1971 ) .
yu . ralchenko , f .- c . jou , d. e. kelleher , a. e. kramida , a. musgrove , j. reader , w. l. wiese and k. olsen , nist atomic spectra database ( version 3.1.2 ) , national institute of standards and technology , gaithersburg , md ( 2005 ) .
j. e. sansonetti , w. c. martin and s. l. young , _ handbook of basic atomic spectroscopic data _ ( version 1.1.2 ) , national institute of standards and technology , gaithersburg , md ( 2005 ) .
b. k. sahoo , b. p. das and d. mukherjee , phys . rev . a * 79 * , 052511 ( 2009 ) .
d. mukherjee , b. k. sahoo , h. s. nataraj and b. p. das , j. phys . chem . a * 113 * , 12549 ( 2009 ) .
b. k. sahoo , s. majumder , r. k. chaudhuri , b. p. das and d. mukhrejee , j. phys . b * 37 * , 3409 ( 2004 ) .
r. w. schmieder , a. lurio and w. happer , phys . rev . a * 3 * , 1209 ( 1971 ) .
m. marinescu , h. r. sadeghpour and a. dalgarno , phys . rev . a * 49 * , 5103 ( 1994 ) .
c. zhu , a. dalgarno , s. g. porsev and a. derevianko , phys . rev . a * 70 * , 03722 ( 2004 ) .
m. s. safronova , bindiya arora and c. w. clark , phys . rev . a * 73 * , 022505 ( 2006 ) .
r. f. gutterres , c. amiot , a. fioretti , c. gabbanini , m. mazzoni and o. dulieu , phys . rev . a * 66 * , 024502 ( 2002 ) .
w. f. holmgren , m. c. revelle , v. p. a. lonij and a. d. cronin , phys . rev . a * 81 * , 053607 ( 2010 ) .
w. r. johnson , d. kolb and k .- n . huang , at . data nucl . data tables * 28 * , 334 ( 1983 ) .
k. d. bonin and m. a. kadar - kallen , phys . rev . a * 47 * , 944 ( 1993 ) .
k. e. miller , d. krause and l. r. hunter , phys . rev . a * 49 * , 5128 ( 1994 ) .
j. marek and p. mnster , j. phys . b : atom . molec . * 13 * , 1731 ( 1980 ) .
c. krenn , w. scherf , o. khait , m. musso and l. windholz , z. phys . clusters * 41 * , 229 ( 1997 ) .
l. r. hunter , d. krause , s. murthy and t. w. sung , phys . rev . a * 37 * , 3283 ( 1988 ) .
l. r. hunter , d. krause , k. e. miller , d. j. berkeland and m. g. boshier , optics comm . * 94 * , 2010 ( 1992 ) .
c. tanner and c. wieman , phys . rev . a * 38 * , 162 ( 1988 ) .
r. marrus , d. mccolm and j. yellin , phys . rev . a * 147 * , 55 ( 1966 ) .
m. j. seaton , comp . phys . comm . * 146 * , 254 ( 2002 ) .
bindiya arora , m. s. safronova and c. w. clark , phys . rev . a * 76 * , 052516 ( 2007 ) .
m. s. safronova , w. r. johnson and a. derevianko , phys . rev . a * 60 * , 4476 ( 1999 ) .
a. derevianko , w. r. johnson , m. s. safronova and j. f. babb , phys . rev . lett . * 82 * , 3589 ( 1999 ) .
cheng zhu , alex dalgarno , sergey g. porsev and andrei derevianko , phys . rev . a * 70 * , 032722 ( 2004 ) .
| we study the cancellation of differential ac stark shifts in the @xmath0 and @xmath1 states of rubidium atom using the linearly and circularly polarized lights by calculating their dynamic polarizabilities .
matrix elements were calculated using a relativistic coupled - cluster method at the single , double and important valence triple excitations approximation including all possible non - linear correlation terms .
some of the important matrix elements were further optimized using the experimental results available for the lifetimes and static polarizabilities of atomic states .
magic wavelengths " are determined from the differential stark shifts and results for the linearly polarized light are compared with the previously available results .
the possible scope for facilitating state - insensitive optical trapping schemes using the magic wavelengths for circularly polarized light is discussed . using the optimized matrix elements , the lifetimes of the @xmath2 and @xmath3 states of this atom are improved .
5. Niger
> GNI per capita: $370
> 2016 GDP: $7.5 billion
> Population: 20.7 million
> Life expectancy: 59.7 years at birth
Rated by the UN as one of the least developed countries in the world, Niger struggles with droughts, political instability and insurgency. In fact, basic human rights are still a major issue in the country, with slavery only being banned in 2003. A strong education system could push a country in the right direction, and Niger invests heavily in its schooling. But while the government allocates more of its spending to education than is typical in most countries, only 15.5% of people in Niger 15 and older were considered literate in 2012 — the lowest literacy rate of all the poorest countries. Recently discovered oil fields are taking the forefront of Niger’s economy, with the oil and mining industry accounting for nearly half of the country’s total exports.
4. Liberia
> GNI per capita: $370
> 2016 GDP: $2.1 billion
> Population: 4.6 million
> Life expectancy: 62.0 years at birth
Liberia, Africa’s oldest republic, is home to more than 4.5 million people. The country, with a government modeled heavily off of the U.S. constitution, is still recovering from a bloody 14-year civil war that ended in 2003.
Liberia had the second largest GDP contraction among the world’s poorest countries at -1.6% in 2016. While still heavily reliant on agriculture, the sector’s GDP contribution decreased from 44.3% in 2011 to 34.2% in 2016. Liberia’s biggest exports are passenger and cargo ships at 45% of total exports.
3. Central African Republic
> GNI per capita: $370
> 2016 GDP: $1.8 billion
> Population: 4.6 million
> Life expectancy: 51.4 years at birth
The average person in Central African Republic lives on less than $400 a year. Like many of the world’s poorest countries, CAR’s economy is primarily labor driven and heavily dependent on farming — with agriculture accounting for about 43% of the country’s GDP.
Limited economic opportunities and low incomes can make it difficult to lead healthy lives, and few parts of the world have a lower life expectancy than CAR. Life expectancy at birth in the landlocked African nation is only 51.4 years, two decades less than the global average.
2. Malawi
> GNI per capita: $320
> 2016 GDP: $5.4 billion
> Population: 18.1 million
> Life expectancy: 62.5 years at birth
Though it has been a democratically stable country since the 1990s, Malawi has considerable hurdles to clear to achieve economic prosperity. Hit especially hard by HIV-AIDS, Malawi is home to over a million children orphaned by the disease. Additionally, the country depends heavily on agriculture — despite an unfavorable arid and dry climate — with crop production accounting for 28.3% of economic output. And while the government spends 20.4% of its budget on education — about 7 percentage points more than the United States — the literacy rate has actually declined 3 percentage points from 2014 to 2015.
1. Burundi
> GNI per capita: $280
> 2016 GDP: $3.0 billion
> Population: 10.5 million
> Life expectancy: 57.1 years at birth
Bordered by three other countries on this list, Burundi is a landlocked country in sub-Saharan Africa — and the poorest in the world. Burundi shares several traits common among poor nations. Heavily dependent on labor, some 40% of Burundi’s GDP is derived from agriculture. In comparison, agriculture accounts for only about 1% of economic output in the United States. While many of the poorest countries have rapidly growing economies, economic activity contracted by 0.6% in Burundi in 2016 — even as the global economy expanded by 2.4%.
Economic growth and prosperity are likely stymied by conflict in Burundi. The country has been embroiled in an ethnic civil war for over a decade. ||||| Income inequality is an increasingly contentious political issue in the United States. The top 1% of earners in the United States control nearly double the amount of wealth as the lowest earning 50%. This is not a uniquely American problem however — and income inequality in the United States appears to be a microcosm of uneven wealth distribution on a global scale.
North America is home to fewer than 5% of the global population — yet the continent’s combined gross domestic product accounts for over one-quarter of global economic activity. Meanwhile, sub-Saharan Africa is home to nearly 14% of the world’s population, yet the region’s economic output accounts for only 2% of global GDP.
While GDP is a practical way to measure the size of a given country or region’s economy, it does not accurately reflect the overall wealth of a population. Unlike GDP, gross national income, or GNI, accounts for all economic activity within a country’s borders in addition to wealth generated by nationally-owned entities operating abroad. Adjusted to the population and converted to U.S. dollars, GNI per capita is a good approximation of the average income of residents of a given country.
24/7 Wall St. reviewed GNI per capita in over 170 nations to identify the 25 poorest countries in the world. Worldwide, the average person lives on about $10,300 per year. In the poorest countries, approximate annual incomes per person range from only $900 on the high end, to less than $300 in the poorest country.
As American lawmakers and financial experts debate economic disparity, 24/7 Wall St. takes a worldwide economic view—specifically, regarding which nations are the poorest on Earth. The site compared gross national income, or GNI, per capita from the World Bank for more than 170 nations, a close equivalent to residents' average annual income. The top 10 poorest nations and their GNI per capita: Burundi ($280), Malawi ($320), Central African Republic ($370), Liberia ($370), Niger ($370), Madagascar ($400), Democratic Republic of Congo ($420), The Gambia ($440), Mozambique ($480), and Guinea ($490).
over the last 25 years , there has been an explosion of interest in the magnetic behavior of pyrochlore oxides .
they exhibit metallic , insulating or semi - conducting behavior often coupled with magnetic phase transitions .
pyrochlores are a good system for studying the effects of spin - orbital interplay .
a main goal is to understand how various phase transitions , such as magnetic ordering and metal - insulator transitions , emerge from this interplay .
most importantly , pyrochlores provide an opportunity to study the role of geometrical frustration in phase transitions @xcite .
the metal - insulator transition ( mit ) in mo pyrochlore oxides @xmath0 ( where r is a rare earth metal ) and the role of its frustrated lattice structure have been extensively studied @xcite .
the evolution of the charge dynamics at the metal - insulator transition has been spectroscopically investigated for @xmath1 , where the spin - orbit interaction as well as the electron correlation is effectively tuned by the doping level ( x ) @xcite . the transition from ferromagnetic metal to spin glass insulator and paramagnetic metal has been observed with an increase of the rare earth ion radius @xmath2 and of the external pressure , due to the competing double exchange and super exchange interactions on the frustrated lattice @xcite . compounds with b = mn , mo , ir and os are interesting as they undergo a metal - insulator transition ( mit ) upon changing temperature , pressure and r - site cations .
for example , coulomb interactions have been found to be important for the mit and the giant magnetoresistance in systems with 3d electrons like b = mn @xcite . on the other hand , for 5d systems with b = ir and os , recent first - principles studies revealed that the spin - orbit interaction plays a major role in their electronic and magnetic properties @xcite . in these systems the natural tendency to form long - range ordered ground states is frustrated , resulting in novel short - range ordered phases like spin glasses @xcite , spin ices , and spin liquids .
the role of orbital degrees of freedom has been proposed in metal - insulator transitions in various pyrochlore oxides @xcite . in this work ,
we focus on a phase transition in the orbital degrees of freedom on the pyrochlore lattice .
the lattice structure of @xmath3 is composed of two interpenetrating pyrochlore lattices formed by the mo cations and the r cations . in this work ,
we neglect the coupling between the mo 4d electrons and the r - site rare earth moments and focus on the mo pyrochlore network alone @xcite . in the mo pyrochlore lattice ,
the mo cation is surrounded by octahedra of oxygens @xmath4 .
the octahedral crystal fields of oxygen splits five fold degenerate d orbitals of mo cation into lower three fold degenerate @xmath5 and higher two fold degenerate @xmath6 levels .
further , the @xmath7 octahedron is distorted along the direction toward the center of each mo tetrahedron , also known as the local ( 111 ) axis or trigonal axis .
the distortion in the @xmath7 octahedron along trigonal axis splits three fold @xmath5 level into lower singlet @xmath8 level ( below the fermi level ) and higher two fold degenerate @xmath9 levels ( above fermi level)@xcite .
the electronic configuration of mo is [ kr ] @xmath10 .
as there are only two electrons in the outermost d shell of the @xmath11 cation , the singlet @xmath8 up - spin band ( the up - spin and down - spin channels are split by the strong hund s coupling )
is fully occupied and the two fold degenerate upper @xmath9 up - spin band is half occupied . since these @xmath9 electrons reside in a half - filled band above the fermi level , they hop from site to site with two fold degeneracy at each site and contribute to the conductance . on the other hand , the @xmath8 electrons sit in a fully occupied band below the fermi level .
they act like localized spin - half moments interacting with each other via the anti - ferromagnetic super - exchange mechanism and play a role in shaping the overall phase competition .
on pyrochlore lattice , these @xmath8 and @xmath12 electrons compete to produce a resultant order .
in addition , the itinerant electrons and localized spin half electrons interact via ferromagnetic hund s rule coupling or double exchange mechanism .
the model we study in this paper takes all these into account including on - site coulomb interaction between degenerate @xmath9 electrons .
we start with the two band orbital double exchange ( de ) model hamiltonian previously proposed for various pyrochlore systems@xcite that includes kinetic energy , coulomb interaction , hund s coupling and anti - ferromagnetic super - exchange , @xmath13 the first term denotes kinetic energy of itinerant @xmath9 electrons with spin @xmath14 and with orbital index @xmath15 running over degenerate orbitals @xmath16 and @xmath17 of the @xmath12 band .
second term , @xmath18 , denotes the hund s coupling between itinerant @xmath9 electrons with localized spin half @xmath8 electron .
third term , @xmath19 , denotes the anti ferromagnetic super - exchange ( se ) among localized @xmath8 electrons .
@xmath20 is approximately set by @xmath21 where @xmath22 is the transfer integral between the @xmath8 orbitals and @xmath23 the intra - orbital coulomb repulsion in the @xmath8 orbital .
the last term denotes on - site coulomb interactions between @xmath9 including intra and inter - orbital repulsions .
the pressure - induced lattice contraction leads to the increase of the electron transfer interactions .
in contrast to the `` chemical pressure '' with the r - ion size variation , in which the @xmath24 transfer ( t ) is most effectively modified , the isotropic lattice contraction by the `` physical '' pressure acts on the @xmath8 electrons , enhancing the antiferromagnetic se interaction @xmath25 between the localized @xmath8 electron spins @xcite .
we study the two band de model ( [ demodel ] ) on the two dimensional checkerboard lattice in the limit @xmath26 .
we take this lattice because the checkerboard lattice has a frustrated structure like the pyrochlore lattice but a simpler two band hamiltonian . the checkerboard lattice is shown in fig . [ check - latt ] . for large @xmath27 , we can simplify the model .
we first rotate the axis of quantization of every fermionic operator @xmath28 from universal z - axis to the direction of the core spin @xmath29 at every site by transformation , @xmath30=u(\theta_i , \phi_i)\left[\begin{array}{c}p_i\\a_i\end{array}\right ] \mbox{,\quad where~~}\\ u(\theta_i,\phi_i)=exp\big(-i\frac{\phi_i}{2}\sigma^z\big)exp\big(-i\frac{\theta_i}{2}\sigma^y\big).\end{aligned}\ ] ] this renders the hund s term diagonal in spin , and in @xmath31 limit , the anti - parallel state gets projected out from the hamiltonian , the on site energy gets shifted by @xmath27 and we get the spins of itinerant electrons aligned parallel to the localized spin at each site @xcite .
@xmath32 where @xmath33 is spinless fermion operator and @xmath34 . in above eq.([mainmodel ] ) , @xmath35 is considered as an inter - orbital interaction and the electronic hopping element @xmath36 is given by @xmath37 where @xmath38 denotes the angle of spin @xmath39 , making the hopping element site and spin dependent .
because of anisotropy of the @xmath12 orbitals and relative angle of mo - o - mo bond @xcite , the relative strength of @xmath40 and @xmath41 can be expressed as @xmath42 in mo pyrochlore oxides @xmath43 , which means that inter orbital hopping @xmath44 is significantly larger than the intra orbital hopping @xmath45 @xcite . for simplicity
, we neglect the intra orbital hopping @xmath46 , and set @xmath47 and get , @xmath48 where we replaced orbital degeneracy @xmath49 by spin degeneracy @xmath50 and mapped the problem from orbital space to spin space .
using eq.([furukawa ] ) , we can find the ground state of the model simply because spin variables @xmath51 in hopping amplitude @xmath52 are decoupled and the energy of electrons in the model can be minimized by simply maximizing @xmath52 .
for @xmath52 to be maximum , nearest neighboring spins have to be parallel to each other . in parallel spin configuration ,
the hopping amplitude @xmath52 becomes site and spin independent .
we now study the nature of ground state phases of the model eqn.([furukawa ] ) in extreme limits @xmath53 and @xmath54 .
we give an estimate for the transition point at zero temperature . in the limit @xmath53 ,
to determine the nature of the zero temperature phase of the model , we calculate the zero temperature magnetic susceptibility @xmath55 $ ] .
( the square bracket denotes the matrix structure of physical quantity under consideration . ) under the random phase approximation , magnetic susceptibility matrix of interacting electron gas can be expressed in term of tight binding susceptibility matrix @xmath56 $ ] , @xmath57=(g\mu_b)^2[\chi^0({\bf q } ) ] ( \mathbb{i}-u[\chi^0({\bf q})])^{-1}.\end{aligned}\ ] ] the magnetic susceptibility matrix @xmath58 $ ] diverges when @xmath59\right|=0.\end{aligned}\ ] ] from eqn.([rpa ] ) , we find the minimum value of u at which the magnetic instability sets in .
this equation also gives information about the nature of the ordering by locating the wave - vector ` * q * ' at which the determinant in the above equation becomes zero . to calculate the free electron magnetic susceptibility @xmath56 $ ]
, we take the external magnetic field term as perturbation to tight binding term and use first order perturbation theory to get , @xmath60 where @xmath61 , @xmath62 is a diagonalizing matrix for tight binding part of the hamiltonian , @xmath63 are sub - lattice indices of unit cell of the checkerboard lattice , @xmath64 are running over the band indices and @xmath65 are running over spin indices . using eqn.([susceptmatrix ] ) , we calculate the susceptibility matrix numerically for all values of ` * q * ' and find out the magnetic instability from eqn.([rpa ] ) .
we find a paramagnetic phase in the model , and the zero temperature transition point is found at @xmath66 . in this method
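as a numerical illustration of the rpa criterion: since the susceptibility matrix is hermitian, the instability det(i - u chi0(q)) = 0 first occurs at u_c = 1/lambda_max, where lambda_max is the largest eigenvalue of chi0(q) over the whole q grid. the sketch below applies this to toy 2x2 matrices on a two-point q grid; the numbers are assumptions for illustration, not values computed for the checkerboard lattice.

```python
import numpy as np

def rpa_critical_u(chi0_list):
    """Given free-electron susceptibility matrices chi0(q) sampled on a
    grid of wave-vectors q, return (U_c, index of the ordering q).

    The RPA instability det(I - U*chi0(q)) = 0 first occurs at
    U_c = 1/lambda_max, with lambda_max the largest eigenvalue of
    chi0(q) over the whole q grid."""
    best_lam, best_iq = -np.inf, None
    for iq, chi0 in enumerate(chi0_list):
        lam = np.linalg.eigvalsh(chi0).max()   # chi0(q) is Hermitian
        if lam > best_lam:
            best_lam, best_iq = lam, iq
    return 1.0 / best_lam, best_iq

# Toy 2x2 susceptibility matrices at two q points (assumed numbers).
chi0_q0 = np.array([[0.50, 0.10], [0.10, 0.50]])   # q = (0, 0)
chi0_q1 = np.array([[0.30, 0.05], [0.05, 0.05 + 0.25]])  # a finite q
u_c, iq = rpa_critical_u([chi0_q0, chi0_q1])
# Largest eigenvalue is 0.6 at the first q point, so U_c = 1/0.6.
```

in this toy case the instability sets in at the first q point, i.e. a uniform (ferro-type) ordering.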
we write the self - interaction term as , @xmath67\end{aligned}\ ] ] where last identity being valid for spin half fermions .
in our scheme , we apply the hubbard - stratonovich transformation to replace the quadratic self - interaction term with an integral over a linear term @xcite .
the partition function @xmath68 , therefore , can be written as @xmath69\int [ dm]\int [ d\omega]\int [ dp^\dagger , dp ] e^{-s}\end{aligned}\ ] ] where @xmath70\nonumber\\ & -&\int_{0}^\beta d\tau \sum_{<ij>,\alpha } t_{ij}(p^{\dagger}_{i,\alpha\tau}p_{j , \alpha\tau } + h.c.)\nonumber\\ & + & \int_{0}^\beta d\tau \sum_{i\alpha}(\epsilon_{i\alpha}-\mu ) n_{i\alpha \tau}+\int_{0}^\beta d\tau \sum_{i\alpha}p^{\dagger}_{i,\alpha \tau}\partial_{\tau}p_{j , \alpha \tau}\end{aligned}\ ] ] we calculate the partition function now using perturbation theory and , then , take the limit @xmath71 , @xmath72 \left[e^{-\int_0^\beta d\tau ( h^{\prime}_{\tau}+h_{u\tau})}\right]\end{aligned}\ ] ] where @xmath73 is given by eq .
( [ effectiveham ] ) .
we , then , make static field approximation retaining space dependent part and ignoring the time dependent part of hamiltonian @xcite , @xmath74 the integral over @xmath75 in the partition function has the maximum value near the saddle point @xmath76 . at half filling @xmath77 , the saddle point equation becomes @xmath78 and @xmath75 is integrated out finally to give the effective hamiltonian , @xmath79 we have , thus , mapped the original hubbard problem to electrons coupled to auxiliary magnetic moments @xmath80 . in strong coupling limit , we rotate locally at each site to a frame pointing along direction of spin at that site , @xmath81 under this transformation , @xmath82 we write the transformed hamiltonian ( ignoring time dependent part ) as , @xmath83 where indices i and j are running over all the bonds of checkerboard lattice and @xmath84 we can now calculate the partition function .
the first order correction @xmath85 e^{-\int_0^\beta d\tau h_{0\tau } } \int d\tau h^{\prime}_{\tau}$ ] becomes zero ( applying wick s theorem ) .
the second order correction is given by @xmath86 e^{-\int_0^\beta d\tau h_{0\tau } } \int d\tau_1 \int d\tau_2 h^{\prime}_{\tau_1 } h^{\prime}_{\tau_2}.\end{aligned}\ ] ] we apply wick s theorem again and in the limit @xmath71 obtain the second order correction , @xmath87 therefore , the effective hamiltonian to second order is given by @xmath88 , which is an anisotropic classical heisenberg model on the checkerboard lattice .
thus , in the strong coupling limit , we can find the ground state of the model by the monte carlo method .
we use the metropolis algorithm to implement the monte carlo simulation and conclude that the model is a ferromagnet in the strong coupling limit .
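a minimal sketch of the metropolis procedure for the resulting classical heisenberg model. for brevity it uses a ferromagnetic nearest-neighbour coupling on a periodic square lattice as a stand-in for the anisotropic checkerboard-lattice model (an assumption for illustration), together with a simple annealing schedule:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spin():
    # Uniform random point on the unit sphere.
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def site_energy(spins, i, j, J=1.0):
    # Ferromagnetic nearest-neighbour Heisenberg energy of site (i, j)
    # on an L x L periodic square lattice (stand-in for the checkerboard).
    L = spins.shape[0]
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    return -J * spins[i, j] @ nb

def metropolis_sweep(spins, T, J=1.0):
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        old = spins[i, j].copy()
        e_old = site_energy(spins, i, j, J)
        spins[i, j] = random_spin()            # propose a new direction
        dE = site_energy(spins, i, j, J) - e_old
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            spins[i, j] = old                  # reject the move

L = 6
spins = np.array([[random_spin() for _ in range(L)] for _ in range(L)])
for T in (2.0, 1.0, 0.5, 0.1):                 # anneal toward T -> 0
    for _ in range(200):
        metropolis_sweep(spins, T)
m = float(np.linalg.norm(spins.mean(axis=(0, 1))))  # |magnetisation|/site
```

after annealing to low temperature the magnetisation per site approaches 1, consistent with the ferromagnetic conclusion above.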
to access the thermal physics , we use the static auxiliary field ( saf ) approach used earlier for a comprehensive study of the hubbard model on the triangular @xcite , fcc @xcite and pyrochlore @xcite lattices . to calculate the partition function , we use eqn . ( [ effectiveham ] ) for all values of @xmath89 . for a given @xmath80 configuration ,
the electron problem is linear and the hilbert space scales linearly with the lattice size . in this work we take the hopping element @xmath90 in eqn . ( [ furukawa ] ) to be spin and site independent and use a real space monte - carlo technique .
we start with a configuration of @xmath80 with random magnitudes and orientations at a high temperature t. we then attempt an update @xmath91 at site @xmath92 . the energy @xmath93 is computed before and after the attempted update , and @xmath94 is compared to @xmath95 in the metropolis spirit . to calculate the partition function and physical properties at zero temperature , we anneal the sample from a higher temperature down to zero temperature .

[ figure caption : ... plane . the color map denotes the value of the maxima of s(q ) at a given temperature and @xmath89 . the maxima in this case correspond to q=(0,0,0 ) , which gives ferromagnetic order . the blue colored regions are where the structure factor is very low , i.e. , the magnetic correlation is weak to non - existent ; brighter colors like yellow , pink and red denote stronger correlations . ]

in order to capture the magnetic correlation and the transition temperature @xmath96 , we calculate the thermal average of the structure factor defined as @xmath97 at each temperature , which serves as the order parameter of the magnetic transition .
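the structure factor used as the order parameter can be computed directly from a moment configuration. a sketch for vector moments on a square grid (the sublattice structure of the checkerboard lattice is omitted for brevity):

```python
import numpy as np

def structure_factor(spins, q):
    """S(q) = (1/N) * sum_{ij} (m_i . m_j) exp(i q.(r_i - r_j))
    for vector moments m_i on an L x L lattice with unit spacing."""
    L = spins.shape[0]
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    phase = np.exp(1j * (q[0] * x + q[1] * y))
    # Fourier transform each Cartesian component of the moment field.
    ft = np.tensordot(phase, spins, axes=([0, 1], [0, 1]))
    return float(np.vdot(ft, ft).real) / (L * L)

# A perfect ferromagnet: S(q) peaks at q = (0, 0) with value N = L*L
# and vanishes at the zone corner.
L = 4
ferro = np.zeros((L, L, 3))
ferro[..., 2] = 1.0
s_zero = structure_factor(ferro, (0.0, 0.0))       # = 16
s_corner = structure_factor(ferro, (np.pi, np.pi))  # = 0
```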
we show the structure factor s(q ) in the parameter space of @xmath98 in fig . ( [ structurefactor ] ) . at large temperature ,
s(q ) is vanishingly small for all q ; however , on lowering the temperature , we notice a rapid growth of s(q ) at a few specific q. the onset of this growth is shown in fig . ( [ structurefactor ] ) as the magnetic transition temperature @xmath96 as a function of @xmath89 . in fig . ( [ magnetization ] ) , we show the average magnetization versus @xmath89 at various temperatures .
we observe that the average magnetization decreases as the temperature increases at a given @xmath89 .
we have also plotted @xmath99 versus @xmath89 , which shows the dependence of the magnitude of the @xmath100 fields on @xmath89 , obtained by variational minimization . here , we consider an ideal ferro configuration of @xmath100 with magnitude m , and calculate the total energy e(m ) as a function of m by diagonalizing the hamiltonian for the ferro @xmath100 configuration . then , e(m ) is minimized to find @xmath99 , which is plotted versus @xmath89 .

[ figure caption : ... and @xmath99 as a function of @xmath89 at different temperatures . ]
we studied a correlation driven orbital mott transition on the two dimensional pyrochlore ( checkerboard ) lattice .
we studied a model hamiltonian that includes hund s coupling between itinerant and localized electrons in addition to the coulomb interaction . in the weak coupling limit ,
we calculated the zero temperature orbital magnetic susceptibility under the random phase approximation and showed that the model exhibits a para - orbital phase . in the strong coupling limit ,
we calculated the effective hamiltonian using green function perturbation theory and established ferro - orbital ordering at zero temperature . using static auxiliary field based monte carlo ,
we obtained the finite temperature orbital phase diagram of the model .
j. s. gardner , m. j. p. gingras and j. e. greedan , rev . mod . phys . 82 , 53 ( 2010 ) .
m. a. subramanian , b. h. toby , a. p. ramirez , w. j. marshall , a. w. sleight and g. h. kwei , science 273 , 81 ( 1996 ) , http://www.sciencemag.org/content/273/5271/81.full.pdf .
y. shimakawa , y. kubo , n. hamada , j. d. jorgensen , z. hu , s. short , m. nohara and h. takagi , phys . rev . b 59 , 1249 ( 1999 ) .
h. shinaoka , y. motome , t. miyake and s. ishibashi , phys . rev . b 88 , 174422 ( 2013 ) .
h. shinaoka , t. miyake and s. ishibashi , phys . rev . lett . 108 , 247204 ( 2012 ) .
h. j. silverstein , k. fritsch , f. flicker , a. m. hallas , j. s. gardner , y. qiu , g. ehlers , a. t. savici , z. yamani , k. a. ross , b. d. gaulin , m. j. p. gingras , j. a. m. paddison , k. foyevtsova , r. valenti , f. hawthorne , c. r. wiebe and h. d. zhou , phys . rev . b 89 , 054433 ( 2014 ) .
k. matsuhira , m. wakeshima , y. hinatsu and s. takagi , j. phys . soc . jpn . 80 , 094701 ( 2011 ) .
n. hanasaki , k. watanabe , t. ohtsuka , i. kezsmarki , s. iguchi , s. miyasaka and y. tokura , phys . rev . lett . 99 , 086401 ( 2007 ) .
s. iguchi _ et al _ , arxiv:1109.3744v1 [ cond-matt.str-el ] ( 2011 ) .
k. ueda , j. fujioka , y. takahashi , t. suzuki , s. ishiwata , y. taguchi and y. tokura , phys . rev . lett . 109 , 136402 ( 2012 ) .
s. nakatsuji , y. machida , y. maeno , t. tayama , t. sakakibara , j. vanduijn , l. balicas , j. n. millican , r. t. macaluso and j. y. chan , phys . rev . lett . 96 , 087204 ( 2006 ) .
i. v. solovyev , phys . rev . b 67 , 174406 ( 2003 ) .
s. iguchi , n. hanasaki , m. kinuhara , n. takeshita , c. terakura , y. taguchi , h. takagi and y. tokura , phys . rev . lett . 102 , 136407 ( 2009 ) .
y. motome and n. furukawa , phys . rev . lett . 104 , 106407 ( 2010 ) .
y. motome and n. furukawa , j. phys . : conf . ser . 200 , 012131 ( 2010 ) .
y. motome and n. furukawa , phys . rev . b 82 , 060407(r ) ( 2010 ) .
y. motome and n. furukawa , j. phys . : conf . ser . 320 , 012060 ( 2011 ) .
h. ichikawa _ et al _ , j. phys . soc . jpn . 74 , 1020 ( 2005 ) .
j. schulz , phys . rev . lett . 65 , 2462 ( 1990 ) .
j. e. hirsch , phys . rev . b 28 , 4059 ( 1983 ) .
t. v. ramakrishnan and d. sa .
r. tiwari and p. majumdar , eur . phys . lett . 108 , 27007 ( 2014 ) .
r. tiwari and p. majumdar , arxiv:1302.2922 ( 2013 ) .
n. swain , r. tiwari and p. majumdar , arxiv:1505.03502 ( 2015 ) .

we present a correlation driven orbital mott transition in the 2 dimensional pyrochlore lattice .
we study a model hamiltonian in which we include hund s coupling between itinerant and localized electrons in addition to the coulomb interaction . in the weak coupling limit ,
we calculate the zero temperature susceptibility under the random phase approximation and find the model in a para orbital phase . in the strong coupling limit , we calculate the effective hamiltonian using green function perturbation theory and find ferro - orbital ordering at zero temperature .
finally , we use a static auxiliary field based monte carlo , explicitly retaining all the spatial correlations , to study the finite temperature phase diagram of the model .
recently , interference management in random access networks has attracted great interest . for example
, the working group of ieee 802.11 , which is one of the most successful standards in commercial wireless communication systems , is considering performance improvement for overlapping basic service sets ( obss ) under the new standardization called ieee 802.11 high efficiency wireless local area network ( hew ) @xcite .
it is known that obss experiences severe interference and the interference among obss is a primary problem that ieee 802.11 hew should overcome . on the other hand , interference alignment ( ia ) @xcite has been considered as a promising solution to achieve the optimal degrees - of - freedom ( dof ) in several interference network models including cellular networks .
suh and tse characterized the dof of the @xmath0-cell uplink cellular network and they proposed an achievable scheme for the optimal dof @xcite .
the main idea of @xcite is to align the interference from users in other cells to predefined interference spaces , and it was shown that the achievable dof increases as the number of concurrently transmitting users increases . to apply ia , global channel state information ( csi )
is required and , hence , some performance degradation may occur in practice whenever the csi feedback is quantized .
the related performance was analyzed in @xcite along with a further performance optimization .
in addition , opportunistic ia ( oia ) has been proposed for the @xmath0-cell uplink network , in which user scheduling is combined with ia . unlike the original ia technique ,
oia does not require global csi , time / frequency expansion , or iterations for beamformer design , thereby resulting in easier implementation @xcite .
later , @xcite compared oia to the traditional ia with quantized csi and showed the advantage of oia . to further improve the performance of oia , an active alignment transmit beamforming scheme
was proposed in @xcite , which perfectly aligns the interference to the reference interference direction of one bs and , therefore , achieves partial ia with nonzero probability . as the previous oia schemes mostly minimize the inter - cell interference , a new oia scheme that additionally considers the intra - cell power loss
was also proposed in @xcite .
ia techniques have been also applied to random access networks ( rans ) based on carrier sensing multiple access ( csma ) mechanisms @xcite . in @xcite ,
multiple packets from users in other overlapped networks are aligned at the physical ( phy ) layer and the decoded packets are assumed to be exchanged through wired backhaul such as ethernet . at the medium access control ( mac ) layer in @xcite , users are basically scheduled as in the point coordination function ( pcf ) , which is a part of the ieee 802.11 standard .
thus , the ia algorithm proposed in @xcite requires tight coordination among access points ( aps ) in overlapped networks through wired backhaul , and it does not consider collisions among users even though the collision effect is the most important factor degrading the performance of rans .
ia was applied to a fully distributed random access environment in @xcite , similar to the distributed coordination function ( dcf ) of the ieee 802.11 standard . in @xcite ,
the proposed mac protocol allows a new transmission even when there exist ongoing transmissions already in overlapped networks as long as the new transmitter and receiver have a sufficiently large number of antennas and ensure no interference to ongoing transmissions .
thus , if users have the same number of antennas , then the performance improvement becomes limited .
moreover , the dof of the multiple access channel was characterized when both the users and the ap have multiple antennas and each user independently decides whether to transmit at a specific time , which can be regarded as the uplink scenario of a single ran @xcite . in @xcite
, the authors proved that the optimal average dof can be achieved by interference alignment in specific network scenarios , but the interference among multiple rans was not considered . in this paper , we propose a novel oia protocol in order to efficiently manage the interference among overlapped rans operating under the slotted aloha protocol .
the proposed oia protocol jointly considers phy and mac layers . in the phy layer ,
a beamforming algorithm based on singular value decomposition ( svd ) is adopted to minimize _ generating interference _ from each user to other overlapped rans . in the mac layer ,
a novel opportunistic random access algorithm is proposed , which is based on cumulative distribution function ( cdf ) of each user s generating interference .
therefore , the proposed oia protocol is a cross - layer solution for interference - limited rans .
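the beamformer is not specified in closed form at this point, but an svd-based design that minimizes a user's generating interference typically stacks the channels to the interfering aps and transmits along the right singular vector of the smallest singular value. a sketch under that assumption, with toy dimensions (k = 3 rans, m = 2 ap antennas, l = 3 user antennas):

```python
import numpy as np

rng = np.random.default_rng(1)

def min_interference_beamformer(interfering_channels):
    """Stack a user's channels to the interfering APs and transmit along
    the right singular vector of the smallest singular value, which
    minimises the generating interference ||G w||^2 over unit-norm w."""
    G = np.vstack(interfering_channels)          # ((K-1)*M) x L
    _, s, vh = np.linalg.svd(G)
    w = vh[-1].conj()                            # unit-norm beamformer
    lif = float(s[-1] ** 2)                      # generating interference
    return w, lif

# Assumed toy dimensions: K = 3 RANs, M = 2 AP antennas, L = 3 user antennas.
M, L = 2, 3
H2 = (rng.normal(size=(M, L)) + 1j * rng.normal(size=(M, L))) / np.sqrt(2)
H3 = (rng.normal(size=(M, L)) + 1j * rng.normal(size=(M, L))) / np.sqrt(2)
w, lif = min_interference_beamformer([H2, H3])
```

because the stacked matrix here has more rows than columns (4 > 3), the interference cannot be nulled exactly and the residual is strictly positive; with more user antennas the same construction returns an exact null-space vector (zero generating interference).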
the main differences between the proposed oia protocol and the conventional oia algorithm @xcite are as follows .
* each user decides for itself whether to send packets in the proposed oia protocol , while base stations determine which users send packets in the conventional oia algorithm .
* the proposed oia protocol utilizes the cdf value of each user s generating interference as the random access criterion , while the conventional oia algorithm utilizes the generating interference itself as a scheduling criterion .
* the number of concurrently transmitting users in the network is a random variable in the proposed oia protocol , while it is fixed in the conventional oia algorithm .
thus , the proposed oia protocol examines the number of concurrently transmitting users in the network and adapts the packet decoding methodology accordingly .
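the cdf-based access rule can be illustrated with a small simulation. the useful property (for a continuous interference distribution) is that comparing the cdf value of the generating interference against the access probability p gives every user a transmit probability of exactly p, while the users that do transmit are those whose interference is currently small relative to their own statistics. the exponential distribution below is an illustrative assumption, not the distribution derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def transmit_decision(lif, lif_cdf, p):
    """A user transmits when the CDF value of its current generating
    interference (LIF) falls below the access probability p."""
    return lif_cdf(lif) < p

# Illustrative assumption: LIF ~ Exp(1), so F(x) = 1 - exp(-x).
lif_cdf = lambda x: 1.0 - np.exp(-x)
p = 0.25
samples = rng.exponential(size=200_000)
rate = float(np.mean(transmit_decision(samples, lif_cdf, p)))
# rate is close to p for any continuous LIF distribution, and the users
# that transmit are exactly those with LIF below the p-quantile.
```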
the rest of this paper is organized as follows : section [ sec : system_model ] describes the system model and section [ sec : conv ] introduces the conventional techniques which can be used for interference - limited rans . in section [ sec : oia ] , the oia protocol for the interference - limited rans is proposed .
section [ sec : evaluation ] evaluates the throughput performance of the proposed oia protocol and , finally , section [ sec : conclusion ] concludes the paper .
we consider an _ uplink _ scenario in @xmath0 overlapped rans each of which has one ap and @xmath1 users .
each ap and each user are assumed to have @xmath2 and @xmath3 antennas , respectively .
[ fig : scenario ] shows an example network where three rans are overlapped .
we assume that each ran operates with slotted aloha and transmission time is equally divided by slots . at each slot
, each user transmits a packet to its serving ap with probability @xmath4 . due to the nature of random access ,
simultaneous transmissions from multiple users are inevitable . assuming that the rans are geometrically overlapped , such concurrent packet transmissions from multiple users in different rans cause interference due to the broadcast nature of the wireless medium .
the channel matrix from the @xmath5-th user in the @xmath6-th ran to the @xmath7-th ap is denoted by @xmath8}_k \in \mathbb{c}^{m\times l}$ ] , where @xmath9 and @xmath10 .
the received signal at the @xmath7-th ap , @xmath11 , is given as @xmath12}_k \textbf{w}^{[i , j ] } x^{[i , j ] } s^{[i , j]}+\textbf{z}_k,\ ] ] where @xmath13 } \in \{0,1\}$ ] denotes the random variable representing the activity of the @xmath5-th user in the @xmath6-th ran and @xmath14}=1\}=p$ ] . hence , @xmath13}$ ] becomes zero if the user has no packet to transmit .
@xmath15 } \in \mathbb{c}$ ] and @xmath16 } \in \mathbb{c}^{l\times 1}$ ] denote the information stream and its transmit beamforming vector of the @xmath5-th user in the @xmath6-th ran , respectively .
let @xmath17}$ ] and @xmath18 .
that is , @xmath19 indicates the number of concurrently transmitting users in the @xmath6-th ran and @xmath20 indicates the total number of concurrently transmitting users in all @xmath0 rans .
we assume that each user experiences rayleigh fading and the channel gain independently changes in each time slot .
each ap is assumed to periodically transmit a pilot signal so that the users can estimate their channels to all aps located in the overlapped area by using the reciprocity of the wireless channel . in this paper
, we also assume that each user transmits a single information stream .
@xmath21 represents the additive gaussian noise at the @xmath7-th ap .
furthermore , the @xmath5-th user in the @xmath6-th ran is assumed to know its outgoing channels @xmath8}_k$ ] , @xmath22 , by exploiting the channel reciprocity , i.e. , the _ local csi _ at the transmitter .
in addition , we assume interference - limited networks and , for simplicity , that the average path - loss from the users to the aps is identical for all links .
for designing rans , the mitigation of collision effects among simultaneously transmitting users is one of the most challenging issues .
it has been shown that multi - packet reception ( mpr ) at the phy layer , which is generally implemented through multi - user multiple - input multiple - output ( mimo ) techniques , can be a promising solution @xcite . since , as assumed in section [ sec : system_model ] , each user transmits a single stream and each ap has @xmath2 antennas , each ap can successfully decode up to @xmath2 packets from transmitting users in the overlapped area by using the mpr technique @xcite . in practice , the probability that each ap successfully decodes the received packets depends on the number of concurrently transmitting users . the average throughput ( packets / slot ) in the mac layer of the @xmath0-overlapped rans with the mpr technique is obtained as @xmath23 where @xmath24 denotes the total number of users in the @xmath0-overlapped rans and @xmath25 denotes the probability that each received packet is successfully decoded when there are @xmath26 concurrently transmitting users in the whole @xmath0-overlapped rans .
note that if the @xmath7-th ran has @xmath27 simultaneously transmitting users , @xmath28 .
although the @xmath7-th ap may try to decode all @xmath26 users packets , only the @xmath27 packets are useful to it and it may discard the other decoded packets . obviously , @xmath29 decreases as @xmath26 increases due to the interference from the @xmath30 users .
more detailed analysis on @xmath25 is given in @xcite .
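as a concrete but deliberately simplified illustration of the throughput expression above, the monte carlo sketch below replaces the exact success probability with an idealised mpr rule: all packets in a slot are decoded when the total number of concurrent transmitters is at most the number of ap antennas, and none otherwise. the parameter values are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def mpr_throughput(K, N, M, p, slots=100_000):
    """Monte Carlo estimate of the MAC-layer throughput (packets/slot)
    of K overlapped RANs with N users each under slotted ALOHA with
    transmit probability p. Idealised MPR rule (an assumption standing
    in for the exact success probability of the paper): every packet in
    a slot is decoded iff the total number of concurrent transmitters
    across all K RANs is at most M."""
    tx = rng.random((slots, K * N)) < p        # who transmits in each slot
    n = tx.sum(axis=1)                         # concurrent transmitters
    return float(np.mean(np.where(n <= M, n, 0)))

thr = mpr_throughput(K=3, N=5, M=4, p=0.1)
```

for these parameters the number of transmitters is binomial(15, 0.1), so the estimate is close to the analytical value sum over n <= 4 of n * P(n), which is about 1.43 packets/slot.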
in addition to mpr , we can also consider the interference nulling ( in ) technique which utilizes the transmit beamforming at each user for overlapped rans . for applying the in technique ,
each ap sets @xmath31 spatial dimensions for receiving signals from its serving users .
then , each user performs transmit beamforming so that its signal arriving at the signal spaces of the other aps is zero . assuming @xmath0 overlapped rans and an @xmath31 - dimensional signal space at each ap , each user should null its transmit signal at the @xmath32 - dimensional signal spaces of the other aps .
hence , the dimension of signal space at each ap , @xmath31 , should satisfy the following constraint : @xmath33 where @xmath3 indicates the number of transmit antennas at users .
[ fig : in_model ] shows a transmission scenario with the in technique when @xmath34 , @xmath35 , and @xmath36 , which satisfies the condition in ( [ eq : in_limitation ] ) . in this figure , it is assumed that @xmath37 , @xmath38 , and @xmath39 . in fig .
[ fig : in_model ] , without loss of generality , each ap is assumed to set its first antenna as the signal space for receiving packets from its serving users , while other two antennas are reserved for interference signals from other rans .
all transmitting users in the overlapped networks perform transmit beamforming in order to null out the interference at the other aps ' signal spaces , which is the first antenna of each ap in fig . [ fig : in_model ] .
each ap receives the signals from the users belonging to itself through the first antenna , and each ap operates as an independent cell since there is no interference from other cells at the first antenna . in fig .
[ fig : in_model ] , there is a _ packet collision _ in the third ran , but the aps in the first and second rans can receive a packet successfully since interference signals are still nulled out at the signal spaces .
if the number of concurrently transmitting users in a specific ran is less than or equal to @xmath31 , that is , @xmath40 , @xmath41 , then each ap can surely decode the received signals with the mpr technique _ regardless of the number of transmitting users in other rans_. however , if the number of concurrently transmitting users in a specific ran is larger than @xmath31 but the number of concurrently transmitting users in all the @xmath0 rans is smaller than @xmath2 , that is , @xmath42 but @xmath43 , @xmath44 , then the ap can still successfully decode the received signal by using all receive antennas with mpr .
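the decoding rule just described can be summarized in a few lines . this is an illustrative sketch ( function and argument names are assumptions ) : a ran 's packets survive if its local transmitter count is at most the reserved signal - space dimension , or if the total count across all rans fits within the full antenna array .

```python
def in_decodable(per_ran_counts, s_dim, antennas):
    """per-ran decodability under the in + mpr fallback rule (a sketch).

    per_ran_counts[k] is the number of concurrent transmitters in ran k;
    s_dim is the signal-space dimension reserved at each ap; antennas is
    the number of receive antennas per ap. a ran's packets are decodable
    if it has at most s_dim local transmitters (other-ran interference
    is nulled), or if the total over all rans fits within the full
    antenna array (mpr fallback).
    """
    total = sum(per_ran_counts)
    return [m <= s_dim or total <= antennas for m in per_ran_counts]
```

for example , with counts [ 1 , 1 , 3 ] , one reserved dimension , and three antennas , the third ran collides while the first two survive .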
the overall signal detection procedure of the in technique is shown in fig . [
fig : in_detection ] . by taking into account this possibility ,
the throughput at the mac layer with the in technique is given as @xmath45 \right\ } , \end{array}\ ] ] where @xmath46 denotes the probability that the received signals are successfully decoded when @xmath26 and @xmath5 users concurrently transmit packets in a specific ran and the other rans respectively and each ap sets @xmath31 antennas as the signal space .
here , @xmath31 is assumed to satisfy the condition in ( [ eq : in_limitation ] ) .
in , the first summation in the brace shows the throughput of each ran obtained from the signals arriving at the @xmath31-dimensional signal space .
the second summation shows the throughput obtained when the number of simultaneously transmitting users in a ran is larger than @xmath31 but the total number of simultaneously transmitting users in the whole @xmath0 rans , @xmath20 , is less than @xmath2 .
note that the ap may possibly decode those packets by performing mpr with its whole @xmath2 antennas if the concurrently transmitting users in the @xmath0 rans are no larger than @xmath2 . in
, @xmath47 is the probability that @xmath26 users are simultaneously transmitting in a ran , and the formula in the bracket shows the probability that the number of simultaneously transmitting users in the other @xmath48 rans is no larger than @xmath49 . as we consider the total throughput of the @xmath0-overlapped rans , the factor of @xmath0 appears outside the brace in .
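the structure of the in throughput expression above can be mirrored numerically under idealized decoding , i.e. , success probability one whenever the stated antenna conditions hold , in place of the measured @xmath46 . the function below is a sketch with assumed names , not the exact expression .

```python
from math import comb

def in_throughput(k_rans, n_users, s_dim, antennas, p):
    """idealized throughput of k overlapped rans under the in rule.

    sketch: per ran, m local transmitters all succeed if m <= s_dim, or
    if m <= antennas and the other rans contribute at most antennas - m
    transmitters; the per-ran value is multiplied by k_rans. the phy
    success probabilities are set to 1 (an assumption).
    """
    def pmf(n, j):
        return comb(n, j) * p**j * (1 - p)**(n - j)

    per_ran = 0.0
    others = n_users * (k_rans - 1)
    for m in range(1, n_users + 1):
        if m <= s_dim:
            per_ran += m * pmf(n_users, m)
        elif m <= antennas:
            ok = sum(pmf(others, j) for j in range(0, antennas - m + 1))
            per_ran += m * pmf(n_users, m) * ok
    return k_rans * per_ran
```

the two branches correspond to the two summations described in the text .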
as mentioned before , the constraint on the dimension of signal space , which is shown in ( [ eq : in_limitation ] ) , limits the dof in each ran and restricts the applicability of the in technique in practice . in this section ,
we propose the oia protocol which efficiently controls the interference among overlapped rans . in the proposed oia protocol ,
the dof in each ran , @xmath31 , is not limited by ( [ eq : in_limitation ] ) and can be arbitrarily set from @xmath50 to @xmath2 .
the oia protocol is designed by considering both the phy and mac layers , which is as follows .
[ subsec : svd_beamforming ] first of all , the @xmath7-th ap sets its interference space for interference alignment , which is denoted by @xmath51 $ ] , where @xmath52 is the orthonormal basis , @xmath53 and @xmath54 . here , @xmath55 denotes the dimension of the signal space reserved at each ap and it is assumed that @xmath56 for all @xmath57 for convenience in this subsection since we focus on the operation at the phy layer .
obviously , @xmath19 can vary from @xmath58 to @xmath1 according to the mac layer operation which is the focus of the next subsection . for given @xmath59
, the @xmath7-th ap also calculates the null space of @xmath59 , defined by @xmath60 \triangleq \textrm{null}(\mathbf{q}_k),\ ] ] where @xmath61 is the orthonormal basis , and broadcasts it to all users in the network .
( note that @xmath62 need not be informed to users through a signaling process , since each user may compute @xmath62 in a distributed manner without being informed . )
if @xmath63 , then @xmath62 can be any orthonormal matrix .
we assume the unit - norm beamforming vector at the @xmath5-th user in the @xmath6-th ran as @xmath64}$ ] , i.e. , @xmath65 } \right\|^2 = 1 $ ] . from @xmath62 and @xmath66}_{k}$
] , the @xmath5-th user in the @xmath6-th ran calculates its _ effective _ generating interference , called _ leakage of interference ( lif ) _ , from @xmath67}_{k } & = \left\|\textrm{proj}_{\bot \mathbf{q}_k}\left ( \mathbf{h}_{k}^{[i , j]}\mathbf{w}^{[i , j]}\right)\right\|^2 \nonumber \\ % & = \sum_{m=1}^{s}\left\| \left({\mathbf{u}_{k , m}}^{h}\mathbf{h}_{k}^{[i , j ] } \mathbf{w}^{[i , j ] } \right){\mathbf{u}_{k , m } } \right\|^2 \\ \label{eq : eta_tilde}&= \left\|\mathbf{u}_k^{h}\mathbf{h}_{k}^{[i , j ] } \mathbf{w}^{[i , j ] } \right\|^2,\end{aligned}\ ] ] where @xmath68 , @xmath69 , and @xmath70 . here , @xmath71 denotes the projection operation of @xmath72 onto the null space of @xmath73 and @xmath74 denotes the hermitian operation .
lif can be regarded as the interference power which is received at the @xmath7-th ap and not aligned at the interference space @xmath75 . instead of perfectly nulling interference from users to other rans ,
the proposed transmit beamforming is performed to minimize sum of the effective interference power to other aps .
thus , each user finds the optimal transmit beamforming vector @xmath64}$ ] that minimizes its lif metric , which is defined as : @xmath76}_{\textrm{\textrm{sum } } } & = \sum_{k=1 , k\neq i}^{k } \left\| { \mathbf{u}_k}^{h}\mathbf{h}_{k}^{[i , j]}\mathbf{w}^{[i , j]}\right\|^2
\triangleq \left\| \mathbf{g}^{[i , j ] } \mathbf{w}^{[i , j]}\right\|^2,\end{aligned}\ ] ] where @xmath77}\in \mathbb{c}^{(k-1)s\times l}$ ] is defined by @xmath78 } & \triangleq \bigg [ \left({\mathbf{u}_1}^{h}\mathbf{h}_{1}^{[i , j]}\right)^{t } , \ldots , \left({\mathbf{u}_{i-1}}^{h}\mathbf{h}_{i-1}^{[i , j]}\right)^{t } , \nonumber\\ & \hspace{20pt}\left({\mathbf{u}_{i+1}}^{h}\mathbf{h}_{i+1}^{[i , j]}\right)^{t } , \ldots , \left({\mathbf{u}_k}^{h}\mathbf{h}_{k}^{[i , j]}\right)^{t } \bigg]^{t}.\end{aligned}\ ] ] let us denote the svd of @xmath77}$ ] as @xmath79 } = \boldsymbol{\omega}^{[i , j]}\boldsymbol{\sigma}^{[i , j]}{\mathbf{v}^{[i , j]}}^{h } , \displaybreak[0]\ ] ] where @xmath80}\in \mathbb{c}^{(k-1)s\times l}$ ] and @xmath81}\in \mathbb{c}^{l\times l}$ ] consist of @xmath3 orthonormal columns , respectively , and @xmath82 } = \textrm{diag}\left ( \sigma^{[i , j]}_{1 } , \ldots , \sigma^{[i , j]}_{l}\right)$ ] , where @xmath83}_{1}\ge \cdots \ge\sigma^{[i , j]}_{l}$ ] .
then , it is apparent that the optimal @xmath64}$ ] is determined as @xmath84}_{\textrm{svd } } = \arg \min_{\mathbf{w}^{[i , j ] } } \left\| \mathbf{g}^{[i , j ] } \mathbf{w}^{[i , j]}\right\|^2 = \mathbf{v}^{[i , j]}_{l},\ ] ] where @xmath85}_{l}$ ] is the @xmath3-th column of @xmath81}$ ] . with
this choice , @xmath86}_{\textrm{sum } } = { \sigma^{[i , j]}_{l}}^2 $ ] is achievable . after receiving signals at the @xmath7-th ap , @xmath62
is multiplied to remove the interference that is aligned to the interference space of the @xmath7-th ap , @xmath75 .
then , the received signal can be expressed as : @xmath87}_k \textbf{w}^{[i , j ] } x^{[i , j ] } s^{[i , j]}+\textbf{z}_k,\end{aligned}\ ] ] when @xmath88 , this beamforming makes the interference signals zero . thus , the proposed oia includes the in technique presented in section [ sec : in ] . on the other hand , when @xmath89 , the users ' transmissions may cause interference to the packets received at the other aps .
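the svd step can be reproduced in a few lines of numpy . the sketch below ( function name assumed ) returns the unit - norm beamforming vector as the right singular vector associated with the smallest singular value of the stacked cross - channel matrix , together with the achieved lif .

```python
import numpy as np

def min_lif_beamformer(g):
    """unit-norm w minimizing ||g w||^2 (a numpy sketch).

    g stacks the interference-space-projected cross channels to the
    other aps. by the svd argument, the minimizer is the right singular
    vector of the smallest singular value, and the achieved lif is that
    singular value squared (zero when g has a nontrivial null space).
    """
    _, _, vh = np.linalg.svd(g, full_matrices=True)
    w = vh.conj().T[:, -1]                    # last right singular vector
    lif = float(np.linalg.norm(g @ w) ** 2)   # residual leakage power
    return w, lif
```

`full_matrices=True` matters when g has fewer rows than columns , since the exact null - space directions then appear among the trailing right singular vectors .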
[ fig : oia_model ] shows the geometric signal structure of the proposed oia protocol at aps when @xmath34 , @xmath90 , @xmath91 and @xmath92 , which corresponds to the case where @xmath93 and , as a result , the in technique can not be applied . for simplicity , we assume that the interference space of each ap is the same in fig .
[ fig : oia_model ] .
note that the proposed transmit beamforming minimizes the lif metric in . in the next subsection
, we will introduce an opportunistic random access mechanism to further reduce such interference .
[ subsec : opportunism ] although the svd - based transmit beamforming minimizes the interference to other rans at the phy layer , residual interference may remain in the signal spaces at the aps . hence , we need to further reduce the interference at the mac layer by exploiting opportunistic random access . in the conventional oia technique proposed for cellular networks , the number of transmitting users in each cell is fixed according to a predetermined scheduling policy . in rans , however , the number of concurrently transmitting users in each ran is a random variable which can vary over time by nature .
while the conventional opportunistic random access in rans maximizes the signal strength of users @xcite , in the proposed oia protocol each user applies the opportunism based on its effective generating interference to other rans , i.e. , the lif metric in ( [ eq : lif ] ) .
each user observes its lif metric for a long time to obtain the corresponding cumulative distribution function ( cdf ) .
specifically , in each time slot , each user calculates its instant lif metric based on ( [ eq : lif ] ) and stores it to update the histogram of the lif values from which the cdf can be calculated by normalization .
another possible methodology to obtain the cdf is as follows : let @xmath94 be the stored cdf at slot @xmath95 and the lif calculated at slot @xmath96 be @xmath97 .
then the cdf can be updated as @xmath98 where @xmath99 is the observation window . with larger @xmath96 and @xmath99
, we can obtain a more accurate cdf .
once the cdf is determined , each user evaluates the obtained cdf at its instant lif metric . in the proposed oia protocol
, each user transmits a packet if the output of the cdf is smaller than a certain threshold .
note that the cdf values are uniformly distributed in @xmath100 $ ] @xcite .
the transmission probability of each user becomes @xmath4 by setting the threshold to @xmath4 .
if a user already has a steady state cdf of @xmath101 , the equivalent operation of the proposed protocol is that the user transmits its packet if the current lif metric is smaller than @xmath102 .
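the cdf learning and threshold test described above can be sketched as follows . this is a simplified empirical - cdf variant over a sliding window rather than the recursive update in the text ; class and function names are assumptions .

```python
from collections import deque

class LifCdf:
    """empirical cdf of the lif metric over a sliding observation window.

    each slot the user stores its instantaneous lif; the cdf at x is the
    fraction of stored samples <= x. with a longer window the estimate
    approaches the true cdf, matching the role of the update rule above.
    """
    def __init__(self, window):
        self.samples = deque(maxlen=window)

    def update(self, lif):
        self.samples.append(lif)

    def __call__(self, x):
        if not self.samples:
            return 0.0
        return sum(s <= x for s in self.samples) / len(self.samples)

def should_transmit(cdf_value, p):
    # cdf values are uniform on [0, 1], so thresholding at p gives a
    # per-slot transmission probability p while favoring low-lif slots.
    return cdf_value < p
```

a user thus transmits exactly in the slots whose lif falls in the lowest @xmath4-fraction of its observed distribution .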
as long as the number of simultaneously transmitting users in a ran is smaller than @xmath31 , the ap may decode the desired streams by treating the signals from other cells as interference which is minimized by the opportunistic transmission at the mac layer and the svd - based beamforming at the phy layer .
as the number of users in each ran , @xmath1 , increases , the transmission probability , @xmath4 , should be decreased in order to avoid packet collisions among users .
then , the decrease of @xmath4 also leads to the decrease of the generating interference of each user to other rans .
hence , we can conclude that the interference among rans is reduced as @xmath1 increases , since a smaller @xmath4 makes the lif smaller . in the proposed oia protocol , each user needs to calculate its instant lif value and find the corresponding cdf value in each time slot .
this process may impose a computational burden on users since channel estimation and an svd operation are required . however , most interference management techniques , including the conventional oia algorithms proposed for cellular uplink / downlink networks , require users to estimate the interference channels to other cells for exploiting opportunistic user scheduling .
in addition , obtaining the cdf value does not incur a severe computational burden at each user , compared to the channel estimation operation .
thus , the proposed oia protocol has a computational complexity similar to that of the conventional oia algorithms .
note that the proposed oia does not require users to feed back their channel matrices or scheduling metric , i.e. , the cdf value , to their aps , while the conventional oia algorithms in cellular networks require all users to feed back their lif and beamforming vectors to the corresponding base stations .
therefore , the proposed oia protocol does not increase computational complexity much at each user , and it can be applied to practical rans ( such as wireless lans ) without significant modifications .
[ subsec : throughput_oia ] if the number of concurrently transmitting users in each ran is smaller than @xmath31 , @xmath103 , and the residual interference at the aps is small enough , then the throughput of the oia protocol can be expressed in a form similar to ( 4 ) .
note that the constraint shown in ( [ eq : in_limitation ] ) does not limit @xmath31 in the proposed oia protocol . in practice
, however , the residual interference at the aps may reduce the packet success probability .
the packet success probability of the @xmath6-th ap may depend on the dimension of reserved signal space ( @xmath31 ) , the number of concurrently transmitting users in the @xmath6-th ran ( @xmath19 ) , the number of receive antennas at ap ( @xmath2 ) , the number of transmit antennas at each user ( @xmath3 ) , and the number of concurrently transmitting users in other rans ( @xmath104 ) .
hence , the throughput of the proposed oia protocol can be expressed as : @xmath105 \\ & \displaystyle + \sum\limits_{m = 1}^s m \cdot { n\choose m}p^m \left ( { 1 - p } \right)^{n - m } \cdot\\ & \displaystyle \!\!\!\!\!\!\!\!\!\!\!\!\ ! \left . \left
[ \sum\limits_{j = m - m + 1}^{n(k-1 ) } \!\!\!\ ! { { n(k-1 ) \choose j}p^j ( 1 - p)^{n ( k - 1 ) - j } \cdot p^{\rm oia}_{m , j } } \right ] \right\ } , \end{array}\ ] ] where @xmath106 denotes the probability that the received packets are successfully decoded , when there exist @xmath26 concurrently transmitting users in a specific ran and @xmath5 concurrently transmitting users exist in other rans for given @xmath3 , @xmath31 , and @xmath2 . if the total concurrently transmitting users in whole networks ( @xmath107 ) is less than the number of receive antennas at aps ( @xmath2 ) , then the mpr technique can be used for decoding the received packets .
this phenomenon results in the throughput of a single ran shown by the first summation in the brace of ( [ eq : t_oia ] ) . on the other hand , if @xmath108 and @xmath109 , then the probability that the received packets are successfully decoded is determined by the proposed oia protocol which includes the transmit beamforming , opportunistic access , and receive beamforming .
the corresponding throughput is described by the second summation in the brace of ( [ eq : t_oia ] ) .
note that here only the scenarios where @xmath110 are reflected to exclude the cases already considered in the first summation .
unfortunately , @xmath111 is not mathematically tractable due to complicated interactions among many factors such as the time - varying nature of the number of active users in rans , and it should be evaluated by simulations . in section [ sec : evaluation ] , we evaluate @xmath111 by simulation and demonstrate the resultant throughput of the proposed oia protocol in various environments .
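such simulations can be organized as a simple slot - level monte carlo . the sketch below uses the idealized in decoding rule ( success probability one when the antenna conditions hold ) rather than the measured @xmath111 , so it only illustrates the simulation structure ; all names are assumptions .

```python
import random

def simulate_throughput(k_rans, n_users, s_dim, antennas, p,
                        slots=20000, seed=1):
    """slot-level monte-carlo estimate of packets/slot (a sketch).

    each slot, every user transmits independently with probability p; a
    ran's packets are counted as delivered under the idealized rule:
    local count <= s_dim, or total count <= antennas (mpr fallback).
    """
    rng = random.Random(seed)
    delivered = 0
    for _ in range(slots):
        counts = [sum(rng.random() < p for _ in range(n_users))
                  for _ in range(k_rans)]
        total = sum(counts)
        for m in counts:
            if m <= s_dim or total <= antennas:
                delivered += m
    return delivered / slots
```

replacing the idealized rule with a phy - layer success model ( e.g. , sinr - based zf decoding ) yields the kind of evaluation reported in section [ sec : evaluation ] .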
[ sec : evaluation ] we first consider a network of three overlapped rans , each of which supports @xmath112 users ( @xmath34 and @xmath113 ) .
both the aps and the users have three antennas ( @xmath114 ) , and the channel between each transmit - receive antenna pair is assumed to experience rayleigh fading .
the average received signal - to - noise ratios ( snrs ) at the aps from all users in the network are assumed to be @xmath58 db .
the aps adopt zero - forcing ( zf ) technique for decoding the multiple received signals in all protocols including mpr , in , and the proposed oia .
in particular , in the proposed oia protocol , the zf decoder is used for decoding packets from users in the corresponding ran after the null projection @xmath62 as shown in . the signal - to - interference - plus - noise ratio ( sinr ) threshold for successful packet decoding
is assumed to be @xmath58 db . we first consider the phy - layer performance of oia for the first ran as a representative example . fig .
[ fig : success_probability ] shows the phy - layer packet success probability of oia , @xmath106 , by varying the number of simultaneously transmitting ( or active ) users @xmath117 from other rans when @xmath116 and @xmath118 .
three cases are considered where the numbers of concurrently transmitting users @xmath119 in the first ran are 1 , 2 , and 3 . to investigate the steady state performance ,
the cdf of lif is obtained by collecting @xmath120 samples .
we can observe that @xmath121 decreases with a larger @xmath26 or @xmath5 . at the mac layer , the numbers of concurrently transmitting users , @xmath26 and @xmath5 , vary from slot to slot due to the random access nature of the @xmath4-persistent protocol .
specifically , the first network sees @xmath26 simultaneous local transmissions with probability @xmath122 and @xmath5 additional simultaneous transmissions from the other rans with probability @xmath123 , respectively .
this behavior was implemented in the simulations performed in this paper . if @xmath124 and @xmath125 , a collision ( i.e. , a transmission failure ) happens .
if @xmath126 , @xmath106 decreases with a larger @xmath26 or @xmath5 , as shown in fig .
[ fig : success_probability ] . a larger @xmath26 indicates a reduced dof for decoding at the users in the first ran , while a larger @xmath5 indicates larger interference from the other rans . fig .
[ fig : throughput_over_p ] compares the throughput of the proposed oia protocol with the conventional techniques including the mpr and in techniques for varying the transmission probability , @xmath4 .
the proposed oia protocol achieves much better throughput than the mpr and in techniques .
note that the in technique in the figure is identical to the proposed oia protocol with @xmath36 because the condition is satisfied in this case and the oia protocol operates in the same way as the in technique , as discussed in section iv .
the in technique achieves better throughput than the mpr technique regardless of the transmission probability .
the throughput increases as the dimension of the signal space in the oia protocol ( @xmath31 ) increases , and thus we can conclude that a larger value of @xmath31 is preferable for the proposed oia protocol in this network scenario . for each scheme , there exists an optimal transmission probability that maximizes the throughput .
we can observe that the optimal transmission probability of the proposed oia protocol increases as @xmath31 increases , which implies that more aggressive transmission is preferable for larger @xmath31 .
for example , the maximum throughput of the oia protocol with @xmath118 is equal to @xmath127 packets / slot , while that of the mpr technique is equal to @xmath128 packets / slot .
hence , the oia protocol yields a @xmath129 throughput improvement compared to the mpr technique . as explained in section [ sec : oia ] , the proposed oia protocol consists of transmit beamforming as a phy - layer technique and interference - aware opportunistic random access as a mac - layer technique . in order to analyze the contribution of each technical component to the overall throughput enhancement ( as well as the overall effect of the joint design of the two components ) , we introduce two interference management protocols , each of which exploits only one of the two technical components of the proposed oia protocol .
we term the oia protocol without transmit beamforming at the phy layer and the oia protocol without opportunistic random access at the mac layer as ` oia w / o tx - bf ' and ` oia w / o ora ' , respectively .
[ fig : throughput_over_p_cross_n10_effect ] compares throughputs of mpr , ` oia w / o tx - bf ' , ` oia w / o ora ' , and the proposed oia protocols . in fig .
[ fig : throughput_over_p_cross_n10_effect ] , @xmath31 is set to @xmath130 .
the considered ` oia w / o tx - bf ' and ` oia w / o ora ' protocols outperform the mpr technique , while the effect of the svd - based transmit beamforming on the overall throughput is shown to be more significant than that of the cdf - based opportunistic random access .
compared to the oia protocol without opportunistic random access , which results in a maximum throughput of @xmath131 packets / slot , the proposed oia yields a @xmath132 throughput enhancement .
in addition , the proposed oia achieves a @xmath133 throughput improvement compared to the oia protocol without transmit beamforming , which results in a maximum throughput of @xmath134 packets / slot .
it is found that the maximum throughput of the proposed oia protocol is achieved at a larger transmission probability than those of the other schemes .
[ fig : throughput_over_k ] shows the maximum throughput of the several protocols considered in this paper according to the number of overlapped rans ( @xmath0 ) when there exist @xmath112 users in each ran ( @xmath113 ) .
we also assume that the numbers of transmit and receive antennas , as well as the dimension of the signal space at the aps , are identical to the number of overlapped rans , i.e. , @xmath135 .
the maximum throughput of each protocol is evaluated by searching all possible transmission probabilities . from fig .
[ fig : throughput_over_k ] , we observe that the effect of the opportunistic random access at the mac layer on the maximum throughput is marginal on its own .
however , this effect becomes significant when combined with the transmit beamforming at the phy layer of the proposed oia protocol .
note that both the svd - based transmit beamforming and the cdf - based opportunistic random access play a role of reducing the interference among overlapped rans . from fig .
[ fig : throughput_over_k ] , we can conclude that the performance gain from the opportunistic random access at the mac layer can be magnified in conjunction with the svd - based transmit beamforming at the phy layer . for example , the proposed oia protocol achieves a @xmath136 throughput improvement compared to the mpr technique when @xmath137 .
in this paper , we proposed a novel interference management protocol called opportunistic interference alignment ( oia ) for overlapped random access networks operating with slotted aloha , which intelligently combines the interference - alignment - based transmit beamforming technique at the phy layer and the opportunistic random access technique at the mac layer .
we also introduced simple extensions of the conventional techniques for interference - limited rans : multi - packet reception and interference nulling .
the proposed oia protocol is shown to significantly outperform the conventional schemes in terms of mac layer throughput .
the proposed oia protocol is expected to be applied to next - generation wireless lans such as ieee 802.11 hew without significant modifications .
we leave this issue for further study .
x. chen and c. yuen , performance analysis and optimization for interference alignment over mimo interference channels with limited feedback , " _ ieee trans . signal process . _ , vol . 62 , no . 7 , pp . 1785 - 1795 , apr . 2014 .
h. j. yang , w .- y . shin , b. c. jung , and a. paulraj , opportunistic interference alignment for mimo interfering multiple - access channels , " _ ieee trans . wireless commun . _ , pp . 2180 - 2192 , may 2013 .
j. leithon , c. yuen , h. a. suraweera , and h. gao , a new opportunistic interference alignment scheme and performance comparison of mimo interference alignment with limited feedback , " _ ieee globecom _ , pp . 1123 - 1127 , dec . 2012 .
autophagy is a constitutive , dynamic , bulk degradation process that is necessary for a number of processes in living cells . besides these functions
, recent studies have revealed that pro - autophagic drugs constitute a novel strategy to overcome apoptosis - resistant cancers . among these ,
glioblastoma multiforme , the most common and life - threatening primary central nervous system malignant tumor , is characterized by resistance to apoptosis , high proliferation and invasiveness , and poor response to surgery , radiation , and chemotherapy - based treatments . in autophagy , the constant flow of autophagosomes to lysosomes is mediated by a set of autophagy - related proteins .
one of these , lc3b , initially present in a soluble form in the cytoplasm ( lc3b - i ) , is converted to a lipidated form ( lc3b - ii ) , sequestered into autophagosomal membranes and finally delivered , along with autophagosomal cargo , to the lysosome and degraded .
consequently , the conversion of lc3b and its turnover can be used as a measure of the rate of autophagic degradation in cells .
one of the most frequently used methods to monitor autophagy in vitro is based on gfp ( green fluorescent protein ) expressed as a fusion protein with lc3b . however , this approach is limited by several issues , such as subjectivity in counting gfp - positive structures , the generation of fluorescent intracellular protein aggregates independent of autophagy , and the possibility of self - induction of autophagy by the transfection procedures .
other in vitro methods to visualize the autophagic status are based on the staining of the growing cells with acidotropic dyes such as monodansyl - cadaverine ( mdc ) , acridine orange , lysosensor blue and lysotracker red , and their subsequent steady - state microscopic visualization .
however , the incorporated dyes are not strictly specific markers for autophagosomes ; in addition , problems with non - specific background staining and metastable emission fluorescence spectra are often reported . in this report
, we present an application for quantitative analysis of autophagosomes associated with lc3b - gfp fluorescence expression .
we have experimentally applied and verified this computer assisted approach in a particular experimental setup : a well established rapamycin - mediated autophagic induction in two different human glioma cell lines .
human astrocytoma established cell lines , i.e. t98 g and u373-mg , provided by ecacc and previously employed , were cultured at 37c and 5% co2 atmosphere , using d - mem medium supplemented with 10% fbs , 100 units / ml penicillin , 0.1 mg / ml streptomycin , and 1% l - glutamine ( invitrogen , carlsbad , ca , usa ) .
rapamycin - mediated autophagy induction was carried out seeding 210 cells onto 12-multiwell plates , 12 h before autophagy - inducer addition .
louis , mo , usa ) , resuspended in dmso , was added and incubated for different times at the following concentrations ( 0.1 , 0.5 , and 1 m ) .
protein extracts were quantified using the quant - it protein assay kit ( invitrogen ) and then denatured in laemmli sample buffer ( 2% sds , 6% glycerol , 150 mm beta - mercaptoethanol , 0.02% bromophenol blue , and 62.5 mm tris - hcl ph=6.8 ) .
after electrophoresis , proteins were transferred onto nitro - cellulose membrane hybond - c extra ( ge healthcare , waukesha , wi , usa ) . membranes were blocked for 1 h with 8% non - fat milk in tbs ( 138 mm nacl , 20 mm tris ph=7.6 ) containing 0.1% tween 20 and then incubated overnight at 4c with primary antibodies .
species - specific peroxidase - labelled ecl secondary antibodies ( ge healthcare ) were employed .
protein signals were revealed using the ecl advance western blotting detection kit ( ge healthcare ) .
the following primary rabbit polyclonal antibodies were employed : anti - lc3b , anti-(ph - p70s6 kinase ) , anti-(p70s6 kinase ) ( cell signalling technology inc . , danvers , ma , usa ) .
anti--actin was used for an internal control ( mouse monoclonal antibody , cell signalling technology ) .
protein expression was quantified by densitometric analysis with imagej software ( http://rsbweb.nih.gov/ij/ ) according to the guidelines . the premo autophagy sensor kit ( invitrogen ) was employed for lc3b - gfp expression in living cells .
briefly , t98 g and u373-mg cells were transduced with bacmam lc3b - gfp or with bacmam lc3b(g120a)-gfp with a multiplicity of infection ( moi ) equal to 30 , using 510 cells in 96-multiwell plates .
titration experiments with different mois ( from 10 to 100 ) were performed . at different post - transduction ( p.t . ) times , an inverted fluorescence microscope ( eclipse nikon ts100 ) was employed for live - cell imaging at 40 magnification . for electron microscopy analysis , t98 g and u373-mg cells ( 10 ) were grown in dmem medium and treated with the autophagy inducer rapamycin ( 1 µm ) . at 24 h p.t .
, cells were harvested by centrifugation at 800 rpm for 3 min and fixed with 2% glutaraldehyde in dmem , for 2 h at room temperature .
cells were then rinsed in pbs ( ph=7.2 ) overnight and post - fixed in 1% aqueous oso4 for 2 h at room temperature .
after that , cells were pre - embedded in 2% agarose in water , dehydrated in acetone , and finally embedded in epoxy resin ( electron microscopy sciences , em - bed812 ) .
ultrathin sections ( 5060 nm ) were collected on formvar - carbon - coated nickel grids and stained with uranyl acetate and lead citrate .
the specimens were observed with a zeiss em900 transmission electron microscope ( tem ) equipped with a 30 m objective aperture and operating at 80 kv .
autocounter , a javascript implementation to analyze lc3b - gfp expression dynamics , is based upon imagej programming language ( http://rbsweb.nih.gov/ij/ ) and is a custom - made list of imagej java commands with fine - tuned parameters .
this method is quasi - operator - independent because of i ) the operator 's use of two imagej tools ( freehand selection , threshold ) and ii ) the action of the operator - independent imagej javascript ( supplementary figures 1 and 2 ) .
immunoblotting and densitometric analysis of the ph - p70s6k , p70s6k and lc3b expression 24 h p.t . with different rapamycin concentrations . in densitometric analysis ( values under panels ) , the protein expression was normalized to -actin ( using imagej software ) .
rapamycin - treatment induced autophagy in t98 g and u373-mg cells .
immunoblotting and densitometric analysis of the ph - p70s6k , p70s6k and lc3b expression 24 h p.t . with different rapamycin concentrations . in densitometric analysis ( values under panels ) , the protein expression was normalized to -actin ( using imagej software ) .
cells were untreated ( panels a and b , respectively for t98 g and u373-mg cells ) or treated ( panels c and d ) with the autophagy - inducer rapamycin ( 1 m ) , harvested at 24 h p.t . , fixed and stained for ultrastructural visualization at 12,000 magnification with a zeiss em900 tem .
scale bars : 1 m .
transmission electron microscopy analysis of t98 g and u373-mg cells .
cells were untreated ( panels a and b , respectively for t98 g and u373-mg cells ) or treated ( panels c and d ) with the autophagy - inducer rapamycin ( 1 m ) , harvested at 24 h p.t . , fixed and stained for ultrastructural visualization at 12,000 magnification with a zeiss em900 tem .
scale bars : 1 m . briefly , by means of the freehand selection , the operator draws a red line highlighting the contour of a fluorescence - positive cell .
afterward , the operator - independent algorithm performs the following steps : i ) split of the original rgb ( red - green - blue ) image into the three color channels to select the contribution of the green fluorescence ; and ii ) subtraction of the red channel from the green one to better distinguish the green vesicles / particles and to better eliminate the background contribution .
then , the operator refines the vesicle / particle contrast by means of the threshold and the operator - independent algorithm performs the detection and the measurement of the vesicles in terms of number and area ( default thresholding / segmentation method based on the isodata algorithm , supplementary figures 1 and 2 ) .
human astrocytoma established cell lines, i.e., t98 g and u373-mg, provided by ecacc and previously employed, were cultured at 37°c in a 5% co2 atmosphere, using dmem medium supplemented with 10% fbs, 100 units/ml penicillin, 0.1 mg/ml streptomycin, and 1% l-glutamine (invitrogen, carlsbad, ca, usa).
rapamycin-mediated autophagy induction was carried out by seeding 210 cells onto 12-multiwell plates 12 h before addition of the autophagy inducer. rapamycin (sigma-aldrich, st. louis, mo, usa), resuspended in dmso, was added and incubated for different times at the following concentrations: 0.1, 0.5, and 1 μm.
protein extracts were quantified using the quant - it protein assay kit ( invitrogen ) and then denatured in laemmli sample buffer ( 2% sds , 6% glycerol , 150 mm beta - mercaptoethanol , 0.02% bromophenol blue , and 62.5 mm tris - hcl ph=6.8 ) .
after electrophoresis , proteins were transferred onto nitro - cellulose membrane hybond - c extra ( ge healthcare , waukesha , wi , usa ) . membranes were blocked for 1 h with 8% non - fat milk in tbs ( 138 mm nacl , 20 mm tris ph=7.6 ) containing 0.1% tween 20 and then incubated overnight at 4c with primary antibodies .
species - specific peroxidase - labelled ecl secondary antibodies ( ge healthcare ) were employed .
protein signals were revealed using the ecl advance western blotting detection kit ( ge healthcare ) .
the following primary rabbit polyclonal antibodies were employed : anti - lc3b , anti-(ph - p70s6 kinase ) , anti-(p70s6 kinase ) ( cell signalling technology inc . , danvers , ma , usa ) .
anti--actin was used for an internal control ( mouse monoclonal antibody , cell signalling technology ) .
protein expression was quantified by densitometric analysis with imagej software (http://rsbweb.nih.gov/ij/) according to the guidelines. the premo autophagy sensor kit (invitrogen) was employed to monitor autophagy in living cells.
briefly, t98 g and u373-mg cells were transduced with bacmam lc3b-gfp or with bacmam lc3b(g120a)-gfp at a multiplicity of infection (moi) equal to 30, using 510 cells in 96-multiwell plates. titration experiments with different mois (from 10 to 100) were performed. at different post-transduction (p.t.) times, an inverted fluorescence microscope (eclipse nikon ts100) was employed for live-cell imaging at 40× magnification.
for electron microscopy analysis, t98 g and u373-mg cells (10) were grown in dmem medium and treated with the autophagy inducer rapamycin (1 μm). at 24 h p.t., cells were harvested by centrifugation at 800 rpm for 3 min and fixed with 2% glutaraldehyde in dmem for 2 h at room temperature.
cells were then rinsed in pbs ( ph=7.2 ) overnight and post - fixed in 1% aqueous oso4 for 2 h at room temperature .
after that , cells were pre - embedded in 2% agarose in water , dehydrated in acetone , and finally embedded in epoxy resin ( electron microscopy sciences , em - bed812 ) .
ultrathin sections (50–60 nm) were collected on formvar-carbon-coated nickel grids and stained with uranyl acetate and lead citrate.
the specimens were observed with a zeiss em900 transmission electron microscope (tem) equipped with a 30 μm objective aperture and operating at 80 kv.
autocounter, a javascript implementation to analyze lc3b-gfp expression dynamics, is based upon the imagej programming language (http://rsbweb.nih.gov/ij/) and consists of a custom-made list of imagej java commands with fine-tuned parameters.
the method is quasi-operator-independent: the operator intervenes only through two imagej tools (freehand selection and threshold), while the remaining steps are carried out by the operator-independent imagej javascript (supplementary figures 1 and 2).
figure 1. rapamycin treatment induced autophagy in t98 g and u373-mg cells. immunoblotting and densitometric analysis of ph-p70s6k, p70s6k, and lc3b expression 24 h p.t. with different rapamycin concentrations. in the densitometric analysis (values under panels), protein expression was normalized to β-actin (using imagej software).
figure 2. transmission electron microscopy analysis of t98 g and u373-mg cells. cells were untreated (panels a and b, respectively for t98 g and u373-mg cells) or treated (panels c and d) with the autophagy inducer rapamycin (1 μm), harvested at 24 h p.t., fixed and stained for ultrastructural visualization at 12,000× magnification with a zeiss em900 tem. scale bars: 1 μm.
briefly, by means of the freehand selection, the operator draws a red line highlighting the contour of a fluorescence-positive cell.
afterward , the operator - independent algorithm performs the following steps : i ) split of the original rgb ( red - green - blue ) image into the three color channels to select the contribution of the green fluorescence ; and ii ) subtraction of the red channel from the green one to better distinguish the green vesicles / particles and to better eliminate the background contribution .
then , the operator refines the vesicle / particle contrast by means of the threshold and the operator - independent algorithm performs the detection and the measurement of the vesicles in terms of number and area ( default thresholding / segmentation method based on the isodata algorithm , supplementary figures 1 and 2 ) .
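the automated steps just described (channel split and subtraction, isodata thresholding, vesicle detection and measurement) can be sketched outside imagej as well. the following python snippet is an illustrative reimplementation on a toy rgb image, not the authors' imagej javascript; image dimensions and pixel values are invented.

```python
# illustrative sketch of the autocounter pipeline: green-minus-red channel
# subtraction, isodata thresholding, and connected-component counting of the
# resulting vesicle mask. this is not the original imagej javascript.

def subtract_red(rgb):
    """steps i-ii: keep the green channel and subtract the red one
    (clamped at zero) to suppress the background contribution."""
    return [[max(g - r, 0) for (r, g, b) in row] for row in rgb]

def isodata_threshold(img):
    """isodata iteration: t converges to the midpoint of the mean
    intensities below and above the current threshold."""
    pixels = [p for row in img for p in row]
    t = sum(pixels) / len(pixels)
    while True:
        lo = [p for p in pixels if p <= t]
        hi = [p for p in pixels if p > t]
        if not lo or not hi:
            return t
        new_t = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
        if abs(new_t - t) < 0.5:
            return new_t
        t = new_t

def count_vesicles(mask):
    """4-connected component labelling; returns the list of component
    areas (in pixels), i.e. one entry per detected vesicle."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                                mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas
```

on a toy image with two green blobs over a dim background, the pipeline returns one area per blob; real use would additionally restrict the mask to the operator-drawn cell contour.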
rapamycin-mediated autophagy induction was performed in t98 g and u373-mg cells. in order to assess the efficacy of rapamycin in inducing autophagy by blocking the mtor pathway, both t98 g and u373-mg cells were initially incubated with different concentrations of rapamycin (0.1, 0.5, and 1 μm) for 24 h; then, proteins were extracted and subjected to sds electrophoresis.
mtor inhibition was evaluated by immunoblotting, measuring the ph-p70s6k/p70s6k ratio, while autophagy induction was assessed by lc3b-i to lc3b-ii conversion. as reported in figure 1, rapamycin treatment resulted in a decrease of the ph-p70s6k/p70s6k ratio; in addition, the lc3b-ii/β-actin ratio increased in both cell lines at each rapamycin concentration, suggesting autophagy activation.
a tem comparative analysis of the ultrastructural features of t98 g and u373-mg cells, treated with 1 μm rapamycin, revealed the presence of large cytoplasmic vacuoles containing residual digested material.
importantly , these autophagy - like vesicles were nearly absent in untreated cells ( figure 2 ) .
the premo autophagy sensor was then employed for autophagy monitoring as follows: t98 g and u373-mg cells were seeded onto 96-multiwell plates at 510 cells/well; after cell attachment, the premo autophagy sensor was added (moi=30).
after overnight culture, cells were incubated with rapamycin (1 μm) as before and lc3b-gfp expression was monitored in time course using an inverted fluorescence microscope (eclipse nikon ts100). as a control, the bacmam lc3b(g120a)-gfp was similarly transduced in t98 g and u373-mg cells before rapamycin administration; this mutation prevents lc3b cleavage and the subsequent lipidation during normal autophagy, and thus the protein localization remains cytosolic and diffuse.
as reported in figure 3, rapamycin-untreated cells, transduced separately with wild-type bacmam lc3b-gfp or with bacmam lc3b(g120a)-gfp, showed a diffuse cytosolic fluorescence in a relatively low percentage of cells (5–10%); furthermore, in both cell lines, about 1% displayed a punctate gfp expression and, altogether, these patterns remained unaltered over 5 days of culture.
similarly, rapamycin-treated cells transduced with bacmam lc3b(g120a)-gfp generally displayed a 1–3% punctate gfp expression, independently of the time-course observation.
differently, the amount of lc3b-gfp dots per cell was significantly increased after rapamycin administration, particularly from 24 to 48 h p.t.
following treatment with chloroquine diphosphate at 30, 60, and 90 μm for 12 h, t98 g and u373-mg cells, previously transduced with bacmam lc3b-gfp and treated with rapamycin (1 μm) as before, exhibited a marked increase in lc3b-gfp expression, as expected; however, fluorescent spots were not clearly defined, resulting in large irregular shapes (supplementary figure 3).
figure 3. lc3b-gfp and lc3b(g120a)-gfp expression in rapamycin-treated t98 g and u373-mg cells. t98 g and u373-mg cells (510) were transduced with lc3b-gfp and lc3b(g120a)-gfp bacmam viral particles (moi=30) and, after 12 h, treated or not with rapamycin (1 μm). after 24 h of further incubation, cells were analysed using an inverted fluorescence microscope at 40× magnification.
scale bars: 10 μm.
more specifically, to quantify the lc3b-gfp puncta per cell in an unsupervised manner, a dedicated imagej javascript was developed.
as summarized in the materials and methods section and in supplementary figure 1, the program analysed dic phase contrast-fluorescent photographs: the contours of the cells were drawn by the operator and, after that, the number of intracellular fluorescent spots (#ves) and their percent area within the cell area (aves/acell) were calculated.
in addition, the distribution of the fluorescent spots was determined by classifying their areas into custom-defined classes (e.g., small: 0<c1≤1 μm²; medium: 1<c2≤3 μm²; large: c3>3 μm²).
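the class assignment described above is a simple binning rule over the measured vesicle areas; a minimal python sketch follows (the example areas are invented for illustration):

```python
# binning of vesicle areas (in um^2) into the three custom classes used in
# the text: small (0 < a <= 1), medium (1 < a <= 3), large (a > 3).

def classify(area_um2):
    """return the size class of a single vesicle area."""
    if area_um2 <= 0:
        raise ValueError("vesicle area must be positive")
    if area_um2 <= 1:
        return "small"
    if area_um2 <= 3:
        return "medium"
    return "large"

def class_counts(areas):
    """count how many vesicles fall into each size class."""
    counts = {"small": 0, "medium": 0, "large": 0}
    for a in areas:
        counts[classify(a)] += 1
    return counts
```

note that the boundaries are closed on the upper side (an area of exactly 1 μm² is counted as small), one possible reading of the "0<c1≤1" notation.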
the program was then applied to the fluorescent spot quantification of figure 4: panels a to d report the time-course imaging of t98 g and u373-mg cells after lc3b-gfp transduction and 10 μm rapamycin administration, at 4 h (t1) and 5 h (t2).
as highlighted in table 1, the mean percent variation of vesicle area [Δ(aves/acell)] in the time interval t1–t2 appreciably increased in both cell lines (61.6% and 96.7%, respectively for t98 g and u373-mg); depending on the cell line, different trends were reported for the mean percent variation of the number of fluorescent vesicles (Δ#ves) (-22.3% and +40.9%, respectively for t98 g and u373-mg). as expected, following the time progression, fusions of fluorescent organelles into larger ones and de novo generation of small and medium organelles were scored.
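the summary statistics reported here are plain arithmetic over the per-panel values of table 1; the sketch below recomputes them. small deviations from the published figures are expected, since the tabulated per-panel values are themselves rounded.

```python
# recomputing the table 1 summaries: mean percent variation, between t1 and
# t2, of the vesicle area fraction (aves/acell) and of the vesicle number
# (#ves), averaged over the two panels of each cell line.

def pct_change(t1, t2):
    """percent change from t1 to t2."""
    return (t2 - t1) / t1 * 100.0

def mean_pct_change(pairs):
    """mean percent change over a list of (t1, t2) pairs."""
    changes = [pct_change(t1, t2) for (t1, t2) in pairs]
    return sum(changes) / len(changes)

# per-panel (t1, t2) values as read from table 1
t98g_area = [(2.1, 3.5), (3.4, 5.3)]   # aves/acell (%), panels a and b
t98g_nves = [(19, 12), (13, 12)]       # #ves, panels a and b
u373_area = [(1.2, 3.2), (2.0, 2.6)]   # aves/acell (%), panels c and d
u373_nves = [(9, 17), (14, 13)]        # #ves, panels c and d
```

`mean_pct_change(t98g_area)` lands close to the reported +61.6% and `mean_pct_change(t98g_nves)` close to the reported -22.3%, confirming that the area fraction grows while, for t98 g, the vesicle count drops as small vesicles fuse into larger ones.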
figure 4. time-course analysis of lc3b-gfp expression in t98 g and u373-mg cells. cells were transduced with bacmam lc3b-gfp viral particles (moi=30); after 12 h of incubation, cells were treated with rapamycin (10 μm) and analysed at 4 h (t1) and 5 h (t2) p.t. dic images recorded with an inverted fluorescence microscope were then analysed by autocounter for the variation in number and area of vesicles. these results are reported in table 1, referring to t98 g (panels a and b) and to u373-mg (panels c and d).
table 1. time-course analysis of lc3b-gfp expression in t98 g and u373-mg cells (results related to figure 4, panels a to d).*

                            t1                                  t2
                 aves/acell(%) #ves  c1  c2  c3    aves/acell(%) #ves  c1  c2  c3
t98 g panel a         2.1       19   15   3   1         3.5       12    4   7   1
t98 g panel b         3.4       13    4   8   1         5.3       12    5   2   5
u373-mg panel c       1.2        9    5   3   1         3.2       17   11   3   3
u373-mg panel d       2.0       14   12   2   0         2.6       13   10   3   0

Δt = t2-t1:
                 Δ(aves/acell)(%)  Δ#ves(%)  Δ#ves c1  Δ#ves c2  Δ#ves c3
t98 g                 61.6          -22.3       -5        -1          2
u373-mg               96.7           40.9        2         0.5        1

*autocounter analysis in time-course imaging of t98 g and u373-mg cells after lc3b-gfp and rapamycin (10 μm) administration at 4 h (t1) and 5 h (t2). the ratio between vesicle area and cell area (aves/acell), the number of vesicles (#ves), and the number of vesicles grouped, according to their area, into three defined classes (small: 0<c1≤1 μm²; medium: 1<c2≤3 μm²; large: c3>3 μm²) are reported. the dynamic change of the vesicle pattern in the time interval Δt = t2-t1 is given in terms of mean percent variation of vesicle area [Δ(aves/acell)], mean percent variation of number of vesicles (Δ#ves), and mean variation of number of vesicles within ranges of vesicle area.
in figure 5, t98 g and u373-mg cells characterized by difficult-to-measure fluorescent patterns were analysed by autocounter: as reported, the program detected almost all the fluorescent vesicles, with a predominant distribution into the small and medium size classes (supplementary table 1).
t98 g and u373-mg cells were transduced with bacmam lc3b-gfp viral particles (510 cells, moi=30); after 12 h of culturing, cells were treated with rapamycin (1 μm) and incubated for an additional 24 h. autocounter analysis was then performed on the light and fluorescent photographs of panels a0 and b0. in panels a1 and b1, the contours of the cells were drawn by the operator. panels a2 and b2 show the transformed masks for the analysis of vesicle number and area. scale bars: 10 μm. size statistics of the gfp-scored vesicles are reported in supplementary table 1.
to further assess whether lc3b-gfp scoring was associated with the modulation of autophagy induction, t98 g and u373-mg cells were transduced with lc3b-gfp (510 cells in 96-multiwells and moi=30, as above) and then treated with increasing concentrations of rapamycin (0.1, 0.5, 5, and 10 μm). as a result, fluorescent spots were primarily detected at the highest rapamycin concentration, even after only 2 h p.t.
as reported in figure 6, fluorescent vesicle areas were linearly correlated to the amount of the autophagy inducer employed (correlation coefficient r=0.9162).
differently, no significant correlation between vesicle number and rapamycin treatment was obtained (data not shown).
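the dose-response linearity above is summarized by a pearson correlation coefficient; a self-contained sketch follows. the dose/area pairs are invented for illustration and do not reproduce the reported r=0.9162.

```python
# pearson correlation coefficient, as used to relate percent vesicle area
# to rapamycin dose. the data points below are hypothetical.
import math

def pearson_r(xs, ys):
    """pearson r = cov(x, y) / (std(x) * std(y)) over paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

doses = [0.1, 0.5, 5.0, 10.0]   # um rapamycin (hypothetical design)
areas = [0.8, 1.4, 6.0, 11.5]   # hypothetical percent vesicle area
```

with near-linear hypothetical data, `pearson_r(doses, areas)` comes out close to 1; an r around 0.92, as reported, indicates a strong but not perfect linear trend.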
figure 6. t98 g and u373-mg cells were transduced with bacmam lc3b-gfp viral particles (510 cells, moi=30); after 12 h of culturing, cells were treated with different rapamycin concentrations (0.1, 0.5, 5, and 10 μm) and incubated for an additional 24 h. scale bars: 10 μm. a) autocounter analysis was performed on four different replicas for each treatment, evaluating overall vesicle number and area. b) plot of the percent vesicle area, normalized to the entire cell area, vs. rapamycin concentration.
autophagy is a constitutive process in most cell types and can be induced by certain stimuli such as nutrient starvation, rapamycin treatment, and infection with pathogens. currently, the most commonly used and specific inducer of autophagy is rapamycin, which directly inhibits mtor in living cells.
autophagy can be observed by ultrastructural imaging of autophagosomes and other organelles involved in the process .
electron microscopy can be employed in qualitative autophagy analysis, even if autophagosome counting and the measurement of their size can also be achieved. besides ultrastructural examinations,
one of the most common imaging methods in autophagy studies is the visualization of the recruitment of microtubule - associated protein 1 light chain 3 beta ( lc3b ) to autophagosomes , an early step in autophagy that occurs when the protein is cleaved and lipidated .
lc3b is modified during the induction of autophagy through several processing steps including cleavage by the protease atg4b to generate lc3-i , atg3/7-mediated ligation of phosphatidylserine or phosphatidylethanolamine to generate lc3-ii and translocation to the autophagosomal membrane . a second reaction catalyzed by atg4b removes lc3-ii from the autophagosomal membrane by delipidation .
translocation of lc3b - gfp can be identified by puncta that appear following engagement of the autophagic response .
although the analysis of fluorescent lc3b-gfp is a useful approach, quantifying autophagy by manual counting of the fluorescent puncta is tedious.
furthermore, the cells should also be counted using unbiased procedures, such as multispectral imaging flow cytometry. of note,
lc3b - gfp can associate with ubiquitinated protein aggregates referred to as aggresome - like induced structures ( alis ) and p62 bodies / sequestosomes .
therefore , a lipidation - defective lc3 mutant , where glycine 120 is mutated to alanine , was employed as a negative control .
after transduction of the mutant and rapamycin administrations , a very low amount of aggregates , independently of autophagy , was scored in both t98 g and u373-mg cells .
similarly, as we have reported, the transfection protocol itself did not artifactually induce discrete lc3b-gfp signals.
we have therefore introduced an imagej javascript for autophagy evaluation, autocounter, which proved less labor-intensive than western blotting and less subjective than manual counting of autophagic puncta under fluorescence microscopy examination.
using lc3b-gfp-transduced and rapamycin-stimulated cells, we have shown that the assay can be used to quantify autophagy induction and to follow time-course dynamic changes in autophagic vesicle formation.
this application was designed to monitor the autophagy status through the evaluation of the amount of puncta per cell as well as their area .
in addition , it was reported that quantified autophagosome areas , in many cases , correlated with the rates of protein degradation .
therefore , the employed surface analysis of the same target cell in time - course assays may be informative regarding the specific autophagic status and its progression .
the imagej javascript was then assayed for sensitivity and specificity in the detection of fluorescent dots in complex cell panels.
the program was able to detect nearly all vesicle-like spots, also providing statistics on their sizes. to perform these assays correctly, we verified that, before autophagy induction, the background number of lc3b-gfp puncta was relatively low (the frequencies of cells with fluorescent dots were about 1/100 and 1/150, respectively for t98 g and u373-mg). as a consequence, the marked increase in lc3b-gfp expression was mainly associated with induced autophagosomes against a low background of constitutive autophagy. as extensively reported,
treatment with the lysosomotropic agent chloroquine increases lysosomal ph, inhibiting lysosomal fusion with the autophagosomes: this process interferes with the autophagic flux, producing autophagosome accumulation.
however, in our experiments, this treatment resulted in an increase of the overall fluorescence signal, but with a clearly worse geometrical definition of the resulting intracellular vesicles.
furthermore, it has not been clearly established whether different sizes of lc3b-gfp-positive vesicles correlate with the levels of autophagy.
therefore, the proposed program was designed to subdivide, according to the operator's requests, the intracellular fluorescent dots into three micrometric classes (small, medium, and large area).
in addition, as a result of our assay, increasing the dose of the administered autophagy inducer clearly caused an increase of the overall surface of the autophagic vesicles rather than of their total number within the cells.
in conclusion, we have developed and assayed an imagej javascript specifically devoted to lc3b-gfp expression analysis in living human astrocytoma cells: this program is mainly intended for in vitro studies of autophagy modulation.
in addition, this program might be a framework for further refinements, such as the possibility to track lc3b-rfp (red fluorescent protein) vesicle expression as well as other fluorescence-based or non-fluorescence-based intracellular signals in cells of different origins.

an imagej javascript, autocounter, was specifically developed to monitor and measure lc3b-gfp expression in living human astrocytoma cells, namely t98 g and u373-mg.
discrete intracellular gfp fluorescent spots derived from transduction of a baculovirus replication-defective vector (bacmam lc3b-gfp), followed by microscope examinations at different times. after viral transgene expression, autophagy was induced by rapamycin administration and assayed by ph-p70s6k/p70s6k and lc3b immunoblotting as well as by electron microscopy examinations.
a mutated transgene, defective in lc3b lipidation, was employed as a negative control to further exclude fluorescent dots derived from intracellular protein aggregation.
the imagej javascript was then employed to evaluate and score the dynamic changes in the number and area of lc3b-gfp puncta per cell, in time-course assays and in complex microscope examinations.
in conclusion, autocounter made it possible to quantify lc3b-gfp expression and to monitor dynamic changes in the number and shape of autophagosome-like vesicles: it may therefore represent a suitable algorithmic tool for in vitro autophagy modulation studies.
aberrant activity of various receptor tyrosine kinases ( rtks ) plays an essential role in the pathogenesis of malignancies .
sprouty ( spry ) proteins represent a major class of ligand - inducible inhibitors of rtk - dependent signaling pathways .
dspry ( drosophila melanogaster spry ) was initially defined by hacohen et al . in 1998 as an inhibitor of fibroblast growth factor ( fgf)-mediated tracheal branching during d. melanogaster development .
they also identified three human homologs of dspry , designated spry13 , in a search of the expressed sequence tag ( est ) database .
spry proteins , in particular spry1 , spry2 , and spry4 isoforms that may have an important role in controlling growth signals , are evidently deregulated in some pathological conditions including cancer .
several studies have reported spry up- or downregulation in a variety of neoplasms including breast, prostate [5–7], hepatocellular [8, 9], colon, and lung cancer [11, 12], as well as melanoma [13, 14] and gastrointestinal stromal tumors, implicating spry as a possible tumor suppressor that could potentially be employed as a tumor marker and an interesting target for drug intervention.
on this basis, we intended to investigate the role of spry in ovarian cancer, the seventh leading cancer in women and the second cause of death from gynaecological malignancies worldwide.
as an initial attempt , we aimed in this study to evaluate the expression pattern of spry1 and spry2 isoforms in a panel of human epithelial ovarian cancer cell lines . since alteration in spry expression in this pathological condition was anticipated , primary human ovarian epithelial cells were also employed as the control , against which the expression pattern in the cancer cells could be compared .
human ovarian cancer cell lines ovcar-3 , skov-3 , and 1a9 were obtained from the american type culture collection ( atcc ) ( manassas , va , usa ) .
other human ovarian cancer cells , caov-3 , a2780 , ov-90 , and igrov-1 , were a kind gift from dr .
yong lee ( department of radiology , st george hospital , the university of new south wales , australia ) .
the primary human ovarian surface epithelial cell line hosepic was obtained from siencell ( siencell , ca , usa ) .
all cell lines were maintained in a humidified 5% co2 incubator at 37c in their respective medium as follows : ovcar-3 , skov-3 , 1a9 , a2780 , and igrov-1 cells in rpmi-1640 ( invitrogen , ca , usa ) , caov-3 in dmem ( invitrogen , ca , usa ) , ov-90 cells in a 1 : 1 mixture of mcdb 105/medium 199 ( sigma - aldrich , missouri , usa ) , and hosepic in oepicm ( siencell , ca , usa ) .
the culture media used were all supplemented with 10% fetal bovine serum and 1% penicillin - streptomycin mixture ( invitrogen , ca , usa ) .
ovcar-3 , skov-3 , and hosepic cells were seeded onto sterile glass coverslips in a 6-well tissue culture plate at an initial density of 2.5 10 cells / well and maintained in rpmi medium supplemented with 10% fetal bovine serum at 37c in a humidified , 5% co2 atmosphere . at 50% confluence , the cells were fixed in 0.1% sodium azide plus 0.5% formaldehyde in phosphate - buffered saline ( pbs ) ( sigma - aldrich , missouri , usa ) for one hour at room temperature .
this was followed by one hour incubation with 70% ethanol / pbs at 4c for permeabilization and further fixation . in order to block nonspecific binding of the antibodies ,
the coverslips were immersed in 1% bovine serum albumin ( bsa ) in pbs for one hour at room temperature .
cells were then incubated overnight at 4c with monoclonal anti - spry1 and anti - spry2 antibodies ( 1 : 20 in 1% bsa/1x pbs ) ( abnova , taiwan ) , with the exception of the negative control samples to which no primary antibody was applied .
incubation with the secondary antibody was subsequently applied to all samples using alexa flour 488 chicken anti - mouse igg ( 1 : 500 in 1% bsa/1x pbs ) ( invitrogen , ca , usa ) for 1 hour at 4c in dark . next ,
the cells were counter - stained with propidium iodide ( 1 : 500 ) for 3 minutes and the coverslips were mounted with gelatine glycerol and stored at 4c in dark .
the cells were visualized with a laser scanning confocal microscope (olympus, usa) and a 60× oil immersion lens.
the fluoview software ( version 4.3 , center valley , pa ) was used to overlay the images .
cells were homogenized in a protein lysis buffer ( ripa buffer ) containing 10% protease inhibitor ( sigma - aldrich , missouri , usa ) and the protein concentrations were quantified by biorad protein assay ( bio - rad , ca , usa ) .
then , the same amounts of the proteins were separated by sodium dodecyl sulfate - polyacrylamide gel electrophoresis , transferred to pvdf membranes ( millipore corporation , ma , usa ) , and incubated overnight at 4c with a 1 : 1000 dilution of anti - spry1 or anti - spry2 mouse monoclonal antibodies ( abnova , taiwan ) .
membranes were washed and treated with goat anti - mouse secondary antibody conjugated to horseradish peroxidase ( 1 : 5000 dilution ; santa cruz biotechnology , ca , usa ) for 1 hour at room temperature .
a similar process was carried out for the gapdh protein with a 1:20,000 dilution of anti-gapdh mouse monoclonal antibody (sigma-aldrich, missouri, usa). using the imagequant las 4000 biomolecular imager and imagequant software (ge healthcare, uk), the antigen-antibody reaction was digitized, band densitometry was quantified, and the data were normalized against the values of gapdh protein expression.
quantitative analysis of protein expression was otherwise performed by normalizing the data from the cancer cell lines against those from hosepic cells as the normal control, where the values were expressed in arbitrary units as the percentage of protein expression in each cell line relative to that in hosepic.
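the two-step normalization just described amounts to simple arithmetic: first normalize each band to gapdh within its lane, then express the result as a percentage of the hosepic control. the densities below are invented for illustration.

```python
# sketch of the percent-of-control normalization: band / gapdh within each
# cell line, then sample relative to the hosepic (normal) control in percent.
# all densitometric values are hypothetical.

def gapdh_normalized(band, gapdh):
    """normalize a band density to the gapdh loading control of its lane."""
    return band / gapdh

def percent_of_control(sample_norm, control_norm):
    """express a gapdh-normalized value as a percentage of the control."""
    return sample_norm / control_norm * 100.0

# hypothetical spry band / gapdh densities
hosepic_norm = gapdh_normalized(800.0, 1000.0)  # normal control lane
ovcar3_norm = gapdh_normalized(300.0, 1000.0)   # cancer cell line lane

ovcar3_pct = percent_of_control(ovcar3_norm, hosepic_norm)
```

with these made-up densities the cancer line sits at 37.5% of the control, i.e. a downregulation in the readout the text describes.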
total rna was isolated from seven ovarian cancer cell lines as well as the primary human ovarian surface epithelial cell line hosepic in an rnase - free environment using rneasy plus mini kit ( qiagen , germany ) as per the manufacturer 's instructions .
possible contaminating dna from rna preparations was digested by dna - free dnase treatment and removal reagents ( ambion , life technologies , usa ) .
the extracted rna yield and purity were then determined by measuring the absorbance at 230 , 260 , and 280 nm using a nanodrop 2000 spectrophotometer ( thermo fisher scientific , ma , usa ) followed by the evaluation of rna integrity through gel electrophoresis for determination of 28s/18s ribosomal rna ( rrna ) ratio .
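the purity assessment above reduces to two absorbance quotients; the sketch below computes them and applies common rule-of-thumb cutoffs (about 1.8–2.1 for a260/a280 and ≥2.0 for a260/a230), which are general guidelines rather than values taken from this study.

```python
# rna purity check from spectrophotometer readings at 230, 260, and 280 nm.
# the acceptance windows are widely used rules of thumb, not study-specific.

def purity_ratios(a230, a260, a280):
    """return (a260/a280, a260/a230); protein contamination lowers the
    first ratio, salts/organics lower the second."""
    return a260 / a280, a260 / a230

def looks_clean(a230, a260, a280):
    """true if both ratios fall in typical 'clean rna' windows."""
    r280, r230 = purity_ratios(a230, a260, a280)
    return 1.8 <= r280 <= 2.1 and r230 >= 2.0
```

for example, readings of a230=0.5, a260=1.0, a280=0.5 give ratios of 2.0 and 2.0 and pass; a280=0.8 with the same a260 suggests protein carry-over and fails.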
reverse transcription was performed using the superscript iii one-step rt-pcr system with platinum taq dna polymerase (invitrogen, ca, usa) in a veriti 96-well thermal cycler (applied biosystems, ca, usa) according to the manufacturer's protocol, with rnase inhibitor (qiagen, germany) added.
the pcr amplification was carried out as follows : a 30 minute incubation at 56c for cdna synthesis , a 3 minute hot start at 94c followed by 35 cycles of denaturation at 94c for 30 seconds , annealing at 56c for 30 seconds , and extension at 72c for 30 seconds with a final extension at 72c for 5 minutes .
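as a quick sanity check , the cycling profile above can be encoded and its nominal run time totalled ( ramp times between steps are ignored ) :

```python
# Nominal run time of the RT-PCR profile described above (ramp times ignored).
profile = [
    ("cDNA synthesis",  56, 30 * 60, 1),   # (step, temp C, seconds, repeats)
    ("hot start",       94, 3 * 60,  1),
    ("denaturation",    94, 30,      35),
    ("annealing",       56, 30,      35),
    ("extension",       72, 30,      35),
    ("final extension", 72, 5 * 60,  1),
]

total_s = sum(seconds * repeats for _, _, seconds, repeats in profile)
print(total_s / 60)  # 90.5 minutes
```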
transcripts amplification of spry1 and spry2 employed the following oligonucleotide primers based on the published sequence to cross exon / intron regions : 5-ctgcaggggaagtgcaagtgtggagaa-3 ( forward ) and 5-aagcttagttcaggaggtacaacccac-3 ( reverse ) for spry1 , and 5-ggatcccattcgctcatctgccaggaa-3 ( forward ) and 5-aagctttgctgggtgagggcgtctctg-3 ( reverse ) for spry2 .
oligonucleotide primers of -actin , 5-atatcgccgcgctcgtcgtc-3 ( forward ) and 5-agtggtacggccagaggcgt-3 ( reverse ) , were designed by ncbi primer - blast ( http://www.ncbi.nlm.nih.gov/tools/primer-blast/ ) and used as a reference in the same amplification conditions . minus rt control ( rt ) and no template control ( ntc ) were also included in the experiment to verify absence of genomic dna in the rna preparations and possible contamination of kit components , respectively .
10 µl aliquots of the pcr products were separated on 1.5% agarose gel containing a 1 : 10,000 dilution of 10,000x sybr safe dna gel stain ( invitrogen , ca , usa ) by electrophoresis at 80 v and then observed using a bio - rad gel doc uv transilluminator 2000 ( bio - rad , ca , usa ) .

following approval by the appropriate institutional ethics committee ( south eastern sydney local health district human research ethics committee , nsw , australia ) and in accordance with the relevant guidelines , paraffin - embedded tumor tissue sections from patients with clinically diagnosed primary epithelial ovarian cancer were obtained and immunohistochemically stained . in brief , after deparaffinization , sections were pretreated and antigen retrieval was performed . the incubation with primary antibody
was then performed at 4c overnight using anti - spry1 or anti - spry2 monoclonal antibodies ( abnova , taiwan ) according to the manufacturer 's protocol .
this was followed by incubation with secondary antibody ( santa cruz biotechnology , ca , usa ) and counterstaining with hematoxylin .
positive controls recommended by the manufacturer were used . a negative control , with no primary antibody applied , was also prepared for each sample .
the stained slides were then observed by leica dmlb microscope ( magnification 40 ) and photographed using leica dc200 digital imaging system ( leica microsystems , wetzlar , germany ) .
statistical analyses were performed using graphpad instat ( graphpad prism 5 , san diego , california , usa ) .
student 's t - test was applied for unpaired samples and p values < 0.05 were considered significant .
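the unpaired student 's t - test used here can be reproduced directly ; the sketch below implements the pooled - variance t statistic ( the sample values are illustrative only , not the measured data ) :

```python
import math

def unpaired_t(a, b):
    """Pooled-variance (Student's) t statistic for two independent samples."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    return t, df

# illustrative data, not the densitometry values from the study
t, df = unpaired_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
print(round(t, 4), df)  # -1.2247 4
```

in practice a library routine such as `scipy.stats.ttest_ind` returns the corresponding two - tailed p value directly , which is then compared against the 0.05 threshold .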
initial studies were carried out to investigate the expression of spry1 and spry2 proteins in seven commonly - used human epithelial ovarian cancer cell lines by western blot analysis using spry1 and spry2 specific antibodies .
the epithelial ovarian cancer cells exhibited different levels of spry expression ( figure 1(a ) ) .
while ovcar-3 cells expressed a high level of the spry1 isoform , and 1a9 and a2780 cells showed a moderate level of expression , igrov-1 cells expressed a low level of spry1 , and the cell lines skov-3 , caov-3 , and ov-90 had almost no expression .
as for spry2 , whereas ovcar-3 and 1a9 exhibited high levels of expression and a2780 and igrov-1 expressed it moderately , the expression levels were low for caov-3 and ov-90 cells and almost nil for skov-3 . of both isoforms , the highest and lowest levels of expression were seen with ovcar-3 and skov-3 cells , respectively . in sum , our observation revealed nonuniform expression patterns of spry1 and spry2 across the seven cancer cell lines , with the expression levels ranging from almost nil to high . in order to investigate any possible alteration in the expression of human spry1 and spry2 proteins in ovarian cancer , western
blotting was also performed on the protein lysates obtained from the primary human ovarian cells hosepic ( figure 1(a ) ) and quantitative comparison of the protein expression was carried out using imagequant software ( figures 1(b)1(c ) ) .
our results indicated that spry1 and spry2 were moderately expressed in the normal cells . when the nonuniform expression patterns of spry1 and spry2 across the cancer cell lines were individually compared against this pattern , nonconformity was found . while the expression of spry1 and spry2 in ovcar-3 was significantly higher ( p values of 0.0012 and 0.0004 , resp . ) , skov-3 , caov-3 , and ov-90 expressed significantly lower levels of spry1 ( p - values of 0.0002 , 0.0015 , and 0.0002 , resp . ) and spry2 ( p - values of < 0.0001 , 0.0146 , and 0.0003 , resp . ) , with skov-3 expressing the least .
although a2780 expression pattern of spry1 was similar to that of the control group , this cell line expressed significantly higher levels of spry2 ( p value : 0.0032 ) .
igrov-1 cells expressed spry1 significantly lower ( p - value : 0.0002 ) than the control group .
decline in igrov-1 expression of spry2 , however , was not significant ( p - value : 0.1074 ) .
1a9 showed similar levels of spry1 and insignificantly higher levels of spry2 ( p - value : 0.1657 ) as compared to the control . amongst the cancer cell lines , ovcar-3 and skov-3 expressed the highest and lowest levels of both isoforms , respectively .
taken together , while both spry1 and spry2 were moderately expressed in the normal ovarian cells employed as the control , alterations in the expression of spry1 and/or spry2 were found across all cancer cells studied .
spry1 was expressed lower in four cell lines and higher in one . as for spry2
, four cell lines showed lower and two exhibited higher expression , although increase in 1a9 expression of spry2 was not significant .
to evaluate the expression of spry at mrna level and its possible correspondence with spry expression at protein level , we carried out reverse transcription polymerase chain reaction ( rt - pcr ) analysis on the total rna samples derived from the normal and cancer cell lines studied .
the expected product size for spry1 and spry2 was 438 bases and 471 bases , respectively .
while almost undetectable in caov-3 cells , spry1 was expressed by other cells , more remarkably by 1a9 , suggesting that spry1 is differentially expressed in human epithelial ovarian cancer cells in vitro .
spry2 was easily identified in all cell lines , with cancer cells expressing it in a more prominent way . to determine subcellular distribution of spry1 and spry2 proteins in human epithelial ovarian cancer , we performed confocal immunofluorescence microscopy on ovarian cancer cell lines ovcar-3 and skov-3 , as well as human ovarian surface epithelial cell line hosepic , using antibodies specific for spry1 and spry2 .
these two cancer cell lines were selected based on the fact that they represent the cells with the highest and lowest expression of spry1 and spry2 amongst the cell lines studied , and that both are originally derived from ovarian serous adenocarcinoma , the most common subtype of epithelial ovarian cancer [ 18 , 19 ] .
the staining intensity of spry1 and spry2 proteins was stronger in ovcar-3 cells compared with hosepic and skov-3 cells .
moreover , whereas spry1 was found in both the cytoplasm and the nucleus in vesicular structures , spry2 predominantly showed a cytoplasmic localization ( figure 3 ) . using spry1 and spry2 specific antibodies , immunohistochemical staining was performed on random paraffin - embedded sections obtained from different patients . as seen in figure 4 , we observed a distribution pattern similar to that revealed earlier by our immunocytochemistry study , confirming a cytoplasmic and nuclear localization for spry1 as well as a cytoplasmic distribution for spry2 .
based on prior reports on the role of spry protein family in physiological and pathological conditions including human malignancies , we anticipated that the expression of one or more members of this protein family would be altered in epithelial ovarian cancer cell lines compared with normal ovarian epithelial cells .
examining the expression of spry 1 and spry2 in normal cells , our study indicates that surface epithelial cells of normal human ovaries express the two isoforms , in vitro .
an earlier study reported the expression of spry2 mrna and protein in granulosa - lutein cells ( glc ) of normal human ovaries .
our results also indicate the differential expression of spry1 and/or spry2 across the ovarian cancer cell lines studied .
these are , at least in part , in keeping with results from previous studies on other malignancies . using immunohistochemical analysis and tissue microarrays , kwabi - addo et al .
consistently showed downregulation of spry1 protein in approximately 40% of prostate cancers compared with matched normal prostate .
fong et al . also observed a consistent reduction in the expression of the spry2 protein in malignant hepatocytes of human hepatocellular carcinoma ( hcc ) compared with normal or cirrhotic hepatocytes .
downregulation of spry2 expression in hcc was also confirmed in another study by song et al . .
sutterluty et al . reported a consistently decreased expression of spry2 protein in nonsmall cell lung cancer ( nsclc ) tissue and cell lines when compared with the normal lung epithelium .
reduced expression of spry2 protein was also reported in ht cell line which is derived from a human b - cell diffuse lymphoma .
a further study showed low levels of spry2 in low - grade , but not high - grade , colorectal tumors . in another study by feng et al .
, reduced expression of spry2 was observed more in patients with stage iii or iv colon cancer than those with stage ii disease , suggesting that downregulation of spry2 in colon cancer may be associated with tumor invasion and metastasis .
velasco et al . reported a reduction in spry2 expression in 19.85% of endometrial carcinoma and observed a strong and inverse correlation between spry2 and cell proliferation .
considering the decreased expression of spry2 in high - grade tumors in comparison with low - grade carcinomas , they suggested that spry2 could act as a tumor progression suppressor gene . in the present study , it is not surprising that the pattern of alteration in spry expression is not similar across the whole range of ovarian cancer cell lines studied .
these cell lines are a number of commonly used in vitro representatives of various subtypes of epithelial ovarian cancer which are originally derived from individual patients demonstrating divergent clinicopathological characteristics .
moreover , all epithelial malignancies have a variety of genetic and epigenetic alterations and , in general , only a fraction of cases of a given tumor type has a specific alteration .
it should be noted in the present study that the expression of both spry1 and spry2 isoforms in ovcar-3 cells was significantly higher than that in the other cancer cell lines and the control group , implying that although decreased expression of spry1 and/or spry2 was observed more frequently amongst the ovarian cancer cell lines studied , decline in the spry expression might not necessarily be required in all epithelial ovarian cancers .
a number of studies have indicated a diminished expression of some of spry genes in breast , prostate [ 57 ] , hcc [ 8 , 9 ] , nsclc [ 11 , 12 ] , human b - cell lymphoma , and colon cancer .
however , spry upregulation has been reported in melanoma cells carrying the b - raf v599e mutation [ 13 , 14 ] , human patient - derived fibrosarcoma cell lines , and gastrointestinal stromal tumors ( gist ) . in a study by schaner et al .
, the gene expression patterns of serous ovarian cancer tissue were investigated by cluster analysis .
the spry homologs 1 and 2 were among the genes that demonstrated coexpression with some other specific genes .
when compared to the western blot analysis , our evaluation of spry presence at the mrna level indicates that spry mrna levels may not necessarily correspond to protein expression levels . this might be due to the regulation of the spry protein family at various levels , including not only transcriptional and translational regulation but also post - translational modification resulting from differential protein localization and/or modulation of protein levels via interaction with other proteins and mediators [ 29 , 30 ] .
meanwhile , the expression of spry at protein or mrna levels seems to be cancer cell - type dependent , as well .
in other words , alteration in the expression of spry is not limited to its downregulation , since increased expression of spry has also been reported in some types of cancer . broadly , such reports fall into two groups . the first group suggests that some members of this protein family , in particular spry2 , may have a tumor - suppressing potential .
this is based on the observations showing spry downregulation in different cancers ( as described before ) , as well as the investigations demonstrating inhibitory effect of spry family on erk signaling pathway in spry overexpressing cancer cells [ 8 , 12 , 21 , 31 ] .
the second group , on the other hand , suggests that while spry may serve as an indicator of good clinical prognosis in cancers showing downregulation of spry [ 32 , 33 ] , it is surprisingly a marker for poor clinical prognosis in other cancers with spry upregulation .
this second group points to the possible future application of spry proteins as tumor markers .
the localization of a protein can provide clues to its mode of action and/or function . in the present study , we observed cytoplasmic and nuclear localization of spry1 and only cytoplasmic localization of spry2 in vitro .
this was further confirmed by our immunohistochemical analysis of tumor samples from patients with ovarian cancer .
our findings concerning spry2 localization were consistent with the results of studies by hausott et al . and
velasco et al . demonstrating a cytoplasmic , but not nuclear , staining for spry2 in nih3t3 fibroblasts and human endometrial carcinoma samples , respectively . to date , various cellular sites at which this protein family resides have been reported [ 35 , 37 ] , and the reasons for these discrepancies in spry localization patterns are yet to be scrutinized .
in this study , we have addressed for the first time some aspects of spry protein expression in human epithelial ovarian cancer .
our investigation unveiled the differential expression of spry1 and spry2 proteins in a range of human epithelial ovarian cancer cell lines .
our findings showing a lack of correspondence between the expression of spry mrna and spry protein support the notion that spry regulation occurs at various points . also , our results showing the difference in subcellular localization between the spry1 and spry2 isoforms might be of functional significance in ovarian cancer .
the present report , along with similar studies on other cancers , suggests that the expression pattern of the spry protein family is cell - type dependent .
further studies exploring the effect of alterations in the expression of spry on tumor cell biology are warranted .
this could lead to the development of novel therapeutic strategies and reliable tumor markers for ovarian cancer .

sprouty ( spry ) proteins , modulators of receptor tyrosine kinase signaling pathways , have been shown to be deregulated in a variety of pathological conditions including cancer . in the present study we investigated the expression of spry1 and spry2 isoforms in a panel of human ovarian cancer cell lines in vitro .
our western blot analysis showed nonuniform patterns of spry expression in the cancer cells , none of which conformed to the pattern observed in the normal ovarian epithelial cells employed as the control . among the seven cancer cell lines studied ,
spry1 was expressed lower in four cell lines and higher in one as compared with the control . as for spry2 , four cell lines showed lower and two exhibited higher expression .
results from rt - pcr assay raised the possibility that spry protein levels may not necessarily correspond with its expression at mrna level .
our immunostaining study revealed that spry2 was predominantly distributed within the whole cytoplasm in vesicular structures whereas spry1 was found in both the cytoplasm and nucleus .
this might provide clues to further investigation of spry mode of action and/or function .
collectively , our study unveiled the differential expression of spry1 and spry2 proteins in various ovarian cancer cell lines . |
more than three decades have elapsed since the discovery of sgr a * ( balick & brown 1974 ) and during most of this time the source remained undetected outside the radio band .
submillimeter radio emission ( the `` submillimeter bump '' ) and both flaring and quiescent x - ray emission from sgr a * are now believed to originate within just a few schwarzschild radii of the @xmath4 black hole ( baganoff et al .
2001 ; schödel et al . 2002 ; porquet et al . 2003 ; goldwurm et al . 2003 ; ghez et al .
2005 ) . unlike the most powerful x - ray flares which show a soft spectral index ( porquet et al .
2003 ) , most x - ray flares from sgr a * are weaker and have hard spectral indices .
more recently , the long - sought near - ir counterpart to sgr a * was discovered ( genzel et al . 2003 ) . during several near - ir flares ( lasting ~40 minutes ) , the flux of sgr a * increased by a factor of a few ( genzel et al . 2003 ; ghez et al . 2004 ) .
variability has also been seen at centimeter and millimeter wavelengths on time scales ranging between hours and years , with amplitude variations at a level of less than 100% ( bower et al . 2002 ; zhao et al . 2003 ; herrnstein et al . 2004 ; miyazaki et al . 2004 ; mauerhan et al . ) .
these variations are at a much lower level than those observed at near - ir and x - ray wavelengths .
recently , macquart & bower ( 2005 ) have shown that the radio and millimeter flux density variability on time scales longer than a few days can be explained through interstellar scintillation .
although the discovery of bright x - ray flares from sgr a * has helped us to understand how mass accretes onto black holes at low accretion rates , it has left many other questions unanswered .
the simultaneous observation of sgr a * from radio to γ-ray can be helpful for distinguishing among the various emission models for sgr a * in its quiescent phase and for understanding the long - standing puzzle of the extremely low accretion rate deduced for sgr a*. past simultaneous observations to measure the correlation of the variability over different wavelength regimes have been extremely limited .
recent work by eckart et al .
( 2004 , 2005 ) detected near - ir counterparts to the decaying part of an x - ray flare as well as a full x - ray flare based on chandra observations . in order to obtain a more complete wavelength coverage across its spectrum ,
sgr a * was the focus of an organized and unique observing campaign at radio , millimeter , submillimeter , near - ir , x - ray and soft γ-ray wavelengths . this campaign was intended to determine the physical mechanisms responsible for accretion processes onto compact objects with extremely low luminosities via studying the variability of sgr a*. the luminosity of sgr a * in each band is known to be about ten orders of magnitude lower than the eddington luminosity , prompting a number of theoretical models to explain its faint quiescent as well as its flaring x - ray and near - ir emission in terms of the inverse compton scattering ( ics ) of submillimeter photons close to the event horizon of sgr a * ( liu & melia 2002 ; melia & falcke 2001 ; yuan , quataert & narayan 2004 ; goldston , quataert & igumenshchev 2005 ; atoyan & dermer 2004 ; liu , petrosian , & melia 2004 ; eckart et al . 2004 , 2005 ; markoff 2005 ) .
the campaign consisted of two epochs of observations starting march 28 , 2004 and 154 days later on august 31 , 2004 .
the observations with various telescopes lasted for about four days in each epoch .
the first epoch employed the following observatories : xmm - newton , integral , very large array ( vla ) of the national radio astronomy observatory , caltech submillimeter observatory ( cso ) , submillimeter telescope ( smt ) , nobeyama array ( nma ) , berkeley illinois maryland array ( bima ) and australian telescope compact array ( atca ) .
the second epoch observations used only five observatories : xmm - newton , integral , vla , hubble space telescope ( hst ) near infrared camera and multi - object spectrometer ( nicmos ) and cso .
figure 1 shows a schematic diagram showing all of the instruments that were used during the two observing campaigns .
a more detailed account of the radio data will be presented elsewhere ( roberts et al . 2005 ) . an outburst from the eclipsing binary cxogcj174540.0 - 290031 took place prior to the first epoch and consequently confused the variability analysis of sgr a * , especially in low - resolution data ( bower et al . 2005 ; muno et al . 2005 ; porquet et al . ) .
thus , most of the analysis presented here concentrates on our second epoch observations .
in addition , ground - based near - ir observations of sgr a * using the vlt were corrupted in both campaigns due to bad weather .
thus , the only near - ir data was taken using nicmos of hst in the second epoch .
the structure of this paper is as follows .
we first concentrate on the highlights of variability results of sgr a * in different wavelength regimes in an increasing order of wavelength , followed by the correlation of the light curves , the power spectrum analysis of the light curves in near - ir wavelengths and construction of its multiwavelength spectrum .
we then discuss the emission mechanism responsible for the flare activity of sgr a*.
one of us ( a.g . ) was the principal investigator who was granted observing time using the xmm - newton and integral observatories to monitor the spectral and temporal properties of sgr a*. these high - energy observations led the way for other simultaneous observations .
clearly , the x - ray and γ-ray observations had the most complete time coverage during the campaign . a total of 550 ks of observing time , or ~1 week , was given to xmm observations , two orbits ( about 138 ks each ) in each of the two epochs ( belanger et al . 2005a ; porquet et al . ) . briefly , these x - ray observations discovered two relatively strong flares , equivalent to 35 times the quiescent x - ray flux of sgr a * , in each of the two epochs , with peak x - ray fluxes of 6.5 and @xmath8 ergs s^-1 cm^-2 between 2 - 10 kev . these fluxes correspond to x - ray luminosities of 7.6 and 7.7 @xmath11 ergs s^-1 at the distance of 8 kpc , respectively . the durations of these flares were about 2500 and 5000 s. in addition , the eclipsing x - ray binary system cxogc174540.0 - 290031 , localized within 3″ of sgr a * , was also detected in both epochs ( porquet et al . ) .
initially , the x - ray emission from this transient source was identified by a chandra observation in july 2004 ( muno et al . 2005 ) before it was realized that its x - ray and radio emission persisted during the first and second epochs of the observing campaign ( bower et al . 2005 ; belanger et al . 2005a ; porquet et al . 2005 ) .
soft γ-ray observations using integral detected a steady source , igrj17456 - 2901 , within @xmath13 of sgr a * between 20 - 120 kev ( belanger et al . ) . ( note that the psf of ibis / isgri on integral is 13′ . ) igrj17456 - 2901 was measured to have a flux of 6.2@xmath15 erg s^-1 cm^-2 between 20 - 120 kev , corresponding to a luminosity of 4.76@xmath16 erg s^-1 . during the time that both x - ray flares occurred , integral missed observing sgr a * , as the instrument was passing through the radiation belt exactly during these x - ray flare events ( belanger et al . 2005b ) .
as part of the second epoch of the 2004 observing campaign , 32 orbits of nicmos observations were granted to study the light curve of sgr a * in three bands over four days between august 31 and september 4 , 2004 . given that sgr a * can be observed for half of each orbit , the nicmos observations constituted excellent near - ir time coverage for the second epoch of the campaign . nicmos camera 1 was used , which has a field of view of @xmath17 and a pixel size of 0.043″ . each orbit consisted of two cycles of observations in the broad h - band filter ( f160w ) , the narrow - band paα filter at 1.87 μm ( f187n ) , and an adjacent continuum band at 1.90 μm ( f190n ) . the narrow - band f190n filter was selected to search for 1.87 μm line emission that the combination of gravitational and doppler effects could potentially shift outside of the bandpass of the f187n . each exposure used the multiaccum readout mode with the predefined step32 sample sequence , resulting in total exposure times of ~7 minutes per filter with individual readout spacings of 32 seconds .
the iraf routine `` apphot '' was used to perform aperture photometry of sources in the nicmos sgr a * field , including sgr a * itself . for stellar sources
the measurement aperture was positioned on each source using an automatic centroiding routine .
this approach could not be used for measuring sgr a * , because its signal is spatially overlapped by that of the orbiting star s2 . therefore the photometry aperture for sgr a * was positioned using a constant offset from the measured location of s2 in each exposure . the offset between s2 and sgr a * was derived from the orbital parameters given by ghez et al . ( 2003 ) . the position of sgr a * was estimated to be 0.13″ south and 0.03″ west of s2 during the second epoch observing campaign . to confirm the accuracy of the position of sgr a * , two exposures taken before and during a flare event were aligned and subtracted , which resulted in an image showing the location of the flare emission .
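the aperture placement step can be sketched numerically using the 0.043 arcsec pixel scale quoted for nicmos camera 1 ; the s2 centroid and the detector orientation below are illustrative assumptions , not values from the paper :

```python
# Place the Sgr A* aperture at a fixed offset from the measured S2 centroid.
PIX_SCALE = 0.043  # arcsec per pixel (NICMOS camera 1)

# offset of Sgr A* from S2 during the second epoch: 0.13" south, 0.03" west
off_south_pix = 0.13 / PIX_SCALE  # ~3.0 pixels
off_west_pix = 0.03 / PIX_SCALE   # ~0.7 pixels

# hypothetical S2 centroid from the automatic centroiding routine (pixels)
s2_x, s2_y = 128.40, 131.75

# assuming a detector orientation with north up and east to the left,
# "south" decreases y and "west" increases x in this sketch
sgra_x = s2_x + off_west_pix
sgra_y = s2_y - off_south_pix
print(round(sgra_x, 2), round(sgra_y, 2))
```

the key point is that the offset amounts to only about three pixels , comparable to the psf width , which is why the aperture size had to be chosen so carefully .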
we believe that earlier nicmos observations may not have been able to detect the variability of sgr a * due to the closeness of s2 to sgr a * ( stolovy et al . 1999 ) . at 1.60 μm , the nicmos camera 1 point - spread function ( psf ) has a full width at half maximum ( fwhm ) of 0.16″ , or 3.75 pixels . sgr a * is therefore located at approximately the half - power point of the star s2 . in order to find an optimal aperture size for sgr a * that excluded signal from s2 while still allowing enough signal from sgr a * for a significant detection , several aperture sizes were tested . a measurement aperture radius of 2 pixels ( diameter of 4 pixels ) was found to be a suitable compromise .
we have made photometric measurements in the f160w ( h - band ) images at the 32 second intervals of the individual exposure readouts . for the f187n and f190n images , where the raw signal - to - noise ratio is lower due to the narrow filter bandwidths , the photometry was performed on readouts binned to ~3.5 minute intervals . the standard deviation in the resulting photometry is on the order of ~0.002 mjy for the f160w ( h - band ) measurements and ~0.005 mjy for f187n and f190n .
the resulting photometric measurements for sgr a * show obvious signs of variability ( as discussed below ) , which we have confirmed through comparison with photometry of numerous nearby stars . comparing the light curves of these objects ,
it is clear that sources such as s1 , s2 , and s0 - 3 are steady emitters , confirming that the observed variability of sgr a * is not due to instrumental systematics or other effects of the data reduction and analysis .
for example , the light curves of sgr a * and star s0 - 3 in the f160w band are shown in figure 2a .
it is clear that the variability of sgr a * seen in three of the six time intervals is not seen for s0 - 3 .
the light curve of irs 16sw , which is known to be a variable star , has also been constructed and is clearly consistent with ground - based observations ( depoy et al .
2004 ) .
the thirty - two hst orbits of sgr a * observations were distributed in six different observing time windows over the course of four days of observations .
the detected flares are generally clustered within three different time windows , as seen in figure 2b .
this figure shows the photometric light curves of sgr a * in the 1.60 , 1.87 , and 1.90 μm nicmos bands , using a four pixel diameter measurement aperture . the observed `` quiescent '' emission level of sgr a * in the 1.60 μm band is ~0.15 mjy ( uncorrected for reddening ) . during flare events , the emission is seen to increase by 10% to 20% above this level . in spite of the somewhat lower signal - to - noise ratio for the narrow - band 1.87 and 1.90 μm data , the flare activity is still detected in all bands .
figure 3a presents detailed light curves of sgr a * in all three nicmos bands for the three observing time windows that contained active flare events , corresponding to the second , fourth , and sixth observing windows . an empirical correction has been applied to the fluxes in the 1.87 and 1.90 μm bands in order to overlay them on the 1.60 μm band data . the appropriate correction factors were derived by computing the mean fluxes in the three bands during the observing windows in which no flares were seen . this led us to scale down the observed fluxes in the 1.87 and 1.90 μm bands by factors of 3.27 and 2.92 , respectively , for comparison with the observed 1.60 μm band fluxes . all the data are shown as a time - ordered sequence in figure 3a .
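a minimal sketch of how such band - to - band correction factors follow from the quiet - window means ; the mean fluxes below are hypothetical , chosen only so that the ratios reproduce the quoted factors of 3.27 and 2.92 :

```python
# Derive per-band normalization factors from flare-free observing windows.
# Mean fluxes (mJy) in quiet windows -- hypothetical values chosen so the
# ratios reproduce the factors quoted in the text (3.27 and 2.92).
quiet_mean = {"f160w": 0.150, "f187n": 0.4905, "f190n": 0.438}

scale = {band: quiet_mean[band] / quiet_mean["f160w"] for band in quiet_mean}
print(round(scale["f187n"], 2), round(scale["f190n"], 2))  # 3.27 2.92

def to_f160w(flux_mjy, band):
    """Scale a narrow-band flux onto the F160W light curve."""
    return flux_mjy / scale[band]
```

dividing each narrow - band light curve by its scale factor then forces the quiet - window levels of all three bands to coincide , so that only genuine flare variations stand out in the overlay .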
flux variations are detected in all three bands in the three observing windows shown in figure 3a .
the bright flares ( top and middle panels ) show similar spectral and temporal behaviors , both being separated by about two days . these bright flares have multiple components , with flux increases of about 20% , durations ranging from 2 to 2.5 hours , and dereddened peak fluxes of ~10.9 mjy at 1.60 μm . the weak flares near the end of the fourth observing window ( middle panel ) consist of a collection of sub - flares lasting for about 2 - 2.5 hours with a flux increase of only 10% . the light curve from the last day of observations , shown in the bottom panel of figure 3a , displays the highest level of flare activity over the course of the four days . the dereddened peak flux at 1.6 μm is ~11.1 mjy , decaying in less than 40 minutes . another flare starts about 2 hours later , with a rise and fall time of about 25 minutes and a peak dereddened flux of 10.5 mjy at 1.6 μm .
there are a couple of instances where the flux changed from `` quiescent '' level to peak flare level or vice versa in the span of a single ( 1 band ) exposure , which is on the order of @xmath27 minutes . for our 1.6 micron fluxes ,
sgr a * is 0.15 mjy ( dereddened ) above the mean level approximately 34% of the time . for a somewhat more stringent higher significant level of 0.3 mjy above the mean , the percentage drops to about 23% . dereddened fluxes quoted above were computed using the appropriate extinction law for the galactic center ( moneti et al .
2001 ) and the genzel et al .
( 2003 ) extinction value of a(h)=4.3 mag .
these translate to extinction values for the nicmos filter bands of a(f160w)=4.5 mag , a(f187n)=3.5 mag , and a(f190n)=3.4 mag , which then correspond to corrections factors of 61.9 , 24.7 , and 23.1 . applying these corrections
leaves the 1.87 and 1.90@xmath1 m fluxes for sgr a * at levels of @xmath227% and @xmath27% , respectively , above the fluxes in the 1.60 @xmath1 m band .
this may suggest that the color of sgr a * is red . however , applying the same corrections to nearby stars , such as s2 and s0 - 3 , yields essentially the same result as for sgr a * , namely , the 1.87@xmath1 m fluxes are still high relative to the fluxes at 1.60 and 1.90@xmath1 m .
this discrepancy in the reddening correction is likely to be due to a combination of factors .
one is the shape of the combined spectrum of sgr a * and the shoulder of s2 , as the wings of s2 cover the position of sgr a * .
the other is the diffuse background emission from stars and ionized gas in the general vicinity of sgr a * , as well as the derivation of the extinction law from ground - based filters , which could differ from the nicmos filter bands . due to these complicating factors , we chose to use the empirically - derived normalization method described above when comparing fluxes across the three nicmos bands .
we have used two different methods to determine the flux of sgr a * when it is flaring .
one is to measure directly the peak emission at 1.6@xmath1 m during the flare , which is @xmath70.18 mjy . using a reddening correction of about a factor of 62
, this translates to @xmath210.9 mjy .
however , with an aperture radius of only 2 pixels we miss a significant fraction of the total signal from sgr a * , and at the same time the measured flux is contaminated by a large ( but unknown ) contribution from other sources such as s2 . the second method is to determine the relative increase in measured flux , which can be safely attributed to sgr a * ( since we assume that the other contaminating sources like s2 do not vary ) .
the increase in 1.6@xmath1 m emission that we have observed from sgr a * during flare events is @xmath20.03 mjy , which corresponds to a dereddened flux of @xmath21.8 mjy .
based on photometry of stars in the field , we have derived an aperture correction factor of @xmath22.3 , which will correct the fluxes measured in our 2-pixel radius aperture up to the total flux for a point source .
thus , the increase in sgr a * flux during a flare increases to a dereddened value of @xmath24.3 mjy .
assuming that all of the increase comes from sgr a * alone , and adding that increase to the 2.8 mjy quiescent flux ( genzel et al . 2003 ) , we measure a peak dereddened h - band flux of @xmath27.5 mjy during a flare .
however , the recent detection of a 1.3 mjy dereddened flux at 3.8@xmath1 m from sgr a * ( ghez et al .
2005 ) is lower than the lowest h - band flux that had been reported earlier ( ghez et al . ) .
this implies that the flux of sgr a * may be fluctuating constantly and that there is no quiescent state in the near - ir band .
given the level of uncertainty involved in both techniques , we adopt the first method , with the measured peak flux taken as the true flux of sgr a * for the rest of the paper . if the second method is used instead , the peak flux of sgr a * should be lowered by a factor of @xmath20.7 .
we note that the total amount of time that flare activity has been detected is roughly 30 - 40% of the total observation time .
it is remarkable that sgr a * is active at these levels for such a high fraction of the time at near - ir wavelengths , especially when compared to its x - ray activity , which has been detected on the average of once a day or about 1.4 to 5% of the observing time depending on different instruments ( baganoff et al .
2003 ; belanger et al . ) .
in fact , over the course of one week of observations in 2004 , xmm detected only two clusters of x - ray flares .
the low 3.8@xmath1 m flux noted above , combined with our variability analysis , is consistent with the conclusion that the near - ir flux of sgr a * due to flare activity fluctuates constantly at a low level and that there is no quiescent flux .
figure 3b shows a histogram plot of the detected flares and the noise as well as the simultaneous 2-gaussian fit to both the noise and the flares . in the plot the dotted lines are the individual gaussians , while the thick dashed line is the sum of the two .
the variations near zero are best fitted with a gaussian , as expected from random noise in the observations , while the positive half of the histogram shows a tail extending out to @xmath22 mjy above the mean , representing the various flare detections .
the flux values are dereddened values within the 4-pixel diameter photometric aperture at 1.60@xmath1 m .
the `` flux variation '' values were determined by first computing the mean f160w flux within one of our `` quiescent '' time windows and then subtracting this quiescent value from all the f160w values in all time periods .
these values therefore represent the increase in flux relative to the mean quiescent level .
the parameters of the fitted gaussian for the flares are 10.9 , [email protected] mjy , and [email protected] mjy , corresponding to the amplitude , center , and fwhm , respectively .
the total areas of the individual gaussians are 26.1 and 12.0 , which gives the percentage of the area of the flare gaussian , relative to the total of the two , as @xmath231% .
this is consistent with our previous estimate that flares occupy 30 - 40% of the observing time .
a mean quiescent 1.6@xmath1 m flux of 0.15 mjy ( observed ) corresponds to a dereddened flux of @xmath29.3 mjy within a 4-pixel diameter aperture .
the total flux for a typical flare event ( which gives an increase of 0.47 mjy ) would be @xmath29.8 mjy .
of course , all of these measurements refer to the flux collected in a 4-pixel diameter aperture , which includes some contribution from the star s2 and at the same time misses some of the flux of sgr a * . if we take the increase associated with a typical flare , which excludes any contribution from s2 , and apply the aperture correction factor of 2.4 to account for the missing light from sgr a * , the typical flux increase of 0.47 mjy corresponds to 1.13 mjy .
adding the quiescent flux of sgr a * at h band ( genzel et al . 2003 ) , the absolute flux of a typical flare at 1.6@xmath1 m is estimated to be @xmath23.9 mjy .
the energy output per event from a typical flare with a duration of 30 minutes is then estimated to be @xmath210@xmath21 ergs .
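this energy estimate can be checked with a back - of - the - envelope calculation ; the 8 kpc distance to the galactic center is a standard literature value assumed here , not a number given in this section :

```python
import math

# Order-of-magnitude flare energy: E ~ 4*pi*d**2 * nu*S_nu * dt.
# Assumes d = 8 kpc (standard Galactic-center distance; our assumption).
# The flux is the typical dereddened flare increase (0.47 mJy) scaled by
# the 2.4 aperture correction plus the 2.8 mJy quiescent flux, as in the text.
KPC_CM = 3.086e21                         # cm per kiloparsec
d = 8.0 * KPC_CM                          # distance to Sgr A*, cm
s_nu = (0.47 * 2.4 + 2.8) * 1e-26         # ~3.9 mJy in erg s^-1 cm^-2 Hz^-1
nu = 3.0e10 / 1.6e-4                      # frequency at 1.6 micron (c/lambda), Hz
dt = 30 * 60.0                            # typical flare duration, s

luminosity = 4 * math.pi * d**2 * nu * s_nu   # nu*L_nu estimate, erg/s
energy = luminosity * dt                      # erg
print(f"L ~ {luminosity:.1e} erg/s, E ~ {energy:.1e} erg")
```

under these assumptions the output is a luminosity of a few times 10@xmath2 erg s@xmath9 and an energy of order 1e38 erg per event .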
the gaussian nature of the flare histogram suggests that this estimate corresponds to the characteristic energy scale of the accelerating events ( if we instead use the typical total flare flux of @xmath29.8 mjy , the energy scale increases by a factor of 2.5 ) . comparing power - law and gaussian fits ,
a power - law fit to the flare portion alone gives @xmath22=2.6 and rms=1.6 , while a gaussian fit to the flare portion alone gives @xmath22=1.6 and rms=1.2 ( better in both measures ) .
with the limited data we have , it is difficult to fit a power - law to the flare portion together with a gaussian to the noise peak at zero flux , because the power - law fit rises dramatically as it approaches zero flux and then swamps the noise portion centered at zero . during the relatively quiescent periods of our observations ,
the observed 1.6 @xmath1 m fluxes have a 1@xmath23 level of @xmath20.002 - 0.003 mjy .
looking at the periods during which we detected obvious flares , an increase of @xmath20.005 mjy is noted .
this is about 2@xmath23 relative to the observation - to - observation scatter quoted above ( @xmath20.002 mjy ) . to compare these values to the ground - based data using the same reddening correction as genzel et al .
( 2003 ) , our 1-@xmath23 scatter would be about @xmath20.15 mjy at 1.6 @xmath1 m , with our weakest detected flares having a flux @xmath20.3 mjy at 1.6 @xmath1 m .
genzel et al .
report h - band weakest detectable variability at about the 0.6 mjy level .
thus , the hst 1-@xmath23 level is about a factor of 4 better and the weakest detectable flares about a factor of 2 better than ground - based observations . motivated by the report of a 17-minute periodic signal from sgr a * in near - ir wavelengths ( genzel et al .
2003 ) , the power spectra of our unevenly - spaced near - ir flares were measured using the lomb - scargle periodogram ( e.g. , scargle 1982 ) . there are certain possible artificial signals that should be considered in a periodicity analysis of hst data .
one is the 22-minute cycle of the three filters of nicmos observations .
in addition , the orbital period of hst is 92 minutes , 46 minutes of which no observation can be made due to the earth s occultation .
thus , any signals at frequencies corresponding to the inverse of these periods , or their harmonics , are of doubtful significance . in spite of these limitations ,
the data are sufficiently well sampled and characterized for the periodicity analysis . in order to determine the significance of power at a given frequency , we employed a monte carlo technique to simulate the power - law noise , following an algorithm that has been applied to other data sets ( timmer & könig 1995 ; mauerhan et al .
5000 artificial light curves were constructed for each time segment .
each simulated light curve contained red noise , following p(@xmath24 , and was forced to have the same variance and sampling as the original data .
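the monte carlo procedure can be sketched as follows ; for brevity the simulated curves here are evenly sampled , whereas the actual analysis resamples each curve onto the observed times before computing its lomb - scargle periodogram :

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

def timmer_koenig(n, dt, beta=2.0):
    """Simulate red noise with power spectrum P(f) ~ f**-beta
    (Timmer & Koenig 1995): draw Gaussian Fourier amplitudes scaled by
    the square root of the target spectrum, inverse-transform to the
    time domain, and normalize to zero mean and unit variance."""
    freqs = np.fft.rfftfreq(n, dt)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    re = rng.normal(size=freqs.size) * amp
    im = rng.normal(size=freqs.size) * amp
    im[0] = 0.0                      # DC term must be real
    if n % 2 == 0:
        im[-1] = 0.0                 # Nyquist term must be real
    lc = np.fft.irfft(re + 1j * im, n=n)
    return (lc - lc.mean()) / lc.std()

# Significance envelopes: Lomb-Scargle periodograms of many simulated
# red-noise curves, evaluated on the same grid of trial periods.
n, dt = 512, 32.0                               # 32 s sampling, as for NICMOS
t = np.arange(n) * dt
periods = np.linspace(10 * 60, 90 * 60, 300)    # trial periods, 10-90 min
ang_freqs = 2 * np.pi / periods

sims = np.array([
    lombscargle(t, timmer_koenig(n, dt), ang_freqs, normalize=True)
    for _ in range(200)                          # 5000 in the actual analysis
])
env99 = np.quantile(sims, 0.99, axis=0)          # 99% envelope
env50 = np.quantile(sims, 0.50, axis=0)          # median envelope
print(env99.shape, env50.shape)
```

a real signal is then judged against the per - frequency quantiles of the simulated periodograms , as in figures 4a , b .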
figures 4a , b show the light curves , power spectra , and envelopes of simulated power spectra for the flares during the 2nd and 4th observing time windows . the flare activity with very weak signal - to - noise ratio at the end of the 4th observing window was not included in the power spectrum analysis .
the flares shown in figures 4a , b are separated by about two days from each other , and the temporal and spectral behavior of their light curves is similar . dashed curves on each figure indicate the envelope below which 99% ( higher curve ) , 95% ( middle curve ) , and 50% ( lower curve ) of the simulated power spectra lie .
these curves show ripples which incorporate information about the sampling properties of the lightcurves .
the vertical lines represent the period of an hst orbit and the period at which the three observing filters were cycled .
the only signals which appear to be slightly above the 99% envelope of the simulated power spectra are at [email protected] hours , or 33@xmath202 minutes .
the power spectrum of the sixth observing window shows similar significance near 33 minutes , but also shows similar significance at other periods near the minima in the simulated lightcurves .
we interpret this to suggest that the power in the sixth observation is not well - modeled as red noise .
we compared the power spectrum of the averaged data from three observing windows using a range of @xmath25 from 1 to 3 .
the choice of @xmath25=2 shows the best overall match between the line enclosing 50% of the simulated power spectra and the actual power spectrum .
a @xmath25 of 3 gives an overall fit not too different from that of @xmath26 . for
the choice of @xmath25=1 , significant power at longer time scales becomes apparent .
however , the significance of longer periods in the power spectrum disappears when @xmath26 was selected , thus we take @xmath25=2 to be the optimal value for our analysis .
the only signal that reaches a 99% significance level is the 33-minute time scale .
this time scale is about twice the 17-minute time scale that earlier ground - based observations reported ( genzel et al .
there is no evidence for any periodicity at 17 minutes in our data .
the time scale of about 33 minutes roughly agrees with the timescales on which the flares rise and decay .
similarly , the power spectrum analysis of x - ray data show several periodicities , one of which falls within the 33-minute time scale of hst data ( aschenbach et al . 2004 ; aschenbach 2005 ) .
however , we are doubtful whether this signal indicates a real periodicity .
this signal is only slightly above the noise spectrum in all of our simulations and is at best a marginal result .
it is clear that any possible periodicities need to be confirmed with future hst observations with better time coverage and more regular time spacing . given that the low - level amplitude variability detected here with _
hst _ data is well below what can be detected with ground - based telescopes , additional hst observations are still required to fully understand the power spectrum behavior of near - ir flares from sgr a * . using cso with sharc ii , sgr a * was monitored at 450 and 850 @xmath1 m in both observing epochs ( dowell et al .
2004 ) . within the 2 arcminute field of view of the cso images , a central point source coincident with sgr
a * is visible at 450 and 850 @xmath1 m wavelengths having spatial resolutions of 11@xmath12 and 21@xmath12 , respectively .
figure 5a shows the light curves of sgr a * in the second observing epoch with 1@xmath23 error bars corresponding to 20min of integration .
the 1@xmath23 error bars are noise and relative calibration uncertainty added in quadrature .
absolute calibration accuracy is about 30% ( 95% confidence ) . during the first epoch , when a transient source appeared a few arcseconds away from sgr a * ,
no significant variability was detected .
the flux density of sgr a * at 850 @xmath1 m is consistent with the smt flux measurement of sgr a * on march 28 , 2004 , as discussed below . during this epoch ,
sgr a * was also observed briefly at 350 @xmath1 m on april 1 and showed a flux density of [email protected] jy .
the light curve of sgr a * in the second epoch , presented in figure 5a , shows only @xmath225% variability at 450 @xmath1 m .
however , the flux density appears to vary at 850@xmath1 m in the range between 2.7 and 4.6 jy over the course of this observing campaign .
because the cso slews slowly and we need to maximize the signal - to - noise on sgr a * , we observe calibrators only hourly . the hourly flux of the calibrators as a function of atmospheric opacity shows @xmath230% peak - to - peak uncertainty for a particular calibration source and a 10% relative calibration uncertainty ( 1@xmath23 ) for the cso 850 @xmath1 m data .
we note the presence of remarkable flare activity at 850 @xmath1 m on the last day of the observation during which a peak flux density of 4.6 jy was detected with a s / n = 5.4 .
the reality of this flare activity is best demonstrated in a map , shown in figure 5b , which shows the 850@xmath1 m flux from well - known diffuse features associated with the southern arm of the circumnuclear ring remaining constant , while the emission from sgr a * rises to 4.6 jy during the active period .
the feature of next highest significance after sgr a * in the subtracted map showing the variable sources is consistent with noise with s / n = 2.5 .
sgr a@xmath5 was monitored in the 870@xmath1 m atmospheric window using the mpifr 19 channel bolometer on the arizona radio observatory ( aro ) 10 m hht telescope ( baars et al .
the array covers a total area of 200@xmath12 on the sky , with the 19 channels ( of 23@xmath12 hpbw ) arranged in two concentric hexagons around the central channel , with an average separation of 50@xmath12 between any adjacent channels .
the bolometer is optimized for operations in the 310 - 380 ghz ( 970 - 790 @xmath1 m ) region , with a maximum sensitivity peaking at 340 ghz near 870 @xmath1 m .
the observations were carried out in the first epoch during the period march 28 - 30th , 2004 between 11 - 16h ut .
variations of the atmospheric optical depth at 870@xmath1 m were measured by straddling all observations with skydips .
the absolute gain of the bolometer channels was measured by observing the planet uranus at the end of each run .
a secondary flux calibrator , i.e. nrao 530 , was observed to check the stability and repeatability of the measurements .
all observations were carried out with a chopping sub - reflector at 4hz and with total beam - throws in the range 120@xmath27 , depending on a number of factors such as weather conditions and elevation .
as already noted above , dust around sgr a@xmath5 is clearly contaminating our measurements at a resolution of 23@xmath12 .
due to the complexity of this field , the only way to recover the uncontaminated flux is to fit several components to the brightness distribution , assuming that at the central position there is an unresolved source surrounded by an extended , smoother distribution .
we measured the average brightness in concentric rings ( of 8@xmath12 width ) centered on sgr a@xmath5 in the radial distance range 0 - 80@xmath12 .
the averaged radial profile was then fitted with several composite functions , which always included a point source with a psf of the order of the beam size . the best fit , comprising the central component and a broader , smoother outer structure , gives a central ( i.e. , sgr a@xmath5 ) flux of [email protected] on the first day of observation , march 28 , 2004 .
the cso and hht source - flux fitting procedures , as described earlier , are essentially the same .
due to bad weather , the scatter in the measured flux of the calibrator nrao 530 and sgr a * was high in the second and third days of the run .
thus , the measurements reported here are from the first day only , with a photometric precision of @xmath2812@xmath29 for the calibrator .
the flux of nrao 530 at 870@xmath1 m during this observation was [email protected] jy .
nma was used in the first observing epoch to observe sgr a * at 3 mm ( 90 ghz ) and 2 mm ( 134 ghz ) , as part of a long - term monitoring campaign ( tsutsumi , miyazaki & tsuboi 2005 ) .
the 2 and 3 mm flux densities were measured to be [email protected] and [email protected] jy on march 31 and april 1 , 2004 , respectively , during 2:30 - 22:15 ut .
these authors had also reported a flux density of [email protected] jy at 2 mm on march 6 , 2004 . this observation took place when a radio and x - ray transient near sgr a * was active .
thus , it is quite possible that the 2 mm emission toward sgr a * is not part of flare activity from sgr a * but rather decaying emission from the radio / x - ray transient , which was first detected by xmm and the vla on march 28 , 2004 . using nine telescopes ,
bima observed sgr a * at 3 mm ( 85 ghz , average of two sidebands at 82.9 and 86.3 ghz ) for five days between march 28 and april 1 , 2004 during 11:10 - 15:30 ut .
detailed time variability analysis is given elsewhere ( roberts et al .
the flux densities on march 28 and april 1 show average values of [email protected] and [email protected] at @xmath23 mm , respectively .
these values are consistent with the nma flux values within errors . no significant hourly variability was detected .
the presence of the transient x - ray / radio source a few arcseconds south of sgr a * during this epoch complicates time variability analysis of bima data since the relatively large synthesized beam ( 82 @xmath30 26 ) changes during the course of the observation .
thus , as the beam rotates throughout an observation , flux from sgr a west and the radio transient may contaminate the measured flux of sgr a * . using the vla , sgr a * was observed at 7 mm ( 43 ghz ) in the first and second observing epochs . in each epoch , observations were carried out on four consecutive days , with an average temporal coverage of about 4 hr per day . in order to calibrate out rapid atmospheric changes , these observations used a new fast switching technique for the first time to observe the time variability of sgr a * . briefly , these observations used the calibrators 3c286 , nrao 530 , and 1820 - 254 .
the fast switching mode rapidly alternated between sgr a * ( 90sec ) and the calibrator 1820 - 254 ( 30sec ) .
tipping scans were included every 30 min to measure and correct for the atmosphere opacity .
in addition , pointing was done by observing nrao 530 . after applying high frequency calibration ,
the flux of sgr a * was determined by fitting a point source in the _ uv _ plane ( @xmath31100 k@xmath32 ) . as a check
, the variability data were also analyzed in the image plane , which gave similar results .
the results of the analysis at 7 mm clearly indicate a 5 - 10% variability on hourly time scales , in almost all the observing runs . a power spectrum analysis , similar to the statistical analysis of near - ir data presented above ,
was also done at 7 mm .
figure 6a shows typical light curves of nrao 530 and sgr a * in the top two panels at 7 mm .
similar behavior is found in a number of observations during 7 mm observations in both epochs .
it is clear that the light curve starts with a peak ( or that the peak preceded the beginning of the observation ) followed by a decay with a duration of 30 minutes to a quiescent level lasting for about 2.5 hours . at the atca
, we used a similar observing technique to that of our vla observations , involving fast switching between the calibrator and sgr a * simultaneously at 1.7 ( 17.6 ghz ) and 1.5 cm ( 19.5 ghz ) .
unlike ground - based northern - hemisphere observatories such as the vla , which can observe sgr a * for only about 5 hours a day , atca observed sgr a * for 4 @xmath30 12 hours in the first epoch . in spite of possible contamination of the variable flux by interstellar scintillation toward sgr a * at longer wavelengths , similar variations at both 7 mm and 1.5 cm
are detected .
figure 6b shows the light curve of sgr a * and the corresponding calibrator during a 12-hour observation with atca at 1.7 cm .
the increase in the flux of sgr a * is seen with a rise and fall time scale of about 2 hours .
the 1.5 , 1.7 cm and 7 mm variability analysis is not inconsistent with the time scale at which significant power has been reported at 3 mm ( mauerhan et al . 2005 ) .
furthermore , the time scales for rise and fall of flares in radio wavelengths are longer than in the near - ir wavelengths discussed above .
figure 7 shows the simultaneous light curves of sgr a * during the first epoch in march 2004 based on observations made with xmm , cso at 450 and 850 @xmath1 m , bima at 3 mm and vla at 7 mm .
the flux of sgr a * remained constant at submillimeter and millimeter wavelengths throughout the first epoch , while we observed an x - ray flare ( top panel ) at the end of the xmm observations and hourly variations at radio wavelengths ( bottom panel ) at a level of 10 - 20% .
this implies that the contamination from the radio and x - ray transient cxogcj174540.0 - 290031 , which is located a few arcseconds from sgr a * , is minimal ; thus the measured fluxes should represent the quiescent flux of sgr a * . these data are used to construct a spectrum of sgr a * , as discussed in section 5 .
as for the x - ray flare , there were no simultaneous observations with other instruments during the period in which the x - ray flare took place .
thus , we can not state whether there was any variability at other wavelengths during the x - ray flare in this epoch .
figure 8 shows the simultaneous light curve of sgr a * based on the second epoch of observations using xmm , hst , cso and vla .
porquet et al .
( 2005 ) noted clear 8-hour periodic dips due to the eclipses of the transient as seen clearly in the xmm light curve .
sgr a * shows clear variability at near - ir and submillimeter wavelengths , as discussed below .
one of the most exciting results of this observing campaign is the detection of a cluster of near - ir flares in the second observing window which appears to have an x - ray counterpart .
the long temporal coverage of the xmm - newton and hst observations has led to the detection of a simultaneous flare in both bands .
however , the rest of the near - ir flares detected in the fourth and sixth observing windows ( see figure 3 ) show no x - ray counterparts at the level that could be detected with xmm .
the two brightest near - ir flares in the second and fourth observing windows are separated by roughly two days and appear to show similar temporal and spectral behaviors .
figure 9 shows the simultaneous near - ir and x - ray emission , with amplitude increases of @xmath215% and 100% for the peak emission , respectively .
we believe that these flares are associated with each other for the following reasons .
first , x - ray and near - ir flares are known to occur from sgr a * , as previous high - resolution x - ray and near - ir observations have pinpointed the origin of the flare emission .
although near - ir flares could be active up to 40% of the time , x - ray flares are generally rare , with a 1% probability of occurrence based on a week of observation with xmm .
second , although the chance coincidence of a near - ir flare having an x - ray counterpart could be high , what is clear from figure 9 is the way the near - ir and x - ray flares track each other on short time scales .
both the near - ir and x - ray flares show similar morphology in their light curves as well as similar duration with no apparent delay .
this leads us to believe that both flares come from the same region close to the event horizon of sgr a * . the x - ray light curve shows a double - peaked maximum flare near day 155.95 which appears to be remarkably in phase with the strongest double - peaked near - ir flares , though with different amplitudes .
we also note a similar trend in the sub - flares near day 155.9 in figure 9 , which show similar phase but different amplitudes .
lastly , since x - ray flares occur on average once a day , the lack of x - ray counterparts to the other near - ir flares clearly indicates that not all near - ir flares have x - ray counterparts .
this fact has important implications for the emission mechanism , as described below . with the exception of the september 4 , 2004 observation toward the end of the second observing campaign , the large error bars of the submillimeter data do not allow us to determine short time scale variability in this wavelength domain with high confidence .
we notice a significant increase in the 850@xmath1 m emission about 22 hours after the simultaneous x - ray / near - ir flare took place , as seen in figure 8 .
we also note that the highest 850@xmath1 m flux in this campaign , [email protected] jy , is detected toward the end of the submillimeter observations .
this corresponds to a 5.4@xmath23 increase in the 850@xmath1 m flux .
figure 10 shows simultaneous light curves of sgr a * at 850@xmath1 m and near - ir wavelengths .
the strongest near - ir flare occurred at the beginning of the 6th observing window with a decay time of about 40 minutes followed by the second flare about 200 minutes later with a decay time of about 20 minutes .
the submillimeter light curve shows a peak about 160 minutes after the strongest near - ir flare that was detected in the second campaign .
the duration of the submillimeter flare is about two hours .
given that there is no near - ir data during one half of every hst orbit , and that the 850@xmath1 m data were sampled every 20 minutes compared to the 32 sec sampling rate at near - ir wavelengths , it is not clear whether the submillimeter emission is correlated simultaneously with the second bright near - ir flare , or is produced by the first near - ir flare with a delay of 160 minutes , as seen in figure 10 .
what is significant is that the submillimeter data suggest that the 850@xmath1 m emission is variable and correlated with the near - ir data . using optical depth and polarization arguments
, we argue below that the submillimeter and near - ir flares are simultaneous .
theoretical studies of accretion flow near sgr a * show that the flare emission in near - ir and x - rays can be accounted for in terms of the acceleration of particles to high energies , producing synchrotron emission as well as ics ( e.g. , markoff et al .
2001 ; liu & melia 2001 ; yuan , markoff & falcke 2002 ; yuan , quataert & narayan 2003 , 2004 ) .
observationally , the near - ir flares are known to be due to synchrotron emission based on spectral index and polarization measurements ( e.g. , genzel et al .
2003 and references therein ) .
we argue that the x - ray counterparts to the near - ir flares are unlikely to be produced by synchrotron radiation in the typical @xmath33 g magnetic field inferred for the disk in sgr a * for two reasons .
first , emission at 10kev would be produced by 100gev electrons , which have a synchrotron loss time of only 20seconds , whereas individual x - ray flares rise and decay on much longer time scales .
second , the observed spectral index of the x - ray counterpart , @xmath34 ( @xmath35 ) , does not match the near - ir to x - ray spectral index .
the observed x - ray 2 - 10 kev flux 6@xmath36 erg @xmath10 s@xmath9 corresponds to a differential flux of 2@xmath36 erg @xmath37 s@xmath9 kev@xmath9 ( 0.83 @xmath1jy ) at 1 kev .
the extinction - corrected ( for @xmath38mag ) peak flux density of the near - ir ( 1.6@xmath1 m ) flare is @xmath210.9 mjy .
the spectral index between x - ray and near - ir is 1.3 , far steeper than the index of 0.6 determined for the x - ray spectrum .
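the two - point spectral index quoted here can be reproduced directly from the fluxes above , with s proportional to frequency to the power of minus alpha :

```python
import math

# Two-point spectral index alpha, with S_nu ~ nu**(-alpha), between the
# dereddened 1.6 micron flare peak and the 1 keV differential flux.
H_EV_S = 4.1357e-15                  # Planck constant, eV s
nu_nir = 3.0e10 / 1.6e-4             # 1.6 micron in Hz (c / lambda)
nu_x = 1.0e3 / H_EV_S                # 1 keV in Hz

s_nir = 10.9e-3                      # Jy, dereddened near-IR peak flux
s_x = 0.83e-6                        # Jy, differential flux at 1 keV

alpha = math.log(s_nir / s_x) / math.log(nu_x / nu_nir)
print(f"alpha(near-IR to X-ray) = {alpha:.2f}")   # ~1.3, as quoted
```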
instead , we favor an inverse compton model for the x - ray emission , which naturally produces a strong correlation with the near - ir flares . in this picture ,
submillimeter photons are upscattered to x - ray energies by the electrons responsible for the near - ir synchrotron radiation .
the fractional variability at submillimeter wavelengths is less than 20% , so we first consider quiescent submillimeter photons scattering off the variable population of gev electrons that emit in the near - ir wavelengths . in the ics picture , the spectral index of the near - ir flare must match that of the x - ray counterpart , i.e. @xmath19 = 0.6 .
unfortunately , we were not able to determine the spectral index of near - ir flares .
recent measurements of the spectral index of near - ir flares appear to vary considerably , ranging from 0.5 to 4 ( eisenhauer et al . 2005 ; ghez et al . 2005 ) .
the de - reddened peak flux of 10.9 mjy ( or 7.5 mjy from the relative flux measurement described in section 2.2.2 ) with a spectral index of 0.6 is consistent with the picture that brighter near - ir flares have harder spectral indices ( eisenhauer et al .
2005 ; ghez et al . 2005 ) .
assuming an electron spectrum extending from 3gev down to 10mev and neglecting the energy density of protons , the equipartition magnetic field is 11 g , with equipartition electron and magnetic field energy densities of @xmath25 erg @xmath39 .
the electrons emitting synchrotron at 1.6@xmath1 m then have typical energies of 1.0 gev and a loss time of 35min .
1 gev electrons will compton scatter 850@xmath1 m photons up to 7.8 kev ; since the peak of the emission spectrum of sgr a * falls in the submillimeter regime , it is natural to consider the upscattering of the quiescent submillimeter radiation field close to sgr a * . we assume that this submillimeter emission arises from a source diameter of 10 schwarzschild radii ( r@xmath3 ) , or 0.7 au ( adopting a black hole mass of 3.7@xmath40 @xmath41 ) . in order to compute the x - ray flux , we need the spectrum of the seed photons , which is not known .
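the numbers in this paragraph can be checked with standard order - of - magnitude synchrotron and inverse - compton relations ; the prefactor 4.2e6 hz in the synchrotron critical - frequency formula is a common convention and an assumption here , so the recovered values agree only approximately with those quoted :

```python
import math

# Consistency check of the ICS picture: electrons that radiate synchrotron
# at 1.6 micron in the ~11 G equipartition field, and the energy to which
# they upscatter 850 micron seed photons.  Uses the approximations
# nu_sync ~ 4.2e6 * gamma**2 * B (Hz, B in gauss) and
# eps_out ~ (4/3) * gamma**2 * eps_in; prefactors vary between conventions.
B = 11.0                              # gauss (equipartition value from the text)
nu_nir = 3.0e10 / 1.6e-4              # 1.6 micron, Hz

gamma = math.sqrt(nu_nir / (4.2e6 * B))
e_gev = gamma * 0.511e-3              # electron energy, GeV

eps_in = 1.23984 / 850.0              # 850 micron photon energy, eV (hc/lambda)
eps_out_kev = (4.0 / 3.0) * gamma**2 * eps_in / 1e3

print(f"gamma ~ {gamma:.0f}, E ~ {e_gev:.2f} GeV, upscattered ~ {eps_out_kev:.1f} keV")
```

this recovers electron energies near 1 gev and upscattered photon energies near 8 kev , close to the 7.8 kev quoted above .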
we assume that the measured submillimeter flux ( 4 jy at 850 @xmath1 m ) , and the product of the spectrum of the near - ir emitting particles and the submillimeter flux @xmath42 , are of the same order over a decade in frequency .
the predicted ics x - ray flux for this simple model is @xmath43 erg @xmath10 s@xmath9 kev@xmath9 , roughly half of the observed flux .
the second case we consider to explain the origin of x - ray emission is that near - ir photons scatter off the population of @xmath250 mev electrons that emit in submillimeter wavelengths .
if synchrotron emission from a population of lower - energy ( @xmath44 mev ) electrons in a similar source region ( diameter @xmath45 r@xmath3 , @xmath46 g ) is responsible for the quiescent emission at submillimeter wavelengths , then upscattering of the flare 's near - ir emission by this population will produce a similar contribution to the flux of the x - ray counterpart , and the predicted net x - ray flux of @xmath47 erg @xmath37 s@xmath9 kev@xmath9 is similar to that observed . the two physical pictures of ics described above produce similar x - ray fluxes within the inner diameter @xmath45 r@xmath3 ( @xmath46 g ) and therefore can not be distinguished from each other . on the other hand , if the near - ir flares arise from a region smaller than that of the quiescent submillimeter seed photons , then the first case , in which the quiescent submillimeter photons scatter off the gev electrons that emit in the near - ir , is the more likely mechanism for producing the x - ray flares .
the lack of an x - ray counterpart to every detected near - ir flare can be explained naturally in the ics picture presented here .
it can be understood in terms of variability in the magnetic field strength or spectral index of the relativistic particles , two important parameters that determine the relationship between the near - ir and ics x - ray flux . a large variation of the spectral index in near - ir wavelengths has been observed ( ghez et al .
2005 ; eisenhauer et al .
figure 11a shows the ratio of the fluxes at 1 kev and 1.6 @xmath1 m against the spectral index for different values of the magnetic field .
note that there is a minimum field set by requiring the field energy density to be similar to or larger than the relativistic particle energy .
if , as is likely , the magnetic field is ultimately responsible for the acceleration of the relativistic particles , then the field pressure must be stronger or equal to the particle energy density so that the particles are confined by the field during the acceleration process .
it is clear that hardening ( flattening ) of the spectral index and/or increasing the magnetic field reduces the x - ray flux at 1 kev relative to the near - ir flux . on the other hand , softening ( steepening ) the spectrum can produce strong x - ray flares .
this occurs because a higher fraction of relativistic particles have lower energies and are , therefore , available to upscatter the submillimeter photons .
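the statement above , that a softer spectrum puts a larger fraction of the electrons at low energies , can be illustrated with a toy power law ( our sketch ; the cutoffs are arbitrary ) .

```python
# Illustration (ours): for a power-law N(gamma) ~ gamma^-p between gamma_min
# and gamma_max, the fraction of electrons in the lowest energy decade grows
# as the spectrum softens (larger p), so more particles are available to
# upscatter submillimeter photons into X-rays.
from math import log

def low_energy_fraction(p, g_min=20.0, g_break=200.0, g_max=2000.0):
    def n_between(a, b):   # integral of gamma^-p d(gamma)
        if abs(p - 1.0) < 1e-9:
            return log(b / a)
        return (b**(1 - p) - a**(1 - p)) / (1 - p)
    return n_between(g_min, g_break) / n_between(g_min, g_max)

for p in (1.5, 2.0, 3.0):
    print(p, low_energy_fraction(p))   # fraction rises monotonically with p
```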
this is consistent with the fact that the strongest x - ray flare that has been detected from sgr a * shows the softest ( steepest ) spectral index ( porquet et al .
2003 ) . moreover , the sub - flares presented in near - ir and in x - rays , as shown in figure 9 , appear to indicate that the ratio of x - ray to near - ir flux ( s@xmath48 to s@xmath49 ) varies in two sets of double - peaked flares , as described earlier .
we note that an x - ray spike at 155.905 days has a 1.90 @xmath1 m ( red color ) counterpart .
the preceding 1.87 @xmath1 m ( green color ) data points are all steadily decreasing from the previous flare , but then the 1.90@xmath1 m flux suddenly increases to a level of at least @xmath50 . the flux ratio corresponding to the peak x - ray flare ( figure 9 ) is high , which argues that the flare has either a soft spectral index and/or a low magnetic field .
since the strongest x - ray flare that has been detected thus far has the steepest spectrum ( porquet et al . 2003 ) , we believe that the observed variation of the flux ratio in sgr a * is due to the variation of the spectral index of individual near - ir flares . since most of the observed x - ray sub - flares are clustered temporally , it is plausible to consider that they all arise from the same location in the disk .
this implies that the strength of the magnetic field does not vary between sub - flares . as discussed earlier , we can not determine whether the submillimeter flare at 850@xmath1 m is correlated with a time delay of 160 minutes or is simultaneous with the detected near - ir flares ( see fig . 10 ) . considering that near - ir flares occur relatively continuously , up to 40% of the time , so that a chance coincidence between the near - ir and submillimeter flares is possible , the evidence for a delayed or simultaneous correlation between these two flares is not clear .
however , spectral index measurements in the submillimeter domain , as well as a jump in the polarization position angle at submillimeter wavelengths , suggest that the transition from the optically thick to the optically thin regime occurs near 850 and 450 @xmath1 m ( e.g. , aitken et al . 2000 ; agol 2000 ; melia et al . 2000 ; d. marrone , private communication ) .
if so , it is reasonable to consider that the near - ir and submillimeter flares are simultaneous with no time delay and these flares are generated by synchrotron emission from the same population of electrons .
comparing the peak flux densities of 11 mjy and 0.6 jy at 1.6@xmath1 m and 850@xmath1 m , respectively , gives a spectral index @xmath51 ( if we use a relative flux of 7.6 mjy at 1.6@xmath1 m , the @xmath52 ) .
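the spectral index arithmetic here is simple enough to spot - check ( our sketch ; fluxes in mjy , with s_nu proportional to nu^-alpha ) .

```python
# Illustration (ours): two-point spectral index alpha, with S_nu ~ nu^-alpha,
# from the quoted peak flux densities at 1.6 and 850 microns.
from math import log

def spectral_index(s1_mjy, lam1_um, s2_mjy, lam2_um):
    # the frequency ratio is the inverse of the wavelength ratio
    return log(s2_mjy / s1_mjy) / log(lam2_um / lam1_um)

print(spectral_index(11.0, 1.6, 600.0, 850.0))   # ~0.64
print(spectral_index(7.6, 1.6, 600.0, 850.0))    # ~0.70
```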
this assumes that the population of synchrotron emitting particles in near - ir wavelengths with typical energies of @xmath21 gev could extend down to energies of @xmath53 mev .
a low - energy cutoff of 10 mev was assumed in the previous section to estimate the x - ray flux due to ics of seed photons . in this picture , the enhanced submillimeter emission , like near - ir emission , is mainly due to synchrotron and arises from the inner 10r@xmath3 of sgr a@xmath5 with a magnetic field of 10 g .
similar to the argument made in the previous section , the lack of one - to - one correlation between near - ir and submillimeter flares could be due to the varying energy spectrum of the particles generating near - ir flares .
a hard ( flat ) spectrum of radiating particles will be less effective in producing submillimeter emission , whereas a soft ( steep ) spectrum of particles should generate enhanced synchrotron emission at submillimeter wavelengths .
this also implies that the variability of steep spectrum near - ir flares should be correlated with submillimeter flares .
the synchrotron lifetime of particles producing 850@xmath1 m is about 12 hours , which is much longer than the 35min time scale for the gev particles responsible for the near - ir emission .
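the synchrotron numbers quoted here can be checked to order of magnitude with the standard scaling relations ; the sketch below is ours , with approximate prefactors ( pitch - angle averaging changes them by factors of a few ) , not the paper s calculation .

```python
# Order-of-magnitude sketch (ours), using standard synchrotron relations:
#   nu_c ~ 4.2e6 * gamma^2 * B   [Hz, B in gauss]
#   t_cool ~ 7.75e8 / (gamma * B^2)   [s]
# Only the scalings and rough magnitudes matter here; exact prefactors
# depend on pitch-angle averaging.
from math import sqrt

C_LIGHT_UM_HZ = 3.0e14          # speed of light in micron*Hz

def gamma_for_wavelength(lam_um, b_gauss):
    nu = C_LIGHT_UM_HZ / lam_um
    return sqrt(nu / (4.2e6 * b_gauss))

def cooling_time_s(gamma, b_gauss):
    return 7.75e8 / (gamma * b_gauss**2)

B = 10.0   # gauss, as adopted in the text
g_nir = gamma_for_wavelength(1.6, B)     # near-IR-emitting electrons
g_sub = gamma_for_wavelength(850.0, B)   # submillimeter-emitting electrons
print(g_nir * 0.511e-3, "GeV")           # ~GeV, as quoted for the near-IR
print(cooling_time_s(g_sub, B) / 3600.0, "hr")   # tens of times longer than near-IR
```

the ratio of the two cooling times is just the inverse ratio of the lorentz factors , of order 20 , consistent with hours versus tens of minutes .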
a similar argument can also be made for the near - ir flares , since we detect the rise or fall time scale of some of the near - ir flares to be about ten minutes , which is shorter than the synchrotron cooling time scale .
therefore we conclude that the duration of the submillimeter and near - ir flaring must be set by dynamical mechanisms such as adiabatic expansion rather than frequency - dependent processes such as synchrotron cooling .
the fact that the rise and fall time scale of near - ir and submillimeter flare emission is shorter than their corresponding synchrotron cooling time scale is consistent with adiabatic cooling .
if we make the assumption that the 33-minute time scale detected in the near - ir power spectrum analysis is real , this argument can also be used to rule out the possibility that this time scale is due to the near - ir cooling time scale .

as described earlier , a soft @xmath6-ray integral source , igrj17456 - 2901 , possibly coincident with sgr a * , has a luminosity of 4.8@xmath16 erg s@xmath9 between 20 and 120 kev .
the spectrum is fitted by a power law with spectral index @xmath54 ( belanger et al .
here , we make the assumption that this source is associated with sgr a * and apply the same ics picture that we argued above for the production of x - ray flares between 2 and 10 kev . the difference between the 2 - 10 kev flares and igrj17456 - 2901 is that the latter source is detected between 20 and 120 kev with a steep spectrum and is persistent , with no time variability apparent on the long time scales probed by the integral observations .
figure 11b shows the predicted peak luminosity between 20 and 120 kev as a function of the spectral index of relativistic particles for a given magnetic field .
in contrast to the result where the softer spectrum of particles produces higher ics x - ray flux at 1 kev , the harder spectrum produces higher ics soft @xmath6-ray emission .
figure 11b shows that the observed luminosity of 4.8@xmath16 erg s@xmath9 with @xmath19 = 2 can be matched well if the magnetic field ranges between 1 and 3 g ; however , the observed luminosity must be scaled by at least a factor of three to account for the likely 30 - 40% duty cycle of the near - ir flares and the consequent reduction in the time - averaged soft gamma - ray flux .
this is also consistent with the possibility that much or all of the detected soft @xmath6-ray emission arises from a collection of sources within the inner several arcminutes of the galactic center .
in order to get a simultaneous spectrum of sgr a * , we used the data from both epochs of observations .
as pointed out earlier , the first epoch data probably represents best the quiescent flux of sgr a * across its spectrum whereas the flux of sgr a * includes flare emission during the second epoch .
figure 12 shows power emitted for a given frequency regime as derived from simultaneous measurements from the first epoch ( in blue solid line ) . we have used the mean flux and the corresponding statistical errors of each measurement for each day of observations for the first epoch .
since there were no near - ir measurements and no x - ray flare activity , we have added the quiescent flux of 2.8 and 1.3 mjy at 1.6 and 3.8 @xmath1 m , respectively ( genzel et al . 2003 ; ghez et al . 2005 ) and 20 njy between 2 and 8 kev ( baganoff et al .
2001 ) to construct the spectrum shown in figure 12 . for illustrative purposes ,
the hard @xmath6-ray flux in the tev range ( aharonian et al .
2004 ) is also shown in figure 12 .
the f@xmath55 spectrum peaks at 350 @xmath1 m , whereas f@xmath56 peaks at 850 @xmath1 m in the submillimeter domain . the flux at wavelengths between 2 and 3 mm , as well as between 450 and 850 @xmath1 m , appears to be constant , while the emission drops rapidly toward radio and x - ray wavelengths . the spectrum at near - ir wavelengths is thought to be consistent with optically thin synchrotron emission , whereas the emission at radio wavelengths is due to optically thick nonthermal emission .
the spectrum of a flare is also constructed using the flux values in the observing window when the x - ray / near - ir flare took place , and is presented in figure 12 as a red dotted line .
it is clear that the powers emitted in radio and millimeter wavelengths are generally very similar to each other in both epochs whereas the power is dramatically changed in near - ir and x - ray wavelengths .
we also note that the slope of the power generated between x - rays and near - ir wavelengths does not seem to change during the quiescent and flare phase .
however , the flare substructures shown in figure 9 show clearly that the spectrum between the near - ir and x - ray subflares must be varying .
the soft and hard @xmath6-ray fluxes based on integral and hess ( belanger et al . 2005b ; aharonian et al .
2004 ) are also included in the plot as black dots .
it is clear that f@xmath55 spectrum at tev is similar to the observed values at low energies .
this plot also shows that the high flux at 20 kev is an upper limit to the flux of sgr a * because of the contribution from confusing sources within a 13@xmath14 resolution of integral .
the simultaneous near - ir and submillimeter flare emission is a natural consequence of optically thin emission .
thus , both near - ir and submillimeter flare emission are nonthermal and no delay is expected between the near - ir and submillimeter flares in this picture .
we also compare the quiescent flux of sgr a * with a flux of 2.8 mjy at 1.6@xmath1 m with the minimum flux of about 2.7 jy at 850@xmath1 m detected in our two observing campaigns .
the spectral index that is derived is similar to that derived when a simultaneous flare activity took place in these wavelength bands , though there is much uncertainty as to what the quiescent flux of sgr a * is in near - ir wavelengths .
if we use these measurements at face value , this may imply that the quiescent flux of sgr a * in near - ir and submillimeter could in principle be coupled to each other . the contribution of nonthermal emission to the quiescent flux of sgr a * at submillimeter wavelength is an observational question that needs to be determined in future study of sgr a*.
in the context of accretion and outflow models of sgr a * , a variety of synchrotron and ics mechanisms probing parameter space has been invoked to explain the origin of flares from sgr a*. a detailed analysis of previous models of flaring activity , the acceleration mechanism and their comparison with the simple modeling given here are beyond the scope of this work .
many of these models have considered a broken power - law distribution or energy cut - offs for the nonthermal particles , or have assumed thermal relativistic particles , to explain the origin of submillimeter emission ( e.g. , melia & falcke 2001 ; yuan , markoff & falcke 2002 ; liu & melia 2002 ; yuan , quataert & narayan 2003 , 2004 ; liu , petrosian & melia 2004 ; atoyan & dermer 2004 ; eckart et al . 2004 , 2005 ; goldston , quataert & igumenshchev 2005 ; liu , melia & petrosian 2005 ; gillessen et al . 2005 ) . the correlated near - ir and x - ray flaring which we have observed is consistent with a model in which the near - ir synchrotron emission is produced by a transient population of @xmath2gev electrons in a @xmath5710 g magnetic field of size @xmath58 .
although ics and synchrotron mechanisms have been used in numerous models to explain the quiescent and flare emission from sgr a * since the first discovery of x - ray flare was reported ( e.g. , baganoff 2001 ) , the simple model of x - ray , near - ir and submillimeter emission discussed here is different in that the x - ray flux is produced by a roughly equal mix of ( a ) near - ir photons that are up - scattered by the 50mev particles responsible for the quiescent submillimeter emission from sgr a * , and/or ( b ) submillimeter photons up - scattered from the gev electron population responsible for the near - ir flares .
thus , the degeneracy in these two possible mechanisms can not be removed in this simple model and obviously a more detailed analysis is needed .
in addition , we predict that the lack of a correlation between near - ir and x - ray flare emission can be explained by the variation of spectral index and/or the magnetic fields .
the variation of these parameters in the context of the stochastic acceleration model of flaring events has also been explored recently ( liu , melia and petrosian 2005 ; gillessen et al .
2005 ) .
the similar durations of the submillimeter and near - ir flares imply that the transient population of relativistic electrons loses energy by a dynamical mechanism such as adiabatic expansion rather than frequency - dependent processes such as synchrotron cooling .
the dynamical time scale 1/@xmath59 ( where @xmath59 is the rotational angular frequency ) is the natural expansion time scale of a build up of pressure .
this is because the time scale to establish vertical hydrostatic equilibrium in the disc at a given radius is the same as the dynamical time scale ; in other words , the time for a sound wave to run vertically across the disc is h / c@xmath60 = 1/@xmath59 .
the 3040 minute time scale can then be identified with the accretion disk s orbital period at the location of the emission region , yielding an estimate of @xmath61 for the disc radius where the flaring is taking place .
this estimate has assumed that the black hole is non - rotating ( a / m = 0 ) .
thus , the orbiting gas corresponding to this period has a radius of 3.3 r@xmath3 , which is greater than the size of the last stable orbit . assuming that the significant power at the 33-minute time scale is real , it confirms our source - size assumption in the simple ics model for the x - ray emission .
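the 3.3 r@xmath3 figure follows from kepler s third law for a non - rotating black hole ; a quick check ( our sketch ) :

```python
# Sketch (ours): Keplerian radius for a 33-minute orbital period around a
# non-rotating black hole of 3.7e6 solar masses, in Schwarzschild radii.
from math import pi

G = 6.674e-8          # gravitational constant, cgs
C = 2.998e10          # speed of light, cm/s
M_SUN = 1.989e33      # solar mass, g

def kepler_radius_in_rs(period_s, mass_msun):
    gm = G * mass_msun * M_SUN
    r = (gm * (period_s / (2 * pi))**2)**(1.0 / 3.0)   # Kepler's third law
    r_s = 2 * gm / C**2                                 # Schwarzschild radius
    return r / r_s

print(kepler_radius_in_rs(33 * 60, 3.7e6))   # ~3.3 R_s, outside the last stable orbit at 3 R_s
```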
if this general picture is correct , then more detailed hot - spot modeling of the emission from the accreting gas may be able to extract the black hole mass and spin from spot images and light curves of the observed flux and polarization ( bromley , melia & liu 2001 ; melia et al . 2001 ; broderick and loeb 2005a , b ) . assuming the 33-minute duration of most of the near - ir flares is real , this time scale is also comparable with the synchrotron loss time of the near - ir - emitting ( @xmath62gev ) electrons in a 10 g field .
this time scale is also of the same order as the inferred dynamical time scale in the emitting region .
this is not surprising considering that if particles are accelerated in a short initial burst and are confined to a blob that subsequently expands on a dynamical time scale , the characteristic age of the particles is just the expansion time scale .
the duration of the submillimeter flare presented here ( roughly one hour ) appears to be slightly longer than the duration of the near - ir flares ( about 20 - 40 minutes ) ( see also eckart et al . 2005 ) .
this is consistent with the picture that , in the context of an outflow from sgr a * , the blob emitting in the near - ir is more compact than that emitting at submillimeter wavelengths .
the spectrum of energetic particles should then steepen above the energy for which the synchrotron loss time is shorter than the age of the particles , i.e. , in excess of a few gev .
this is consistent with a steepening of the flare spectrum at wavelengths shorter than a micron .
the picture described above implies that flare activity drives mass - loss from the disk surface .
the near - ir emission is optically thin , so we can estimate the mass of relativistic particles in a blob ( assuming equal numbers of protons and electrons ) and the time scale between blob ejections . if the typical duration of a flare is 30 minutes and flares occur 40% of the time , the time scale between flares is estimated to be @xmath275 minutes . assuming equipartition of particles and field with an assumed magnetic field of 11 g , and using a spectral index of the near - ir flare @xmath34 identical to its x - ray counterpart , the density of relativistic electrons is then estimated to be n@xmath63 @xmath39 ( steepening the spectral index to 1 increases the particle density to 4.6@xmath64 @xmath39 ) .
the volume of the emitting region is estimated to be @xmath65 .
the mass of a blob is then @xmath66 g if we use a typical flux of 3.9 mjy at 1.6@xmath1 m .
the time - averaged mass - loss rate is estimated to be @xmath67 .
if thermal gas is also present at a temperature of t@xmath68 k with the same energy density as the field and relativistic particles , the total mass - loss due to thermal and nonthermal particles increases to @xmath69 yr@xmath9 ( this estimate would increase by a factor of 2.5 if we use a flux of 9.3 mjy for a typical flare ) . using a temperature of 10@xmath70 k , this estimate is reduced by a factor of 20 .
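the bookkeeping behind the time - averaged rates above can be sketched as follows ( our illustration ; the blob mass used in the example is a placeholder , not the paper s value ) :

```python
# Sketch (ours): if flares of duration t_flare occur a fraction f of the
# time, the mean interval between flare onsets is t_flare / f, and the
# time-averaged mass-loss rate is the blob mass divided by that interval.
SEC_PER_YR = 3.156e7

def flare_interval_min(t_flare_min, duty_cycle):
    return t_flare_min / duty_cycle

def mass_loss_rate_g_per_yr(blob_mass_g, t_flare_min, duty_cycle):
    interval_s = flare_interval_min(t_flare_min, duty_cycle) * 60.0
    return blob_mass_g / interval_s * SEC_PER_YR

print(flare_interval_min(30.0, 0.4))             # ~75 minutes, as in the text
print(mass_loss_rate_g_per_yr(1e20, 30.0, 0.4))  # hypothetical blob mass, for scale only
```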
it is clear from these estimates that the mass - loss rate is much less than the bondi accretion rate based on x - ray measurements ( baganoff et al .
similarly , recent rotation measure polarization measurements at submillimeter wavelength place a constraint on the accretion rate ranging between 10@xmath71 and 10@xmath72 yr@xmath9 ( marrone et al .
we have presented the results of an extensive study of the correlation of flare emission from sgr a * in several different bands . on the observational side ,
we have reported the detection of several near - ir flares , two of which showed x - ray and submillimeter counterparts . the flare emission at submillimeter wavelengths and its apparent simultaneity with a near - ir flare are both shown for the first time . also , remarkable substructures in the x - ray and near - ir light curves are noted , suggesting that both flares are simultaneous with no time delays .
what is clear from the correlation analysis of near - ir data is that the relativistic electrons responsible for near - ir emission are being accelerated for a high fraction of the time ( 30 - 40% ) , with a wide range of power - law indices .
this is supported by the ratio of flare emission in near - ir to x - rays .
in addition , the near - ir data shows a marginal detection of periodicity on a time scale of @xmath232 minutes .
theoretically , we have used a simple ics model to explain the origin of x - ray and soft @xmath6-ray emission .
the mechanism in which the seed submillimeter photons are up - scattered by the gev electrons that produce near - ir synchrotron emission has been used to explain the origin of simultaneous near - ir and x - ray flares .
we also explained that the submillimeter flare emission is due to synchrotron emission with relativistic particle energies extending down to @xmath250 mev .
lastly , the equal flare time scales in submillimeter and near - ir wavelengths imply that the burst of emission expands and cools on a dynamical time scale before it leaves sgr a*. we suspect that the simple outflow picture presented here shows some of the characteristics that may take place in micro - quasars such as grs 1915 + 105 ( e.g. , mirabel and rodriguez 1999 ) .

acknowledgments : we thank j. mauerhan and m. morris for providing us with an algorithm to generate the power spectrum of noise , and l. kirby , j. bird , and m. halpern for assistance with the cso observations . we also thank a. miyazaki for providing us the nma data prior to publication .
aschenbach , b. 2005 , in growing black holes : accretion in a cosmological context , proceedings of the mpa / eso / mpe / usm joint astronomy conference held at garching , germany , 21 - 25 june 2004 , eds . a. merloni , s. nayakshin & r. a. sunyaev , eso astrophysics symposia ( berlin : springer ) , p. 302 - 303 ( arxiv : astro - ph/0410328 )

stolovy , s.r . , mccarthy , d.w . , melia , f. , rieke , g. , rieke , m. j. & yusef - zadeh , f. 1999 , in the central parsecs of the galaxy , asp conference series , vol . 186 , eds . h. falcke , a. cotera , w. j. duschl , f. melia & m. j. rieke , p. 39 | although sgr a * is known to be variable in radio , millimeter , near - ir and x - rays , the correlation of the variability across its spectrum has not been fully studied . here
we describe highlights of the results of two observing campaigns in 2004 to investigate the correlation of flare activity in different wavelength regimes , using a total of nine ground and space - based telescopes .
we report the detection of several new near - ir flares during the campaign based on _ hst _ observations
. the level of near - ir flare activity can be as low as @xmath0 mjy at 1.6 @xmath1 m and continuous up to @xmath240% of the total observing time , thus placing better limits than ground - based near - ir observations . using the nicmos instrument on the _ hst _ , the _ xmm - newton _ and _ caltech submillimeter _ observatories , we also detect a simultaneous bright x - ray and near - ir flare in which we observe for the first time correlated substructures , as well as simultaneous submillimeter and near - ir flaring .
the x - ray emission arises from the population of near - ir - synchrotron - emitting relativistic particles , which scatter submillimeter seed photons within the inner 10 schwarzschild radii ( r@xmath3 ) of sgr a * up to x - ray energies .
in addition , using the inverse compton scattering picture , we explain the high - energy 20 - 120 kev emission from the direction toward sgr a * , and the lack of one - to - one x - ray counterparts to near - ir flares , by the variation of the magnetic field and the spectral index distributions of this population of nonthermal particles . in this picture , the variability of submillimeter emission during a near - ir flare is produced by the low - energy component of the population of particles emitting synchrotron near - ir emission . based on the measurements of the duration of flares in near - ir and submillimeter wavelengths , we argue that the cooling could be due to adiabatic expansion , with the implication that flare activity may drive an outflow .
|
the formation of a plasma of quarks and gluons has been a long - standing goal of contemporary nuclear science for a few decades now , and it is fair to say that this goal has been attained with the advent of rhic ( the relativistic heavy ion collider ) at brookhaven national laboratory . indeed , new physics has been discovered by the experimental program associated with this facility , which now enters an exciting phase of characterization and of precision physics . in this context , real and virtual photons are penetrating probes , as they suffer essentially no final - state interaction .
they constitute observables that are complementary to other hard probes , such as qcd jets .
we report here on calculations of jets and real photons at high @xmath0 , and of photon - triggered hadron distributions in relativistic nuclear collisions at rhic .
arguably , one of the most sensational observations at rhic has been the dramatic suppression of high @xmath0 hadrons in central a+a collisions , compared with those measured in p+p events ( multiplied by the number of binary collisions in a nucleus - nucleus event ) .
this is quantified by the nuclear modification factor ( in a given centrality range , which is correlated with the impact parameter , @xmath1 ) : @xmath2 . at high transverse momentum ,
hadrons mostly originate from the fragmentation of qcd jets .
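the nuclear modification factor just defined can be written as a one - line ratio ; the sketch below is ours , with placeholder spectra .

```python
# Sketch (ours): the nuclear modification factor
#   R_AA(pT) = (dN_AA/dpT) / (N_coll * dN_pp/dpT),
# computed bin by bin; R_AA < 1 signals suppression of high-pT hadrons.
def r_aa(dn_aa_dpt, dn_pp_dpt, n_coll):
    return [aa / (n_coll * pp) for aa, pp in zip(dn_aa_dpt, dn_pp_dpt)]

# toy numbers: a quenched A+A spectrum vs. a binary-scaled p+p spectrum
pp = [1.0, 0.1, 0.01]        # per-event p+p yield vs pT bin
aa = [900.0, 20.0, 2.0]      # per-event A+A yield, with N_coll = 1000
print(r_aa(aa, pp, 1000.0))  # R_AA well below 1 in the high-pT bins
```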
however , before they escape the medium , hard partons will interact with the thermal medium . in the finite - temperature field theory approach utilized here @xcite , the evolution of the parton distribution function @xmath3 can be modeled by a set of coupled fokker - planck equations , generically written as @xmath4 . in the first term , a parton @xmath5 of energy @xmath6 is born from a parent @xmath7 of energy @xmath8 ; the integral is over @xmath9 , which takes care of both energy loss and gain from the thermal medium .
the sum runs over different parton species and @xmath10 is the transition rate for the partonic process @xmath11 , calculated with the techniques of amy @xcite .
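the structure of these coupled equations can be illustrated with a toy discretization ( ours ; a single parton species and a placeholder transition rate stand in for the full set of species and the amy rates ) .

```python
# Toy sketch (ours) of the Fokker-Planck-type evolution described above:
# a distribution P on an energy grid, with a transition rate rate(j, w) for
# a parton in bin j to drop to bin j - w.  Each transfer is a loss term for
# the parent bin and a gain term for the daughter bin.
def evolve(P, rate, dt, steps):
    n = len(P)
    for _ in range(steps):
        dP = [0.0] * n
        for j in range(n):                  # parent energy bin
            for w in range(1, j + 1):       # energy lost, in bins
                r = rate(j, w) * P[j] * dt
                dP[j] -= r                  # loss from bin j
                dP[j - w] += r              # gain in bin j - w
        P = [p + d for p, d in zip(P, dP)]
    return P

P0 = [0.0] * 16 + [1.0]                     # delta function in the top bin
P1 = evolve(P0, lambda j, w: 0.05 / w, dt=1.0, steps=20)
print(sum(P1))                              # parton number is conserved here
```

the real calculation also includes gain terms from the medium ( so energy can flow both ways ) and couples quark and gluon distributions through the sum over species .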
the hard partons may lose or gain energy by the radiation of gluons , or by elastic collisions with the hot medium .
the relative importance of these two mechanisms is shown in figure [ rad - elastic ] .
[ figure : evolution of the number distribution of an initial 16 gev parton ( a delta function at e = 16 gev ) at various times , for t = 400 mev ; each vertical line corresponds to the average energy at the specified time . the calculation is from ref . @xcite ; see however ref . @xcite for an update bearing some qualitative differences . ]
there are many sources that produce photons in relativistic nuclear collisions , and they can be classified in two categories , according to whether or not they depend on the temperature of the medium .
prompt photons , emitted during the very first instants of a nuclear collision , will represent an important background in the context of the search for signals of the quark - gluon plasma .
those in turn can be decomposed in two distinct sources : direct and fragmentation photons @xcite .
the direct photons are produced from the early hard collisions between partons in the nucleons of the projectile and target nuclei . at leading order in the strong coupling ,
the photon production proceeds via quark - antiquark annihilation ( @xmath13 ) and qcd compton scattering ( @xmath14 ) .
there is also a contribution to real photons from fragmenting hard qcd jets .
this component is well - known for the case of , say , pp collisions @xcite , but in nucleus - nucleus collisions the jet can propagate through the quark - gluon plasma , interact , and thus lose energy prior to its fragmentation .
this introduces a non - trivial path and angle - dependence into the process of calculating the yield of photons produced through jet fragmentation @xcite .
the most obvious source that requires a finite temperature treatment is that of photon production through the interaction of thermal components either from the quark - gluon plasma side of the qcd phase diagram , or from the hadrons in the confined sector .
emission rates for those have been established @xcite . however , at high @xmath16 , thermal photons will not play an important role , but it is nevertheless important to have their emission rate under control .
another thermal component is that of jet - medium photons @xcite .
these photons , owing to phase space considerations , will have the energy of the initial hard particle and their measurement is of primary importance as they represent an independent confirmation of the conditions for jet quenching @xcite .
similarly , a jet propagating in the hot and dense medium will produce electromagnetic radiation through bremsstrahlung .
finally , the jet fragmentation photons will now acquire a temperature dependence , through the energy loss mechanism and the coupling to the thermal medium .
note that all of the sources discussed here and above exist for dileptons , with appropriate adjustments @xcite .
an important milestone in such calculations is first to verify the correctness of photon spectra in p+p collisions , and then of those measured in nuclear collisions @xcite .
importantly , the hydrodynamic evolution used here and in these cited works also yields a set of hadronic observables consistent with measurements .
going beyond one - body observables is important , as correlation studies will impose more stringent requirements on the underlying physics and might therefore highlight some theoretical differences that might otherwise have remained hidden . in this context ,
additional insight on jet quenching should be gained by triggering on a `` near side '' photon and measuring the distribution of charged hadrons in the opposite direction @xcite . for this purpose ,
a useful variable is the photon - triggered fragmentation function : @xmath17 , where @xmath18 and @xmath19 is the yield per trigger , i.e. , the momentum distribution of produced hadrons on the away side , given a trigger photon of momentum @xmath20 on the near side .
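operationally , the per - trigger yield can be built by histogramming the hadron - to - photon momentum ratio over photon - hadron pairs ; the sketch below is our illustration with toy events .

```python
# Sketch (ours): building the photon-triggered observable z_T = pT_h / pT_gamma
# from a toy list of (trigger photon pT, away-side hadron pTs) pairs, and
# counting the yield per trigger photon in z_T bins.
def ztrig_hist(events, edges):
    counts = [0.0] * (len(edges) - 1)
    n_trig = 0
    for pt_gamma, hadron_pts in events:
        n_trig += 1
        for pt_h in hadron_pts:
            zt = pt_h / pt_gamma
            for i in range(len(edges) - 1):
                if edges[i] <= zt < edges[i + 1]:
                    counts[i] += 1.0
    return [c / n_trig for c in counts]   # yield per trigger photon

events = [(10.0, [1.0, 3.0]), (12.0, [6.0])]   # toy (photon pT, hadron pTs) pairs
print(ztrig_hist(events, [0.0, 0.25, 0.5, 1.0]))
```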
a calculation of @xmath21 is shown in figure [ d_aa ] .
having a complete theory enables a breakdown of the contribution into its different components : at small @xmath22 , roughly half the away - side hadrons are tagged by direct photons , while at higher values of @xmath22 a large fraction of the hadrons on the opposite side come from jets tagged by jet - medium photons and fragmentation photons .
the agreement with the data is satisfying , and clearly the methods outlined here have great potential for future analyses as different parts of the available phase space will reveal different sources of photons .
finally , as the lhc will produce a plethora of jet events , the techniques briefly alluded to here will also be of great use in analyses there .
s. turbide , c. gale , s. jeon and g. d. moore , phys . rev . c * 72 * , 014906 ( 2005 ) .
p. arnold , g. d. moore and l. g. yaffe , jhep * 0206 * , 030 ( 2002 ) .
g. y. qin , j. ruppert , c. gale , s. jeon and g. d. moore , arxiv:0906.3280 [ hep - ph ] .
g. y. qin , j. ruppert , c. gale , s. jeon , g. d. moore and m. g. mustafa , phys . rev . lett . * 100 * , 072301 ( 2008 ) .
b. schenke , c. gale and g .- y. qin , phys . rev . c * 79 * , 054908 ( 2009 ) .
p. aurenche , m. fontannaz , j. p. guillet , b. a. kniehl , e. pilon and m. werlen , eur . phys . j. c * 9 * , 107 ( 1999 ) .
s. turbide , r. rapp and c. gale , phys . rev . c * 69 * , 014903 ( 2004 ) .
r. j. fries , b. muller and d. k. srivastava , phys . rev . lett . * 90 * , 132301 ( 2003 ) .
s. turbide , c. gale , d. k. srivastava and r. j. fries , phys . rev . c * 74 * , 014903 ( 2006 ) ; c. gale and s. turbide , nucl . phys . a * 783 * , 351 ( 2007 ) ; s. turbide , c. gale , e. frodermann and u. heinz , phys . rev . c * 77 * , 024909 ( 2008 ) .
x .- n. wang , z. huang and i. sarcevic , phys . rev . lett . * 77 * , 231 ( 1996 ) ; x .- n. wang and z. huang , phys . rev . c * 55 * , 3047 ( 1997 ) .
a. m. hamed [ star collaboration ] , j. phys . g * 35 * , 104120 ( 2008 ) . | we calculate the production of real photons in relativistic nuclear collisions at rhic , consistently with the quenching of fast partons .
we go beyond one - body observables , and evaluate photon - triggered fragmentation functions , in the kinematical window corresponding to that of experimental measurements .
address = department of physics , mcgill university , 3600 rue university , montreal , qc , canada h3a |
NEW DELHI About a quarter of India's land is turning to desert and degradation of agricultural areas is becoming a severe problem, the environment minister said, potentially threatening food security in the world's second most populous country.
India occupies just 2 percent of the world's territory but is home to 17 percent of its population, leading to over-use of land and excessive grazing. Along with changing rainfall patterns, these are the main causes of desertification.
"Land is becoming barren, degradation is happening," said Prakash Javadekar, minister for environment, forests and climate change. "A lot of areas are on the verge of becoming deserts but it can be stopped."
Land degradation - largely defined as loss of productivity - is estimated at 105 million hectares, constituting 32 percent of the total land.
According to the Indian Space Research Organisation (ISRO) that prepared a report on desertification in 2007, about 69 percent of land in the country is dry, making it vulnerable to water and wind erosion, salinization and water logging.
Rajasthan, Gujarat, Punjab, Haryana, Karnataka and Andhra Pradesh are the among the most arid. These are some of the cotton and rapeseed growing states of India.
(Reporting by Krishna N Das and Shyamantha Asokan; Editing by Jeremy Laurence)
the cern large hadron collider ( lhc ) is scheduled to begin operation in 2007 , beginning a new era wherein the mechanism of electroweak symmetry breaking and fermion mass generation will be revealed and studied in great detail . although alternative mechanisms exist in theory , this mechanism is generally believed to involve a light higgs boson with mass @xmath3 gev @xcite . more specifically , we expect a fundamental scalar sector which undergoes spontaneous symmetry breaking as the result of a potential which acquires a nonzero vacuum expectation value .
the lhc will easily find a light standard model ( sm ) higgs boson with very moderate luminosity @xcite .
moreover , the lhc will have significant capability to determine many of its properties @xcite , such as its fermionic and bosonic decay modes and couplings @xcite , including invisible decays @xcite and possibly even rare decays to second generation fermions @xcite .
[ a linear collider with a center of mass energy of 350 gev or more can significantly improve these preliminary measurements , in some cases by an order of magnitude in precision , if an integrated luminosity of 500 fb@xmath4 can be achieved @xcite . ] starting from the requirement that the higgs boson has to restore unitarity of weak boson scattering at high energies in the sm @xcite , perhaps the most important measurement after a higgs boson discovery is of the higgs potential itself , which requires measurement of the trilinear and quartic higgs boson self - couplings .
only multiple higgs boson production can probe these directly @xcite .
recent literature is replete with self - coupling measurement studies .
there are numerous quantitative sensitivity limit analyses of higgs boson pair production in @xmath5 collisions ranging from 500 gev to 3 tev center of mass energies @xcite .
for example , one neural net - based study concludes that a 500 gev linear collider with an integrated luminosity of 1 ab@xmath4 @xcite could measure the trilinear higgs coupling @xmath6 for @xmath7 gev , where @xmath8 decays dominate , at the @xmath9 level .
however , none of these analyses addressed the case of @xmath10 gev , where the higgs boson mostly decays into @xmath11 bosons .
studies exploring the potential of the lhc , a luminosity - upgraded lhc ( slhc ) with roughly ten times the amount of data expected in the first run , and a very large hadron collider ( vlhc ) , have come only very recently @xcite .
these studies investigated higgs pair production via gluon fusion with subsequent decay to same - sign dileptons and three leptons via @xmath11 bosons , and cover the broader range @xmath12 gev .
they established that future hadron machines can probe the higgs potential for @xmath13 gev . at the lhc , an integrated luminosity of 300 fb@xmath4 provides for exclusion of vanishing @xmath6 at the @xmath14 confidence level or better over the entire range @xmath15 gev .
a vlhc would provide for precision measurement over much of this mass range , similar to or better than the limits achievable at a 3 tev @xmath5 collider with 5 ab@xmath4 @xcite .
however , we previously concluded that hadron colliders could not probe the mass region @xmath16 gev sufficiently well to be meaningful @xcite .
we reexamine that conclusion in this paper , utilizing rare decay modes in higgs boson pair production for @xmath16 gev at future hadron colliders .
we first review the definition of the higgs boson self - couplings and briefly discuss sm and non - sm predictions for these parameters in sec .
[ sec : theory ] .
an overview of the rare higgs decay modes in the sm ( predominantly @xmath17 final states ) and our analyses of these channels appears in sec . [
sec : lhc ] .
we consider the lhc , slhc and a vlhc , which we assume to be a @xmath18 collider operating at 200 tev with a luminosity of @xmath19 @xcite . in sec .
[ sec : mssm ] we establish the prospects of observing a pair of minimal supersymmetric standard model ( mssm ) higgs bosons in the @xmath17 and @xmath20 decay channels .
we present our conclusions in sec .
[ sec : conc ] .
the trilinear and quartic higgs boson couplings @xmath6 and @xmath21 are defined through the potential @xmath22 where @xmath23 is the physical higgs field , @xmath24 is the vacuum expectation value , and @xmath25 is the fermi constant . in the sm the self - couplings are @xmath26 . regarding the sm as an effective theory , the higgs boson self - couplings @xmath6 and @xmath21 are _ per se _ free parameters , and @xmath27-matrix unitarity constrains @xmath21 to @xmath28 @xcite . since future collider experiments likely can not probe @xmath21 , we concentrate on the trilinear coupling @xmath6 in the following . the quartic higgs coupling does not affect the higgs pair production processes we consider . in the sm , radiative corrections decrease @xmath6 by @xmath29 for @xmath30 gev @xcite .
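the potential and the sm coupling values appear above only as placeholders ( @xmath22 , @xmath26 ) . a common parameterization consistent with the surrounding definitions , shown here purely as an illustrative convention and not recovered from the placeholders , is :

```latex
% illustrative convention: eta_H is the physical higgs field, v the vev
V(\eta_H) = \frac{1}{2} m_H^2\, \eta_H^2
          + \lambda\, v\, \eta_H^3
          + \frac{\tilde\lambda}{4}\, \eta_H^4 ,
\qquad v = \bigl(\sqrt{2}\, G_F\bigr)^{-1/2} \simeq 246\ \text{GeV} ,
% with the sm tree-level values
\lambda_{\rm SM} = \tilde\lambda_{\rm SM} = \frac{m_H^2}{2 v^2} .
```

in such a convention both self - couplings are fixed by the higgs mass at tree level , which is why a measured deviation in the trilinear term directly probes physics beyond the sm potential .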
larger deviations are possible in scenarios beyond the sm . for example , in two higgs doublet models where the lightest higgs boson is forced to have sm like couplings to vector bosons , quantum corrections may increase the trilinear higgs boson coupling by up to @xmath31 @xcite . in the mssm ,
loop corrections modify the self - coupling of the lightest higgs boson in the decoupling limit , which has sm - like couplings , by up to @xmath32 for light stop squarks @xcite .
anomalous higgs boson self - couplings also appear in various other scenarios beyond the sm , such as models with a composite higgs boson @xcite , or in little higgs models @xcite . in many cases
, the anomalous higgs boson self - couplings can be parameterized in terms of higher dimensional operators which are induced by integrating out heavy degrees of freedom .
a systematic analysis of higgs boson self - couplings in a higher dimensional operator approach can be found in ref . @xcite .
at lhc energies , inclusive higgs boson pair production is dominated by gluon fusion @xcite .
other processes , such as weak boson fusion , @xmath33 @xcite , associated production with heavy gauge bosons , @xmath34 @xcite , or associated production with top quark pairs , @xmath35 @xcite , yield cross sections which are factors of 1030 smaller than that for @xmath36 @xcite . since @xmath37 production at the lhc is generally rate limited , we consider only the gluon fusion process .
because the total @xmath36 cross section at both the lhc and vlhc is quite small , at most one higgs boson undergoing rare decay will allow for a reasonable number of events to work with .
we therefore consider only final states containing one @xmath38-quark pair , which is the dominant sm higgs boson decay mode for @xmath39 gev , as shown in fig .
[ fig : brs ] .
our previous study demonstrated that at both lhc and vlhc , @xmath40 and @xmath41 final states are overwhelmed by backgrounds @xcite .
while the backgrounds are more moderate for the @xmath42-channel , the observable part of this decay mode unfortunately involves multiple additional small branching ratios , and the detectors have rather low efficiency for identifying @xmath42-leptons . as charm quarks are even more difficult to tag than @xmath38-quarks , and the qcd backgrounds become much larger due to the correspondingly weaker rejection of fake tags , we can immediately discount any colored final states for the rare decay .
weak boson pairs certainly qualify as rare decays in this mass region , but can not be used : the @xmath43 and @xmath44 final states suffer from a huge qcd top pair background .
similarly for @xmath45 with one or more hadronically decaying @xmath46 bosons , and @xmath47 , qcd processes with the same final states are likely to overwhelm the signal ( here , @xmath48 and @xmath49 denote off - shell @xmath11 and @xmath46 bosons ) .
the @xmath50 leptons and @xmath51 channels suffer from too low a rate , due to the small @xmath52 branching ratio .
this leaves only the diphoton @xmath17 and dimuon @xmath20 decay combinations .
sm higgs branching ratios relevant to our analysis of @xmath37 production . for @xmath53 and @xmath54 , one of the gauge bosons is off - shell . ] for all our calculations we assume an integrated luminosity of 600 fb@xmath4 for the lhc , and 6000 fb@xmath4 @xcite for the slhc . for the vlhc , we consider both 600 fb@xmath4 and 1200 fb@xmath4 @xcite . we choose @xmath55 @xcite ,
calculate signal and background cross sections using cteq5l @xcite parton distribution functions , and our scale choice for all background processes is @xmath56 .
we include minimal detector effects by gaussian smearing of the parton momenta according to atlas expectations @xcite , and take into account energy loss in the @xmath38-jets via a parameterized function .
we assume a @xmath38-tagging efficiency of @xmath57 for all hadron colliders .
in addition , we include an efficiency of @xmath58 @xcite for capturing the @xmath8 decay of the signal in its 40 gev mass bin .
we calculate all background processes using madgraph @xcite except where otherwise noted , and retain a finite @xmath59-quark mass of 4.6(1.7 ) gev where relevant .
other detector efficiencies are given in the subsections relevant to the respective channels .
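the efficiency chain described above can be turned into a quick yield estimate . the sketch below is illustrative only : the actual cross sections and efficiencies are given in the text as placeholders ( e.g. the @xmath38-tagging efficiency @xmath57 and the mass - bin capture efficiency @xmath58 ) , so every number here is a hypothetical stand - in .

```python
# illustrative yield estimate: N = sigma * L * (product of efficiencies).
# all numbers below are placeholders, NOT the values of this analysis
# (the text's efficiencies appear only as @xmath placeholders).

def expected_events(sigma_fb, lumi_fb_inv, efficiencies):
    """expected event count from a cross section (fb), an integrated
    luminosity (fb^-1), and independent per-object efficiencies."""
    n = sigma_fb * lumi_fb_inv
    for eff in efficiencies:
        n *= eff
    return n

# hypothetical example: a 6 fb signal with 600 fb^-1, two tagged b-jets
# at 50% each, two identified photons at 80% each, and a 70% capture of
# the b-bbar pair in its mass bin
n_sig = expected_events(6.0, 600.0, [0.5, 0.5, 0.8, 0.8, 0.7])
```

the same one - liner applied to each background process , with the misidentification probabilities of table [ rejfac ] in place of the efficiencies , gives the reducible rates discussed below .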
we perform the signal calculation , @xmath60 , as in refs .
@xcite , including the effects of next - to - leading order ( nlo ) qcd corrections via a multiplicative factor @xmath61 at lhc(vlhc ) energies @xcite , using factorization and renormalization scales choices of @xmath62 .
there is little scale variation left at nlo .
we use exact matrix elements to incorporate the @xmath8 and @xmath63 decays .
the basic kinematic acceptance cuts for events at the ( s)lhc and vlhc are : @xmath64 which are motivated first by requirements that the events can pass the atlas and cms triggers with high efficiency @xcite , and that the @xmath38-quark and photon pairs reconstruct to windows around the known higgs boson mass , adjusted for an expected capture efficiency of @xmath58 each @xcite .
we take the identification efficiency for each photon to be @xmath65 at all machines considered @xcite . as in the @xmath66 signal case @xcite
, we will later try to determine the higgs boson self - coupling from the shape of the invariant mass of the final state .
for that reason we do not apply any cuts which make use of the fact that the signal involves two heavy massive particles produced in a fairly narrow range of the @xmath17 invariant mass .
the only irreducible background processes are qcd @xmath17 , @xmath67 and @xmath68 production .
however , there are multiple qcd reducible backgrounds resulting from jets faking either @xmath38-jets or photons : * @xmath69 - one or two fake @xmath38 jets ; * @xmath70 - one fake photon ; * @xmath71 - one or two fake @xmath38-jets , one fake photon ; * @xmath72 - one or two fake @xmath38-jets ; * @xmath73 - two fake photons ; * @xmath74 - one or two fake @xmath38-jets , two fake photons ; * @xmath75 - one or two fake @xmath38-jets , one fake photon ; * @xmath76 - one or two fake @xmath38-jets , two fake photons ; * @xmath77 - one or two fake @xmath38-jets , or two fake photons ; * @xmath78 - one fake photon . misidentified charm quarks must be considered separately from non - heavy flavor jets because of the grossly different rejection factors .
table [ rejfac ] summarizes the expected rejection factors for charm and light jets to be misidentified as @xmath38-jets and photons , as well as the expected photon and muon identification efficiencies .
the probability to misidentify a light jet as a @xmath38-jet is significantly higher at the slhc due to the high - luminosity environment @xcite .
the value quoted in table [ rejfac ] for @xmath79 at the lhc is likely to be conservative ; recent studies @xcite using three dimensional @xmath38-tagging have found a light jet rejection factor about a factor two better .
expectations for the probability to misidentify a light jet as a photon at the lhc vary considerably @xcite , so we perform two analyses , one conservative and the other optimistic , to cover this range .
since their design luminosities are similar , it is reasonable to assume that the rejection factors for light quarks and charm quarks , and the jet - photon misidentification probabilities , are similar for the lhc and the vlhc .
studies of how the high luminosity environment of the slhc affects @xmath80 and @xmath81 have not yet been performed . in lieu of better estimates
we therefore use the same values as for the lhc and vlhc .
it should be noted that the rejection factors listed in table [ rejfac ] depend on the transverse momentum of the charm quark , @xmath82 , or jet , @xmath83 .
the values listed in the table correspond to the rejection factor in the @xmath84 range which provides the largest contribution to the cross section .
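a minimal sketch of how the reducible backgrounds above enter the analysis , assuming independent per - object misidentification probabilities p = 1 / r for rejection factor r ; the cross section and rejection factors below are hypothetical stand - ins for the entries of table [ rejfac ] .

```python
# reducible backgrounds are weighted by the product of misidentification
# probabilities. illustrative numbers only; the actual rejection factors
# are in table [rejfac] and appear in the text as placeholders.

def fake_weight(rejection_factors):
    """probability that every listed object is misidentified, given
    per-object rejection factors R (P_misid = 1/R), assumed independent."""
    w = 1.0
    for r in rejection_factors:
        w *= 1.0 / r
    return w

# hypothetical: a gamma-gamma + 2 light-jet process with both jets faking
# b-jets, at a light-jet rejection factor of 140 per jet
sigma_jjaa = 1.0e4                                  # fb, placeholder
sigma_fake = sigma_jjaa * fake_weight([140.0, 140.0])
```

because the fake probabilities enter squared ( or to the fourth power for double fakes of both types ) , even very large raw qcd cross sections can be beaten down to the signal level , which is why the achievable signal to background ratio hinges on the rejection factors .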
.[rejfac ] expected photon and muon identification efficiencies , and misidentification probabilities for charm quarks and light jets as @xmath38-quarks @xcite and photons @xcite , at various hadron colliders . we summarize our results in table [ tab : sum ] .
the bounds obtained using the conservative background estimate ( labeled `` hi '' ) are @xmath85 less stringent than those found using the more optimistic scenario ( labeled `` lo '' ) . at the slhc , for @xmath7 gev
, a vanishing higgs self - coupling can be ruled out at the @xmath86 cl .
limits for @xmath87 gev are a factor of 1.2 - 2 weaker than those for @xmath7 gev . it may be possible to subtract large parts of the reducible backgrounds which do not involve charm quarks using the following technique . due to their large cross sections ( see tables [ tab : xsec.l ] and [ tab : xsec.v ] ) , one can fairly accurately determine the @xmath88 distributions of the individual processes , @xmath77 , @xmath89 , @xmath73 , @xmath72 , @xmath90 and @xmath76 production , imposing the same cuts as in the @xmath91 analysis ( eqs .
( [ eq : cuts1 ] ) and ( [ eq : cuts2 ] ) ) .
if the jet - photon and light jet - @xmath38 misidentification probabilities are independently measured in other processes such as prompt photon @xcite and @xmath92 jets production , one can simply subtract these backgrounds .
for the background processes involving charm quarks , on the other hand , this procedure will be more difficult to realize , since the smaller charm quark mass and the shorter charm lifetime result in a charm quark tagging efficiency much lower than that for @xmath38-quarks .
the columns labeled `` bgd . sub . ''
list the limits achievable if the non - charm reducible contributions to the background were subtracted with @xmath31 efficiency , but none of the charm quark backgrounds could be reduced .
our results show that reducing the background beyond what can be achieved with kinematic cuts may considerably improve the bounds on @xmath93 at the lhc and slhc , where the @xmath94 process is statistics limited .
the bounds achievable at the slhc ( vlhc ) by analyzing @xmath17 production are a factor 2.5 - 6 ( 2 - 3 ) more stringent than those from the @xmath95 channel @xcite . due to the small number of events , the lhc and slhc sensitivity limits depend significantly on the sm cross section normalization uncertainty .
for example , for a normalization uncertainty of @xmath96 on the sm signal plus background rate , the achievable bounds on @xmath93 are almost a factor 2 weaker than those obtained for a normalization uncertainty of @xmath97 .
this sm cross section normalization uncertainty depends critically on knowledge of the qcd corrections to the signal and the ability to determine the background normalization .
the nlo qcd corrections to @xmath36 are currently known only in the infinite top quark mass limit @xcite . to ensure the @xmath97
required precision on differential cross sections we would need the nlo rates for finite top quark masses , as well as the nnlo corrections in the heavy top quark mass limit .
for the background normalization one can rely on either calculations of the qcd corrections or data . as mentioned before ,
none of these nlo background calculations are available .
since there are many processes contributing to the background , and most of them involve hundreds of feynman diagrams already at tree level , nlo calculations appear feasible only if automated one - loop qcd tools become available in the next few years . in the absence of such nlo results ,
one may be able to fix the background normalization instead by relaxing the @xmath98 and @xmath99 invariant mass cuts of eq .
( [ eq : cuts1 ] ) and/or the cuts of eq .
( [ eq : cuts2 ] ) and extrapolating from regions in @xmath100 , @xmath101 , @xmath102 and @xmath103 where the background dominates , back into the analysis region .
this technique should make it possible to determine the background normalization to about @xmath97 at the lhc and slhc , and to about @xmath104 at the vlhc .
both methods rely on monte carlo simulation to correctly predict the @xmath88 distribution shape .
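the sideband extrapolation described above can be sketched as a polynomial fit to the background - dominated regions of the @xmath88 spectrum , extrapolated into the analysis window ; the binning and counts below are synthetic placeholders .

```python
# sketch of the sideband method: fit the m_vis spectrum outside the
# analysis window and integrate the fit inside it. synthetic data only.
import numpy as np

def sideband_background(centers, counts, window, deg=1):
    """fit bins outside `window` with a degree-`deg` polynomial and sum
    the fitted prediction over the bins inside `window`."""
    centers = np.asarray(centers, dtype=float)
    counts = np.asarray(counts, dtype=float)
    lo, hi = window
    outside = (centers < lo) | (centers > hi)
    coeffs = np.polyfit(centers[outside], counts[outside], deg)
    return float(np.polyval(coeffs, centers[~outside]).sum())

# synthetic smoothly falling background, 20 GeV bins;
# hypothetical analysis window m_vis in (300, 400) GeV
centers = [250, 270, 290, 310, 330, 350, 370, 390, 410, 430]
counts  = [100,  96,  92,  88,  84,  80,  76,  72,  68,  64]
b_est = sideband_background(centers, counts, (300, 400))
```

in practice the fit function and its degree would be validated on monte carlo , since , as noted above , both normalization methods rely on simulation to predict the shape of the @xmath88 distribution correctly .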
the bounds listed in table [ tab : sum ] should be compared with those achievable at @xmath5 linear colliders .
a linear collider with @xmath105 gev and an integrated luminosity of 1 ab@xmath4 can determine @xmath6 with a precision of about @xmath9 in @xmath106 for @xmath7 gev @xcite .
for @xmath107 gev , the @xmath108 branching ratio and the @xmath109 cross section both fall off quickly .
since the background cross section decreases only slightly , @xmath110 , and thus the bounds on @xmath6 obtainable from @xmath109 , worsen rapidly with increasing values of @xmath62 . by @xmath87
gev they are at only the @xmath111 level @xcite . from table [ tab : sum ] it is clear that the lhc will be able to provide only a first rough measurement of the higgs self - coupling for @xmath7 gev .
a luminosity - upgraded lhc will be able to make a more precise measurement .
however , the sensitivity bounds on @xmath6 obtained from @xmath17 production for @xmath7 gev ( @xmath87 gev ) will be a factor 2 - 4 ( 1.2 - 3 ) weaker than those achievable at a linear collider .
in contrast , the sensitivity at a vlhc will approach this level of precision .
it should be noted that if the sm cross section normalization uncertainty could be reduced to a few percent , a vlhc could reach precision similar to that foreseen for clic @xcite ( @xmath5 collisions at 3 tev center - of - mass energy ) .
the @xmath20 signal calculation proceeds as in the @xmath17 case .
the basic kinematic acceptance cuts for events at the lhc and vlhc are : @xmath112 where again the muon invariant mass window is chosen to accept @xmath58 of the @xmath113 decay after detector effects . the signal cross section at the lhc ( vlhc ) for @xmath7 gev before taking into account any efficiencies
is 2.4 ab ( 0.21 fb ) , approximately one order of magnitude smaller than the @xmath17 channel . for larger higgs boson masses the ratio is even smaller , due to the @xmath113 branching ratio , which decreases much more rapidly with @xmath62 than that for @xmath63 ( see fig . [
fig : brs ] ) .
once efficiencies are taken into account , we expect less than one signal event at the lhc .
the slhc would see 2 - 3 signal events for @xmath7 gev if one assumes that both @xmath38-quarks are tagged , too few for a meaningful coupling extraction . at a vlhc
there would be about 60 signal events for an integrated luminosity of 600 fb@xmath4 , single @xmath38-tag requirement , and the same value of @xmath62 .
we therefore concentrate on the vlhc in the following , and require only one @xmath38-tag .
a potential advantage of the @xmath20 final state is the smaller number of processes contributing to the background .
the main contributions to the background originate from qcd @xmath20 , @xmath114 and @xmath115 production , where the @xmath116 pair originates from an off - shell @xmath46-boson or photon . in the latter two processes , either a charm quark or light jet
is misidentified as a @xmath38-quark .
we calculate the background processes at lo using mcfm @xcite and find that their sum is more than a factor 200 larger than the signal . the signal to background ratio improves by a factor 5 if we additionally require @xmath117 whereas the signal cross section falls by only about @xmath9 .
the @xmath77 background is negligible compared with @xmath115 .
the final signal to background ratio of @xmath118 contrasts starkly with the @xmath119 ratio the @xmath17 channel enjoys .
if instead both @xmath38-jets are tagged , the signal to background ratio improves by an additional factor 2 . however , the signal cross section is reduced by a factor 3 , which yields sensitivity bounds for @xmath93 which are somewhat weaker than those obtained from single @xmath38-tag data . shrinking the @xmath116 invariant mass window could also reduce the background .
the value in eq .
( [ eq : mucuts ] ) was chosen assuming atlas detector muon momentum resolution @xcite .
the cms detector @xcite likely can use a smaller window , @xmath120 gev , which would reduce the background by approximately a factor 1.7 .
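the single - vs double - tag trade - off quoted above ( signal to background improves by a factor 2 while the signal falls by a factor 3 ) can be checked with a simple gaussian - significance estimate ; the absolute yields below are hypothetical , only the two quoted factors come from the text .

```python
# gaussian significance s/sqrt(b) for the single- vs double-tag choice.
# absolute counts are placeholders; only the factors (s/b x2, s/3) are
# taken from the discussion above. s/b doubling while s falls by 3
# implies b falls by a factor 6.
import math

def significance(s, b):
    return s / math.sqrt(b)

s1, b1 = 60.0, 12000.0          # hypothetical single-tag yields
s2, b2 = s1 / 3.0, b1 / 6.0     # corresponding double-tag yields
ratio = significance(s2, b2) / significance(s1, b1)
```

the ratio works out to sqrt(6)/3 , i.e. below one , consistent with the observation that the double - tag bounds are somewhat weaker than the single - tag ones .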
the small signal cross section combined with the very large background make it essentially impossible to determine the higgs boson self - coupling in @xmath121 .
we quantify this by performing a @xmath122 test on the @xmath88 distribution , similar to that described in sec .
[ sec : gamgam ] . since the signal cross section is too small to be observable at the lhc and slhc , we derive bounds only for a vlhc . as before ,
we include the effects of nlo qcd corrections via multiplicative factors : @xmath123 for the signal @xcite , @xmath124 for @xmath20 and @xmath114 production , and @xmath125 for the @xmath115 background @xcite . allowing for a normalization uncertainty of @xmath97 of the sm cross sections , for @xmath7
gev we find @xmath126 bounds of @xmath127 at the vlhc for an integrated luminosity of 600 fb@xmath4 . if the @xmath115 background can be subtracted as described in sec .
[ sec : gamgam ] , the limits improve by about a factor 1.4 . using the cms dimuon mass window instead , the bound improves by about a factor 1.3 .
nevertheless , this is about an order of magnitude weaker than the limits from @xmath91 .
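the @xmath122 test used here and in sec . [ sec : gamgam ] can be sketched as a simple binned comparison of the @xmath88 distribution for the sm coupling against an anomalous - coupling hypothesis ; the bin contents below are synthetic placeholders , not the yields of this analysis .

```python
# sketch of the binned chi^2 comparison used to derive sensitivity
# bounds on the self-coupling. synthetic expected yields only.
import math

def chi2(expected_sm, expected_alt):
    """gaussian-limit chi^2 = sum (n_alt - n_sm)^2 / n_sm over bins."""
    return sum((a - s) ** 2 / s for s, a in zip(expected_sm, expected_alt))

# hypothetical signal+background m_vis bins for the sm self-coupling and
# for a vanishing self-coupling (continuum-only) hypothesis
sm_bins  = [30.0, 45.0, 40.0, 25.0, 12.0]
alt_bins = [36.0, 52.0, 44.0, 26.0, 12.0]
x2 = chi2(sm_bins, alt_bins)   # compared against chi^2 critical values
```

a full analysis would also fold the cross section normalization uncertainty into the test , which , as noted above , can weaken the bounds by almost a factor of 2 .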
the mssm requires two higgs doublets , in contrast to one in the sm , to give mass to the up type and the down type fermions and to avoid anomalies induced by the supersymmetric fermionic partners of the higgs bosons .
this results in the presence of five physical higgs bosons : a charged pair @xmath128 , two neutral scalars @xmath129 and @xmath130 , and a pseudoscalar @xmath131 .
the two scalars are mixed mass eigenstates , the lighter always having a mass @xmath132 gev @xcite . at leading order ,
the entire mssm higgs sector is described by two parameters , usually taken to be the ratio of the two higgs doublets vacuum expectation values , @xmath2 , and the pseudoscalar higgs mass , @xmath133 . in the region
@xmath134 gev , all heavy higgs bosons @xmath135 have similar masses , much larger than the light scalar higgs mass . in this
so - called decoupling regime the light higgs boson @xmath136 strongly resembles a sm higgs boson of the same mass . it will be difficult to distinguish between the sm and the mssm higgs sectors through measurements of its properties @xcite . assuming bottom - tau mass unification , only two regions of @xmath2 are allowed : either small values , @xmath137 , or large values , @xmath138 .
direct searches for the heavy higgs bosons are particularly promising in the large @xmath2 regime , since in the decoupling limit the bottom yukawa coupling to heavy higgses is @xmath139 . as a result , @xmath38-quark initiated processes , such as @xmath140 , may have cross sections enhanced by up to three orders of magnitude over the corresponding sm rates for sufficiently large values of @xmath2 . in contrast , for small values of @xmath2 these direct searches fail , because the dominant yukawa coupling becomes @xmath141 . at the lhc , associated production of two neutral mssm higgs bosons via
gluon fusion occurs for all six possible combinations @xcite . in principle , these processes probe the various higgs boson self - couplings , @xmath142 .
however , for large @xmath2 the continuum box diagrams are enhanced by the yukawa coupling squared , while the triangle loop diagram with an intermediate higgs boson is enhanced by only one power of the large yukawa coupling : for large @xmath2 the resonance diagrams are suppressed by @xmath143 as compared to the continuum production diagrams . for @xmath144
we find that the effect of vanishing self - couplings @xmath145 is at most at the percent level . for @xmath146 and @xmath147 gev ,
mssm higgs pair production cross sections can be sizable , reaching values up to 100 fb , compared to a few tens of fb in the sm .
the largest cross sections occur for two heavy states @xmath148 and large values of @xmath2 , due to the enhanced coupling of these states to @xmath38-quarks . in this regime
the most promising final state is @xmath20 since the ratio of the muon and the bottom yukawa couplings is preserved in the mssm , but the branching ratio to photons is highly suppressed , typically by several orders of magnitude compared to the sm higgs boson of equal mass . unfortunately , a main background for this is mssm @xmath149 production @xcite .
whether the higgs pair signal could be extracted out of this would require a more detailed investigation which we do not find likely to be fruitful .
lowest order cross section and branching fractions for pair production of light mssm scalar higgs bosons , @xmath150 , with subsequent decay @xmath151 , as a function of the pseudoscalar higgs mass @xmath133 .
we fix @xmath152 , set the squark mass parameters to 1 tev , and assume maximal mixing with @xmath153 tev @xcite .
we do not take into account supersymmetric decay modes of the heavy higgs boson @xmath154 @xcite .
the light higgs boson mass is above the lep limit of @xmath155 gev @xcite for @xmath156 gev .
no cuts or detection efficiencies are included .
the dashed horizontal line shows the lowest order sm @xmath36 cross section for @xmath7 gev . ] in the small @xmath2 regime it is much more difficult to distinguish the sm and the mssm higgs sectors .
none of the heavy higgs bosons will be directly observable at the lhc for @xmath157 , if we rely on the usual decays to fermions .
we find that , for small values of @xmath2 , @xmath158 offers the best chance to detect the heavy scalar higgs boson , @xmath154 : for @xmath159 the @xmath160 branching ratio is sizable @xcite . to take into account off - shell effects we compute the full @xmath150 production rate . as in the sm , we expect the @xmath17 final state to be most promising in the decoupling regime , with increased rate due to the intermediate @xmath154 resonance .
we show the @xmath161 and @xmath162 branching fractions and lowest order @xmath163 cross section as a function of @xmath133 in fig .
[ fig : sig - susy ] .
the light higgs boson mass increases from @xmath164 gev for @xmath165 gev to a plateau value of @xmath166 gev in the large @xmath133 limit .
a few structures in the cross section plot require further explanation .
first , the heavy scalar higgs mass crosses the threshold @xmath167 around @xmath168 gev , which enhances the @xmath169 cross section by almost a factor 100 .
second , the kink at @xmath170 gev represents the top threshold in the top triangle loop . at the same time
we see the onset of the @xmath171 decay channel , which for larger values of @xmath133 dominates over @xmath160 , so the cross section decreases rapidly . nevertheless , the mssm signal rate is still enhanced over the sm rate @xmath172 fb for values of @xmath133 as large as 500 gev . unfortunately , the angular cuts of eq .
( [ eq : cuts2 ] ) which are needed to suppress the background , together with the standard @xmath17 identification cuts of eq .
( [ eq : cuts1 ] ) , force the differential cross section to vanish for @xmath173 gev . pair production of light supersymmetric higgs bosons will thus be unobservable for @xmath174 gev .
when taking into account detection efficiencies , we find that @xmath169 production at the lhc should be observable at the @xmath175 level for @xmath176 gev ( @xmath177 gev ) for an integrated luminosity of 300 fb@xmath4 ( 600 fb@xmath4 ) and @xmath152 . the signal would be rather spectacular : due to @xmath178-channel @xmath154 exchange , the differential cross section peaks for @xmath179 , as shown in fig . [ fig : mvis - susy ] . compared to the sm case the cross section is enhanced by more than an order of magnitude in the resonance region , where it depends on the @xmath180 and @xmath181 couplings . since mssm heavy scalar @xmath154 production with decay into fermions is unobservable at the lhc in the small @xmath2 region , this implies that @xmath169 production can measure only a combination of @xmath182 and the @xmath181 couplings , but not the individual couplings . the visible invariant mass distribution , @xmath88 , for mssm light scalar higgs pair production at the lhc , @xmath183 , for @xmath152 . the light higgs mass for @xmath184 gev is 120.8 gev and for @xmath185 gev it is 122.2 gev . for comparison , we also show the distribution for sm higgs pair production ( @xmath7 gev ) . ]
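the resonance enhancement discussed above follows from the @xmath178-channel heavy scalar propagator ; schematically it has the standard breit - wigner form ( a textbook expression , not taken from the text ) :

```latex
% standard breit-wigner factor for s-channel heavy-scalar (H) exchange
\frac{d\hat\sigma}{dm_{hh}^2} \;\propto\;
\frac{1}{\bigl(m_{hh}^2 - m_H^2\bigr)^2 + m_H^2\, \Gamma_H^2} ,
```

which peaks when the light higgs pair invariant mass reaches the heavy scalar mass , producing the resonance peak visible in fig . [ fig : mvis - susy ] once detector resolution and the @xmath8 , @xmath63 reconstruction are folded in .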
after discovery of an elementary higgs boson and tests of its fermionic and gauge boson couplings , experimental evidence that the shape of the higgs potential has the form required for electroweak symmetry breaking will complete the proof that fermion and weak boson masses are generated by spontaneous symmetry breaking . one must determine the higgs self - coupling to probe the shape of the higgs potential .
only higgs boson pair production at colliders can accomplish this .
numerous studies @xcite have established that future @xmath5 machines can measure @xmath6 at the @xmath186 level for @xmath16 gev .
very recent studies @xcite determined that the prospects at hadron colliders for @xmath15 gev are similarly positive , but that the @xmath16 gev region would be very difficult to access @xcite .
we have tried to rectify the situation in this paper by considering highly efficient , lower - background rare decay modes : @xmath17 and @xmath20 .
the latter suffers from very low rate and considerable background from the breit - wigner tail of @xmath187 production , and does not appear to be useful .
this is not surprising upon comparison to our @xmath41 study @xcite .
however , the @xmath17 channel shows considerable promise . imposing photon - photon and photon - @xmath38 separation cuts could result in a signal to background ratio of @xmath188 or better .
since the irreducible qcd @xmath17 background is small compared to the reducible background originating from light jets or charm quarks mistagged as @xmath38-quarks , or from jets misidentified as photons , the signal to background ratio depends on the particle misidentification probabilities , and the required number of @xmath38-tags .
we find that the lhc , with an integrated luminosity of 600 fb@xmath4 or more , could make a very rough first measurement for @xmath7 gev ( with @xmath189 signal events ) , but would not obtain useful limits for @xmath87 gev at all due to the lack of signal events .
it would require a luminosity - upgraded run ( slhc , 6000 fb@xmath4 ) to rule out @xmath190 at the @xmath86 cl for @xmath7 gev , and to make a @xmath191 measurement at the @xmath126 level .
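as a rough illustration of how such statements scale with luminosity , a gaussian significance estimate s/sqrt(b) grows like the square root of the integrated luminosity when signal and background counts both scale linearly with it . the sketch below uses purely hypothetical event counts , not the numbers of this analysis :

```python
import math

def gaussian_significance(s, b):
    """Naive S/sqrt(B) estimate of a signal significance."""
    return s / math.sqrt(b)

def scale_with_luminosity(s, b, lumi_ratio):
    # signal and background counts both grow linearly with integrated
    # luminosity, so the significance grows as sqrt(lumi_ratio)
    return gaussian_significance(s * lumi_ratio, b * lumi_ratio)

# purely hypothetical counts: 6 signal events on 6 background events
# (a signal-to-background ratio near 1, as discussed in the text)
s0 = gaussian_significance(6.0, 6.0)
s1 = scale_with_luminosity(6.0, 6.0, 10.0)  # tenfold luminosity upgrade
```

with only a handful of events and s/b near unity , poisson statistics would of course be more appropriate than this gaussian shortcut , but the sqrt - of - luminosity scaling already explains why a tenfold upgrade changes the picture qualitatively .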
a 200 tev vlhc , in contrast , would make possible a @xmath192 measurement of @xmath6 , competitive with future @xmath5 collider capabilities .
we note , however , that current understanding of hadron collider higgs boson phenomenology does not provide the necessary precision knowledge of higgs branching ratios to complement this .
it is likely that an @xmath5 collider would still be required to fill this role .
although a luminosity - upgraded lhc can not compete with a linear collider for higgs masses @xmath16 gev , a higgs self - coupling measurement at the slhc will still be interesting if realized before a linear collider begins operation . to fully exploit future hadron collider potential to measure the higgs self - coupling , we need an accurate prediction of the sm @xmath17 rate .
it is mandatory that the residual theoretical cross section uncertainty be reduced to the @xmath193 level for any @xmath37 analysis to be meaningful .
we will need similar precision on background rates , probably obtained from experiment by extrapolating from background - dominated phase space regions to that of the signal .
probably the most exciting result of this analysis is the mssm case : the heavy mssm higgs scalar can decay into two light higgs bosons if @xmath159 .
this region of parameter space poses a serious challenge to the lhc , because none of the usual heavy higgs searches will detect a hint of the two higgs doublets required in the mssm .
resonant production of the heavy scalar higgs in gluon fusion and its subsequent decay into light higgs bosons , which then decay to @xmath17 , has two effects on the cross section as compared to the sm case : the total rate is enhanced by about an order of magnitude and the @xmath169 invariant mass peaks at the heavy higgs mass .
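the statement that the @xmath169 invariant mass peaks at the heavy higgs mass follows from the resonant line shape . a minimal numerical sketch with a relativistic breit - wigner , using illustrative mass and width values rather than the mssm parameters of this analysis :

```python
def breit_wigner(m, m_res, gamma):
    """Relativistic Breit-Wigner line shape (unnormalised)."""
    return 1.0 / ((m**2 - m_res**2)**2 + (m_res * gamma)**2)

# illustrative values: a 300 GeV resonance with a 1 GeV width,
# scanned over a grid of pair invariant masses from 250 to 350 GeV
masses = [250.0 + 0.1 * i for i in range(1001)]
weights = [breit_wigner(m, 300.0, 1.0) for m in masses]
peak_mass = masses[weights.index(max(weights))]  # peaks at the resonance mass
```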
even though our analysis is not at all optimized for resonant mssm production , we find a @xmath175 discovery region for @xmath152 and @xmath194 gev at the lhc .
even though the discovery reach of this channel does not extend to much larger values of @xmath2 , it still ensures the observation of one heavy higgs boson in a region preferred by bottom - tau unification , inaccessible by other mssm higgs searches .
we would like to thank k. bloom , m. dhrssen , f. maltoni , b. mellado , a. nikitenko , j. parsons , d. wackeroth , d. zeppenfeld and p.m. zerwas for useful discussions .
we also thank c. oleari for providing us with code to calculate the @xmath77 background .
one of us ( u.b . ) would like to thank the fermilab theory group , where part of this work was carried out , for its generous hospitality .
this research was supported in part by the national science foundation under grant no . phy-0139953 .
m. dittmar and h. k. dreiner , phys . rev . d 55 , 167 ( 1997 ) ; d. rainwater and d. zeppenfeld , phys . rev . d 60 , 113004 ( 1999 ) [ erratum - ibid . d 61 , 099901 ( 2000 ) ] ; n. kauer , t. plehn , d. rainwater and d. zeppenfeld , phys . lett . b 503 , 113 ( 2001 ) ; n. akchurin _ et al . _ , cms - note-2002/066 ; b. mellado , atl - conf-2002 - 004 .
d. rainwater , d. zeppenfeld and k. hagiwara , phys . rev . d 59 , 014037 ( 1999 ) ; t. plehn , d. rainwater and d. zeppenfeld , phys . lett . b 454 , 297 ( 1999 ) and phys . rev . d 61 , 093005 ( 2000 ) .
d. zeppenfeld , r. kinnunen , a. nikitenko and e. richter - was , phys . rev . d 62 , 013009 ( 2000 ) ; m. hohlfeld , atl - phys-2001 - 004 .
d. rainwater , phys . lett . b 503 , 320 ( 2001 ) ; v. drollinger , t. müller and d. denegri , arxiv : hep - ph/0201249 .
v. drollinger , t. müller and d. denegri , arxiv : hep - ph/0111312 ; v. kostioukhine , j. leveque , a. rozanov and j. b. de vivie , atl - phys-2002 - 019 ; d. green _ et al . _ , fermilab - fn-705 ( august 2001 ) ; f. maltoni , d. rainwater and s. willenbrock , phys . rev . d 66 , 034022 ( 2002 ) ; a. belyaev and l. reina , jhep 0208 , 041 ( 2002 ) ; a. belyaev , f. maltoni and l. reina , in _ proc . of the aps / dpf / dpb summer study on the future of particle physics ( snowmass 2001 ) _ , ed . n. graf , arxiv : hep - ph/0110274 .
o. j. éboli and d. zeppenfeld , phys . lett . b 495 , 147 ( 2000 ) .
t. plehn and d. rainwater , phys . lett . b 520 , 108 ( 2001 ) ; t. han and b. mcelrath , phys . lett . b 528 , 81 ( 2002 ) .
j. a. aguilar - saavedra _ et al . _ [ ecfa / desy lc physics working group collaboration ] , arxiv : hep - ph/0106315 and references therein ; t. abe _ et al . _ [ american linear collider working group collaboration ] , in _ proc . of the aps / dpf / dpb summer study on the future of particle physics ( snowmass 2001 ) _ , ed . r. davidson and c. quigg , arxiv : hep - ex/0106056 and references therein .
b. w. lee , c. quigg and h. b. thacker , phys . rev . lett . 38 , 883 ( 1977 ) and phys . rev . d 16 , 1519 ( 1977 ) .
d. a. dicus , c. kao and s. s. willenbrock , phys . lett . b 203 , 457 ( 1988 ) ; e. w. glover and j. j. van der bij , nucl . phys . b 309 , 282 ( 1988 ) ; e. w. glover and j. j. van der bij , cern - th-5022 - 88 , in proceedings of the _ 23rd rencontres de moriond : current issues in hadron physics , les arcs , france , mar 13 - 19 , 1988 _ ; g. cynolter , e. lendvai and g. pocsik , hep - ph/0003008 , acta phys . polon . b 31 , 1749 ( 2000 ) .
f. boudjema and e. chopin , z. phys . c 73 , 85 ( 1996 ) ; v. a. ilyin _ et al . _ , phys . rev . d 54 , 6717 ( 1996 ) .
a. djouadi , w. kilian , m. mühlleitner and p. m. zerwas , eur . phys . j. c 10 , 27 ( 1999 ) ; d. j. miller and s. moretti , eur . phys . j. c 13 , 459 ( 2000 ) .
m. battaglia , e. boos and w. m. yao , in _ proc . of the aps / dpf / dpb summer study on the future of particle physics ( snowmass 2001 ) _ , ed . r. davidson and c. quigg , arxiv : hep - ph/0111276 .
c. castanier , p. gay , p. lutz and j. orloff , arxiv : hep - ex/0101028 .
f. gianotti _ et al . _ , arxiv : hep - ph/0204087 .
u. baur , t. plehn and d. rainwater , phys . rev . lett . 89 , 151801 ( 2002 ) and phys . rev . d 67 , 033003 ( 2003 ) .
a. blondel , a. clark and f. mazzucato , atl - phys-2002 - 029 ( november 2002 ) .
t. plehn , m. spira and p. m. zerwas , nucl . phys . b 479 , 46 ( 1996 ) [ erratum - ibid . b 531 , 655 ( 1998 ) ] ; a. djouadi , w. kilian , m. mühlleitner and p. m. zerwas , eur . phys . j. c 10 , 45 ( 1999 ) .
a. dobrovolskaya and v. novikov , z. phys . c 52 , 427 ( 1991 ) ; d. a. dicus , k. j. kallianpur and s. s. willenbrock , phys . lett . b 200 , 187 ( 1988 ) ; a. abbasabadi , w. w. repko , d. a. dicus and r. vega , phys . lett . b 213 , 386 ( 1988 ) ; w. y. keung , mod . phys . lett . a 2 , 765 ( 1987 ) .
v. d. barger , t. han and r. j. phillips , phys . rev . d 38 , 2766 ( 1988 ) .
t. stelzer and w. f. long , comput . phys . commun . 81 , 357 ( 1994 ) ; f. maltoni and t. stelzer , jhep 0302 , 027 ( 2003 ) .
s. dawson , s. dittmaier and m. spira , phys . rev . d 58 , 115012 ( 1998 ) .
r. hawkings , atlas note sn - atlas-2003 - 026 .
f. abe _ et al . _ [ cdf collaboration ] , phys . rev . lett . 74 , 1936 ( 1995 ) ; f. abe _ et al . _ [ cdf collaboration ] , phys . rev . lett . 74 , 1941 ( 1995 ) ; s. abachi _ et al . _ [ d0 collaboration ] , phys . rev . lett . 78 , 3634 ( 1997 ) ; s. abachi _ et al . _ [ d0 collaboration ] , phys . rev . lett . 75 , 1028 ( 1995 ) .
j. m. campbell and r. k. ellis , phys . rev . d 62 , 114012 ( 2000 ) ; j. campbell , r. k. ellis and d. rainwater , arxiv : hep - ph/0308195 .
g. degrassi _ et al . _ , eur . phys . j. c 28 , 133 ( 2003 ) .
e. boos , a. djouadi and a. nikitenko , arxiv : hep - ph/0307079 .
s. heinemeyer , w. hollik and g. weiglein , comput . phys . commun . 124 , 76 ( 2000 ) and arxiv : hep - ph/0002213 .
a. djouadi , j. kalinowski and m. spira , comput . phys . commun . 108 , 56 ( 1998 ) .

we investigate higgs boson pair production at hadron colliders for higgs boson masses @xmath0 gev and rare decay of one of the two higgs bosons .
while in the standard model the number of events is quite low at the lhc , a first , albeit not very precise , measurement of the higgs self - coupling is possible in the @xmath1 channel .
a luminosity - upgraded lhc could improve this measurement considerably .
a 200 tev vlhc could make a measurement of the higgs self - coupling competitive with a next - generation linear collider . in the mssm
we find a significant region with observable higgs pair production in the small @xmath2 regime , where resonant production of two light higgs bosons might be the only hint at the lhc of an mssm higgs sector . |
the increasing availability of the complete sequence for a growing number of genomes has been revolutionizing the way biologists work .
the full exploitation of this data bonanza is hampered by the limitations in sequence annotation .
these limitations result from an imbalance between the accumulation rate of new sequences and the throughput of so - called wet - bench researchers .
the gap is usually filled by in silico analysis , mostly done automatically through software pipelines ( e.g. embl bank to trembl ) .
the results are more often than not stored in secondary databases after a brief assessment due to natural limitations in staff ( 1 ) .
this state of affairs results in the need to enforce a most strict set of parameters for the automatic annotation in order to avoid or limit the emergence of artifacts ( e.g. annotation transfer from analogs ) .
today most genome centred databases offer pre - computed comparative genomic results to speed the analysis required by the ordinary user .
those results are often available through rich graphic user interfaces that ease the burden of finding the intended data , although this ease of use is counterbalanced by the conservative nature of released datasets .
so from time to time a user can stumble upon missing homolog data , especially when dealing with genomes with low coverage ratio .
these gaps in the available annotation could be easily plugged by giving to the user the ability to recheck the data with less strict parameters .
bi - directional blast is a traditional procedure for a first approach to homolog detection , but nowadays it must be supplemented with domain architecture information , and should also be corroborated by synteny analysis . however , domain detection is still a very heavy task in terms of computation time , and synteny is seldom available , so there is room for simpler intermediate solutions , like checking global alignments of both the open reading frame ( orf ) and its conceptual product .
the extension of the alignment and the similarity can be used to discard blast false hits ( e.g. partial matches due to a shared conserved domain ) .
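such a filter can be sketched in a few lines ; the threshold names and cut - off values below are illustrative assumptions , not those used by bidiblast :

```python
def passes_global_filter(aln_len, query_len, identity,
                         min_coverage=0.7, min_identity=0.3):
    """Keep a hit only if the global alignment spans enough of the
    query and is similar enough overall; a match driven by a single
    shared domain typically fails the coverage test."""
    coverage = aln_len / query_len
    return coverage >= min_coverage and identity >= min_identity

# a hit covering ~78% of the query at 55% identity passes,
# while a hit covering only 35% of the query is discarded even
# though its local identity is high (shared-domain artifact)
assert passes_global_filter(aln_len=700, query_len=900, identity=0.55)
assert not passes_global_filter(aln_len=315, query_len=900, identity=0.80)
```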
bidiblast enables the average user with an ordinary personal computer to match two sets of sequences by means of a bi - directional ( 2 , 3 , 4 ) blast ( 5 ) or tblastx search . the resulting local alignment matches can be complemented by a subsequent global alignment in order to evaluate each hit properly .
if one of the two sets of sequences happens to be well documented , this application may be used to transfer annotation among the putative orthologs found .
it will also allow for the assignment of gene ontology annotation to either bi - directional or simple matches in order to allow assessment of the distribution of ontologically related sequences .
finally for every match the evolution rates dn and ds are determined along with their estimated standard error ( 6 ) .
all these capabilities do not usually concur in a single application , and in bidiblast they let the user perform a range of analyses , from customised comparative genomics to molecular evolution measurements on batches of sequences , or the equivalent of a more detailed blast search against a limited set of sequences .
the pipeline code is entirely written in javase 1.5 ( sun microsystems , inc . ) , using the db4o ( versant corp . ) to manage the data and the results .
the biojava library ( 7 ) is used solely for translation work and translation table management .
the searches for local similarity among sequences are done by a local installation of the ncbi blast tool ( ftp://ftp.ncbi.nlm.nih.gov/blast/executables/ ) .
this tool is special because it does not penalise the introduction of gaps at the ends of the smaller sequence , thus allowing for a smoother result .
stretcher from the emboss suite ( 9 ) was used to align pairs of translated orfs because the available alternative exhibited a problem ; the tool creator acknowledged the problem , but did not have the opportunity to solve it in time .
molecular evolution rates are also calculated through the yn00 tool from paml package ( 10 ) .
this set of tools may be updated separately from the pipeline application by the users themselves .
they also tie the portability of this application , in environments other than 32 - bit ms windows , to the availability of native binaries for those tools .
porting will also require changing the java code where direct calls are made to the operating system .
the input for the pipeline consists of two text files containing sequences in fasta format , one is the query set , and the other plays the role of reference ( figure 1 ) .
these sequences should be uninterrupted orfs if the analyses are to be carried out in full ; otherwise only the bi - directional blast procedure would make sense . the user may have the application compile the sequences into blast binary databases before invoking the blast procedure , or may supply the databases already compiled .
other text files may be supplied to enrich the analysis , namely a go slim terms ( 11 ) list ( http://downloads.yeastgenome.org/literture_curation/go_terms.tab ) , and a map assigning go slim terms to the relevant reference sequences ( http://downloads.yeastgenome.org/literature_curation/go_slim_mapping.tab ) as available from saccharomyces genome database project ( ftp://ftp.yeastgenome.org/yeast/ ) .
similar files for other genomes may be supplied instead , provided the column arrangement is respected .
the first stage of the program execution is to upload the sequences into the internal database .
then the bi - directional blast procedure begins matching up every sequence in the query set against the reference sequences . at each step the top hit in the results list is subjected to the reverse process .
if the top hit in this final search is the starting sequence , a bi - directional hit is scored , and uni - directional hits are also recorded as they may point to paralogous orfs .
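the reciprocal best - hit logic just described can be sketched as follows , assuming the best hit in each direction has already been extracted from the blast reports ( the table and orf names are hypothetical ) :

```python
def reciprocal_best_hits(best_fwd, best_rev):
    """Given best-hit tables in both directions (query->reference and
    reference->query), score bi-directional hits; the remaining
    one-way hits are kept, as they may point to paralogous ORFs."""
    bidirectional, one_way = {}, {}
    for query, ref in best_fwd.items():
        if best_rev.get(ref) == query:
            bidirectional[query] = ref
        else:
            one_way[query] = ref
    return bidirectional, one_way

# toy best-hit tables (hypothetical ORF names):
fwd = {"orfA": "ref1", "orfB": "ref2", "orfC": "ref3"}
rev = {"ref1": "orfA", "ref2": "orfX", "ref3": "orfC"}
bidi, uni = reciprocal_best_hits(fwd, rev)  # orfA and orfC are reciprocal
```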
if the sequences being compared are not from closely related taxa , the tblastx variant should be the option of choice because matches are evaluated at the conceptual translation level . in this case the local alignments will be shorter and ungapped , so less emphasis should be put on the resulting blast statistics .
the procedure will also require the selection of an adequate substitution matrix , and it will take longer to complete .
the user is also given the possibility of using the dust filter and masking . in a second stage , go slim data are imported into the internal database and mapped only to blast hits , through the reference sequences to the ones being queried .
the results from the previous blast search are now complemented by two rounds of paired global alignment .
the first aligns the original nucleotide sequences , while the second aligns the conceptual translation products , and concomitantly a codon - wise alignment for the former sequences is generated .
for this alignment procedure to be carried out with best results , the substitution matrix should be chosen by the user according to the estimate of the average similarity between both proteomes .
the results from this last round are most important when the matched sequences are more divergent , and tblastx is used .
this is specified for each set of sequences , and is also applied to the eventual tblastx searches .
several statistics are calculated about observed substitution frequency against base position in the codon , and the degree of conservation among amino acids .
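one of these statistics , the substitution frequency per codon position , can be sketched for a pair of gap - free , codon - wise aligned sequences ( a simplified illustration , not the pipeline's actual implementation ) :

```python
def substitutions_by_codon_position(codon_aln_a, codon_aln_b):
    """Count mismatches at codon positions 1, 2 and 3 for a pair of
    gap-free, codon-wise aligned nucleotide sequences."""
    assert len(codon_aln_a) == len(codon_aln_b)
    assert len(codon_aln_a) % 3 == 0
    counts = [0, 0, 0]
    for i, (a, b) in enumerate(zip(codon_aln_a, codon_aln_b)):
        if a != b:
            counts[i % 3] += 1
    return counts

# third positions are typically the most free to vary (synonymous
# changes), which these per-position counts make visible
counts = substitutions_by_codon_position("ATGGCTAAA", "ATGGCGAAG")
# one change at position 3 of codon 2 and one at position 3 of codon 3
```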
each application run may take up to three days for orfomes of the size of those known for saccharomyces species on a low - end pc .
the total data and results can be browsed with the proprietary db4o object browser ( versant corp ; http://developer.db4o.com/files/folders/objectmanager_1746/entry24858.aspx ) , but they are mainly intended to be imported as text delimited files into a spreadsheet editor or relational database management system ( e.g. ms access or mysql ) .
such applications will allow the user to filter , process , and explore the results in the most efficient way .
the approach taken in the inception of this application was to minimize the amount of filtration of input data ( e.g. malformed orfs with intervening stop codons ) or the yielded results , in order to give the user a greater flexibility at analysis time .
this empowerment should compel the user to be more careful in the evaluation of the result dataset .
artifacts are bound to arise and they must be discarded before undertaking any kind of analysis . the most straightforward filter to apply when looking for putative ortholog sequences should be the paired extension of the global alignment relative to the query sequence length .
this simple operation will allow for the detection of one of the most common sources of false hits , partial alignments due to a shared conserved domain .

bi - directional blast is a simple approach to detect , annotate , and analyze candidate orthologous or paralogous sequences in a single go .
this procedure is usually confined to the realm of customized perl scripts , usually tuned for unix - like environments .
porting those scripts to other operating systems involves refactoring them , and also the installation of the perl programming environment with the required libraries . to overcome these limitations ,
a data pipeline was implemented in java .
this application submits two batches of sequences to local versions of the ncbi blast tool , manages result lists , and refines both bi - directional and simple hits .
go slim terms are attached to hits , several statistics are derived , and molecular evolution rates are estimated through paml .
the results are written to a set of delimited text tables intended for further analysis .
the provided graphic user interface allows a friendly interaction with this application , which is documented and available to download at http://moodle.fct.unl.pt/course/view.php?id=2079 or https://sourceforge.net/projects/bidiblast/ under the gnu gpl license . |
Keith Jackson, who was widely regarded as the voice of college football by several generations, died late Friday night, his family said. He was 89.
Jackson, who retired in 2006, spent some 50 years calling the action in a folksy, down-to-earth manner that made him one of the most popular play-by-play personalities in the business.
"For generations of fans, Keith Jackson was college football," said Bob Iger, chairman and CEO of The Walt Disney Company. "When you heard his voice, you knew it was a big game. Keith was a true gentleman and a memorable presence. Our thoughts and prayers go out to his wife, Turi Ann, and his family."
Jackson got his start on the radio in 1952, broadcasting Washington State games, but went on to provide the national television soundtrack for the biggest games in the most storied stadiums. His colorful expressions -- "Whoa, Nellie" and "Big Uglies" among the many -- became part of the college football lexicon.
Keith Jackson was credited with coining the phrase "The Granddaddy of Them All" for the Rose Bowl, where he made his final broadcast for ABC Sports in 2006. Richard Shotwell/Invision via AP
He was credited with nicknaming the Rose Bowl "The Granddaddy of Them All" and Michigan's stadium "The Big House."
"That big smiling face, and just the thrill and the love he had for doing college football," Bob Griese told SportsCenter when asked what he'd remember about Jackson, his longtime broadcast partner whom he started working with in 1985.
"He did it for a long, long time. ... He never intruded on the game. It was always about the kids on the field. Never, never shining the light on himself. And that was one of the things that I most admired about him."
In 1999, Jackson was awarded the National Football Foundation and Hall of Fame Gold Medal -- its highest honor -- and named to the Rose Bowl Hall of Fame, the first broadcaster accorded those distinguished honors.
Having a hard time finding the right words to express what the icon Keith Jackson meant to me personally, Michigan football and CFB, in general. May his family find some comfort in knowing how much joy he brought us for so many years and that his legacy endures. #RIP #Legend pic.twitter.com/Q5CWRp9gmp — Desmond Howard (@DesmondHoward) January 13, 2018
Keith Jackson was the voice of college football. Rest In Peace my friend 🙏🏼 pic.twitter.com/2YcAaRKoan — Marcus Allen (@MarcusAllenHOF) January 13, 2018
One of my favorite memories from my time in college was getting to do production meetings with Keith Jackson and Dan Fouts. Keith was the voice of my childhood Saturday football afternoons. Rest In Peace my friend. #legend https://t.co/7SD1hmzdVg — Aaron Rodgers (@AaronRodgers12) January 13, 2018
THE voice of college football and one of the most iconic voices of all time, RIP Keith Jackson. Thank you for all of the incredible Saturday's. — JJ Watt (@JJWatt) January 13, 2018
Jackson began calling college football games for ABC Sports when it acquired the broadcast rights for NCAA football in 1966. He also worked NFL and NBA games, 11 World Series and LCS, 10 Winter and Summer Olympics, and auto racing. In addition, he traveled to 31 countries for "Wide World of Sports."
Among his broadcasting accomplishments, Jackson was the first play-by-play voice of Monday Night Football when the program debuted in 1970. He called Bucky Dent's home run against the Red Sox in 1978 as well as Reggie Jackson's three-homer game in the 1977 World Series.
His Olympics highlights include Mark Spitz's record seven gold medals in the 1972 Games and speedskater Eric Heiden's five golds in 1980.
Jackson announced he would retire from college football play-by-play after the 1998 season but ended up continuing with ABC Sports. He walked away for good in May 2006, telling The New York Times he was finished "forever."
"I am saddened to hear the news of Keith Jackson's death," USC athletic director Lynn Swann, another broadcast partner of Jackson, said in a statement Saturday. "Keith covered games I played in and we worked together at ABC Sports for decades. Every step of the way, he shared his knowledge and his friendship.
"Not just the voice, but the spirit of college football. My heart and prayers go out to his wife and children on this day and I thank them for allowing so many of us to have shared in Keith's life."
Keith Jackson (center) teamed with Howard Cosell (left) and Don Meredith for the first "Monday Night Football" telecast on ABC on Sept. 21, 1970. ABC Photo Archives/ABC/Getty Images
His final game ended up being the 2006 Rose Bowl, the thrilling national-title showdown between USC and Texas that saw Vince Young and the Longhorns prevail over the Trojans and their two Heisman Trophy winners, Matt Leinart and Reggie Bush, with 19 seconds remaining.
Other memorable college football moments with Jackson on the play-by-play call included the 2003 Fiesta Bowl (Ohio State vs. Miami), Kordell Stewart's Hail Mary in the 1994 "Miracle at Michigan," Desmond Howard's "Hello Heisman" moment in 1991 for Michigan, and "Wide Right I" and "Wide Right II" in the Florida State-Miami rivalry.
He was inducted into the American Sportscasters Hall of Fame in 1994, and he received the Amos Alonzo Stagg Award from the American Football Coaches Association.
Classic calls by Keith Jackson

ESPN Classic and ESPNU will feature these college football games called by Keith Jackson, who died Friday.

ESPN Classic on Saturday
• 1994 Colorado vs. Michigan, 4 p.m.
• 1997 Ohio State vs. Michigan, 6 p.m.
• 2003 Ohio State vs. Miami, 8 p.m.
• 2006 Rose Bowl: Texas vs. USC, 11 p.m.

ESPNU on Sunday
• 2003 Ohio State vs. Miami, 6 a.m.
• 2006 Rose Bowl: Texas vs. USC, 9 a.m.

All times ET
Jackson was born on Oct. 18, 1928, in Roopville, Georgia -- near the Alabama state line. He spent four years in the Marine Corps before attending Washington State and graduating with a broadcast journalism degree. He worked at the ABC affiliate in Seattle, KOMO, for 10 years, including conducting the first live sports broadcast from the Soviet Union to the United States in 1958 with his radio call of a University of Washington rowing victory.
He became sports director of ABC Radio West in 1964 and was a freelancer for ABC Sports until becoming part of its college football announcing crew.
The National Sportswriters and Sportscasters Association named him the National Sportscaster of the Year five times, among other honors.
The Associated Press contributed to this report.

The National Sportscasters and Sportswriters Association, now known as the National Sports Media Association, named Mr. Jackson sportscaster of the year five consecutive times, from 1972 to 1976.
He told The New York Times how the broadcaster Ted Husing had inspired his breezy style, advising him: “Never be afraid to turn a phrase. If you can say something in such a way that’s explanatory, has flavor and people can understand it, try it. If it means quoting Shakespeare or Goethe, do it.’’
He was more partial to the lingo of his native rural South.
Mr. Jackson’s “Whoa, Nellie!” punctuating an exciting play was his best-remembered good ol’ boy touch, though he maintained that he didn’t use it all that often.
He said he had a mule named Pearl while growing up on a Georgia farm but attributed the expression to his great-grandfather Jefferson Davis Robison, who evidently plowed many a field holding the reins of a mule.
“He was a farmer and he was a whistler,” Mr. Jackson told The Los Angeles Times in 2013. “He loved two phrases: ‘Dad gummit’ and the other was ‘Whoa Nellie.’”
Mr. Jackson informally christened the University of Michigan’s cavernous stadium at Ann Arbor “the Big House”; he relished broadcasting the Rose Bowl game, “the granddaddy of ’em all”; and he admired the enormous linemen, who were “the Big Uglies in the trenches.” | – "For generations of fans, Keith Jackson was college football," ESPN quotes Disney CEO Bob Iger as saying. "When you heard his voice, you knew it was a big game." Jackson, who retired from a 54-year broadcasting career in 2006, died Friday night in Los Angeles at the age of 89, his family says. The Georgia-born Jackson was known for his "folksy" turns of phrase, most famously "Whoa, Nellie." But he also lastingly dubbed the Rose Bowl "The Granddaddy of Them All" and would frequently say things like, of a small player, "If he keeps eating his cornbread, he'll be man-sized some day," according to the Los Angeles Times. Jackson was so beloved ABC wouldn't let him retire in 1998, keeping him on another eight years. "If I've helped people enjoy the telecast, that's fine," Jackson said. "That's my purpose." Jackson, a former Marine, never lost his passion for college football over the decades. "It's still fun to see new generations enjoy the game," the New York Times quotes him as once saying. "I get there an hour and a half before the game and watch the bands rehearse, the people carry on. You let it seep into you." In addition to college football, Jackson called 10 Olympics and 11 World Series and was the first play-by-play man on Monday Night Football. He also delivered the first live sports broadcast from what was then the Soviet Union. "He never intruded on the game," longtime broadcasting partner Bob Griese says. "It was always about the kids on the field." Jackson agreed, once saying: "This is not my stage. The stage belongs to the athletes and coaches." He is survived by his wife, three children, and three grandchildren. |
one exciting aspect of dense stellar systems is the simultaneous importance of three principal areas of stellar astrophysics : dynamics , evolution , and hydrodynamics .
many simulation codes focus on one of these areas and have often been lifelong works in progress .
the first attempts at unifying these treatments into a coherent model to describe clusters have begun only recently . in june 2002 ,
specialists in stellar dynamics , stellar evolution , hydrodynamics , cluster observation , visualization , and computer science gathered at the american museum of natural history in new york city to begin discussing a framework for modelling dense stellar systems , without having to modify existing stellar codes extensively .
the workshop - style meeting , organized by p. hut and m. shara , became known as modest-1 @xcite .
the second such meeting , modest-2 , was organized by s. portegies zwart and p. hut and held in december 2002 at the anton pannekoek institute in amsterdam @xcite . from modest-2 , a set of eight `` working groups '' was established , each focusing on a different aspect of the modest endeavour .

attempting to integrate stellar dynamics , evolution , and hydrodynamics codes into one fully functional package
will be challenging , largely because each area treats stellar properties that evolve on different time - scales .
however , by combining these areas , we will be able to better model the origins , dynamics , evolution , and death of globular clusters , galactic nuclei , and other dense stellar systems . in this paper ,
our focus is on modelling hydrodynamic interactions between stars .
the goal is to develop a software module for quickly generating collision product models , ultimately for any type of stellar collision , that could be incorporated into simulations of dense star clusters .
@xcite presented an appropriate formulation for treating parabolic single - single star collisions between low - mass main - sequence stars . here
we extend that study to include situations in which one of the parent stars is itself a thermally _ un_relaxed collision product .
such scenarios can occur during binary - single or binary - binary interactions , when the time between collisions is much less than the thermal relaxation time - scale .
stellar collisions and mergers can strongly affect the overall energy budget of a cluster and even alter the timing of important dynamical phases such as core collapse .
furthermore , hydrodynamic interactions are believed to produce a number of non - canonical objects , including blue stragglers , low - mass x - ray binaries , recycled pulsars , double neutron star systems , cataclysmic variables , and contact binaries .
such stars and systems are among the most challenging to model , but they are also among the most interesting observational markers .
blue stragglers , for example , exist on an extension of the main - sequence , but beyond the turnoff point .
blue stragglers are therefore appropriately named , as they are more blue than the remaining ordinary main - sequence stars , and , compared to other stars of similar mass , are straggling behind in their evolution .
this aberration from the common path of stellar evolution is believed to be due to mass transfer or merger in a binary system , or from the direct collision of two or more main - sequence stars ( for a review , see @xcite ) .
predicting the numbers , distributions , and other observable characteristics of stellar exotica will be essential for detailed comparisons with observations .
stellar dynamics codes determine the motions of stars .
the primary approaches to evolving clusters or galactic nuclei dynamically are direct n - body integrations , solving the fokker - planck equation ( e.g. , @xcite ) , monte carlo approaches ( e.g. , @xcite ) , and gaseous models ( e.g. , @xcite ) .
for a review of the ongoing nbody effort for accurate n - body simulations , see @xcite ; for a general review of cluster dynamics , see @xcite .
the most important quantities that a stellar evolution software module can provide to a dynamics module are the stellar masses , as well as the stellar radii if collisions are included .
at least in principle , these results could come from a live ( i.e. , concurrent with the cluster dynamics ) stellar evolution calculation , from fitting formulae , or from interpolation among prior calculations .
due to the large number of stars , it would be wasteful to expend a considerable amount of time on a live computation of each star's evolution . for the ordinary stars whose evolution is not perturbed by an event such as a collision , it is much more efficient and entirely appropriate to interpolate among , or to use analytic fitting formulae based upon , previously calculated evolutionary tracks .

the parameter space associated with non - canonical stars , however , is too enormous to be adequately covered by interpolation or fitting formulae , and it will ultimately be necessary to invoke a full stellar evolution calculation in parallel with the stellar dynamics for such stars . although live stellar evolution calculations have not yet been combined with stellar dynamics codes , some parametrized codes , such as seba @xcite , sse @xcite and bse @xcite , have been successfully integrated .
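as an illustration of the interpolation approach, a minimal bilinear lookup of stellar radius on a precomputed (mass, age) grid might read as follows; the grid and its values here are purely hypothetical stand-ins for real evolutionary tracks:

```python
import numpy as np

def interp_radius(mass, age, mass_grid, age_grid, radius_table):
    """bilinear interpolation of stellar radius on a (mass, age) grid.

    mass_grid and age_grid are 1d, strictly increasing; radius_table has
    shape (len(mass_grid), len(age_grid)).  values are illustrative only.
    """
    i = int(np.clip(np.searchsorted(mass_grid, mass) - 1, 0, len(mass_grid) - 2))
    j = int(np.clip(np.searchsorted(age_grid, age) - 1, 0, len(age_grid) - 2))
    tm = (mass - mass_grid[i]) / (mass_grid[i + 1] - mass_grid[i])
    ta = (age - age_grid[j]) / (age_grid[j + 1] - age_grid[j])
    return ((1 - tm) * (1 - ta) * radius_table[i, j]
            + tm * (1 - ta) * radius_table[i + 1, j]
            + (1 - tm) * ta * radius_table[i, j + 1]
            + tm * ta * radius_table[i + 1, j + 1])
```

in a cluster code this lookup would replace a full evolution calculation for every unperturbed star, at the cost of one table search per query.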
when physical collisions between stars are modelled in a cluster calculation or a scattering experiment , it is usually done using a method known as `` sticky particles , '' in which a collision product is given a mass equal to the combined mass of the two parent stars and a velocity determined by momentum conservation . in a collision between two main - sequence stars , for example , the result would be modelled as a rejuvenated , thermally relaxed main - sequence star .
this simple method is a reasonable first approximation for many situations .
however , there are important characteristics of collision products that are neglected , including their rapid rotation , peculiar composition profiles , and enhanced sizes due to shock heating .
if the thermal relaxation time - scale of the collision product is much less than the time until its next collision , then it is appropriate to assume the product becomes instantaneously thermally relaxed , as is done in the classic simulations of @xcite .
this approximation becomes questionable when the collision has been mediated by a binary , as there is then at least one star in the immediate vicinity of the collision product and the likelihood of a subsequent collision will depend sensitively on the product s thermally _ un_relaxed size .
binaries are subject to enhanced collision rates for two primary reasons : ( 1 ) their collision cross section depends on the semi - major axis of the orbit , as opposed to the radius of a single star , and ( 2 ) due to mass segregation , binaries tend to be found in the core of the cluster , the densest and most active region . in clusters with a binary fraction exceeding about 20 per cent , binary - binary collisions are expected to occur more frequently than single - single and binary - single collisions combined @xcite .
it is probably not uncommon for binary fractions to be this large : the inner core of ngc 6752 , for example , is thought to have a binary fraction in the range 15 - 38 per cent @xcite .
binary populations can lead to complex and chaotic resonant interactions .
these interactions tend to exchange energy between the binaries and the other stars in the cluster , and therefore are critical in determining its dynamics and observable characteristics @xcite .
a star intruding on a binary could , depending on parameters such as the separation of the binary and the velocity of the incoming star , escape to infinity , destroy the binary , form a new binary with a star from the original , or form a triple .
the outcomes of three - body encounters can be categorized using a nomenclature based upon typical atomic processes ( see @xcite , who introduced the use of terms like ionization and exchange to describe the resultant scenarios ) .
see @xcite for an informative narration of the intimate interactions , usually involving binaries , that stars regularly undergo during cluster evolution .
the starlab computing environment is a very useful tool for modelling and analyzing all types of stellar phenomena .
the general technique for using starlab to determine the cross sections or branching ratios for the various outcomes of binary interactions is presented by @xcite .
they also highlight an example set of cases in which a @xmath1 star intrudes upon a binary with a @xmath2 primary and a @xmath3 secondary .
their assumed mass radius relation , @xmath4 , is appropriate for thermally relaxed main sequence stars and is applied both to the parent stars and any collision products .
they find that for binaries with semimajor axes of 0.2 , 0.1 , 0.05 , and 0.02 au , triple - star mergers comprise about 1 , 2 , 5 , and 15 per cent , respectively , of all merger events .
as the results of the present paper will help show , we expect that accounting for the enhanced , thermally unrelaxed size of the first collision product will greatly increase these percentages as well as the range of semimajor axes in which triple - star mergers are significant .
simulations of moderately dense galactic nuclei initially containing solar - mass main - sequence stars demonstrate that runaway mergers can readily produce stars with masses @xmath5 .
these massive stars then undergo further mergers to produce seed black holes with masses as large as @xmath6 @xcite . this process may be responsible for massive black holes at the centres of most galaxies , including our own . for star clusters , recent n - body simulations reveal that runaway mergers can lead to the creation of central black holes within a few million years ( e.g. , @xcite ) . with the help of monte carlo simulations , @xcite show that the runaway process will occur in a typical cluster with a relaxation timescale less than about 30 myr . observational evidence for a possible intermediate - mass black hole in m15 has been recently reported by @xcite , although the data is more reasonably modelled with a large concentration of stellar - mass compact objects @xcite .

mass loss and expansion due to shock heating when two stars collide are examples of hydrodynamical processes that can ultimately affect the future evolution of the cluster . mostly using the smoothed particle hydrodynamics ( sph , see [ sph ] ) method , numerous scenarios of stellar collisions and mergers have been simulated in recent years , including collisions between two main - sequence stars @xcite , collisions between a giant star and a compact object @xcite , and common envelope systems @xcite .
the first published sph calculations of three - body encounters were done by @xcite , who performed over 100 very low resolution simulations and implemented a mass - radius relation appropriate for white dwarfs . other three- and four - body interaction simulations include binary - binary encounters among @xmath7 polytropes @xcite , as well as neutron star main - sequence binary encounters with a neutron star , main - sequence star , or white dwarf intruder @xcite .
see @xcite for more information concerning the use of sph in stellar collisions , and see @xcite for a qualitative overview of the progress in stellar collision research .
if the structure and composition profiles of colliding stars were available ( perhaps from a live stellar evolution calculation ) during a cluster simulation , then the sticky particle method could be replaced by a more detailed hydrodynamics module .
sph calculations could then , at least in principle , be run on demand within this cluster simulation in order to determine the orbital trajectory of the product(s ) , as well as their structure and chemical composition distributions .
however , at least @xmath8 sph fluid particles may be necessary to allow an accurate treatment of the subsequent evolution of collision products @xcite .
the trouble , therefore , is that the integration of just a single interaction could consume hours , days or even weeks of computing time ( depending on the initial conditions , desired resolution , and available computational resources ) .
although the use of equal - mass particles , or the more accurate sph equations of motion derived by @xcite , or both , could decrease the total number of particles required , it is still currently impractical to implement a full hydrodynamics calculation for every close stellar encounter during a cluster simulation .
one approach for incorporating strong hydrodynamic interactions and mergers into a grand simulation of a cluster , already successfully implemented by @xcite in the context of galactic nuclei , is to interpolate between the results of a set of previously completed sph simulations .
the sph database of freitag & benz treats all types of hyperbolic collisions between main - sequence stars ( mergers , fly - bys and cases of complete destruction ) , while also varying the parent star masses as well as the eccentricity and periastron separation of their initial orbit .
the tremendous amount of parameter space surveyed precludes having high enough resolution to determine the detailed structure and composition profiles of the collision products for all cases ; however , critical quantities such as mass loss and final orbital elements can be determined accurately .
a second possibility is to forgo hydrodynamics simulations and instead model collision products by physically motivated algorithms and fitting formulae that sort the fluid from the parent stars @xcite .
one advantage of such an approach is that it can handle cases in which one or both of the parent stars is itself a former collision product ( with chemical and structural profiles that are substantially different than that of a standard isolated star of similar mass and type ) . in this paper , we use both sph calculations and a much faster fluid sorting algorithm to study scenarios in which a newly formed collision product collides with a third parent star . by varying the order and orbital parameters of the collision , we investigate how factors such as shock heating affect the chemical composition and structure profiles of the collision product .
section [ procedure ] presents our procedures and numerical methods , both for our sph calculations ( [ sph ] ) and our fluid sorting algorithm ( [ sorting ] ) .
sph results are presented in
[ sphresults ] , and then compared to the results of our fluid sorting algorithm in [ mmasresults ] . in
[ discussion ] we discuss our findings and possible directions for future work .
one means by which we generate collision product models is with the parallel sph code used in @xcite .
the original serial version of this code was developed by @xcite , specifically for the study of stellar interactions such as collisions and binary mergers ( see , e.g. , @xcite ) . introduced by @xcite and @xcite , sph is a hydrodynamics method that uses a smoothing kernel to calculate local weighted averages of thermodynamic quantities directly from lagrangian fluid particle positions ( for a review , see @xcite ) . each sph particle can be thought of as a parcel of gas that traces the flow of the fluid , with the kernel providing each particle's spatial extent and the means by which it interacts with neighbouring particles .
the sph code solves the equations of motion of a large number of particles moving under the influence of both hydrodynamic and self - gravitational forces .
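the kernel-weighted density estimate at the heart of sph can be sketched as follows; the cubic spline kernel with support of two smoothing lengths is a conventional choice on our part, as the paper does not write out its kernel explicitly:

```python
import numpy as np

def cubic_spline_w(r, h):
    """standard 3d cubic spline kernel with compact support 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)  # 3d normalization
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(pos, masses, h):
    """rho_i = sum_j m_j W(|r_i - r_j|, h): each particle's density is a
    kernel-weighted sum over itself and its neighbours."""
    n = len(masses)
    rho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            r = np.linalg.norm(pos[i] - pos[j])
            rho[i] += masses[j] * cubic_spline_w(r, h)
    return rho
```

a production code would of course restrict the inner sum to the neighbour list rather than loop over all pairs.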
all of the scenarios we investigate using sph involve a @xmath2 parent star , represented with 12800 equal - mass sph particles , and two @xmath1 parent stars , each represented with 9600 equal - mass particles .
each of our sph particles therefore has a mass of @xmath9 . for comparison ,
this particle mass is between the masses of the central particles used in the @xmath10 and @xmath11 calculations of @xcite , who used unequal - mass particles to study in detail the outer layers of the fluid in a collision between two @xmath12 main - sequence stars . for our purposes , the use of equal - mass particles is more appropriate , as it allows for higher resolution in the stellar cores and does not waste computational resources on the ejecta .
local densities and hydrodynamic forces at each particle position are calculated by appropriate summations over @xmath13 nearest neighbours .
the size of each particle s smoothing kernel determines the local numerical resolution and is adjusted during each time step to keep @xmath13 close to a predetermined value , 48 for the present calculations .
neighbour lists for each particle are recomputed at every iteration using a linked - list , grid - based parallel algorithm @xcite .
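a toy version of such a grid-based neighbour search bins particles into cubic cells of one support radius on a side and tests each particle only against the 27 surrounding cells; the cutoff of two smoothing lengths is our assumption for illustration:

```python
import numpy as np
from collections import defaultdict

def neighbour_lists(pos, h):
    """grid-based neighbour search: bin particles into cells of side 2h
    (one assumed kernel support radius), then test each particle only
    against particles in its own cell and the 26 adjacent cells."""
    support = 2.0 * h
    cells = defaultdict(list)
    for idx, p in enumerate(pos):
        cells[tuple(np.floor(p / support).astype(int))].append(idx)
    neigh = [[] for _ in range(len(pos))]
    for idx, p in enumerate(pos):
        cx, cy, cz = np.floor(p / support).astype(int)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        if j != idx and np.dot(p - pos[j], p - pos[j]) < support**2:
                            neigh[idx].append(j)
    return [sorted(n) for n in neigh]
```

this reduces the cost from o(n^2) pair tests to roughly o(n) for near-uniform particle distributions.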
the hydrodynamic forces acting on each particle include an artificial viscosity contribution that accounts for shocks . as in @xcite , we adopt the artificial viscosity form proposed by @xcite , with @xmath14 , @xmath15 , and @xmath16 .
this form treats shocks well and has the tremendous advantage that it introduces only relatively small amounts of spurious shock heating and numerical viscosity in shear layers @xcite .
a number of physical quantities are associated with each sph particle , including its mass , position , velocity , and entropic variable @xmath17 .
here we adopt a monatomic ideal gas equation of state , appropriate for the stars in our mass range .
that is , @xmath18 , where the adiabatic index @xmath19 with @xmath20 and @xmath21 being the pressure and density , respectively .
the entropic variable is closely related ( but not equal ) to specific entropy : both of these quantities are conserved in the absence , and increase in the presence , of shocks .
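the behaviour of the entropic variable can be checked numerically; in the sketch below the adiabatic index is 5/3 for a monatomic ideal gas, and the shock jump comes from the textbook rankine-hugoniot relations rather than from the paper:

```python
GAMMA = 5.0 / 3.0  # adiabatic index of a monatomic ideal gas

def entropic_variable(p, rho, gamma=GAMMA):
    """entropic variable A = p / rho**gamma: constant along adiabatic
    flow lines, and increased only where fluid passes through shocks."""
    return p / rho**gamma

def shock_jump(mach_sq, gamma=GAMMA):
    """rankine-hugoniot density and pressure ratios across a shock of
    squared mach number mach_sq (standard textbook relations)."""
    rho_ratio = (gamma + 1.0) * mach_sq / ((gamma - 1.0) * mach_sq + 2.0)
    p_ratio = (2.0 * gamma * mach_sq - (gamma - 1.0)) / (gamma + 1.0)
    return rho_ratio, p_ratio
```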
our code uses an fft - based convolution method to calculate self - gravity .
the fluid density is placed on a zero - padded , 3d grid by a cloud - in - cell method , and then convolved with a kernel function to obtain the gravitational potential at each point on the grid .
gravitational forces are calculated from the potential by finite differencing , and then interpolated for each particle using the same cloud - in - cell assignment scheme . for each collision simulation in this paper ,
the number of grid cells is @xmath22 .
the ejecta leaving the grid interact with the enclosed mass simply as if it were a monopole .
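the essence of this fft-based solver can be sketched as follows; the grid size, units, and the softening of the r = 0 self-cell are illustrative choices of ours, and the cloud-in-cell assignment step is omitted:

```python
import numpy as np

def potential_on_grid(rho, dx, G=1.0):
    """gravitational potential of the mass density rho (an n^3 grid of
    cell size dx) by fft convolution with a -G/r kernel on a zero-padded
    2n grid; the padding suppresses spurious periodic images.  the r = 0
    self-cell is softened to half a cell width."""
    n = rho.shape[0]
    big = 2 * n
    src = np.zeros((big,) * 3)
    src[:n, :n, :n] = rho
    # periodic cell offsets, so the kernel is symmetric about the origin
    d = np.minimum(np.arange(big), big - np.arange(big))
    x, y, z = np.meshgrid(d, d, d, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2) * dx
    safe = np.where(r > 0.0, r, 1.0)
    kern = np.where(r > 0.0, -G / safe, -G / (0.5 * dx))
    phi = np.fft.irfftn(np.fft.rfftn(src) * np.fft.rfftn(kern), s=(big,) * 3)
    return phi[:n, :n, :n] * dx**3  # cell mass = density * cell volume
```

forces would then follow by finite differencing of the returned potential, as described above.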
following the same approach as in @xcite , we begin by using a stellar model from the yale rotational evolution code ( yrec ) to help generate sph models of the parent stars .
we focus on collisions involving 0.8 and @xmath1 main - sequence stars , with a primordial helium abundance @xmath23 and metallicity @xmath24 . using yrec ,
these stars were evolved with no rotation to an age of 15 gyr , the amount of time needed for the @xmath0 star to reach turnoff .
the total helium mass fractions for the @xmath25 and @xmath2 parent stars are 0.286 and 0.395 , and their radii are @xmath26 and @xmath27 , respectively .
see figs . 1 and 2 of @xcite for thermodynamic and composition profiles of the parent stars presented as a function of enclosed mass , as determined by yrec .
to generate our sph models , we use a monte carlo approach to distribute particles according to the desired density distribution , determining values of @xmath17 for each sph particle from its position . to minimize numerical noise , an artificial drag force
is implemented , with artificial viscosity turned off , to relax each sph parent model to the equilibrium configuration used to initiate the collision calculations .
fourteen different chemical abundance profiles are available from the yrec parent models to set the composition of the sph particles .
the abundances of an sph particle are assigned according to the amount of mass enclosed by an isodensity surface passing through that particle in the relaxed configuration .
fig . [ refpar2 ] plots fractional chemical abundances ( by mass ) versus @xmath28 in each parent star in its relaxed sph configuration . note that the dense core of the turnoff star is at a smaller @xmath17 , and its diffuse outer layers are at a larger @xmath17 , than all of the fluid in the @xmath1 star , which has direct consequences for the hydrodynamics of collisions involving these stars . also note that lithium and beryllium exist only in the outermost layers of the parent stars .

[ figure [ refpar2 ] caption : fractional abundances , including c@xmath29 , c@xmath30 , n@xmath31 , o@xmath32 , li@xmath33 , li@xmath34 , and be@xmath35 , versus @xmath28 , where the entropic variable @xmath36 is in cgs units , for our @xmath1 ( solid curve ) and @xmath2 ( dotted curve ) parent stars , as determined by yrec . ]
we focus on triple - star collisions , modelling each collision separately and in succession .
we do not consider fly - bys or grazing collisions in our sph calculations : all of our collisions lead to mergers .
we neglect any direct or indirect effects , including tidal forces , that the third star may have on the dynamics of the first collision .
we assume that the second collision occurs before the first collision product thermally relaxes : a reasonable approximation , since contraction to the main - sequence occurs on a thermal time - scale , lasting at least @xmath37 yr for non - rotating products and as long as @xmath38 yr for rapidly rotating products @xcite , much longer than the typical time between collisions in some binary - single or binary - binary interactions ( but see [ future ] ) .

the orbital trajectory in all our collisions is taken to be parabolic .
this is clearly not appropriate for galactic nuclei , where collisions are typically hyperbolic .
however , in globular clusters , the velocity dispersion is only @xmath39 km s@xmath40 , much less than the 600 km s@xmath40 escape speed from the surface of our @xmath2 turnoff star , and hence all single - single star collisions are essentially parabolic . for collisions involving binaries ( even including some hard binaries ) , the escape speed can still be large compared to the effective relative velocity at infinity .
for example , consider two @xmath2 turnoff stars in a circular orbit of radius 0.05 au in a globular cluster .
this is a hard binary , as each star moves with a velocity of about 60 km s@xmath40 with respect to the center of mass of the binary , a speed significantly larger than the cluster velocity dispersion .
yet , the effective relative velocity at infinity for a collision between one of the binary components and an intruder would typically not be much more than the orbital speed , and therefore still significantly less than the escape speed from our turnoff star .
we therefore expect collisions between a slow intruder and a binary component to be close to parabolic not only for all soft binaries , but also for some ( moderately ) hard binaries .

for the first single - single star collision , the stars are initially non - rotating and separated by 5 @xmath41 , where @xmath42 is the radius of our turnoff star .
the initial velocities are calculated by approximating the stars as point masses on an orbit with zero orbital energy and a periastron separation @xmath43 .
a cartesian coordinate system is chosen such that these hypothetical point masses of mass @xmath44 and @xmath45 would reach periastron at positions @xmath46 , @xmath47 , where @xmath48 and @xmath49 refers to the more massive star .
the orbital plane is chosen to be @xmath50 . with these choices , the centre of mass resides at the origin .
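in the point-mass approximation, such parabolic initial conditions can be generated as in the following sketch; for simplicity the relative separation is placed along the x-axis here, rather than in the rotated periastron configuration described above:

```python
import numpy as np

def parabolic_initial_conditions(m1, m2, r_p, d0, G=1.0):
    """point-mass positions and velocities for a parabolic (zero orbital
    energy) encounter with periastron r_p, starting at separation d0,
    with the centre of mass at rest at the origin."""
    M = m1 + m2
    v = np.sqrt(2.0 * G * M / d0)          # vis-viva speed with E = 0
    ell = np.sqrt(2.0 * G * M * r_p)       # specific angular momentum
    v_t = ell / d0                          # tangential component
    v_r = -np.sqrt(max(v * v - v_t * v_t, 0.0))  # radial, approaching
    rel_pos = np.array([d0, 0.0, 0.0])
    rel_vel = np.array([v_r, v_t, 0.0])     # orbit confined to z = 0
    pos1, vel1 = -(m2 / M) * rel_pos, -(m2 / M) * rel_vel
    pos2, vel2 = (m1 / M) * rel_pos, (m1 / M) * rel_vel
    return pos1, vel1, pos2, vel2
```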
for the first collisions , the gravity grid maintains a fixed spatial extent from -4@xmath41 to + 4@xmath41 along each dimension .

[ figure [ rotate ] caption : a rotation by an angle @xmath56 around the @xmath51-axis and then by an angle @xmath52 around the @xmath53-axis . the figure shows how a vector @xmath54 initially pointing along the @xmath53-axis is transformed into the new vector @xmath55 by these rotations . ]

for the second collision , we want to control the relative orientation of the first collision product's rotation axis and the orbital plane ( or , equivalently , the direction of approach of the third parent star ) . to do so , we begin with the final state of the first collision and make two rotations to its particle positions and velocities , through the angles @xmath56 and @xmath52 .
more specifically , the first rotation is clockwise through an angle @xmath56 about the @xmath51 axis , while the second rotation is clockwise through an angle @xmath52 about the @xmath53 axis ( see fig .
[ rotate ] ) .
finally , the particle positions and velocities are uniformly shifted parallel to the @xmath51-@xmath57 plane , and the third star is introduced such that the system s centre of mass will remain at the origin and the periastron positions ( in the two body point mass approximation ) will occur on the @xmath51 axis . in order to allow the bulk of the fluid to remain within the gravity grid
, the grid is extended up to a full width of @xmath58 in the @xmath51 and @xmath57 directions for some of the second collisions .
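a minimal implementation of the two rotations follows; we take @xmath51 to be the z-axis and @xmath53 the y-axis purely for concreteness, since the extracted symbols do not fix the axes:

```python
import numpy as np

def rotate_particles(pos, vel, beta, theta):
    """apply the two successive rotations to particle positions and
    velocities: clockwise by beta about the z-axis, then clockwise by
    theta about the y-axis (axis labels assumed for illustration)."""
    cb, sb = np.cos(beta), np.sin(beta)
    ct, st = np.cos(theta), np.sin(theta)
    rz = np.array([[cb, sb, 0.0], [-sb, cb, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[ct, 0.0, -st], [0.0, 1.0, 0.0], [st, 0.0, ct]])
    rot = ry @ rz  # z-rotation applied first, then y-rotation
    return pos @ rot.T, vel @ rot.T
```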
we use the same iterative procedure as @xcite to determine the bound and unbound mass .
sph structure and composition profiles presented in this paper result from averaging in 100 equally sized bins in the bound mass .
unfortunately , it is extremely difficult to use sph simulations to specify the equilibrium structure of the outermost few per cent of mass in any collision product .
some sph particles , although gravitationally bound , are ejected so far from the system s centre of mass that it would take many dynamical time - scales for them to rain back onto the central product and settle into equilibrium .
our requirement for stopping an sph calculation is that the entropic variable @xmath17 , when averaged over isodensity surfaces , increases outward over at least the inner 95 per cent of the bound mass in the first collision product , and at least 92 per cent in the second collision product .
many calculations are run longer in order to confirm that no rapid changes are still occurring in the structure and chemical composition distributions .
the results of parabolic collisions between low - mass main - sequence stars can be well explained by simple physical arguments . to a good approximation , the fluid from the parent stars sorts itself such that fluid with the lowest values of @xmath17 sinks to the core of the collision product while the larger @xmath17 fluid forms its outer layers .
therefore , the interior structure and the chemical composition profiles of the collision product can be predicted accurately using simple algorithms , instead of hydrodynamic simulations .
based on these ideas , @xcite have recently created a publicly available software package , dubbed make me a star ( mmas ) .
this package produces collision product models close to those of an sph code in considerably less time , while still accounting for shock heating , mass loss , and fluid mixing .
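the sorting step itself can be illustrated in a few lines; this is a toy version only, since mmas also models shock heating, mass loss, and mixing:

```python
import numpy as np

def sort_shells(m1, A1, m2, A2):
    """merge mass shells (arrays of shell masses and their entropic
    variable A) from two parent stars and stack them in order of
    increasing A, so the lowest-A fluid settles to the centre of the
    product.  returns the enclosed mass and the sorted A profile."""
    m = np.concatenate([m1, m2])
    A = np.concatenate([A1, A2])
    order = np.argsort(A)
    return np.cumsum(m[order]), A[order]
```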
sorting the shocked fluid according to its entropic variable @xmath17 gives the @xmath17 profile of the collision product as a function of the mass @xmath59 enclosed inside an isodensity surface . in the case of the non - rotating products formed in head - on ( @xmath60 ) collisions ,
knowledge of the @xmath61 profile is sufficient to determine the pressure @xmath62 , density @xmath63 , and radius @xmath64 profiles .
using the @xmath17 profile determined by sorting , mmas numerically integrates the equation of hydrostatic equilibrium with @xmath65 to determine the @xmath21 and @xmath20 profiles , which are related through @xmath66 .
the outer boundary condition is that @xmath67 when @xmath68 , where @xmath69 is the desired ( gravitationally bound ) mass of the collision product .
the virial theorem provides a check of the resulting profiles .
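the integration step described above can be sketched as follows. this is a minimal illustration, not the mmas code itself: it integrates the lagrangian equations of hydrostatic equilibrium, dP/dm = -Gm/(4 pi r^4) and dr/dm = 1/(4 pi r^2 rho), outward from the centre with a simple euler step, taking a constant entropic variable (an n = 3/2 polytrope) as a toy profile; all names and parameter values are illustrative assumptions.

```python
# a minimal sketch (not the mmas code itself) of integrating the
# lagrangian equations of hydrostatic equilibrium outward from the
# centre, given an entropic-variable profile A(m) with P = A * rho**(5/3).
# a simple euler step is used here purely for illustration; all
# numerical values below are arbitrary assumptions.
import math

G = 6.674e-8  # gravitational constant, cgs

def integrate_structure(A_of_m, M, P_c, n_steps=20000):
    """return lists (m, r, P) from the centre outward, stopping when
    either the desired bound mass M is enclosed or P drops to zero."""
    dm = M / n_steps
    rho_c = (P_c / A_of_m(0.0)) ** 0.6        # invert P = A * rho**(5/3)
    # start one mass step off centre to avoid the r = 0 singularity
    m = dm
    r = (3.0 * m / (4.0 * math.pi * rho_c)) ** (1.0 / 3.0)
    P = P_c
    ms, rs, Ps = [m], [r], [P]
    while m < M and P > 0.0:
        rho = (P / A_of_m(m)) ** 0.6
        P += -G * m / (4.0 * math.pi * r ** 4) * dm   # dP/dm step
        r += dm / (4.0 * math.pi * r ** 2 * rho)      # dr/dm step
        m += dm
        ms.append(m); rs.append(r); Ps.append(P)
    return ms, rs, Ps

# toy run: constant entropic variable, solar-ish central pressure (cgs)
Msun = 1.989e33
ms, rs, Ps = integrate_structure(lambda m: 1.0e15, 0.8 * Msun, P_c=1.0e17)
```

in the real problem the outer boundary condition ( pressure vanishing once the desired bound mass is enclosed ) would be enforced by iterating on the central pressure, and the virial theorem provides an independent check of the converged profiles.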
this approach allows for the quick generation of collision product models , without hydrodynamic simulations , and has already been tested for single - single star collisions . in this paper we present results from mmas for triple - star collisions ( see [ mmasresults ] ) .
our procedure is simple .
we call the mmas routine twice , using the output model from the first collision as one of the input parent models in the second . these mmas calculations therefore account for the differences in shock heating that arise from changing the order , or the periastron separations , or both , of the collisions . in addition to investigating all of the scenarios considered with the sph code
, we also use mmas to examine more completely how the sizes of products vary with the periastron separations of the collisions .
furthermore , we include a @xmath3 parent star of radius @xmath70 , whose structure is determined by yrec under the same conditions described in [ sph ] . for an off - axis collision , knowledge of the specific angular momentum distribution in the collision product
is necessary to determine its structure fully , which by itself is a challenging problem .
although mmas outputs an approximate specific angular momentum profile of the first collision product , we use only its entropic variable @xmath61 profile to help initiate the second collision .
that is , for one of the parent stars in the second collision , we always give as input to mmas the structure of a non - rotating star with the desired @xmath61 profile , a simplification that both eases and quickens computations .
the validity of this approximation is supported by the sph calculations presented in [ spin ] .
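the two-call procedure can be sketched schematically as follows. this toy model is an assumption-laden stand-in for mmas, which computes shock heating and the mass loss distribution in detail: each star is represented as shells of (entropic variable, mass), a collision merges two stars by sorting on the entropic variable, a fixed 5 per cent of the mass is ejected from the outermost (highest-entropy) layers, and the first product is then fed, as a non-rotating parent, into the second collision.

```python
# a toy sketch of chaining two collision calculations, feeding the first
# product into the second as a non-rotating parent. each star is a list
# of (A, dm) shells. the fixed 5 per cent mass loss from the highest-A
# layers is an illustrative assumption; mmas computes shock heating and
# the mass loss distribution in detail.

def collide(star1, star2, mass_loss_frac=0.05):
    """merge two shell lists by sorting on A (low A sinks to the core),
    then eject mass_loss_frac of the total from the outermost layers."""
    merged = sorted(star1 + star2)
    total = sum(dm for _, dm in merged)
    keep_mass = (1.0 - mass_loss_frac) * total
    product, m = [], 0.0
    for A, dm in merged:
        dm = min(dm, keep_mass - m)   # trim the partially ejected shell
        if dm <= 0.0:
            break
        product.append((A, dm))
        m += dm
    return product

# two 0.6 Msun parents collide first; a 0.8 Msun parent arrives second
star1 = [(float(i), 0.06) for i in range(10)]          # 0.6 Msun
star2 = [(float(i) + 0.5, 0.06) for i in range(10)]    # 0.6 Msun
star3 = [(0.8 * i, 0.08) for i in range(10)]           # 0.8 Msun

first_product = collide(star1, star2)
final_product = collide(first_product, star3)
m_final = sum(dm for _, dm in final_product)
# with no mass loss the product would be 2.0 Msun; here it is
# 0.95 * (0.95 * 1.2 + 0.8) = 1.843 Msun
```

because the sorting step in the second call sees only the entropic-variable profile of the first product, this sketch also mirrors the simplification of discarding the first product's rotation.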
@xcite implemented version 1.2 of mmas , while the results of this paper use version 1.6 . besides cosmetic changes , the primary enhancement is that the structure of the collision product is integrated with a fehlberg fourth - fifth order runge - kutta method .
in addition , we have fine - tuned the fitted parameter @xmath71 from its previous value of -1.1 to the new value -1.0 , which has the effect of distributing shocks slightly more uniformly throughout the fluid .
table [ singlesingle ] summarizes five single - single star collisions . in each of the cases listed in table [ second ] , a collision product from table [ singlesingle ] ( referred to as the first collision product ) is collided with a third parent star .
the table shows : the case number ; the name of the single - single collision that yielded the first collision product ; the mass @xmath72 of the third ( @xmath73 ) parent star ; the periastron separation @xmath74 of the second collision ; the rotation angles @xmath56 and @xmath52 ( see fig . [ rotate ] ) ; the time @xmath75 when the calculation was terminated ; the ratio of kinetic to gravitational binding energy @xmath76 in the centre - of - mass frame of the final sph collision product model ; the average radius of the isodensity surface enclosing 90 per cent of the bound mass , as calculated by sph , @xmath77 , and by mmas , @xmath78 ; and the mass of the product as calculated both by sph , @xmath79 , and by mmas , @xmath69 .
all twenty cases presented in table [ second ] involve two @xmath80 stars and a single @xmath2 star .
if mass loss were neglected completely , the mass of the final collision product would therefore simply be @xmath81 .
the sph calculated masses range from about 1.76 to @xmath82 , with the largest mass loss occurring for cases with successive head - on collisions . the cases in table [ second ] group naturally together in a variety of ways . cases 1 and 2
each involve two head - on collisions .
cases 5 and 6 differ only in the orientation of the first collision product s spin axis , and an identical statement can be made for cases 7 through 10 , as well as for cases 14 through 19 .
cases 2 , 11 , and 7 differ only in the periastron separation of the first collision , as do cases 12 , 13 and 14 . also
, many of the cases differ only in the periastron separation for the second collision : for example , cases 7 , 5 , and 14 , as well as cases 10 , 6 , 17 , and 20 . even without running an sph or mmas calculation
, one can generate a `` zeroth order '' collision product model , valid for all twenty cases , simply by sorting the fluid of the three parent stars by their @xmath17 values , with @xmath17 increasing from the core to the surface . in those regions in which more than one parent star contributes ,
chemical abundances can be determined by an appropriate weighted average : the fraction of fluid with entropic variable in some range @xmath83 that originated from any one parent star is just equal to the fluid mass in that same range from that star divided by the total fluid mass in that range from all three stars .
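the sorting-plus-weighted-average prescription just described can be sketched as follows. the parent "profiles" here are toy shell lists of (entropic variable, shell mass, abundance) invented for illustration, not real stellar models.

```python
# a minimal sketch of the "zeroth order" model: pool the fluid shells of
# all parents, sort by the entropic variable A (low A sinks to the core),
# and form the mass-weighted mean abundance in successive A bins. the
# toy parent profiles below are illustrative, not real stellar models.

def zeroth_order_merge(parents, shells_per_bin=2):
    """parents: lists of (A, dm, abundance) shells. returns a list of
    (enclosed mass fraction, mean abundance) from core to surface."""
    shells = sorted(s for star in parents for s in star)
    total = sum(dm for _, dm, _ in shells)
    profile, m_enc = [], 0.0
    for i in range(0, len(shells), shells_per_bin):
        chunk = shells[i:i + shells_per_bin]
        dm_bin = sum(dm for _, dm, _ in chunk)
        x_mean = sum(dm * x for _, dm, x in chunk) / dm_bin  # weighted mean
        m_enc += dm_bin
        profile.append((m_enc / total, x_mean))
    return profile

# toy parents: a low-entropy star poor in some element x, and a
# higher-entropy star rich in it
star_low = [(float(a), 1.0, 0.1) for a in range(10)]          # A = 0..9
star_high = [(float(a) + 10.0, 1.0, 0.9) for a in range(10)]  # A = 10..19
profile = zeroth_order_merge([star_low, star_high])
# the core bins carry the low-A star's abundance, the surface bins
# the high-A star's
```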
fig . [ zeroth_order ] shows the composition profiles resulting from this exercise for the merger of two @xmath1 stars and a @xmath87 star , generated by sorting the fluid according to the @xmath17 values , with shock heating , mass loss , and fluid mixing all completely neglected . here @xmath59 is the mass enclosed within a surface of constant density and @xmath85 is the total mass of the collision product , @xmath86 in this model . the innermost 6 per cent and outermost 9 per cent of this collision product model consist of fluid that originated entirely in the @xmath2 parent star , and the profiles there consequently mimic those of the innermost and outermost regions , respectively , of that parent .
the profiles in the intermediate region , where all three parent profiles contribute , can be understood by looking back at fig .
[ refpar2 ] and remembering that the smaller @xmath17 fluid is placed deeper in the product .
for example , the c@xmath29 near @xmath88 originated in the @xmath1 star , while the higher @xmath17 , c@xmath29-rich fluid peaked near @xmath89 originated mostly in the @xmath2 star .
although some of the composition profiles turn out to agree reasonably with our more precise sph and mmas calculations , the structure of the star produced by this simple merging procedure does not .
in particular , because shock heating is neglected , energy is not conserved during the merger , and the resulting radius for the product is a considerable underestimate . the primary usefulness of the model presented in fig .
[ zeroth_order ] is that it serves as a reference to help us evaluate in what ways the shock heating , mass loss , and fluid mixing treated by our sph and mmas calculations affect the composition and structure profiles of the collision product .
cases 1 and 2 each involve two consecutive head - on collisions between the same three parent stars .
the only variation between these two cases is the order in which the collisions occur : in case 1 , the @xmath2 parent is involved in the first collision , while in case 2 it is involved in the second .
fig . [ chem12 ] shows the resulting chemical composition profiles of the collision products . because the innermost few per cent of the final collision products consist of low-@xmath17 fluid that originated in the centre of the @xmath0 parent ( see fig .
[ frac12 ] ) , the composition profiles are nearly identical there . more generally , the resulting profile of each chemical species is at least qualitatively similar throughout the products .
the differences in the composition profiles of fig .
[ chem12 ] are arguably most pronounced for c@xmath29 . in the parent stars
, this element exists in appreciable amounts only in a relatively thin shell , and , as shown in fig .
[ refpar2 ] , this shell is at a higher value of @xmath17 in the @xmath2 star than in the @xmath1 star .
exactly where the c@xmath29-rich fluid is ultimately deposited depends on the details of the shock heating , and hence the order in which the stars collide .
in particular , the final c@xmath29 profile in the case 1 product has two distinct peaks at enclosed mass fractions of @xmath90 0.1 and 0.5 ( as in our zeroth order model , see fig [ zeroth_order ] ) , whereas the case 2 profile has a single extended peak centred near @xmath91 = 0.2 .
fig . [ f12c13 ] displays , as a function of enclosed mass fraction @xmath91 within the product , how each parent contributes to the overall c@xmath29 profile .
the inner peak in the case 1 profile is due mostly to c@xmath29 that originated in the @xmath12 parent stars , while the outer peak is due mostly to the higher-@xmath17 , c@xmath29-rich fluid from the @xmath0 parent . in case 2 ,
the @xmath0 star is involved in only the second collision .
it therefore experiences less shock heating than in case 1 and more of its fluid is able to penetrate to the core of the final collision product .
consequently , much of the c@xmath29 from the @xmath1 stars is displaced out to larger enclosed mass fractions , while the c@xmath29 from the @xmath2 star is shifted inward .
the net result is the single extended peak that includes c@xmath29 from all three parent stars . fig . [ frac12 ] presents the fractional contributions to the mass of the collision product from each parent star as a function of enclosed mass fraction @xmath91 within the product for case 1 ( top ) and case 2 ( bottom ) , as determined by sph calculations . each of these cases involves head - on ( @xmath92 ) collisions among one @xmath84 and two @xmath93 stars ; however in case 1 the @xmath2 star is part of the initial collision , whereas in the case 2 scenario it is part of the second collision . different line types are used for each parent star : @xmath49 ( solid curve ) , @xmath94 ( dotted curve ) , and @xmath73 ( dashed curve ) . parents with an index @xmath49 or 2 are involved in the first collision , while @xmath73 refers to the third parent star from the second collision . the contribution profile from the @xmath2 parent is labelled in each case . fig . [ f12c13 ] plots the c@xmath29 abundance versus the enclosed mass fraction @xmath91 in the final collision products of case 1 ( dashed curve ) and case 2 ( dotted curve ) , as determined by sph calculations . the top pane shows the total c@xmath29 abundance . the second pane shows the contribution from the @xmath2 parent , and the bottom two panes show the contribution from the @xmath1 parents , so that the curves in the bottom three panes add up to give the overall profile shown in the top pane . the detailed differences between the composition profiles of the other elements in fig .
[ chem12 ] can be understood by considering the @xmath17 and composition profiles of the parent stars , along with the distribution and amount of shock heating .
for example , the core of the @xmath87 parent star is rich in he@xmath95 and n@xmath31 , but depleted of c@xmath30 ( see fig .
[ refpar2 ] ) . the lower shock heating to the @xmath2 star in case 2 allows more of its core to sink to the centre of the collision product .
consequently , the he@xmath95 and n@xmath31 levels are enhanced as compared to case 1 , while the c@xmath30 levels are diminished , for final enclosed mass fractions @xmath91 in the range from 0.05 to 0.2 .
the mass that is ejected during the collisions comes preferentially from the outer layers of the parent stars , exactly where elements such as li@xmath33 , li@xmath34 , and be@xmath35 exist .
the surface abundances ( by mass ) in the final case 1 collision product for these three elements are approximately @xmath96 , @xmath97 , and @xmath98 , which is , respectively , about 30 , 6 , and 3 times less than at the surface of the @xmath0 parent star . in case 2 ,
the surface layers are comparably depleted in these elements : the corresponding abundances are @xmath98 , @xmath99 , and @xmath98 .
the bottom three panes in fig .
[ chem12 ] show that the distribution of li@xmath33 , li@xmath34 , and be@xmath35 does differ somewhat between cases 1 and 2 .
because the @xmath2 star suffers less shock heating in case 2 than in case 1 , it loses less of its mass as ejecta and , consequently , can contribute more li@xmath33 , li@xmath34 , and be@xmath35 to the outermost layers of the final collision product .
furthermore , when the fluid containing li@xmath34 and be@xmath35 from the @xmath1 stars is involved in both collisions ( case 2 ) , it is shocked more and ultimately either ejected or deposited in the outer @xmath10010 per cent of the product .
however in case 1 , the one @xmath1 star that is involved in only a single collision can deposit its li@xmath34 and be@xmath35 of comparatively low-@xmath17 deeper in the product , resulting in flattened profiles extending further into the interior .
fig . [ frac12 ] shows , as a function of @xmath91 , the fractional contribution to the final product s mass from each of the three parent stars for cases 1 and 2 , as determined by sph calculations . in each case , the innermost few per cent of the final collision product consists of low-@xmath17 fluid that originated in the centre of the @xmath0 parent .
because the first collision in cases 1 and 2 is head - on ( @xmath60 ) , fluid from the first two parent stars is _ not _ distributed axisymmetrically in the first collision product ( the composition distribution is therefore not axisymmetric , even though the structure of the first product is ) . in case 2 , the @xmath2 star strikes the first collision product on the side with fluid from the first ( @xmath49 ) @xmath1 parent .
fluid from the first @xmath1 parent is therefore heated more than fluid from the second , and the former is buoyed out to larger enclosed mass fractions in the final product . in off - axis collisions , rotation induces shear mixing , so that if two identical stars are involved in the first collision , they contribute essentially equally within the final product : @xmath101 .
the profiles of fig .
[ chem12 ] and fig .
[ f12c13 ] demonstrate that the order in which the stars collide can influence shock heating enough to affect , at least slightly , the chemical composition distribution within the final collision product . while the difference in resulting chemical composition profiles is small , fig .
[ slog12 ] shows that the difference in the structure of the collision product would be completely negligible for most purposes .
although changing the order of these head - on collisions affects how the shock heating is distributed ( and hence where any particular fluid element settles ) , it does not greatly affect the overall amount of heating that occurs nor the amount of mass that is ejected .
at least for low mach number collisions ( as with parabolic collisions ) that are nearly head - on ( so that the merger occurs quickly ) , shock heating can be thought of as a mild perturbation ; consequently , the final @xmath17 profile , and hence the structure of the final product , is not sensitive to the collision order in such cases .
in fig . [ slog12 ] , the enclosed mass fraction , the natural logarithm of the average entropic variable @xmath17 , and the base 10 logarithm of the average density @xmath21 are all plotted as a function of the average distance @xmath102 from the centre of the collision product to an isodensity surface ; units are cgs . we now investigate how the direction of approach of the third star toward the first collision product affects the final collision product .
one might wonder , for example , whether an impact in the first collision product s equatorial plane ( @xmath103 or 180@xmath104 ) would yield a qualitatively different result than if the impact had instead occurred on the rotation axis .
cases 5 and 6 , cases 7 to 10 , and cases 14 to 19 can all be used to examine such effects , as the cases within each set differ only in the angles @xmath56 and @xmath52 , by which the first product is rotated ( see fig . [ rotate ] ) .
we find that while the spin of the final product is of course sensitive to such variations ( e.g. , see the @xmath76 column of table [ second ] ) , the composition profiles are nearly unaffected . fig .
[ chem14151719 ] shows the chemical abundance profiles of the collision product resulting in four cases in which the angle of approach of the third star is varied ( cases 14 , 15 , 17 , and 19 ) , with a different orientation of the first collision product s rotation axis in each case .
each of these cases involves an off - axis collision between a case k collision product and a @xmath2 star . in case 14 , the first collision product s spin vector is parallel to the orbital angular momentum of the second collision . in the other cases , the case k collision product
is tilted various ways according to the values of @xmath56 and @xmath52 listed in table [ second ] . in case 19 , for example , the case k collision product is flipped over 180@xmath104 so that it rotates in an opposite direction to that of the case 14 rotation . in case 14
, the fluid of the first product is rotating with the third star s motion as it impacts ( @xmath105 ) . consequently , the merger process is relatively gentle . for larger @xmath56 ,
the relative impact velocity is larger and the merger is somewhat more violent .
cases 14 , 17 , 15 , and 19 have @xmath56 values of 0 , 45 , 90 , and 180@xmath104 , respectively ; as @xmath56 increases , slightly less fluid from the @xmath0 star can sink down into the core of the final collision product , and the c@xmath29 profile rises at a slightly smaller enclosed mass fraction @xmath91 ( see fig . [ chem14151719 ] ) . nevertheless , as shown in fig .
[ f14151719 ] , the contribution of each parent star to the product varies very little from case to case . the figure plots the contribution from the third parent star as a function of enclosed mass fraction @xmath91 within the collision product of cases 14 ( solid curve ) , 15 ( dotted curve ) , 17 ( long dashed curve ) , and 19 ( short dashed curve ) , as determined by sph calculations . in these scenarios , the first two parent stars are both @xmath1 ; the third parent star is @xmath2 and approaches from a different angle @xmath56 relative to the rotation of the first collision product in each case . the fractional contribution from each of the first two parent stars is essentially equal and can therefore be determined easily from the @xmath106 curve : @xmath107 .
consequently , the chemical profiles in the collision products also vary little as @xmath56 is changed . indeed , the he@xmath95 , c@xmath30 , n@xmath31 , and o@xmath32 profiles in fig .
[ chem14151719 ] all look remarkably similar to the corresponding profiles in fig .
[ zeroth_order ] for our simple , zeroth order model . however , the c@xmath29 profile has a single broad peak , for the same reasons as in case 2 .
furthermore , because of mass loss , the beryllium and lithium surface abundances are found to be much less than our zeroth order model ( which neglects mass loss ) would indicate .
the structure of the final collision product ( see fig . [ slog14151719 ] ) can be affected by the direction of approach for two primary reasons .
firstly , having larger relative velocity at impact leads to larger shock heating .
notice , for example , how the case 19 product has the largest @xmath17 values in fig . [ slog14151719 ] .
secondly , having less angular momentum in the system leads to a more compact product .
for example , the case 19 product has the largest enclosed mass fraction for almost any average radius @xmath102 , despite the additional shock heating undergone in this case .
furthermore , by comparing the final masses @xmath79 listed in table [ second ] for the products of cases 5 and 6 , of cases 7 through 10 , and of cases 14 through 19 , one can see that the amount of mass ejected is only very weakly dependent upon the direction of approach of the third parent star , varying by about @xmath108 or less within each of these sets of cases .
we now investigate the effects that the periastron separation of the first collision has on the final collision product .
cases 12 , 13 , and 14 all involve off - axis collisions with first collision products that are created from the same @xmath1 parent stars but with different periastron separations ( cases j , jk , and k , respectively ) , and hence different inherited angular momenta .
fig . [ frac121314 ] plots the fractional contribution of the third parent star within the merger product for three different periastron separations of the first collision : cases 12 ( solid curve ) , 13 ( dotted curve ) , and 14 ( dashed curve ) . in all three cases , the first two parent stars are @xmath1 , while the third parent is @xmath2 . the @xmath2 parent penetrates into the product the least in case 12 , because of the relatively small amount of shock heating suffered by the first product during the first collision in this case . in all cases , the low-@xmath17 core of the @xmath0 star is able to sink to the core of the final product . however , as the periastron separation of the first collision is increased , the two @xmath1 parent stars experience more shock heating , and the @xmath2 parent is able to have more fluid penetrate down to the depths near @xmath109 .
fig . [ chem121314 ] presents chemical composition profiles of the final collision product for these three cases , again with different periastron separations of the first collision : cases 12 ( solid curve ) , 13 ( dotted curve ) , and 14 ( dashed curve ) .
these profiles demonstrate that the angular momentum of the first collision product has only a small effect on the final collision product . as expected from fig .
[ frac121314 ] , the profiles of the three cases are essentially identical in the innermost 5 per cent of the bound mass , because only the core of the @xmath110 parent contributes there .
the abundance profiles of each chemical species are at least qualitatively , and usually quantitatively , similar throughout the products .
the variations that do exist can be understood in terms of the different shock heating during the first collisions . because the amount of shock heating increases with periastron separation , the case j , jk , and k products have increasingly larger values of @xmath17 at almost any enclosed mass fraction ( this trend is not immediately evident in fig . [ slog121314 ] only because @xmath28 is being plotted versus radius and not enclosed mass ) .
the fluid from the @xmath0 star is therefore able to penetrate the case j product the least , the case jk product a little more , and the case k product even more still ( see fig . [ frac121314 ] ) .
consequently the rise in c@xmath30 and c@xmath29 abundance is pushed out to increasingly larger enclosed mass fractions @xmath91 in fig .
[ chem121314 ] as one considers cases 12 , 13 , and 14 , in that order . in case 12 and arguably case 13 , the cases with the lesser amounts of shock heating , traces of two separate peaks are evident in the c@xmath29 profile . as in our zeroth order model
( see fig .
[ zeroth_order ] ) , the inner peak is due mostly to c@xmath29 from the @xmath12 stars while the outer peak is mostly due to c@xmath29 from the @xmath0 star . fig . [ slog121314 ] compares the structure of the final products for the three periastron separations of the first collision , cases 12 ( solid curve ) , 13 ( dotted curve ) , and 14 ( dashed curve ) ; the particular quantities plotted are as in fig . [ slog12 ] . fig .
[ slog121314 ] shows that the structure of the bulk of the fluid in the final collision product is not significantly affected by the periastron separation of the first collision , and hence the spin of the first collision product .
there is , nevertheless , a visible trend for the enclosed mass fraction at a given average radius to decrease for products with more spin .
for example , the isodensity surface with an average radius of @xmath111 encloses about 94 per cent of the case 12 product , about 92 per cent for the case 13 product , and only about 90 per cent of the case 14 product .
such a trend is expected , simply because of expansion due to rotational support .
cases 10 , 17 , and 20 can be used to investigate the effects that the periastron separation of the second collision has on the profiles of the final product .
cases 10 , 17 , and 20 involve collisions between a case k product and a @xmath0 star , with periastron separations for the second collision of @xmath112 , 0.505 , and @xmath113 , respectively , corresponding to a number of passages or interactions @xmath114 , 2 , and 3 , again respectively .
see the discussion of fig . 6 in @xcite for the details of how @xmath115 is determined .
fig . [ frac101720 ] reveals the way in which the @xmath2 parent contributes to the final product in each of these three cases , with different periastron separations of the second collision : cases 10 ( solid curve ) , 17 ( dotted curve ) , and 20 ( dashed curve ) . in all three cases , the first two parent stars are @xmath1 , while the third parent is @xmath2 , and the first collision product results from case k. as usual , the low-@xmath17 core of the @xmath0 star sinks to the core of the collision product . as the periastron separation of the second collision is increased , the resulting collision products tend toward larger mass - averaged values of @xmath17 .
fluid from the @xmath0 star therefore can penetrate the case 10 product the most , the case 17 product a little less , and the case 20 product even less still . fig . [ chem101720 ] shows that the composition profiles in these three cases are essentially identical in the innermost few per cent of the collision products .
indeed , the abundance profiles are again at least qualitatively , and usually quantitatively , similar throughout the entire product . as before , slight variations do result from having different distributions of shock heating .
in particular , the rise in c@xmath30 and c@xmath29 abundance is drawn in to smaller enclosed mass fractions @xmath91 in fig .
[ chem101720 ] as one examines cases 10 , 17 , and 20 , in that order .
fig . [ slog101720 ] reveals the differences in the final product structure for these three cases .
the top pane shows that the mass distribution of the final product is affected by the periastron separation of the second collision in a way that is simple to understand : increasing the second periastron separation increases both shock heating and rotation , and so a given radius encloses less mass . in both fig . [ chem101720 ] and fig . [ slog101720 ] , the three periastron separations of the second collision correspond to cases 10 ( solid curve ) , 17 ( dotted curve ) , and 20 ( dashed curve ) . in [ direction ] we found that the direction of approach of the third star only weakly affects the profiles and mass of the final product .
we therefore do not account for the angles @xmath56 and @xmath52 of the second collision when applying our fluid sorting package mmas . as a result , the product model that mmas generates is identical within each of the following sets : cases 5 and 6 , cases 7 through 10 , and cases 14 through 19 .
in all twenty cases presented in table [ second ] , the final product masses given by mmas agree with those from sph to within 1.5 per cent . fig . [ chem12mmas ] compares the chemical composition profiles of final collision products , as determined by both mmas and sph models , for two scenarios ( cases 1 and 2 ) in which each collision is head - on ( @xmath92 ) .
these cases involve the same three parent stars ; however the order in which the stars collide is varied .
the mmas abundance profiles maintain the same qualitative shape as those of the sph data for almost all of the elements .
one possibly important difference is that the mmas package slightly over - mixes the core in case 1 , and , consequently , the central helium abundance is not quite as high as in the sph calculation .
another noteworthy difference is that the li@xmath33 profile , especially in case 1 , is not well represented near the surface . because li@xmath33 exists in an even thinner shell at the surface of the @xmath0 star than do li@xmath34 and be@xmath35 , its abundance profile in the product is particularly sensitive to the mass loss distribution during the collisions .
note that mmas does correctly predict that most li@xmath33 is ejected during the collisions .
furthermore , the abundance profiles generated by mmas much more closely resemble the sph results than our zeroth order model does ( see fig .
[ zeroth_order ] ) , indicating that mmas is capturing the important effects of mass loss and shock heating . in scenarios such as cases 1 and 2 for which the final product is non - rotating ,
it is straightforward to obtain the enclosed mass @xmath59 and density @xmath21 profiles from the @xmath17 profile by integrating the equation of hydrostatic equilibrium ( see [ sorting ] ) .
fig . [ slog12mmas ] shows the resulting structure of the final collision products .
the kink in the @xmath17 profile a little inside @xmath116 marks the boundary within which fluid from only the @xmath0 star contributes , and mmas reproduces this feature quite well .
the central density of the sph model is slightly less than that of the mmas model , mostly due to how density is calculated as a smoothed average in sph . despite this difference ,
the overall structure of the collision product is extremely well reproduced by mmas .
fig . [ chem4mmas ] compares the chemical composition profiles for sph and mmas data for case 4 , a situation in which both collisions are off - axis .
the most noticeable discrepancies are that mmas again slightly over - mixes the core and underestimates the surface li@xmath33 abundance .
nevertheless , the chemical abundance profiles produced by the mmas package and the sph code are extremely similar . for example
, mmas correctly reproduces the c@xmath29 abundance , with three peaks each corresponding to a different parent star .
the inner peak is due to the low-@xmath17 fluid from the @xmath12 star involved in only the second collision , the middle peak represents fluid from the other @xmath12 star , and the outer peak represents high-@xmath17 fluid from the @xmath0 parent ( see fig .
[ compare4c13 ] ) . fig . [ compare4c13 ] plots the c@xmath29 abundance versus the enclosed mass fraction @xmath91 in the final case 4 collision product , as determined both by mmas ( solid curve ) and by sph ( dotted curve ) ; the top pane shows the total c@xmath29 abundance , while the bottom three panes show the contributions from each individual parent star : c@xmath117 is the contribution from the @xmath2 parent , c@xmath118 is the contribution from the @xmath1 parent in the first collision , and c@xmath119 is the contribution from the @xmath1 parent in the second collision . note that this three - peaked feature is reproducible by mmas only because it accounts for shock heating in each collision ( compare to our zeroth order model of fig .
[ zeroth_order ] , in which there are only two peaks in the c@xmath29 profile ) . in many of the mmas models , small kinks , or discontinuities ,
are evident in some of the abundance profiles : such features mark locations outside of which an additional parent star either starts or stops contributing .
for example , in the case 5 and 6 collision product , a kink exists in the c@xmath30 and c@xmath29 profiles near @xmath120 ( see fig . [ chem56mmas ] ) . fig . [ chem56mmas ] shows the abundance profiles in the final collision product for both the mmas ( solid curve ) and the sph data of the case 5 ( dotted curve ) and case 6 ( dashed curve ) collision products ; note that our mmas results do not distinguish between cases 5 and 6 , as they differ only in the orientation of the first product s spin . fig . [ compare56 ] shows the fractional contribution of each parent star @xmath121 versus the enclosed mass fraction @xmath91 in the case 5 and 6 collision products , as determined both by mmas ( solid curve ) and by sph ; the dotted curve corresponds to the case 5 sph results , while the dashed curve gives the case 6 sph results , and the same mmas model is valid for both cases . the @xmath49 and 2 parents are @xmath122 , while the @xmath73 parent is @xmath2 . as is evident from fig . [ compare56 ] , fluid inside of the @xmath123 shell originated solely in the @xmath0 parent star , while in the range @xmath124 all three parent stars contribute .
the smoothing that is inherent to the sph scheme makes it difficult to resolve such features with our hydrodynamics code .
it is possible that similarly abrupt changes in abundance could occur in nature within real collision products . by comparing the sph data within fig .
[ chem56mmas ] , as well as within fig .
[ compare56 ] , we also see an example of the trend discussed in [ direction ] .
namely , the direction of the first collision product s rotation axis ( or equivalently the direction of approach of the third parent ) has little effect on the final collision product .
indeed , when using mmas , our approach is to neglect completely the rotation of the first collision product , which is why the same mmas model applies to both cases 5 and 6 .
fig . [ slog1819mmas ] plots the entropic variable @xmath17 versus the enclosed mass fraction for the final collision products of cases 18 , 19 , and 20 , as determined both by sph and mmas . cases 18 and 19 differ only in the direction of approach of the third star , and we again see that this variation has little effect on the sph results .
the kink in all of the profiles slightly inside @xmath125 marks the boundary within which fluid from only the @xmath0 star contributes .
mmas again reproduces this feature quite well .
mmas does underestimate the shock heating to the core and hence the central value of @xmath17 , although some of this discrepancy is due to the spurious heating evident in longer sph simulations @xcite .
nevertheless , it is likely that this difference between the mmas and sph models would last only as a transient during the thermal relaxation in a stellar evolution calculation .
it is also worth noting that , while the sph calculations need to be terminated before all of the bound fluid can settle into equilibrium ( see [ sph ] ) , the mmas @xmath17 profile does steadily increase outward throughout the entire product .
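the requirement that @xmath17 increase outward is just the schwarzschild - type criterion for stability against convection , and is trivial to check on a tabulated model profile ( a minimal sketch with toy values , not part of the mmas package ) :

```python
def stable_outward(A_profile):
    """Schwarzschild-type check: a stratification is stable against
    convection when the entropic variable A (sampled here from the
    centre outward) never decreases outward."""
    return all(a2 >= a1 for a1, a2 in zip(A_profile, A_profile[1:]))

print(stable_outward([1.0, 1.2, 1.5, 2.0]))   # -> True
print(stable_outward([1.0, 1.4, 1.3, 2.0]))   # -> False (inversion)
```

a profile failing this test would overturn on a convective time - scale , which is why a monotonically increasing @xmath17 profile is the natural end state for a settled product .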
unfortunately it is a difficult task to determine the overall size of a collision product , either with sph simulations or with a package like mmas .
whenever there is any mass loss in an sph simulation , there will also be sph particles that are nearly unbound and , in practice , still moving away from the product when the simulation is terminated .
these particles would ultimately form the outermost layers of the collision product , but it would take an utterly unfeasible amount of time to wait for them to come back and settle into equilibrium .
the entropic variable @xmath17 profile produced by mmas seems quite reasonable , both because it increases all the way out to the surface and because the sph results tend to approach its form as more of the fluid settles into equilibrium .
however , there are no simulation data to compare against for the very outermost layers of a product and so the exact form of the profile there is difficult to validate .
not surprisingly , the radius of the collision product is rather sensitive to the @xmath17 profile .
for example , simply by changing the parameter @xmath71 from -1.0 to the still very reasonable value of -1.1 , which tends to distribute slightly more shock heating to the outer layers ( see * ? ? ?
* ) , the radii of our mmas final collision product models for case 1 and for case 2 increase by about a factor of 2 .
despite such uncertainties , it is still interesting to get a crude estimate of the sizes of the collision products immediately from mmas . in making these estimates ,
we do not account for the expansion due to rotation , but instead simply integrate the equation of hydrostatic equilibrium for a non - rotating star with the same @xmath17 profile , using the outer boundary condition that the pressure vanishes .
the radii calculated therefore represent the sizes that the products would have if some mechanism were to brake their rotation without disturbing their @xmath17 profiles .
fig . [ rvr ] plots the radii at various enclosed mass fractions for products generated in single - single star collisions involving 0.4 , 0.6 and @xmath0 parent stars , as determined by mmas .
these radii are plotted against the normalized periastron separation @xmath126 , which we allow to exceed unity slightly to account for bulges in the parent stars .
the general trend is that as the periastron separation increases , the collisions are more long - lived , there is more shock heating , and the radii of the collision products increase . because the fluid in the deep interior of the product is largely shielded from shocks , the @xmath17 profile there , and hence the radius @xmath102 profile , are not too strongly dependent on the periastron separation of the collision . as a result , the radii versus periastron separation curves of fig .
[ rvr ] become closer to horizontal as one looks to smaller enclosed mass fraction . for the cases examined in fig .
[ rvr ] , the full ( 100 per cent enclosed mass ) radius of the collision product is always at least about twice the sum of the radii of the parent stars , and often even much larger than this .
for example , if two @xmath0 stars suffer a grazing ( @xmath127 ) collision , the collision product then has a full radius of about @xmath128 , about 20 times larger than the sum @xmath129 of the parent star radii .
we therefore expect that the collisional cross - section of these first products will be significantly enhanced over that of their thermally relaxed counterparts .
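this enhancement can be made quantitative with the standard gravitational - focusing cross - section ( the stellar parameters and cluster velocity dispersion below are assumed for illustration , not taken from our simulations ) :

```python
import math

G = 6.674e-11          # Newton's constant, SI
M_SUN = 1.989e30       # kg
R_SUN = 6.957e8        # m

def cross_section(m1, m2, r_sum, v_inf):
    """Collisional cross-section including gravitational focusing:
    sigma = pi r_sum^2 (1 + v_esc^2 / v_inf^2),
    where v_esc^2 = 2 G (m1 + m2) / r_sum."""
    v_esc2 = 2.0 * G * (m1 + m2) / r_sum
    return math.pi * r_sum ** 2 * (1.0 + v_esc2 / v_inf ** 2)

# illustrative (assumed) numbers: two 0.6 Msun stars whose summed radii
# are ~1.2 Rsun when thermally relaxed, in a cluster with a ~10 km/s
# velocity dispersion; the collision product is taken to be 20x larger.
m, v = 0.6 * M_SUN, 1.0e4
sigma_relaxed = cross_section(m, m, 1.2 * R_SUN, v)
sigma_product = cross_section(m, m, 20.0 * 1.2 * R_SUN, v)
enhancement = sigma_product / sigma_relaxed
print(enhancement)
```

at globular - cluster velocities the focusing term dominates , so the cross - section scales roughly linearly with the radius : a 20-fold radius increase yields a comparable ( factor of @xmath3620 ) enhancement in the collision rate .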
fig . [ bw ] is similar to fig .
[ rvr ] , but for triple - star collisions .
we use different line types to represent various normalized periastron separations @xmath126 for the first collision , and along the horizontal axis we vary the normalized periastron of the second collision . the curves
give the radii at three different enclosed mass fractions .
for each of the six @xmath43 values in a frame of fig .
[ bw ] , we performed a nested loop over 45 equally spaced values of the normalized periastron separation for the second collision , from 0 to 1.1 . therefore , mmas treated 270 different triple - star collisions ( in a few minutes on a pentium iv workstation ) for each of the four plots .
note that there is a general trend for the radius of the collision product to increase as the first periastron separation increases , as expected ; this effect is mild for the 50 per cent enclosed mass radius , and rather dramatic for the full radius . even more significant is the second periastron separation , with grazing second collisions resulting in products that are substantially larger than those from head - on collisions : the shock heating suffered by the already diffuse outer layers of the first collision product is severe when multiple pericentre passages occur before merger .
once @xmath74 grows large enough for the third star s initial impact to be outside of the first product s core , so that more than one pericentre passage would result before merger , then the shock heating is no longer as sensitive to @xmath74 and the full radius surfaces in fig .
[ bw ] tend to plateau .
how strongly the full radius varies with @xmath74 therefore depends on the mass distribution within the first product . for first products with a more uniform density , such as in the product of two @xmath130 stars ,
the final product size increases more gradually and consistently with @xmath74 . as the mass of any one of the three parent stars is increased , the trend is for the radius of the collision product to increase as well .
for example , fig .
[ bw](a ) shows that for collisions in which two @xmath12 stars collide and then a @xmath0 collides with the first product , the final collision product radius does not exceed a few times @xmath131 . if one of the @xmath12 stars is substituted with a @xmath0 star , then the final radius can be as large as about @xmath132 [ see fig .
[ bw](b ) ] .
this extreme size is due to the phenomenally diffuse outer layers of the product : the average density of such a star is only @xmath133 g @xmath134 .
the noise visible on some of the full radius curves is due to approaching the limiting numerical precision during the structure integration in these diffuse regions . from fig .
[ bw ] we see that the radius that encloses 95 per cent of the total mass , while still large , is often orders of magnitude smaller than the full radius of the final product .
because of the low densities involved , the full radius calculated is rather sensitive to the details of the shock heating during the collision . changing the mmas parameter @xmath71 from -1.0 to -1.1 ,
for example , can increase the full radius by a factor of a few , although the radius enclosing 50 per cent of the total mass does not change by more than a few per cent .
nevertheless , any reasonable form and amount of shock heating yields products that are significantly larger than a thermally relaxed star with the same mass and composition .
colliding the same three parent stars in a different order does not drastically affect the mass of the final product , although it does significantly affect its size .
consider , for example , frames ( c ) and ( d ) of fig . [ bw ] .
if two @xmath130 stars collide and then the resulting product collides with a @xmath0 star , the final product typically has a radius of order @xmath135 , but if the @xmath0 star is switched into the first collision instead , the final radius is usually in the range from @xmath136 .
the primary reason for this difference is that a collision between the 0.4 and @xmath0 stars yields a product with especially diffuse outer layers , and , as a result , is subject to a larger number of passages and hence more shock heating during a second collision .
we have used sph and the software package mmas to study triple - star collisions .
although such collisions span a tremendous amount of parameter space , our modest number of sph calculations do provide some valuable insights .
for the ( parabolic ) encounters that we consider , we find that the order in which stars collide ( see [ order ] ) , the angle of approach of the third star ( [ direction ] ) , and the periastron separation of the collisions ( [ spin ] ) have only a slight effect on the chemical composition distribution within the final collision product .
the order and orbital parameters of the collisions can , however , significantly affect the size and structure of the product .
the results of [ comparison ] help establish that the simple fluid sorting algorithm of mmas reproduces the important features of our sph models , even when one of the parent stars is itself a collision product . the mmas package can therefore be considered an adequate , if not accurate , substitute for a hydrodynamics code in many situations . this realization will help simplify the process of generating collision product models in cluster simulations , because a full hydrodynamics calculation will not necessarily need to be run for each collision . indeed ,
we hope the mmas package will be used to help account for stellar collisions in dynamics simulations of globular clusters . toward this end
, mmas is already being incorporated into two software packages , triptych and tripletych , that respectively treat encounters between two stars and among three stars ( see * ? ? ?
these packages are controlled through a web interface and treat the orbital trajectories , possible merger(s ) , and evolution of the merger product and therefore incorporate three main branches of stellar astrophysics : dynamics , hydrodynamics , and evolution .
the product size estimates of [ sizes ] are admittedly crude .
for example , partial ionization and radiation pressure are neglected .
although the exact size of a collision product is difficult to determine , our calculations indicate that the first and final collision products are always significantly larger than their thermally relaxed counterparts would be .
indeed , according to our mmas calculations in [ sizes ] , the final collision product can have a radius up to @xmath137 , easily exceeding the size of a typical red giant .
furthermore , these calculations have assumed that some mechanism has braked the often rapid rotation of the products , so any rotation that does remain will only further enhance the size of the products .
the extended sizes of the products will increase the multi - star collision rate over that calculated in previous treatments of binary - single and binary - binary encounters .
all of the scenarios we consider with sph in this paper involve one @xmath84 and two @xmath12 stars . without shock heating ,
the low-@xmath17 fluid of the @xmath0 star would sink to the core of the final collision product , while the high-@xmath17 portions of the @xmath0 star would settle in the outer layers .
the intermediate layers of the product would consist of fluid with the same @xmath17 range from all the parents . simply sorting the fluid in this way , without running a hydrodynamics calculation ,
can therefore provide a zeroth - order model of the collision product that captures some of its important qualitative features ( see fig .
[ zeroth_order ] ) .
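this zeroth - order sorting can be sketched in a few lines ( a toy model with invented shell values , deliberately ignoring shock heating ) :

```python
def entropy_sort(parents):
    """Zeroth-order (shock-free) model of a merger product: pool the
    mass shells of all parent stars and sort them by the entropic
    variable A = P / rho**gamma, lowest A sinking to the centre.
    Each parent is a list of (shell_mass, A) pairs from centre to
    surface; returns (enclosed_mass, A, parent_index) tuples."""
    shells = sorted((A, dm, i)
                    for i, star in enumerate(parents)
                    for dm, A in star)
    profile, m_enc = [], 0.0
    for A, dm, i in shells:
        m_enc += dm
        profile.append((m_enc, A, i))
    return profile

# toy parents (invented numbers): parent 0 brackets parent 1 in A, so
# its low-A fluid sinks to the core, its high-A fluid floats to the
# surface, and parent 1's fluid is interleaved in between.
product = entropy_sort([[(0.1, 1.0), (0.1, 5.0)],
                        [(0.2, 2.0), (0.2, 4.0)]])
print(product)
```

in this toy product the centre and surface shells both come from parent 0 , with parent 1 sandwiched in the intermediate layers , mirroring the qualitative structure described above .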
however , non - uniform shock heating during the collisions somewhat alters the relative values of the entropic variable @xmath17 in the fluid , resulting in a slightly different sorting pattern .
because the amount and distribution of shock heating are dependent on the details of a collision , the sorting of the fluid varies with , for example , the order in which the stars collide .
shock heating can have larger consequences on the chemical abundance profiles of elements , such as c@xmath29 , that exist in substantial amounts only in a small shell in the initial parent stars .
however , the chemical abundance profiles of most elements , particularly helium , are always qualitatively the same , regardless of how the three stars are merged . because the abundance and distribution of helium ( and hence hydrogen ) are among the most important factors in determining the collision product s subsequent course of stellar evolution , we believe that the order and geometry of the collisions will not significantly affect the stellar evolution of the product .
indeed , @xcite have recently presented a set of stellar evolution calculations for a collision product for which the starting yrec models were generated from sph calculations of different resolutions .
the variations in the initial helium profiles of their models are roughly comparable to those in our helium profiles resulting from colliding three parent stars in different ways .
although @xcite do find detailed differences in the evolution , especially during the `` pre - main - sequence '' contraction , the evolutionary tracks and time - scales are quite similar .
we therefore feel that , for low - velocity collisions , the hydrodynamical details of how three stars are merged will not significantly affect the stellar evolution of the collision product , the major caveat here being that the geometry of the collisions can of course affect the rotation of the product , which in turn can greatly affect its evolution @xcite .
surface abundances of lithium and beryllium are particularly interesting to monitor , as these elements can be used as observational indicators of mixing and perhaps collisional history . as in the single - single star collisions presented by @xcite , we find that the triple - star mergers presented here yield collision products whose surfaces are severely depleted of lithium and somewhat depleted of beryllium . even in the relatively gentle ( parabolic ) cases that we have considered , the collisions are energetic enough to expel most of the lithium and beryllium from the outer layers of the parents .
there are many scenarios to explore when dealing with collisions in environments as chaotic as dense stellar systems .
different orbital geometries besides the parabolic trajectories treated in this paper still need to be considered in more detail .
large stellar velocities in galactic nuclei lead to hyperbolic collisions . in globular clusters , perturbations to a binary can lead to an elliptical collision , while an encounter with a very hard binary can lead to significantly hyperbolic collisions .
future studies may want to include a more detailed look at the hydrodynamics during grazing encounters , which could be done efficiently with the help of grape ( short for gravity pipe ) special purpose hardware for calculating the self - gravity of the system .
furthermore , encounters involving more than three stars , such as in binary - binary interactions , may also warrant further examination : for example , the final collision product generated in a triple - star merger is typically so extended that it could immediately start suffering roche lobe overflow if left in orbit around a fourth star . collisions among a larger variety of stellar types and masses , reflective of the diverse populations of clusters , will also need to be explored . we have been concentrating on low - mass main - sequence stars , but collisions between high - mass main - sequence stars in young compact star clusters , or giants located in the dense cores of globular clusters , for example , are frequent
. a logical first step would be to examine high - mass main - sequence stars in a runaway merger scenario .
it would therefore be very useful to develop a generalization of the fluid sorting method that includes radiation pressure in the equation of state . due to shock heating during the collision , the product is much larger than a thermally equilibrated main - sequence star of the same mass .
how much of an effect this increased radius has on the effective cross section for merger is subject to many variables , including the structure of the product s outer layers and the velocity of approaching stars . in environments such as active galactic nuclei , where relative velocities tend to be high ,
the low - density outer layers of a newly formed collision product could likely get stripped by passing stars .
however , in globular clusters , where stellar velocities tend to be small , collisions with even low - density envelopes may lead to significantly increased rates of merger .
it would be useful to develop a robust collision module that could quickly predict whether any given collision trajectory will lead to a merger , and , if not , describe how the stars are affected by the interaction .
one simple approximation often implemented in cluster simulations is that a collision product instantaneously achieves its thermally relaxed radius , a good approximation when the time between collisions is much longer than the thermal time - scale . arguing instead that the global thermal time - scale of the first product can be much larger than the time between collisions in interactions involving binaries
, we make a different approximation in this paper , namely that the first product s radius ( and more generally its structure ) does not substantially evolve between collisions .
future scattering experiments could model thermally relaxing stars and study more carefully the timescale between collisions mediated by binaries .
the thermal time - scale in the outer layers of a collision product can be orders of magnitude less than its global thermal time - scale ( see table 1 of * ? ? ?
* ) , so that it may actually be necessary to follow the thermal contraction and stellar orbits simultaneously .
indeed , in the extremely low density layers of a collision product , it is even possible for the thermal time - scale to be comparable to the ( hydro)dynamical time - scale , so that the product could undergo significant thermal contraction even before it reaches hydrodynamical equilibrium .
it would be helpful if future stellar evolution calculations of collision products included a detailed description of the products size and structure throughout the thermal relaxation stage .
how quickly the outer layers of the thermally expanded product change with time will substantially affect its likelihood of subsequent collisions .
initial conditions for such stellar evolution calculations could be provided by the publicly available mmas package .
the primary hurdle for incorporating collisions into realistic stellar dynamics simulations is currently the stellar evolution of the collision products .
such stars are highly non - canonical , typically with very peculiar structural and composition profiles , and present a challenging set of initial conditions for stellar evolution codes . to make matters even more intricate ,
rotation , which is typically rapid after merging , will affect the structural properties and chemical compositions of the stars as they evolve ( e.g. , * ? ? ?
this rapid rotation also has the effect of ejecting mass as the product thermally contracts .
studying this emitted mass will be worthwhile , as it may likely carry away angular momentum and at least partially brake rotating collision products .
we would like to thank fred rasio for helpful comments and the use of his sph code , josh faber for having parallelized this code , randall perrine for assistance in preliminary sph calculations , alison sills for providing yrec models for the parent stars , and the referee marc freitag for valuable comments that helped improve this paper .
we are also grateful to the participants of the first two modest workshops for useful discussions , especially jarrod hurley , piet hut , steve mcmillan , onno pols , simon portegies zwart , and peter teuben .
this work was supported by nsf grants ast-0071165 , mri-0079466 , and ast-0205991 .
this work was also supported by the national computational science alliance under grant ast980014n and utilized the ncsa sgi / cray origin2000 parallel supercomputer .
rasio f. a. , freitag m. , gürkan m. a. , 2003 , to appear in carnegie observatories astrophysics series , vol . 1 : coevolution of black holes and galaxies , ed . l. c. ho , cambridge : cambridge univ . press , astro - ph/0304038

in dense stellar clusters , binary - single and binary - binary encounters can ultimately lead to collisions involving two or more stars during a resonant interaction .
a comprehensive survey of multi - star collisions would need to explore an enormous amount of parameter space , but here we focus on a number of representative cases involving low - mass ( 0.4 , 0.6 , and @xmath0 ) main - sequence stars . using both smoothed particle hydrodynamics ( sph ) calculations and a much faster fluid sorting software package ( mmas ) , we study scenarios in which a newly formed product from an initial collision collides with a third parent star . by varying the order in which the parent stars collide , as well as the orbital parameters of the collision trajectories , we investigate how factors such as shock heating affect the chemical composition and structure profiles of the collision product .
our simulations and models indicate that the distribution of most chemical elements within the final product is not significantly affected by the order in which the stars collide , the direction of approach of the third parent star , or the periastron separations of the collisions .
although the exact surface abundances of beryllium and lithium in the product do depend on the details of the dynamics , these elements are always severely depleted due to mass loss during the collisions .
we find that the sizes of the products , and hence their collisional cross sections for subsequent encounters , can be sensitive to the order and geometry of the collisions .
for the cases that we consider , the radius of the product formed in the first ( single - single star ) collision ranges anywhere from roughly 2 to 30 times the sum of the radii of its parent stars .
the size of the final product formed in our triple - star collisions is more difficult to determine , but it can easily be as large or larger than a typical red giant . although the vast majority of the volume in such a product contains diffuse gas that could be readily stripped in subsequent interactions , we nevertheless expect the collisional cross section of a newly formed product to be greatly enhanced over that of a thermally relaxed star of the same mass
. our results also help establish that the algorithms of mmas can quickly reproduce the important features of our sph models for these collisions , even when one of the parent stars is itself a former product .
keywords : stars : chemically peculiar ; globular clusters : general ; galaxies : star clusters ; hydrodynamics ; blue stragglers ; stars : interiors
the species of cryptococcus genus have been identified from different environmental sources such as air , water , soil , wood , and pigeon excreta . cr .
this kind of yeast was reported from soil of tabriz in iran as well . cr . friedmannii and some other yeast species were isolated from the atacama desert , with high daily temperature variations .
a 57-year - old man was admitted to our department in october 2015 ( at day 0 ) with distal subungual hyperkeratosis clinical type of onychomycosis on the first right toenail ( fig .
1 ) . the patient did not have diseases such as diabetes , psoriasis , immunodeficiencies , or any other chronic disease .
however , four months earlier ( at day 4 months ) , the patient had suffered ungual trauma , and a small traumatic lesion had emerged on the toenail .
three direct microscopic examinations ( at day 0 ) of nail scrapings , with 20% potassium hydroxide revealed single or budding yeast cells .
three nail specimens were cultured on sabouraud dextrose agar plates with chloramphenicol ( at day 0 ) and incubated at 20 - 25 c , which produced creamy , smooth colonies after a few days ( fig . 2 ) .
microscopic examination ( at day 7 ) of the colonies with chinese ink showed single or budding cells with a thin capsule ( fig . 3 ) .
the yeast was cultured on yeast extract peptone dextrose agar and incubated at 20 - 25 c .

fig . 3 : budding cells with a thin capsule , chinese ink wet mount ( 400 ) . mount slide of colonies ( 400 ) . fig . 4 : distal subungual thickening and hyperkeratosis of the patient .
genomic dna was extracted using qiagen tissue kit ( germany ) ( at day 11 ) .
the its1 - 5.8s - its2 region was amplified with its1 ( tcc gta ggt gaa cct gcg g ) and its4 ( tcc tcc gct tat tga tat gc ) universal primers by the following profile : 98 c ( 5 min ) , 40 cycles of 98 c ( 30 s ) , annealing temperature 56 c ( 30 s ) , and 72 c ( 30 s ) , followed by a final extension of 72 c ( 5 min ) .
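the cycling profile above can be written out as data ( a sketch only ; the dictionary layout is our own , and ramp times between temperatures are ignored ) :

```python
# Thermocycler program exactly as stated in the text: initial
# denaturation, 40 three-step cycles, final extension.
# Times in seconds, temperatures in degrees Celsius.
PCR_PROGRAM = {
    "initial_denaturation": (98, 300),
    "cycles": 40,
    "per_cycle": [("denature", 98, 30),
                  ("anneal", 56, 30),
                  ("extend", 72, 30)],
    "final_extension": (72, 300),
}

def total_runtime_minutes(program):
    """Lower bound on block time, ignoring ramp rates."""
    cycle = sum(t for _, _, t in program["per_cycle"])
    total = (program["initial_denaturation"][1]
             + program["cycles"] * cycle
             + program["final_extension"][1])
    return total / 60.0

print(total_runtime_minutes(PCR_PROGRAM))   # -> 70.0
```

writing the program as data makes it easy to audit that the stated annealing temperature ( 56 c ) and cycle count ( 40 ) match the amplicon reported below .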
amplification of the isolate with its1 and its4 primers yielded 500 bp fragment ( at day 11 ) .
the sequence was compared with the reference sequences of the genbank database using blast ( basic local alignment search tool ) ( http://www.ncbi.nlm.nih.gov/blast ) . the isolate was most closely related to cr . friedmannii ( naganishia friedmannii ) with the accession number km243311.1 ( at day 23 ) .
the sequence of the its region was submitted to genbank as the accession number kx268322 ( fig . 5 ) .
the antifungal susceptibilities were conducted according to the clinical and laboratory standard institute method ( document m27-s3 ) .
the mic of cr . friedmannii to fluconazole , itraconazole and amphotericin b was determined at 72 h. the mic results revealed that this isolate was susceptible to these drugs , with mic values of 0.25 , 0.125 and 0.25 µg / ml for fluconazole , itraconazole and amphotericin b , respectively ( at day 30 ) . the patient was treated with oral itraconazole at a dosage of 200 mg daily .
at follow - up , direct microscopic examination of nail scrapings with 20% potassium hydroxide revealed no single or budding yeast cells , and culture of the nail sample was negative .
the yeast cr . friedmannii was first reported as a new species of basidiomycetous yeast of antarctic in 1985 .
this increase may reflect the growing number of immunocompromised patients with impaired cell - mediated immunity , organ transplantation , diabetes , azole prophylaxis , etc .
humicola were reported as opportunistic pathogens over the last few years . a case of cr .
also , prototheca spp . was reported as a causative agent of onychomycosis for the first time in brazil , which was the 3rd case in the world .
furthermore , onychomycosis is related to host factors such as age , occupation , chronic diseases , nail care and lifestyle .
yeasts are common etiologic agents of onychomycosis . this study reported a case of onychomycosis due to cryptococcus friedmannii ( naganishia friedmannii ) .
this yeast was isolated from the right great toenail of a 57-year - old man .
microscopic examination of nail scrapings showed budding cells with thin capsule .
sequence analysis of the internal transcribed spacer regions showed it to be closely related to cryptococcus friedmannii .
the results of susceptibility testing showed the cryptococcus friedmannii isolate to be sensitive to fluconazole , itraconazole and amphotericin b.
the study of @xmath7 meson decays plays an important role in determining the cp - violating parameters in the standard model ( sm ) and discovering new physics in the flavor - changing processes .
in particular , @xmath7 non - leptonic two - body decays provide an abundant sources of information about the ckm matrix .
for example , the most promising measurement of @xmath8 ( @xmath9 , @xmath10 and @xmath11 are the three angles of the unitarity triangle ) comes from the measurement of the time - dependent cp asymmetry in @xmath12 .
@xmath0 , @xmath13 and @xmath14 are also very important for determining @xmath15 .
however , the theoretical calculation of these hadronic decays suffers from the complicated strong interactions which compromise the precision of the determination of the ckm matrix elements from the experimental data .
thus higher order calculations of such decays are essential for a better understanding of the cp violation . besides the non - leptonic decays , the new measurements of @xmath16 mixing and @xmath5 have a great impact on constraining the unitarity triangle . both of these decay modes are also sensitive to new physics .
the @xmath7 decays also offer us a good place to study the strong interaction dynamics in heavy flavor systems . among the abundant decay products of @xmath7 mesons , experimentalists have observed many new hadron resonances at babar and belle .
they are mainly excited ( or exotic ) charmed mesons and charmonium states .
the excited @xmath7-mesons and @xmath6-baryons can only be studied at hadron colliders .
we will briefly review the spectrum of the excited @xmath7 mesons and @xmath6-flavored baryons from the measurements at tevatron .
@xmath17 and @xmath1 are the most widely studied two - body decay modes in @xmath7 physics in addition to @xmath18 .
the big experimental achievement is that the direct cp asymmetries in these decays have been observed .
the latest world averages are @xcite @xmath19 , both of which are @xmath20 away from zero .
@xmath0 are dominated by tree amplitudes . due to isospin symmetry , the decay amplitudes of the tree - dominated @xmath21 modes can be parameterized graphically as @xmath22 where @xmath23 , @xmath24 and @xmath25 stand for the color - allowed tree amplitude , the color - suppressed tree amplitude and the penguin amplitude , respectively . according to naive factorization , the ratios @xmath26 and @xmath27 are expected to be small . this leads to the expectation that @xmath28 is almost half of @xmath29 , and that @xmath30 is very small .
however , this expectation is strongly against the experimental data@xcite @xmath31 which requires @xmath32 .
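the role of the ratio @xmath26 can be illustrated with a toy numerical sketch ( python ) : the decomposition below uses generic color - allowed tree t , color - suppressed tree c and penguin p amplitudes with hypothetical magnitudes , phases and sign conventions , not the actual fitted values , and simply shows that enlarging c / t inflates the relative color - suppressed rate , which is the essence of the puzzle .

```python
import cmath

# Toy version of the graphical parameterization: the three amplitudes
# built from a color-allowed tree T, a color-suppressed tree C and a
# penguin P.  Signs and normalizations are illustrative conventions only.
def amplitudes(T, C, P, gamma=1.0):
    """Return (A(pi+pi-), sqrt2*A(pi0pi0), sqrt2*A(pi+pi0)) in a toy convention."""
    phase = cmath.exp(-1j * gamma)   # weak phase e^{-i gamma} on the tree terms
    A_pm = phase * T + P             # tree + penguin
    A_00 = phase * C - P             # color-suppressed tree minus penguin
    A_p0 = phase * (T + C)           # pure tree combination
    return A_pm, A_00, A_p0

T = 1.0
P = 0.3 * cmath.exp(0.4j)            # hypothetical penguin with a strong phase

# naive-factorization-like C/T versus the large ratio favored by the data
for ratio in (0.2, 0.8):
    A_pm, A_00, _ = amplitudes(T, ratio * T, P)
    print(f"C/T = {ratio}: |A00|^2 / |A+-|^2 = {abs(A_00)**2 / abs(A_pm)**2:.3f}")
```

the sketch only demonstrates the scaling ; extracting the real ratio requires the full amplitude fit discussed in the text .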
this means that color suppression is no longer valid . for @xmath33 decays ,
a similar graphical parameterization can be written as @xmath34+e^{-i\gamma}[t^\prime+c^\prime],\nonumber \\ a(\pi^+k^{-}) &=& p^\prime+e^{-i\gamma}t^\prime , \\
-\sqrt{2}a(\pi^0\bar k^{0}) &=& [p^\prime - p^{ew}]-e^{-i\gamma}c^\prime , \nonumber\end{aligned}\]] in which the penguin amplitude @xmath35 dominates over the color - allowed tree amplitude @xmath36 , the color - suppressed tree amplitude @xmath37 and the electroweak penguin @xmath38 .
@xmath39 is expected if @xmath40 and @xmath41 are small .
however , the recent experimental data show @xmath42 ,
which requires an enhancement in either @xmath38 or @xmath37 .
the large @xmath38 scenario @xcite seems to be disfavored by the new experimental measurement of @xmath43 ;
@xmath44 is needed to meet the data .
so , similarly to the situation in @xmath0 , color suppression is not valid for @xmath45 either . in qcd
factorization , these color - suppressed amplitudes are related to the qcd coefficient @xmath46 . for illustration @xcite ,
@xmath47_{v_2} \\ &+& \left\{\begin{array}{lc} [0.18]_{\rm lohsi} & \qquad \mbox{(default)} \\ {[0.46]_{\rm lohsi}} & \qquad \mbox{(s4)} \end{array}\right. \nonumber\end{aligned}\]]
an accidental cancellation between the leading order ( lo ) and next - to - leading order ( nlo ) vertex corrections ( @xmath48 ) makes the hard spectator interaction ( hsi ) very important .
the nlo corrections to the hsi were recently studied by beneke , jager and yang @xcite : in their papers , the corrections from the two scale regions are encoded into a jet function and a hard coefficient , respectively , both of which effectively enhance the color - suppressed amplitude . in table
[ table1 ] , the predictions of qcdf with nlo hsi for a particular parameter setting ( @xmath49 ) are shown .
the agreement between the predictions and the experimental data is very good , except for the direct cp asymmetries @xmath50 ,
which means that the strong phases still need further study .
recently , efforts toward the nlo corrections to the imaginary part of the amplitude have started : in @xcite , the next - to - next - to - leading order
vertex corrections have been considered .
table [ table1 ] : predictions in qcdf with nlo hsi vs. experimental data .
almost 30 years after the discovery of the @xmath6 quark , and after 7 years of @xmath7-factory running , @xmath7 physics has entered a precision test era . higher order theoretical calculations are essential to explain the increasingly accurate experimental data , especially the data for non - leptonic decays .
this requires not only straightforward but demanding computations , but also the development of new theoretical concepts in heavy flavor physics . for the theoretically clean decays , the new experimental measurements shed light on the precision test of the sm and open the door towards new physics .
@xmath7 factories are also good places to find charmed mesons and charmonium states ,
and the properties of the recently observed new mesons still need further theoretical study .
the lhc will start running next year , and @xmath7 physics will enter a new era :
we will be able to fully explore all the @xmath6-flavored hadrons and their decays ,
and theorists will find many interesting subjects there .
this work is partly supported by the national natural science foundation of china under grant numbers 10375073 and 90403024 .
m. beneke and s. jager , nucl . phys . b * 751 * , 160 ( 2006 ) [ arxiv : hep - ph/0512351 ] .
m. beneke and s. jager , nucl . phys . b * 768 * , 51 ( 2007 ) [ arxiv : hep - ph/0610322 ] .
m. beneke and d. yang , nucl . phys . b * 736 * , 34 ( 2006 ) [ arxiv : hep - ph/0508250 ] .
g. bell , arxiv:0705.3127 [ hep - ph ] .
b. aubert _ et al . _ , phys . rev . lett . * 91 * , 171802 ( 2003 ) ; phys . rev . lett . * 93 * , 231804 ( 2004 ) [ hep - ex/0408093 ] ; k. f. chen _ et al . _ , phys . rev . lett . * 91 * , 201801 ( 2003 ) ; phys . rev . lett . * 94 * , 221804 ( 2005 ) ; j. zhang _ et al . _ , [ hep - ex/0505039 ] .
h. y. cheng and k. c. yang , phys . lett . b * 511 * , 40 ( 2001 ) [ arxiv : hep - ph/0104090 ] ; x. q. li , g. r. lu and y. d. yang , phys . rev . d * 68 * , 114015 ( 2003 ) [ erratum - ibid . d * 71 * , 019902 ( 2005 ) ] [ arxiv : hep - ph/0309136 ] ; a. l. kagan , phys . lett . b * 601 * , 151 ( 2004 ) [ arxiv : hep - ph/0405134 ] ; p. colangelo , f. de fazio and t. n. pham , phys . lett . b * 597 * , 291 ( 2004 ) [ arxiv : hep - ph/0406162 ] ; w. s. hou and m. nagashima , arxiv : hep - ph/0408007 ; h. n. li and s. mishima , phys . rev . d * 71 * , 054025 ( 2005 ) [ arxiv : hep - ph/0411146 ] ; y. d. yang , r. m. wang and g. r. lu , phys . rev . d * 72 * , 015009 ( 2005 ) [ arxiv : hep - ph/0411211 ] ; p. k. das and k. c. yang , phys . rev . d * 71 * , 094002 ( 2005 ) [ arxiv : hep - ph/0412313 ] ; h. n. li , phys . lett . b * 622 * , 63 ( 2005 ) [ arxiv : hep - ph/0411305 ] ; c. s. kim and y. d. yang , arxiv : hep - ph/0412364 ; w. j. zou and z. j. xiao , phys . rev . d * 72 * , 094026 ( 2005 ) [ arxiv : hep - ph/0507122 ] ; c. s. huang , p. ko , x. h. wu and y. d. yang , phys . rev . d * 73 * , 034026 ( 2006 ) [ arxiv : hep - ph/0511129 ] ; q. chang , x. q. li and y. d. yang , jhep * 0706 * , 038 ( 2007 ) [ arxiv : hep - ph/0610280 ] .
h. w. huang , c. d. lu , t. morii , y. l. shen , g. song and jin - zhu , phys . rev . d * 73 * , 014011 ( 2006 ) [ arxiv : hep - ph/0508080 ] ; s. baek , a. datta , p. hamel , o. f. hernandez and d. london , phys . rev . d * 72 * , 094008 ( 2005 ) [ arxiv : hep - ph/0508149 ] .
m. beneke , j. rohrer and d. yang , nucl . phys . b * 774 * , 64 ( 2007 ) [ arxiv : hep - ph/0612290 ] .
m. beneke , j. rohrer and d. yang , phys . rev . lett . * 96 * , 141801 ( 2006 ) [ arxiv : hep - ph/0512258 ] .
a. abulencia _ et al . _ [ cdf - run ii collaboration ] , phys . rev . lett . * 97 * , 062003 ( 2006 ) [ aip conf . proc . * 870 * , 116 ( 2006 ) ] [ arxiv : hep - ex/0606027 ] ; v. m. abazov _ et al . _ [ d0 collaboration ] , phys . rev . lett . * 97 * , 021802 ( 2006 ) [ arxiv : hep - ex/0603029 ] .
k. ikado _ et al . _ , phys . rev . lett . * 97 * , 251802 ( 2006 ) [ arxiv : hep - ex/0604018 ] .
g. nardo _ et al . _ [ babar collaboration ] , arxiv:0708.2260 [ hep - ex ] .
ckmfitter group .

we first address the recent efforts on calculating the next - to - leading order corrections to the color - suppressed tree amplitude in the qcd factorization method , which may be essential to solve the puzzles in @xmath0 and @xmath1 decays .
then we discuss the polarization puzzles in @xmath2 and @xmath3 .
the impacts of the newly measured @xmath4 mixing and @xmath5 on the ckm unitarity triangle global fit are mentioned .
we also briefly review the recent measurements of the new resonances at babar and belle .
finally , some new results from hadron colliders , especially the @xmath6-flavored hadron spectra , are discussed .
masked acyl cyanide ( mac ) reagents are shown to be effective umpolung synthons for enantioselective michael addition to substituted enones . the reactions are catalyzed by chiral squaramides and afford adducts in high yields ( 90-99% ) and with excellent enantioselectivities ( 85-98% ) . the addition products are unmasked to produce dicyanohydrins that , upon treatment with a variety of nucleophiles , provide -keto acids , esters , and amides . the use of this umpolung synthon has enabled , in enantiomerically enriched form , the first total synthesis of the prenylated phenol ( + )-fornicin c .
The Runaways
March 17, 2010
An all-girl rock band is named and trained by a rock manager of dubious sexuality, goes on the road, hits the charts, has a lesbian member and another who becomes a sex symbol, but crashes from drugs. This is the plot of a 1970 film named "Beyond the Valley of the Dolls," which inadvertently anticipated the saga of the Runaways five years later. Life follows art.
"The Runaways" tells the story of a hard-rock girl band that was created more or less out of thin air by a manager named Kim Fowley. His luck is that he started more or less accidentally with performers who were actually talented. Guitarists Joan Jett and Lita Ford are popular to this day, long after the expiration of their sell-by dates as jailbait. The lead singer, Cherie Currie, co-starred in the very good "Foxes" (1980) with Jodie Foster, had drug problems, rehabbed, and "today is a chain-saw artist living in the San Fernando Valley." The ideal art form for any retired hard rocker.
The movie centers on the characters of Jett (Kristen Stewart), Currie (Dakota Fanning) and the manager Fowley (Michael Shannon). Jett was the original driving force, a Bowie fan who dreamed of forming her own band. Fowley, known in the music clubs of Sunset Strip as a manager on the prowl for young, cheap talent, told her to give it a shot, and paired her with Currie, whose essential quality is apparently that she was 15. That fit Fowley's concept of a jailbait band who would appeal because they seemed so young and so tough. He rehearses them in a derelict trailer in the Valley, writing their early hit "Cherry Bomb" on the spot.
Shannon is an actor of uncanny power. Oscar nominated for a role as an odd dinner guest in "Revolutionary Road" (2008), he was searing as he turned paranoid in William Friedkin's "Bug" (2006). Here he's an evil Svengali, who teaches rock 'n' roll as an assault on the audience; the girls must batter their fans into submission or admit they're losers. He's like a Marine drill sergeant: "Give me the girl. I'll give you back the man." He converts Cherie, who begins by singing passively, into a snarling tigress.
The performance abilities of the Runaways won respect. The rest was promotion and publicity. The film covers the process with visuals over a great deal of music, which helps cover an underwritten script and many questions about the characters. We learn next to nothing about anyone's home life, except for Currie, who is provided with a runaway mother (Tatum O'Neal), a loyal but resentful sister (Riley Keough) and a dying, alcoholic father (Brett Cullen). Although this man's health is important in the plot, I don't recall us ever seeing him standing up or getting a clear look at his face.
So this isn't an in-depth biopic, even though it's based on Currie's 1989 autobiography. It's more of a quick overview of the creation, rise and fall of the Runaways, with slim character development, no extended dialogue scenes, and a whole lot of rock 'n' roll. Its interest comes from Shannon's fierce and sadistic training scenes as Kim Fowley, and from the intrinsic qualities of the performances by Stewart and Fanning, who bring more to their characters than the script provides.
Another new movie this week, "The Girl With the Dragon Tattoo" from Sweden, has a role for a young, hostile computer hacker. Stewart has been mentioned for the inevitable Hollywood remake. Reviewing that movie, I doubted she could handle such a tough-as-nails character. Having seen her as Joan Jett, I think she possibly could.
Note: Many years ago, while I was standing at a luggage carousel at Heathrow Airport, I was approached by a friendly young woman. "I'm Joan Jett," she told me. "I liked 'Beyond the Valley of the Dolls.'"
Just sayin'.

The strength and beauty of "The Runaways" are that it tells the truth. It doesn't always tell the literal truth about the pioneering all-girl rock band, the Runaways, though it gets the basic facts and most of the details right. More crucially, it conveys precisely what it was like to be young in the mid-1970s, a peculiar juncture in American social history. Back then, there was an almost post-apocalyptic feeling in the air, that all norms had been tossed aside, that nothing mattered, that the whole country and the world had spun out of control.
Other films have attempted to convey this. Ang Lee's "The Ice Storm" got a piece of this feeling, but it couldn't get all of it. Its failure was that it was, in a sense, too good a movie, too artful. "The Runaways," by contrast, is precisely the kind of gritty, seamy and occasionally awkward picture that the 1970s deserve. And in getting that one thing right - in capturing that strange combination of despair and frustrated energy - it gets everything right. It explains why kids needed rock 'n' roll, and why the Runaways still mean so much to those who remember them.
Based on "Neon Angel," the memoir of lead singer Cherie Currie, "The Runaways" tells the story of the creation of the band, focusing mainly on Joan Jett, who became the group's rhythm guitarist and principal songwriter, and Currie, who was discovered by Jett and producer Kim Fowley at Los Angeles nightclub when she was 15. They liked her look ("a little Bowie, a little Bardot") and had no idea whether she could sing. She could.
To be a teenager can feel like being stuck in mud. The world is alive with promise and excitement, but you can't get to it. You have no power. But music gives the feeling of power, the illusion of it, and sometimes that's enough to keep you sane. Currie (Dakota Fanning) and Jett (Kristen Stewart) start off as rock-obsessed high school misfits, Currie with a falling-apart family and an obsession with David Bowie, and Jett with her leather gear and a dream of becoming a female rocker - of a variety that did not yet exist. "The Runaways" shows how rock 'n' roll can save your life and almost wreck it.
Jett done perfectly
The soundtrack includes artists that influenced the Runaways (such as Suzi Quatro), original Runaways recordings and live re-creations of Runaways songs, with Fanning singing lead vocals. It all sounds terrific, though it must be said that Fanning isn't half the singer Currie was. Where Fanning excels is in suggesting the misery and confusion under the assumed air of teenage cool, and the gradual loss of herself to all the pressures and the drugs. She becomes the prime whipping girl for Fowley, ably played by Michael Shannon as an almost demonic presence, part sadistic idiot, part rock 'n' roll seer.
Stewart, known mainly for mumbling and stumbling through the "Twilight" movies, is the revelation here. She has made a meticulous study of Jett - of her posture, her manner, her expressions, even in the way thoughts cross her eyes. And she has Jett's stage manner down, the way this seemingly shy person assumes total authority when she gets up to play. The visuals help - the costuming and art direction are spot-on.
Unlikely pair
At the heart of "The Runaways" is Fanning and Stewart and their portrait of an unlikely friendship between two very different teenage girls - a friendship that, for a time, becomes very close indeed. It's also a showbiz story, of one girl who just didn't want success bad enough, and another who recognized her chance and clung to it like a lifeline.
Some will complain, understandably, that "The Runaways" ultimately tells a downbeat story that drifts and fades into a diminuendo. It feels ungainly, as though something else - something big - should be happening. But no, the filmmaker knew exactly what she was doing: It just wouldn't be the '70s if it didn't leave audiences with a cocaine hangover.
This article appeared on page E-1 of the San Francisco Chronicle.

Late in The Runaways, Michael Shannon’s cold-blooded Svengali Kim Fowley dismisses the seminal ’70s all-girl punk band of the title as nothing more than a failed conceptual project. Those are the bitter words of a star-maker cavalierly tossed aside by his own creation, but there’s an element of truth to them as well. Like the Sex Pistols, The Runaways combined raw punk anarchy and cynical commercial calculation. They were prefabricated yet authentic, the product of estrogen-fueled rage and a sleazy music-industry lifer intent on exploiting ripe teenage sexuality. There is a fascinating film to be made about Fowley’s slick commoditization of adolescent rebellion, but in her numbingly familiar feature-length debut, writer-director Floria Sigismondi apparently isn’t interested in Fowley so much as she is in giving rock ’n’ roll movie conventions a distaff spin.
A disturbingly precocious, scantily clad Dakota Fanning stars as Cherie Currie, a spooky David Bowie super-fan who more or less stumbles into a gig as the Runaways’ lead singer. The dead-eyed talent vacuum that is Kristen Stewart co-stars as Joan Jett, a snarling badass whose tomboy attitude and songwriting perfectly complemented Currie’s purring sex kitten onstage, on record, and in bed.
Yes, The Runaways is as filled with softcore underage lesbian sex as it is with rock-movie clichés, from the montage of rapid-fire ecstatic magazine and newspaper covers that take the group from obscurity to superstardom (in Japan, at least, where the locals have a weakness for young girls in tight pants) to the use of blurry, distorted visuals to convey Currie’s ever-deteriorating mental state during the proverbial nightmare descent into booze and pills. Shannon plays Fowley as the P.T. Barnum of the Sunset Strip, a prankish provocateur whose tough love for his protégés looks an awful lot like emotional and verbal abuse. Shannon gives the film an unpredictable, live-wire energy, but as it staggers into its third act, Shannon more or less disappears from the proceedings, and the film focuses intently on Currie (whose memoir inspired the film) and Jett (who executive-produced). The Runaways were the first major all-girl punk band. In honor of this distinction, they’re now the first major all-girl punk band to inspire a bleary, excessive, and altogether mediocre big-screen biography.

"Come on, you filthy pussies, let's rock and roll."
That trash talk is aimed at Kristen Stewart, 19, and Dakota Fanning, 16, stars of Twilight: New Moon, by Michael Shannon, in fierce, flamboyant form as evil-genius manager Kim Fowley. Kim is cursing the girls as members of the Runaways, a pioneering band of five jailbait rockers from broken homes that he wants to turn into the female Beatles.
Stewart gives as good as she gets. She's playing Joan Jett, 15, the shag-haired guitarist, singer and songwriter who co-founded the Runaways in 1975 and went on – after the L.A. band dissolved in 1979 – to achieve star status as a solo act. Fanning has it tougher as Cherie Currie, 15, a blond Valley girl molded by Kim into the band's lead singer and jerk-off fantasy. Cherie is so naive she almost breaks down. In a killer scene early in the film, written and directed by music-video whiz Floria Sigismondi, Kim preps the girls for life in a man's game. Rehearsing in a crummy trailer, the girls are hit by bottles, cans, dirt and dog shit tossed by Kim and his toadies. Cherie is told to sell the sexual heat in a song Kim and Joan create for her: "Hello, Daddy, hello, Mom, I'm your ch-ch-ch-ch-ch-cherry bomb."
The "Cherry Bomb" scene is a raunchy blast of rock history. And Fanning and Stewart, who do their own singing, seize the moment. As Kim tells Cherie the dirty secrets of rock, "Fuck you, fuck authority, I want an orgasm!" she shows him what a wild child can be.
Fanning scores a knockout. And Shannon, as the "Frankenstein motherfucker," is a fireball of potent perversity. Sadly, The Runaways fades into dull predictability. Joan must wait for Cherie to screw up on drugs and sex (the make-out session between Stewart and Fanning is delicate to a fault) so she can step in and front the band. Stewart is just getting rolling when the movie ends. But face it, The Runaways is based on Neon Angel, Currie's 1989 memoir. She's the only one who gets a backstory.
The result is a walk on the wimp side. Guitarist Lita Ford (Scout Taylor-Compton) and drummer Sandy West (Stella Maeve) barely register in their own band. And Alia Shawkat shows up as an amalgam of Runaways bassists. Jett served as a producer, but the script never shows what drives her. What's left are colorful scenes of life on the road, especially in Japan, where the girls hit it big with a live album. But there's no sense of rock anarchy. Say what you will about the Runaways – they never played it safe. The movie does.

– Critics praise the performances in The Runaways, about Joan Jett's seminal '70s band, but whether the flick succeeds is up for debate. Some takes: "Say what you will about the Runaways," Peter Travers writes in Rolling Stone, but "they never played it safe. The movie does." He wants more out of the verboten relationship between Dakota Fanning's Cherie Currie and Kristen Stewart's Jett. Still, "Fanning scores a knockout." The movie "gets everything right" about the decade's "strange combination of despair and desperate energy," writes Mick LaSalle in the San Francisco Chronicle. And Stewart "is the revelation here," not Fanning. The highlight is Michael Shannon's "fierce and sadistic training scenes as Kim Fowley," the group's "evil Svengali" manager, Roger Ebert writes in the Chicago Sun-Times. Granted, Stewart and Fanning "bring more to their characters than the script provides." And there's "a whole lot of rock 'n' roll." Beyond the "softcore underage lesbian sex," there's not much to recommend, writes Nathan Rabin for the Onion AV Club. "The Runaways were the first major all-girl punk band. In honor of this distinction, they're now the first major all-girl punk band to inspire a bleary, excessive, and altogether mediocre big-screen biography." Fanning is "disturbingly precocious" and "scantily clad," while Stewart remains a "dead-eyed talent vacuum."
as reported in monteiro _ et al . _ ( 2006 , and references therein ) , the stellar models provided by the codes cesam2k ( morel , 1997 ) and cles ( scuflaire _ et al . _ , 2007a ) for a given set of standard input physics differ by less than 0.5% . at variance with previous comparisons , in this new esta - task3 we deal with stellar models that include microscopic diffusion .
the treatment of the microscopic diffusion process in the evolution codes we test here is not exactly the same .
the cles code computes the diffusion coefficients by solving the burgers equations ( burgers , 1969 ) with the formalism developed in thoul _ et al . _ ( 1994 , hereafter tbl94 ) .
cesam2k provides two approaches to compute diffusion velocities : one ( which we will call cesam2k mp ) is based on the michaud & proffitt ( 1993 ) approximation , the other ( hereafter cesam2k b ) is based on the burgers formalism , with collision integrals derived from paquette _ et al . _ ( 1986 ) .
we compare three sets of models , task3.1 ( 1.0 ) , task3.2 ( 1.2 ) and task3.3 ( 1.3 ) , whose input parameters and physics specifications are described in lebreton ( 2007 ) . in the next sections we present the results of comparing the stellar models calculated by cles , cesam2k mp and cesam2k b for the three sets , and we try to find out the reason for the differences we get .
[ figure caption fragment : = 0.35 and 0.01 , and with a he - core mass @xmath0 . ]
for each task3 we select three evolutionary stages : a , a main sequence stage with a central hydrogen content @xmath1 ; b , a stage close to core hydrogen exhaustion , @xmath2 ; and c , a post - main sequence stage in which the mass of the helium core ( defined as the central region where the hydrogen mass fraction is @xmath3 ) is @xmath4 .
cesam stellar models have between 2700 and 3100 mesh points , depending on the evolutionary stage , while cles models have about 2400 mesh points .
moreover , for all the models considered in these comparisons , the stellar structure ends at @xmath5 . concerning the time step , both codes take from 1000 to 1500 time steps ( depending on the stellar mass ) to reach stage c , and the specifications for stages a , b and c are achieved with a precision better than @xmath6 .
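as a sketch of how a stage specification such as a target central hydrogen content can be hit precisely , the snippet below ( python ; the depletion track is a synthetic toy , not a cesam or cles output ) locates , by inverse interpolation , the age at which a monotonically decreasing central - hydrogen track crosses the target value .

```python
import numpy as np

# Pinning an evolutionary stage: given a track of central hydrogen Xc
# versus age, find the age at which Xc reaches a target value by
# inverse interpolation of the (monotonically decreasing) track.
def age_at_xc(age, xc, target):
    """Interpolated age at which the decreasing Xc track hits `target`."""
    # np.interp needs increasing abscissae, so flip the decreasing track
    return np.interp(target, xc[::-1], age[::-1])

age = np.linspace(0.0, 9.0, 300)   # synthetic ages (e.g. Gyr)
xc = 0.70 * (1.0 - age / 10.0)     # toy linear depletion of central hydrogen
t_stage = age_at_xc(age, xc, 0.35) # age of the stage with Xc = 0.35
```

in a real code the bracketing is done on the time - step level rather than a stored track , but the interpolation idea is the same .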
fig . [ fighr ] displays , for each microscopic diffusion implementation , the evolutionary tracks for task3.1 , 3.2 and 3.3 , and the hr diagram location of the target models a , b and c. for each stellar mass , the main sequence computed with cesam2k b is slightly hotter ( from @xmath7% for task3.1 to 0.3% for task3.2 and 3.3 ) than those calculated by cesam2k mp and cles .
furthermore , cles and cesam2k mp models are quite close ( @xmath8% and @xmath9% ) , with the exception of models in the second overall contraction phase , for which the differences can reach 1% in stellar radius and 0.5% in luminosity .
the fact that cesam2k b models are hotter than cesam2k mp and cles ones could suggest that the outer layer opacity of the former is lower than that of the latter , because of a different hydrogen content in their convective envelopes . the evolution of the helium abundance in the stellar convective envelope ( @xmath10 ) is an eloquent indicator of the microscopic diffusion effects .
fig . [ figys ] shows , for each considered stellar mass and diffusion treatment , the variation of @xmath10 as the central hydrogen content @xmath11 decreases , and reveals that the diffusion efficiency in cles is always larger than in cesam : about 8 , 10 and 20% larger than in cesam2k mp for 1.0 , 1.2 and 1.3 respectively , and 40% larger than in cesam2k b for all stellar masses under consideration .
the irregular behaviour of the @xmath10 _ vs. _ @xmath11 curves for task3.2 and 3.3 is a consequence of a semiconvection phenomenon that appears below the convective envelope , and the longer main sequence of cesam2k mp models is probably due to semiconvection at the border of the convective core ( see next section ) . [ figure ( figys ) caption fragment : @xmath10 _ vs. _ @xmath11 for 1.0 ( left panel ) , 1.2 ( central panel ) and 1.3 ( right panel ) . ]
the internal structure at the given stages a , b and c can be studied by means of the variations of the sound speed , @xmath12 , and of the adiabatic exponent , @xmath13 .
the lagrangian differences , @xmath14 and @xmath15 , between cesam2k ( both b and mp ) and cles models ( calculated at the same mass by using the adipls package tools ) are plotted in fig . [ figdif ] as a function of the normalised radius . note that the vertical scales in the @xmath14 and @xmath16 plots are respectively five and three times smaller for 1.0 than for 1.2 and 1.3 .
the @xmath14 values reflect : i ) the differences in stellar radius ( note that the largest values are reached in the task3.2 b cles - cesam2k mp comparison , for which d ln r is of the order of 0.01 ) ; ii ) the different chemical composition gradients below the convective envelope ( features between @xmath17 and 0.8 ) , as well as differences in the location of the convection region boundaries ( at @xmath18 for the convective core in task3.2 and 3.3 ) .
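a minimal sketch of how such lagrangian differences can be formed is shown below ( python ; the sound - speed profiles are synthetic stand - ins , not the task3 models ) : model b is resampled onto model a's mass grid and the logarithmic difference is taken point by point .

```python
import numpy as np

# "Lagrangian difference" of a profile: two models tabulate the sound
# speed c on different mass grids, so model B is first interpolated onto
# model A's mass coordinate and d ln c = ln cA - ln cB is formed pointwise.
def lagrangian_dln(m_a, c_a, m_b, c_b):
    """d ln c of model A minus model B, evaluated on model A's mass grid."""
    c_b_on_a = np.interp(m_a, m_b, c_b)   # model B resampled at model A's masses
    return np.log(c_a) - np.log(c_b_on_a)

m_a = np.linspace(0.0, 1.0, 200)            # normalized mass grid, model A
m_b = np.linspace(0.0, 1.0, 150)            # coarser grid, model B
c_a = 1.0 + 0.5 * (1.0 - m_a) ** 2          # synthetic sound-speed profiles
c_b = 1.0 + 0.5 * (1.0 - m_b) ** 2 * 1.01   # model B differs by ~1% in the core

dln = lagrangian_dln(m_a, c_a, m_b, c_b)
```

comparing at fixed mass rather than fixed radius removes the trivial part of the radius difference between the models , which is why the text quotes the radius mismatch separately .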
the value of @xmath13 in the external regions is particularly sensitive to the he abundance .
therefore , as one can see in the bottom panels of fig . [ figdif ] , the variations @xmath16 are smaller for the cesam2k mp - cles comparisons than for the cesam2k b - cles ones , and these differences increase with the mass of the model ; these results are in good agreement with what we would expect from the @xmath10 curves in fig . [ figys ] . to clarify how all these differences affect the seismic properties of the models , we compute , by means of the adiabatic seismic code losc ( scuflaire _ et al . _ , 2007b ) , the oscillation frequencies of all the models at evolutionary stage a ( main sequence models ) . in fig .
[ figfreq ] the frequency differences between cles and cesam models of 1.2 ( left panel ) and 1.3 ( right panel ) are shown for p - modes with degrees @xmath190 , 1 , 2 , 3 .
the similar behaviour of the curves for different degrees indicates that the observed frequency differences mainly reflect the near - surface differences between the models . in particular , the oscillatory component in the cles - cesam2k b frequency differences
is the characteristic signature of the different he content in the convective envelope .
note that the vertical scale in the two panels is not the same , and that the amplitude of the oscillatory component is related to the difference in surface he content .
comparisons for the 1.0 models showed frequency differences of about 0.4 @xmath20hz .
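the amplitude of such an oscillatory component can be read off with a simple linear fit ; the sketch below ( python ) fits a sinusoid of an assumed acoustic depth to synthetic frequency differences , where both the frequencies and the depth are illustrative numbers , not the task3 values .

```python
import numpy as np

# Amplitude of an oscillatory component in frequency differences: for an
# assumed acoustic depth tau of the He ionization glitch, fit
# A*sin + B*cos (+ constant offset) by linear least squares and take
# sqrt(A^2 + B^2) as the glitch amplitude.
def glitch_amplitude(nu, dnu, tau):
    """Least-squares amplitude of a sinusoid of 'period' 1/tau in dnu(nu)."""
    design = np.column_stack([np.sin(2 * np.pi * tau * nu),
                              np.cos(2 * np.pi * tau * nu),
                              np.ones_like(nu)])
    coeff, *_ = np.linalg.lstsq(design, dnu, rcond=None)
    return np.hypot(coeff[0], coeff[1])

nu = np.linspace(1500.0, 3500.0, 60)             # mode frequencies (microHz)
tau = 7.0e-4                                     # assumed acoustic depth (1/microHz)
dnu = 0.3 * np.sin(2 * np.pi * tau * nu) + 0.05  # synthetic difference signal

amp = glitch_amplitude(nu, dnu, tau)
```

in practice the acoustic depth itself is also fitted , but a fixed - depth linear fit is enough to show how the amplitude , and hence a surface - helium difference , is quantified .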
the evolution of the convective region boundaries in models with metal diffusion is difficult to study .
in fact , as already noted by bahcall _ et al . _ ( 2001 ) in the case of 1.0 models , the accumulation of metals below the convective envelope can trigger the onset of semiconvection : as the metal abundance increases below the convection region , the opacity locally increases and the affected layers end up becoming convectively unstable .
the evolution of these unstable layers strongly depends on the numerical treatment of convection borders used in the stellar evolution code .
cles does not treat semiconvection , and its algorithm for computing the chemical composition in convective regions includes a kind of `` numerical diffusion '' . in cles ,
the convectively unstable shells may grow and eventually join the convective envelope . as a consequence ,
the latter suddenly becomes deeper , destroys the z gradient , recedes , and the process starts again .
so , the crinkled profiles of @xmath10 for task3.2 and 3.3 are a consequence of the sudden variations of the depth of the convective envelope . since the timescale of diffusion decreases as the mass of the convective envelope decreases , semiconvection appears earlier in the 1.3 models than in the 1.2 ones .
furthermore , in contrast with the bahcall _ et al . _ ( 2001 ) results , semiconvection does not appear in our evolved 1.0 models , probably because of the `` numerical diffusion '' effect that reduces the efficiency of the metal accumulation .
all these effects can be seen in fig . [ semicon ] . in fig .
[ cecesam ] we plot the evolution of the convective envelope for the cesam models .
the different treatment of convection borders in the two codes leads to different depths of the convective envelope : at @xmath21 ,
the cles models have convective envelopes about 0.1% deeper than the cesam2k b ones , and about 2.3% , 0.6% and 0.4% shallower than the cesam2k mp models for 1.0 , 1.2 and 1.3 respectively .
semiconvection can also appear at the border of the convective core .
as explained in richard _ et al . _
( 2001 ) , because of the he abundance gradient generated at the border of the convective core by nuclear burning , the diffusion term due to the composition gradient counteracts the he settling term and he ends up by going out of the core . since the outward he flux interacts also with the metals , these may as well diffuse outward the core and prevent the metals settling .
the enhancement of metals at the border of the convective core induces an increase in opacity and , finally , the onset of semiconvection . for the masses considered in task3.2 and task3.3 , semiconvection appears very easily , as the mass of the convective core increases with time , leading to a quasi discontinuity in the he abundance . as for the convective envelope , the numerical treatment of the border of the convective regions is a key aspect of the convective core evolution . in fig .
[ semicon_core ] we plot the evolution of the convective regions in the central part of 1.2 models computed with cles ( left panel ) and with cesam ( right panel ) .
while cles treatment of convective borders keeps convectively unstable shells separated from the convective core ( grey region ) , it seems that cesam tends to connect these shells to the central convective region .
in fact , the envelope of the curve m@xmath22 vs. @xmath11 for cesam2k mb model approximately coincides with the `` semiconvection '' region in cles plot . as a consequence ,
a larger central mixed region in cesam2k mp than in cles leads to a longer main sequence phase , as seen in fig .
[ fighr ] .
in fact , the value of @xmath23 , just before it begins to decrease , is 6% and 12% larger for cesam2k mp models than for cles ones , for 1.2 and 1.3 respectively . on the other hand ,
the corresponding values for cesam2k b are 2% and 10% larger than cles ones .
the discrepancies we found between cesam2k mp and cles diffusion efficiency are in good agreement with the comparisons already published by tbl94 .
the large differences between cesam2k b and cles are instead rather unexpected . in fact , both codes
derive the diffusion velocities by solving the burgers equations ; however , the values of the friction coefficients appearing in those equations differ between the cesam2k b and cles approaches .
the resistance coefficients @xmath24 , which represent the effects of collisions between the particles i and j , are @xmath25 in cesam2k b , and @xmath26 in cles ( tbl94 ) .
the term @xmath27 is the same in both formulations and depends on the mass , charge and concentration of the particles i and j. the values of the quantity @xmath28 are derived from the numerical fits of the collision integrals ( paquette _ et al._,1986 ) , and the term @xmath29 is the coulomb logarithm from iben & macdonald ( 1985 ) .
furthermore , while tbl94 adopt for the heat flux terms @xmath30 , @xmath31 and @xmath32 their low density asymptotic values , cesam2k b computes them by using the collision integrals from paquette _ et al . _ ( 1986 )
. as shown in thoul & montalbán ( 2007 ) , the assumptions made in tbl94 can lead , for the task3.2 a model , to diffusion velocities between 6 and 20% larger than those that would be obtained by using paquette 's coefficients . to further clarify this point , we replaced in the burgers equations the coefficients used in cles with those used in cesam2k b , and we re - computed the models for task3.2 . the new evolution of the he surface abundance is plotted in fig
. [ yspaquette ] ( left panel , thick line ) together with the curves obtained directly by cesam2k b , standard cles , and cesam2k mp .
we see that the approximation adopted in tbl94 implies helium surface abundances slightly smaller than those that would be obtained by using the numerical fits by paquette .
the new cles values are close to cesam2k mp ones , but still quite far from cesam2k b results .
another important difference between the cesam and cles diffusion routines is that , while cesam follows each element inside z separately and determines the ionization degree of all the species , the standard version of cles assumes full ionization and follows only four species : h , he , electrons and an `` average '' element z with atomic mass 8 , charge 17.84 . to test the consequences of these approximations we computed the evolution of 1.2 with an updated version of cles that computes the ionization degree and allows one to follow separately up to 22 elements . in fig .
[ yspaquette ] ( right panel ) we plot the evolution of the he surface abundance for calculations assuming full ionization , and for calculations with partial ionization of the eleven most relevant elements in z. we can conclude that , at least for masses lower than or equal to 1.2 , the effect of partial ionization on the he diffusion velocity is negligible . finally , we checked the effect of the time step by computing cles evolution tracks with smaller and larger steps , but no significant effect was detected in the diffusion efficiency .
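the time - step check mentioned above can be sketched generically : run the same evolution with a coarser and a finer step and compare the final surface abundance . everything below ( the toy depletion model , the tolerance ) is illustrative and stands in for a full cles run :

```python
def evolve_ys(dt, t_end=1.0, ys0=0.28, rate=0.05):
    """toy stand-in for a stellar evolution run: forward-euler
    depletion of a surface helium abundance ys by settling."""
    n_steps = round(t_end / dt)
    ys = ys0
    for _ in range(n_steps):
        ys -= rate * ys * dt
    return ys

def timestep_sensitive(dt, factor=2.0, rel_tol=1e-2):
    """rerun with step dt/factor; report whether the final abundance
    moves by more than rel_tol (relative)."""
    coarse = evolve_ys(dt)
    fine = evolve_ys(dt / factor)
    return abs(coarse - fine) / abs(fine) > rel_tol
```

for this toy model , halving a step of 0.1 changes the result by much less than 1% , i.e. the run is converged in the time step .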
we compared models corresponding to task3.1 , task3.2 and task3.3 which were computed with three different implementations of microscopic diffusion .
the largest discrepancy ( @xmath3340% ) appears between codes that model diffusion velocities by solving the burgers equations ( cesam2k b and cles ) .
a detailed analysis showed that the approximations used in thoul et al .
1994 for the friction coefficients are not at the origin of this discrepancy .
computations with partial ionization have also shown that , for masses smaller than or equal to 1.2 , the full ionization assumption has no detectable effects .
therefore , we conclude that the difference between cles and cesam2k b results originates from the routine solving the burgers equation system . moreover , we showed that the different treatments of the convection borders can lead , when diffusion is included , to significant discrepancies ( up to 12% ) in the mass and radius of the convective regions .
the authors thank helas for financial support .
jm and st are supported by prodex 8 corot ( c90197 ) .
monteiro , m. k. j. p. f. g. , lebreton , y. , montalban , j. , christensen - dalsgaard , j. , castro , m. , deglinnocenti , s. , moya , a. , roxburgh , i. w. , and scuflaire , r. , et al . : 2006 , in f. favata , a. baglin , and j. lochard ( eds . ) , _ esa publications division , esa sp ; esa spec . publ . 1306 _ , pp . 363 - 372 | we present the results of comparing three different implementations of the microscopic diffusion process in the stellar evolution codes cesam and cles . for each of these implementations we computed models of 1.0 , 1.2 and 1.3 .
we analyse the differences in their internal structure at three selected evolutionary stages , as well as the variations of helium abundance and depth of the stellar convective envelope .
the origin of these differences and their effects on the seismic properties of the models are also considered . |
||||| Texas Rep. Dan Flynn (R-Canton) wants daylight saving time removed in the state.
Flynn filed a bill in November calling for legislators to end daylight saving time, and the public hearing for his bill is Wednesday.
“It was November of last year when we did the fall back, and I’m sitting there changing all of the clocks in my house and in my cars, and I’m … thinking, ‘Why in the world do we do this?’” Flynn said.
The bill, if passed, would mean Texas could opt out of the twice-a-year time change, which the Uniform Time Act of 1966 established.
In 1966, the Uniform Time Act set nationwide start and end times for daylight saving time — the last Sunday in April and October. Since the act’s implementation, daylight saving time has been moved to the second Sunday in March and the first Sunday in November.
The act also allows states not to follow daylight saving. Currently, Hawaii, Arizona and some parts of Indiana do not practice the time change.
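Under the current federal rule, the DST boundaries — the second Sunday in March and the first Sunday in November — can be computed with ordinary date arithmetic. A minimal sketch (the function names are illustrative):

```python
from datetime import date, timedelta

def nth_sunday(year, month, n):
    """Date of the n-th Sunday of a month (weekday(): Mon=0 .. Sun=6)."""
    first = date(year, month, 1)
    days_to_sunday = (6 - first.weekday()) % 7
    return first + timedelta(days=days_to_sunday + 7 * (n - 1))

def us_dst_bounds(year):
    """Post-2007 federal rule: DST runs from the second Sunday in March
    to the first Sunday in November."""
    return nth_sunday(year, 3, 2), nth_sunday(year, 11, 1)
```

For 2015, this gives March 8 and November 1.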
Flynn said he has found that removing daylight saving time in the state would not negatively impact farmers or increase energy usage. Additionally, he said mothers have expressed concern about leaving their children at bus stops when it is darker in the mornings because of daylight saving time.
“I think the trouble that [daylight saving time] causes far outweighs any benefits that it could possibly have,” Flynn said.
There is not a clear answer as to whether daylight saving time saves energy. According to a recent Dallas Morning News article, a 2008 study in Indiana found residential energy use increased by 1 percent when daylight saving time was implemented statewide. On the other hand, a 2008 report from the U.S. Department of Energy found extended daylight saving hours, established in 2007, saved 0.3 percent of the year’s energy.
Engineering junior Rohan Nagar said he thinks daylight saving time is pointless in today’s society and would like to see it abolished nationwide.
“Currently it’s up to the states to decide whether they want to follow daylight saving or not, but I think that causes a lot of confusion, especially if half the states are following daylight saving and half aren’t,”
Nagar said.
Martha Habluetzel, a retired postal carrier, organized a gathering at the Capitol on Tuesday against daylight saving time, and she was one of a handful of people to attend and speak with Flynn about his bill. If Texas were to stop using daylight saving time, Habluetzel said she thinks other states in the central time zone and eventually the U.S. as a whole would stop using the measure.
“If Texas [abolishes] daylight saving time … then I think it will be easy for it to be abolished nationally,” Habluetzel said. “[Texas is] so large.”
Engineering junior Oriana Wong said she is tired from losing an hour of sleep on daylight saving.
Nevertheless, Wong said if Texas were to eliminate daylight saving time, there would be confusion, especially for business people working across state lines.
“Just by driving home to their workplace, they would have to remember to change their clocks every day,” Wong said.
Flynn said he does not foresee this being a problem.
“They will be able to adjust to whatever [the time change] is,” Flynn said. “I personally kind of feel like this is what I’m going to call ‘Texas Time.’” ||||| State Rep. Dan Flynn filed a bill to exempt Texas from daylight-saving time and let it stay on Central Standard Time all year
PLANTATION, FL - MARCH 06: Howard Brown repairs a clock at Brown's Old Time Clock Shop March 6, 2007 in Plantation, Florida. This year daylight saving time happens three weeks early and some people fear that it could cause some computer and gadget glitches. (Photo: Joe Raedle, Getty Images)
Spring forward; fall back.
It's a phrase we've all heard for years when it's time to change the clocks by an hour as we move to daylight-saving time or off it.
State Rep. Dan Flynn, R-Van, is ready for it to stop. He and others say they are tired of messing with their clocks and just want one time year-round.
"It seems that every time we have to change the time, we get lots of complaints in our office," Flynn said. "A lot of people are just upset about the hassle of turning the clock forward and backward. … It seems as though I have had more calls than ever this year."
So he filed a bill to exempt Texas from daylight-saving time and let it stay on Central Standard Time all year.
Rep. James White, R-Hillister, said he hopes to sign on in support of Flynn's bill in the legislative session, which starts Jan. 13. But he has also filed his own measure.
White's bill is a little different because it would create a task force to study whether the state should continue daylight-saving time. Members would make their recommendations to top state officials by Dec. 1, 2016.
"I'm ready to stop it as well," White said. "I get so many calls from constituents about 'Why are we doing this?' 'Do we have to do it?' and … [complaints] about the physical and mental pressures it brings about. … It seems to be really painful and physically tough for some people."
If Texas opts out of daylight-saving time, it would be one of only three states to do so. Hawaii doesn't participate and neither does most of Arizona (the Indian reservations there do observe it).
||||| Lawmaker hopes to end Daylight Saving Time in Texas
Texans are ready to rebel against the scourge of Daylight Saving Time. State Rep. Dan Flynn, R-Canton, has proposed a bill that would eliminate the biannual time change come September.
The legislation, HB 150, will be debated by the House’s Government Transparency and Operation Committee on Wednesday. If the bill passes, the Lone Star State will switch off Daylight Saving Time forever.
That idea has some momentum since studies show DST has a negligible effect on saving energy today, and leads to a spike in car and workplace accidents in the weeks following springing forward.
The Texas Monthly has a neat history of the state’s unstable relationship with Daylight Saving Time. Two Texas-born presidents propagated the policy:
Lyndon Johnson signed the Uniform Time Act into law in 1966, establishing the period running from the last Sunday of April to the last Sunday of October. Then George W. Bush extended the law’s enforcement so that it began on the second Sunday of March and ended on the first Sunday in November.
States can opt out of DST (Arizona and Hawaii already ignore it). After its original passage, a Texas representative tried to unravel the law in 1967 with a “Texas Exceptional Time” bill, but the measure failed. Flynn’s bill, however, might have the support to end government intrusion on our clocks. ||||| Under the Uniform Time Act, as amended, States may exempt themselves from observing Daylight Saving Time by State law. If a State chooses to observe Daylight Saving Time, it must begin and end on federally mandated dates.
Daylight Saving Time is not observed in Hawaii, American Samoa, Guam, Puerto Rico, the Virgin Islands, and most of Arizona.
Purpose of Daylight Saving Time
Daylight Saving Time is observed for several reasons: ||||| Daylight saving time strikes again Sunday at 2 a.m., at least for every state outside Hawaii and Arizona. Though DST has been a part of life in the United States since World War I, its origin and effects remain misunderstood, even by some of the lawmakers responsible for it. Here are some common myths about daylight saving time.
1. Daylight saving time was meant to help farmers.
Many of us have heard that DST was developed because of farming. The idea that more daylight means more time in the field for farmers continues to get airtime on the occasional local news report and in state legislatures — “Farmers wanted it because it extends hours of working in the field,” Texas state Rep. Dan Flynn offered after filing a bill that would abolish DST. Even Michael Downing, who wrote a book about DST, has said that before researching the subject, “I always thought we did it for the farmers.”
In fact, the inverse is true. “The farmers were the reason we never had a peacetime daylight saving time until 1966,” Downing told National Geographic. “They had a powerful lobby and were against it vociferously.” The lost hour of morning light meant they had to rush to get their crops to market. Dairy farmers were particularly flummoxed: Cows adjust to schedule shifts rather poorly, apparently.
Daylight saving time, in this or any other country, was never adopted to benefit farmers; it was first proposed by William Willett to the British Parliament in 1907 as a way to take full advantage of the day’s light. Germany was the first country to implement it, and the United States took up the practice upon entering World War I, hypothetically to save energy. How did farmers end up being the mythical source of DST? Downing suggests that because they were such vocal opponents, “they became associated into the popular image of daylight-saving and it got inverted on them. It was just bad luck.”
2. The extra daylight makes us healthier and happier.
That additional vitamin D is good for us, right? Sen. Ed Markey, D-Mass., thinks so. “In addition to the benefits of energy savings, fewer traffic fatalities, more recreation time and increased economic activity, Daylight Saving Time helps clear away the winter blues a little earlier,” he said in a statement last year. “Government analysis has proven that extra sunshine provides more than just smiles. . . . We all just feel sunnier after we set the clocks ahead.” Gwyneth Paltrow agrees, opining to British Cosmopolitan in 2013: “We’re human beings and the sun is the sun — how can it be bad for you? I think we should all get sun and fresh air.”
A little more vitamin D might be healthy, but the way DST provides it is not so beneficial to our well-being. Experts have warned about spikes in workplace accidents, suicide and headaches — just to name a few health risks — when DST starts and ends. One 2009 study of mine workers found a 5.7 percent increase in injuries in the week after the start of DST, which researchers thought was most likely due to disruption in the workers’ sleep cycles. An examination of Australian data found a slight uptick in male suicides in the weeks following time shifts, to the effect of half an excess death per day, which the researchers blamed on the destabilizing effect of sleep disruption on people with mental health problems. And some physicians warn that changes in circadian rhythm can trigger cluster headaches, leading to days or weeks of discomfort.
The literature on these health effects is far from conclusive, but spring sunshine does not outweigh the downsides of sleep disruption across the board.
3. It helps us conserve energy.
Congress passed the Energy Policy Act — which extended DST by a month — in 2005, ostensibly to save four more weeks’ worth of energy. “An annual rite of spring, daylight saving time is also a matter of energy conservation. By having a little more natural daylight at our disposal, we can help keep daily energy costs down for families and businesses,” Rep. Fred Upton, R-Mich., who co-sponsored the legislation along with Markey, said in a 2013 statement.
But in a follow-up study on the effects of the extension, the California Energy Commission found the energy savings to be a paltry 0.18 percent at best. Other studies have indicated that people may use less of some kinds of energy, such as electric lights, but more of others. More productive daylight hours might be meant to get you off the couch and recreating outside, but they’re just as likely to lead to increased air-conditioner use if you stay home and gas guzzling if you don’t.
A study in Indiana found a slight increase in energy use after the entire state adopted DST (for years, only some counties followed it), costing the state’s residents about $9 million; the researchers believed that more air conditioning in the evening was largely to blame. That’s a far cry from the $7 million that Indiana state representatives had hoped residents would save in electricity costs.
4. DST benefits businesses.
We know that businesses think daylight saving time is good for the economy — just look at who lobbied for increased DST in 2005: chambers of commerce. The grill and charcoal industries, which successfully campaigned to extend DST from six to seven months in 1986, say they gain $200 million in sales with an extra month of daylight saving. When the increase to eight months came up for a vote in 2005, it was the National Association of Convenience Stores that lobbied hardest — more time for kids to be out trick-or-treating meant more candy sales.
But not all industries love daylight saving time. Television ratings tend to suffer during DST, and networks hate it. “Come March, when daylight savings time and the households using television level goes down in the early evening, it really takes its toll on the 8 o’clock hour, particularly for comedies,” Kevin Reilly, then-chairman of Fox Entertainment, said in 2014, explaining his decision to cut the network’s 8 p.m. comedy hour.
Airlines have also complained loudly about increased DST. When DST was lengthened, the Air Transport Association estimated that the schedule-juggling necessary to keep U.S. flights lined up with international travel would cost the industry $147 million. DST hurts other transportation interests, too: Amtrak is known to halt its overnight trains for an hour when clocks change in November so they don’t show up and leave from their 3 a.m. destinations early. In the spring, trains have to try to make up lost time so they can stick to the schedule.
DST might also cost employers in the form of lost productivity. A 2012 study found that workers were more likely to cyberloaf — doing non-work-related things on their computers during the day — on the Monday after a DST switch. Study participants who lost an hour of sleep ended up wasting 20 percent of their time.
5. Standard time is standard.
Guess what time we’re on for eight months of the year? Daylight saving time. In what universe is something that happens for only one-third of the time the “standard”? Even before the 2007 change, DST ran for seven months out of 12.
In fact, some opponents of DST aren’t against daylight saving time per se: They think it should be adopted as the year-round standard time. Because it basically already is.
Rachel Feltman anchors the blog Speaking of Science at The Washington Post. ||||| Are you feeling groggy or grumpy because you have sprung forward in the Spring to Daylight Saving Time? Perhaps then today is the day to call on the House to suspend all necessary rules to take up and consider House Bill 150 exempting Texas from DST.
To some degree, you can thank two Texas presidents for the sleepless state you are in, having lost an hour in the wee morning of Sunday.
Congress in 1966 passed a uniform time law under which President Lyndon B. Johnson established DST to take effect on the last Sunday of April through the last Sunday of October. Apparently, there just wasn’t enough recreational sunlight left at the end of the workday, so President George W. Bush in 2005 extended DST from the second Sunday of March through the first Sunday of November.
Now, riding in from Northeast Texas to save us from this annual disruption to our circadian rhythms is state Representative Dan Flynn, R-Canton, with House Bill 150 to exempt Texas from DST. The legislation is set for a hearing Wednesday in the Government Transparency and Operations Committee.
If you want one person to blame for Daylight Savings Time, that would be New Zealand entomologist George Vernon Hudson. He worked as a postal clerk during the day and wanted extra hours of daylight so he could collect insects in the evening. He first proposed the idea of DST in 1895. The world wisely did not immediately jump onto Hudson’s idea like a duck onto a June bug.
Congress briefly established DST at the end of World War I to conserve fuel, but it was so unpopular that it was voted out as quickly as the League of Nations. As a fuel conservation measure, DST was readopted during WWII. President Roosevelt called it “War Time.” With the Axis powers defeated, DST rode off into the sunset.
Standard Time marched on until Congress in 1966 decided the nation’s patchwork of local times needed consolidation. President Johnson then established our first permanent pattern of DST. The recreation industry loved it. It was hated by restaurants, theaters, drive-in movies, dairy farmers and the parents of small children who had to go to school in the dark during the dying days of Winter.
The reaction in Texas was swift. State Representative William Smith of Beaumont in 1967 tried to persuade the state House to adopt Texas Exceptional Time, hoping to exempt us from Daylight Savings Time before the official first spring-forward that year. Leading the charge in opposition to the bill and in favor of lost sleep was future Speaker Billy Wayne Clayton of Springlake in West Texas. Clayton persuaded the House to kill Smith’s bill on a 90-56 vote on March 28, 1967. The House later that year passed a resolution in Clayton’s honor.
“This gallant legislator showed his fervent and persuasive power by victoriously defending his stand against passage of the so-called Daylight Savings Time proposal so that All Texans…should have equal opportunity and joy of arising one hour Earlier every morning until late October; now, therefore be it “Resolved, That our estimable colleague be designated and he is hereby so designated as ‘Keeper of the Clock’ and he is further charged with the responsibility of being bodily in the House Chamber at exactly 1:59 ante meridian on Sunday, April 30, 1967, A.D., and shall at the exact stroke of Two, turn the House of Representatives’ clock Forward One Hour; thus by the stroke of His Hand shall be given All Texans, including those of the renown District Number 78, One Hour Less Sleep.”
Now, forty-eight spring-forward-in-the-springs later, it is Representative Flynn’s chance to turn back the clock.
When Flynn pre-filed his bill last year, he told the Fort Worth Star-Telegram’s Anna M. Tinsley that people in his district were tired of adjusting their clocks and mothers complained of sending their children to school on dark winter mornings.
“It seems that every time we have to change the time, we get lots of complaints in our office,” Flynn said. “A lot of people are just upset about the hassle of turning the clock forward and backward. … It seems as though I have had more calls than ever this year.”
The time has come to let your voice be heard! Are you with the Will Smiths and Dan Flynns of the world or the Billy Claytons? Your comments will be duly noted. ||||| This could be a very good session for faith family and freedom with Republican majorities in both the Texas House and Senate. We can work on zero based budgeting where we would look at what we spend and prioritize to not spend all that we have and instead spend only what we need. This session we look forward to progress on a variety of issues including Second Amendment rights, the ability to farm and ranch and conduct business in Texas without Government interference, and the ability to push back on the federal government.
It was the State that created the Federal Government and not the other way around. Our legislation works to ensure our Courts are governed by American Law and not foreign law in Texas. It also looks to rein in more than a dozen school districts with a billion dollars of principal and interest debt. The bills I’m working on help others to save taxpayer dollars with local water utility districts, and protect the free exercise of religion and a law requiring students take a semester on the U.S. Constitution prior to graduation from high school.
We want to make sure that voters get a wider range of transparency in terms of seeing what they are voting for, how much debt is currently being accumulated, and most importantly, the cost to our children and grandchildren. We also have the opportunity to do simple things like safeguarding our Teachers, providing them the right to defend themselves in the classroom. Our Texas military forces can be strengthened through the budget process and through additional recruiting. Our borders can remain safe through use of a variety of agencies who are committed to keep all Texans safe. It’s also nice to have legislation including eliminating daylight saving time as a benefit to many including mothers who do not want to put their kids on the bus stop when it is dark and put them to bed when it is light.
We also have the ability to get nothing done if we do not focus on ensuring that we come together in a meaningful way, focusing on the values that have made Texas great. Those values of faith, family and freedom have never been more important. The decision as a duly elected Representative of all Texans is to ensure that we achieve what the people of Texas have elected us to do. ||||| Published on Mar 8, 2015
Last Week Tonight with John Oliver: Daylight Saving Time - How Is This Still A Thing? (HBO)
Daylight saving time doesn’t actually benefit anyone. Strangely, it’s still a thing!
extracorporeal membrane oxygenation ( ecmo ) is an essential form of life support and acts as the last resort for neonates with severe cardiorespiratory failure who do not respond to aggressive conventional treatment .
ecmo improves oxygenation and organ perfusion and saves precious time for failed heart and lung to recover .
venous - venous ecmo and venous - arterial ecmo are the two most common ecmo modes . since the first case of successful neonatal ecmo for a severe meconium aspiration syndrome ( mas ) neonate in 1976 , ecmo has been used worldwide and has saved thousands of neonates suffering from respiratory and/or cardiac failure .
the most common candidate diseases for neonatal ecmo are mas , persistent pulmonary hypertension of neonate , congenital diaphragmatic hernia , severe sepsis , etc .
neonatal ecmo is still at an underdeveloped stage in mainland china in comparison with other countries .
the current paper reported the first neonate with early - onset group b streptococcus ( gbs ) sepsis saved by v - a ecmo in mainland china , and we also conducted a comprehensive literature review of neonatal ecmo utility in mainland china .
a term neonate with gestational age of 39 weeks and birth weight of 3180 g was born by spontaneous vaginal delivery with unremarkable maternal history ; however , the gbs status was unknown .
she did not need aggressive resuscitation in the delivery room and had good apgar scores of 10 and 10 at 1 and 5 min , respectively .
eight hours later , she developed respiratory distress , needed to be intubated , and received mechanical respiratory support ; antibiotics were also started empirically after blood culture sampling .
she was transported to our neonatal intensive care unit ( nicu ) at the age of 15 h. in the nicu , she developed cardiac and respiratory failure with hypotension and hypoxemia , even though she received aggressive fluid replacement and inotropic agents including epinephrine and a large dose of dopamine .
she was put on high - frequency oscillatory ventilation with a high mean airway pressure of 17 cmh2o and a high fio2 of 1.0 , and she was given inhaled nitric oxide as well , due to pulmonary hypertension confirmed by bedside echocardiography . however , the blood gas remained poor : ph 7.24 , pao2 35.5 mmhg , paco2 40.8 mmhg , and base deficit 9.0 mmol / l . the oxygenation index ( oi = map × fio2 × 100 / pao2 ) was markedly elevated .
after ruling out neonatal ecmo contraindications such as intracranial hemorrhage and unrepairable congenital heart defects ( chds ) , she was put on v - a ecmo at the age of 23 h. in addition , the blood culture obtained at the referring hospital was later reported positive for gbs , confirming the diagnosis of early - onset gbs sepsis complicated by cardiorespiratory failure with no response to maximal conventional treatment .
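for context , the oxygenation index used to trigger ecmo evaluation can be computed from the blood - gas values reported above ( an illustrative sketch , not the authors' code ; oi > 40 is a commonly cited ecmo criterion ) :

```python
def oxygenation_index(map_cmh2o, fio2, pao2_mmhg):
    """OI = MAP x FiO2 x 100 / PaO2 (higher values mean worse oxygenation)."""
    return map_cmh2o * fio2 * 100.0 / pao2_mmhg

# Values from the case: MAP 17 cmH2O, FiO2 1.0, PaO2 35.5 mmHg
oi = oxygenation_index(17.0, 1.0, 35.5)
print(round(oi, 1))  # ~47.9, above the commonly cited ECMO threshold of 40
```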
a medtronic minimax plus oxygenator and a bio - console 560 centrifugal blood pump were used in this patient .
the v - a ecmo cannulation was achieved by right jugular vein and common carotid artery insertion .
the positions of the catheters were guided by bedside echocardiography . during the 273 h ecmo procedure , blood gas , lactate , and glucose levels were monitored every 2 h , and c - reactive protein and chest x - ray were done daily .
the ecmo flow was maintained at 100 - 130 ml / kg / min based on hemodynamic monitoring .
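the weight - based flow target above translates into absolute flows as follows ( a minimal sketch using the reported birth weight of 3180 g ; the function name is ours ) :

```python
def ecmo_flow_range(weight_kg, low_ml_kg_min=100, high_ml_kg_min=130):
    """Absolute ECMO flow range (ml/min) from a weight-based target."""
    return weight_kg * low_ml_kg_min, weight_kg * high_ml_kg_min

lo, hi = ecmo_flow_range(3.18)  # birth weight 3180 g
print(round(lo), round(hi))  # 318 413 ml/min
```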
she tolerated ecmo well without major complications , apart from mild bleeding at the cannulation sites and thrombocytopenia . she was decannulated at the age of 12 days after 273 h of ecmo and extubated the following day .
neurologic evaluations including cranial magnetic resonance imaging and electroencephalogram were normal before discharge at the age of 36 days .
based on data from the extracorporeal life support organization ( elso ) , a total of 69,114 patients registered in elso had received ecmo treatment worldwide by july 2015 . more than half of these were neonatal cases , accounting for 51.4% ( 35,505/69,114 ) , with survival rates of 74% and 41% in respiratory failure and cardiac failure cases , respectively . in comparison with the advanced and sophisticated ecmo utility in critical patients in developed countries , ecmo , and especially neonatal ecmo , is still at an underdeveloped stage in mainland china , and the survival rate remains much lower than in developed countries . to know the current state of neonatal ecmo in mainland china , we conducted a comprehensive literature review through searching
"neonatal ecmo" in chinese databases , including the china knowledge resource integrated database and the wanfang database ; we also used the keywords ( ecmo ) and china [ affiliation ] as the search strategy in pubmed for papers published by authors from mainland china .
after abstract review , 8 papers were included in the full - paper assessment . based on the clinical information , we deleted duplicated cases from the same institutions .
after that , we collected 22 reported neonatal cases with birth weights of 2.8 - 4.0 kg .
fourteen of the 22 patients were patients with chds , while the other 8 patients were non - chd patients , including one mas , 4 neonatal respiratory distress syndrome ( nrds ) , 2 pulmonary dysplasia , 1 cardiac arrest secondary to hydronephrosis and electrolytes disturbance .
v - a ecmo was utilized in 21 of the 22 patients ; the duration of ecmo running was 64 ± 42 h ( range 7 - 173 h ) .
complications occurred in 13 patients , and the overall survival rate was 41% ( 9/22 ) . of note , the survival rate in non - chd patients was only 25% ( 2/8 ) , and none of the patients with congenital anomalies , such as pulmonary dysplasia and hydronephrosis , survived .
the current case is the first neonate with severe gbs sepsis saved by v - a ecmo in mainland china .
( table : reported neonatal ecmo cases done in mainland china ) in this article , we reported the first neonatal ecmo for a gbs sepsis neonate in mainland china , and the ecmo duration of 273 h was the longest among all reported cases . among the 22 neonatal ecmo cases reported in previous papers , only 2 non - chd patients survived .
the ecmo running duration in our case was much longer than the 48 h in the mas case and the 50 h in the nrds case .
according to the available evidence , gbs could cause a severe inflammatory reaction in the patients , which is also manifested in our case by extremely elevated c - reactive protein levels .
this kind of inflammation recovers much more slowly than self - limited lung diseases such as mas and nrds , which could be the underlying reason for the long duration of ecmo running in our case . of note , because of the low survival rate in patients with pulmonary dysplasia , the administration of neonatal ecmo in such patients should be cautious . in mainland china ,
the first reported neonatal ecmo case was a patient with left hypoplastic heart in 2006 , and the first reported non - chd case was a neonate with mas in 2009 .
the previous 22 cases reported by chinese authors were treated only in large hospitals located in beijing , shanghai , hangzhou , and guangdong , the four regions with the highest economic level in mainland china .
the high cost of ecmo , which is not covered by chinese medicare , is the biggest hurdle to developing neonatal ecmo in china .
neonatal ecmo is still underdeveloped in mainland china ; however , because of its remarkable effectiveness in treating critical patients who do not respond to conventional treatment , ecmo does provide a chance of survival for neonates who have a grave prognosis under conventional treatment .
| we report the first successful treatment of extracorporeal membrane oxygenation ( ecmo ) in a neonate with group b streptococcus ( gbs ) sepsis and cardiorespiratory failure , and further conduct a literature review in the experience of neonatal ecmo utility in mainland china .
a term neonate with cardiorespiratory failure secondary to gbs sepsis was put on venous - arterial ecmo at 23 h of age .
after 273 h of ecmo running , the patient was saved without major complications .
the comprehensive literature review demonstrated that 22 neonates had previously received ecmo in mainland china , and 14 of the 22 patients had congenital heart defects .
the overall survival rate was 41% ( 9/22 ) .
neonatal ecmo remains underdeveloped in mainland china .
moreover , it does provide a chance of survival for neonates who have a grave prognosis by conventional treatment . |
bipolar disorder ( bp ) is a mental disease with a high social burden measured by disability - adjusted life years , and the prevalence of bp is approximately 2.0% of the general population.1,2 traditionally , lithium and sodium valproate were recommended as the first - line medications for severe manic or mixed - phase bp in accordance with the expert consensus guidelines series set by the american psychiatric association .
however , these drugs cannot produce rapid sedative effects , and thus often fail to manage the acute manic episode of bp.3 therefore , the addition of an atypical antipsychotic to conventional mood - stabilizer therapy would be a useful strategy for the treatment of bipolar i manic episodes .
typical antipsychotics , such as haloperidol and chlorpromazine , produce a series of adverse reactions , such as extrapyramidal side effects , orthostatic hypotension , and liver damage,4 which are not tolerated by many patients . in the meantime , atypical antipsychotics , such as risperidone and olanzapine , which have been widely used in the treatment of schizophrenia,5 have been credited with good therapeutic effects and rare side effects .
olanzapine , a novel atypical antipsychotic drug , has been demonstrated in several placebo - controlled trials to possess acute antimanic effects through either monotherapy or combination with other psychotropic agents.6-8 further studies have assessed the antimanic effects of olanzapine in comparison with mood stabilizers , and found that olanzapine had better efficacy than divalproex and lithium.9,10 recently , several trials that added olanzapine to ongoing lithium or valproate therapy also showed positive outcomes.11-13 however , all these results came from trials in caucasian populations , and few studies have included asian populations .
moreover , populations in previous studies were not drug - naive subjects , which might have skewed the results .
the present study aimed to investigate the efficacy and safety of combined therapy with olanzapine and sodium valproate in the management of acute manic episodes of bp , and to compare the combined therapy with olanzapine or sodium valproate monotherapy in a chinese population . through this study , we report that the combination of olanzapine plus valproate improves clinical global impression bipolar ( cgi - bp ) scale scores compared with either olanzapine or valproate monotherapy .
our work presents a novel comparison of olanzapine plus valproate combination therapy versus olanzapine or valproate monotherapy to improve clinical outcome in bipolar i manic episode treatment .
this study recruited 120 patients with an acute manic episode of bp from the department of psychiatry of the second affiliated hospital of zhejiang university school of medicine .
these patients ( 60 drug - naive males and 60 drug - naive females ) were in their first acute manic episode when included in this study .
all of the patients were diagnosed with bipolar i by qualified psychiatrists according to the fourth edition of the diagnostic and statistical manual of mental disorders ( dsm - iv ) .
the young mania rating scale ( ymrs ) was used to assess the severity of bp.14 patients with a ymrs total score of 17 or greater were recruited in the study .
the exclusion criteria were : 1 ) female patients with pregnancy or lactation ; 2 ) severe and unstable diseases , including cardiovascular , respiratory , liver , kidney , gastrointestinal , neurological , endocrine , immune , and blood - system conditions , narrow - angle glaucoma , and seizures ; 3 ) substance dependence ( except tobacco ) according to the dsm - iv standards ; 4 ) history of intolerance to olanzapine or sodium valproate ; and 5 ) history of use of any antipsychotics or mood stabilizers .
the study was approved by the medical ethics committee of the second affiliated hospital of zhejiang university .
the study was a prospective , double - blind , randomized controlled trial . for randomization , a random number table with sequentially numbered , opaque , and sealed envelopes was used to conceal the allocation sequence .
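for illustration , a balanced 1:1:1 allocation sequence such as the one concealed in the envelopes could be generated as follows ( an illustrative sketch under our own assumptions , not the authors' actual procedure ; the fixed seed is ours ) :

```python
import random

def allocation_sequence(n_per_group=40, groups=("A", "B", "C"), seed=2024):
    """Shuffle a balanced list of group labels into an allocation sequence."""
    rng = random.Random(seed)  # fixed seed only for reproducibility of the sketch
    seq = [g for g in groups for _ in range(n_per_group)]
    rng.shuffle(seq)
    return seq

seq = allocation_sequence()
print(len(seq), seq.count("A"), seq.count("B"), seq.count("C"))  # 120 40 40 40
```

each label in the shuffled sequence would then be sealed in one numbered envelope, keeping the allocation concealed until enrollment.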
this regimen was continued until the development of severe adverse events or up to 28 days , whichever was sooner .
the physician kept the randomization code , and no rater became aware of treatment allocations before requesting unmasking at the end of the study . for group
a , patients initially received sodium valproate ( xiangzhong pharmaceutical co ltd , hunan , people s republic of china ) at 0.6 g / day ( two to three times per day orally ) .
the dose of sodium valproate was gradually increased to 1.2 - 1.8 g / day based on the patient 's response . for group b
, patients initially received olanzapine ( eli lilly and company , indianapolis , in , usa ) at 10 mg / day ( once a day orally ) .
the dose of olanzapine was adjusted within 5 - 20 mg / day based on the patient 's condition . for group c
, patients received both olanzapine and sodium valproate , which were administered in the same manner as in groups a and b. this whole treatment course lasted for 4 weeks , and clinical data were collected at the beginning of the trial and the end of every week by interview .
treatment was stopped when severe side effects occurred or the disease worsened . aside from trial medications ,
patient assessments were conducted by a professional psychiatrist who was blind to the experimental condition . the ymrs was used to evaluate the severity of manic symptoms .
the primary measure of the efficacy of drugs was the mean change from baseline to end point in the ymrs total score .
clinical responses on the ymrs were defined as an improvement of 50% or greater.3 patients were assessed once a week after treatment .
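the 50% response criterion can be sketched as follows ( the scores in the example are hypothetical , chosen near the reported baseline mean of about 34 ) :

```python
def ymrs_percent_decrease(baseline, endpoint):
    """Percentage decrease from the baseline YMRS total score."""
    return (baseline - endpoint) / baseline * 100.0

def is_responder(baseline, endpoint):
    """Clinical response: improvement of 50% or greater on the YMRS."""
    return ymrs_percent_decrease(baseline, endpoint) >= 50.0

print(is_responder(34, 15))  # True: ~55.9% decrease
print(is_responder(34, 20))  # False: ~41.2% decrease
```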
all treatment - emergent adverse effects were recorded ; they were monitored mostly by the subjects and the accompanying family members , and evaluated by the professional interviewer . any new adverse reaction reported by a subject at least once was regarded as a treatment - emergent adverse effect .
specifically , weight gain was defined as a gain of 5% or greater , and extrapyramidal reactions were assessed by the simpson - angus scale .
severe adverse effects and poor efficacy were classified as : 1 ) patients could barely tolerate or accept the medications , and 2 ) the interviewer concluded that the trial was not beneficial to patients .
laboratory tests , including routine blood test , hepatic function test , blood glucose monitoring , lipid panel screen , and electrocardiograph , were conducted at the beginning and the end of the treatment .
data were analyzed on an intent - to - treat basis , including all patients who met the entry criteria , and all measurement information was handled by a statistician who was blind to the experimental condition .
all statistical analysis was performed using spss 16.0 ( spss inc , chicago , il , usa ) .
a total of 120 patients were recruited into the study , and were randomly divided into three groups ( groups a , b , and c ) .
forty patients were included in group a at the beginning of the treatment , but only 37 patients ( 18 males and 19 females ) completed this trial .
group b included 40 patients at the beginning of the treatment , and 39 patients ( 19 males and 20 females ) completed the trial .
group c contained 40 patients at the beginning of the treatment , and 38 patients ( 18 males and 20 females ) completed the trial .
mean disease duration was 12.7 ± 5.8 days ( range 7 - 18 days ) ( table 1 ) .
there were no significant differences in age , sex , or disease duration among groups a , b , and c ( p>0.05 ) ( table 1 ) .
there was no significant difference in the percentage of patients who completed the trial among the three groups ( group a 92.5% , group b 97.5% , group c 95.0% ; p>0.05 ) . in group a , two patients discontinued treatment due to poor efficacy , and one patient withdrew voluntarily . in group b , one patient discontinued treatment due to side effects ( 5.2% weight gain in 2 weeks according to our measurements , and unacceptable drowsiness reported by the patient ) . in group c ,
one patient discontinued treatment due to side effects ( 5.6% weight gain in three weeks according to our measurements ) , and one patient withdrew due to abnormal hepatic function ( alanine aminotransferase increased to 467 u / l from 42
u / l ) ( table 2 ) . at the end of the treatment ,
the average dose of sodium valproate was 1.53 ± 0.22 g / day in group a , and the average dose of olanzapine was 16.3 ± 2.1 mg / day in group b. the average dose of sodium valproate in group c was 1.08 ± 0.45 g / day , which was significantly lower than in group a ( 1.53 ± 0.22 g / day , p<0.05 ) .
the average dose of olanzapine in group c was 13.1 ± 3.2 mg / day , which was lower than in group b ( 16.3 ± 2.1 mg / day ) , but the difference was not significant ( p>0.05 ) ( table 1 ) .
patients in all three groups showed a significant improvement in ymrs scores as the primary outcome during the course of the treatment .
there was no significant difference in the baseline ymrs score among the three groups ( group a 34.25 ± 6.07 , group b 34.55 ± 7.03 , group c 34.39 ± 9.12 ) .
the percentage decrease of the ymrs score in both groups b and c was significantly higher than that in group a ( p<0.01 ) during the 4 - week treatment ( table 3 ) . at the end of the fourth week , patients in group c showed significantly greater improvement in ymrs score compared with group b ( p<0.01 ) , and the percentage decrease of the ymrs score in group a ( < 75% ) was significantly smaller than in groups b and c ( > 75% , p<0.05 ) ( table 3 , figure 1 ) .
cgi - bp scale scores were used to assess the secondary outcome in groups a , b , and c. there was no significant difference in baseline cgi - bp scale total scores among the three groups ( group a 5.55 ± 0.76 , group b 5.65 ± 1.09 , group c 5.56 ± 0.86 ; p>0.05 ; table 4 ) . during weeks 1 and 2 , there was no significant difference in cgi - bp scale total scores among the three groups ( p>0.05 ) . during weeks 3 and 4 ,
patients in groups b and c showed significant improvement in cgi - bp scale total scores compared with those in group a ( p<0.05 ) , and patients in group c showed significant improvement in cgi - bp scale total scores compared with those in group b ( p<0.01 ; table 4 , figure 2 ) .
no statistically significant changes were seen from baseline in extrapyramidal symptoms on the simpson angus scale .
rates of adverse events , including weight gain , sleepiness , constipation , and dizziness , were more frequently reported in groups b and c than in group a ( p<0.05 ; table 2 ) .
in this double - blind , randomized controlled study , we found that combination therapy of olanzapine plus sodium valproate had significantly better efficacy in managing bipolar i manic episodes than valproate monotherapy , as assessed by ymrs and cgi - bp scale scores , in line with previous studies .
furthermore , we also found that the combination therapy had an advantage over olanzapine monotherapy in efficacy , which was not shown in previous randomized controlled trials .
patients on olanzapine monotherapy showed significantly greater improvement of outcome than patients on sodium valproate monotherapy .
there were no significant differences in extrapyramidal symptoms among patients on different treatments , but the number of adverse events was higher in patients receiving olanzapine monotherapy or combined therapy . in the present study , the enrolled subjects were in their first acute manic episode without prior treatment with valproate or olanzapine , which was different from previous studies.3,11,12 meanwhile , the dosage - administration method and the intervals of follow - up interviews and assessment methods in this study were similar to previous studies , which proved acceptable to subjects and reasonable for the trial .
we also found that after 1 week of treatment , the percentage decrease in ymrs scores was significantly higher in patients receiving olanzapine monotherapy than in patients receiving valproate monotherapy .
moreover , after 3 weeks of treatment , cgi - bp scale total scores were significantly lower in patients receiving olanzapine than in patients receiving valproate .
these findings suggest that olanzapine monotherapy has a faster effect and is more effective in treating bipolar i mania than valproate monotherapy .
consistent with this , an open - label , 8-week trial of olanzapine or valproate for the treatment of bipolar manic relapse in italian adult patients has shown that patients receiving olanzapine had significant improvement in ymrs scores after 1 week of treatment.11 furthermore , another prospective open - label trial of olanzapine monotherapy and olanzapine combination therapy in europeans with bipolar mania disorder found that olanzapine treatment significantly reduced cgi - bp scale scores after 1 week.15 taken together , these studies suggest that olanzapine can produce efficacy in managing bipolar i manic episodes as early as 1 week after the start of treatment .
interestingly , the present study also found that the combination therapy of valproate with olanzapine showed similar efficacy by ymrs score within 3 weeks of the treatment compared with olanzapine monotherapy , suggesting that valproate had no significant efficacy after short - term treatment ( 3 weeks ) .
consistent with our findings , it has been reported that after 3 weeks of olanzapine but not valproate treatment , patients with bipolar i manic episodes showed significantly greater improvement in ymrs scores than controls.16 furthermore , a 47-week trial for the treatment of bipolar i mania has shown that the average onset of efficacy is 14 days for olanzapine and 62 days for valproate.17 our study found that the percentage decrease in ymrs scores in the group on valproate monotherapy reached over 50% only at the end of the fourth week , suggesting that valproate needed 3 weeks to produce efficacy for the treatment of bipolar i mania .
therefore , the use of olanzapine monotherapy provides superior efficacy in managing bipolar i manic episodes .
in addition , we found that adverse events , including weight gain , sleepiness , dizziness , and constipation , occurred more frequently in patients receiving olanzapine therapy .
this finding was in line with previous reports showing that the use of atypical antipsychotics increased the incidence of weight gain18 and olanzapine in combination with valproate had a higher incidence of dizziness than valproate monotherapy.3 importantly , in the present study , olanzapine monotherapy also caused dizziness more frequently than valproate monotherapy , suggesting that there is no clear synergism between olanzapine and valproate in this adverse effect .
recent studies have shown that the gastric ghrelin - signaling system19 and extrahepatic insulin resistance20 may contribute to olanzapine - induced side effects .
in addition , extrapyramidal reactions did not differ significantly from baseline after treatment , suggesting that olanzapine did not cause extrapyramidal side effects . despite the sound results , this study has several limitations .
firstly , this study did not include a placebo control group to confirm the efficacy of olanzapine or sodium valproate monotherapy , which was due to the concern that patients with manic episodes should be actively controlled with drugs .
secondly , the present study recruited only 120 patients with bipolar i manic episodes ( n=40 for each group ) .
although we found a significant difference in the efficacy of the medication among groups , the sample size of this study was small .
clinical trials with a larger sample size are required to confirm the results of this study in the future .
in this study , we found that in patients with bipolar i manic episodes , the combination therapy of olanzapine and sodium valproate had significantly better efficacy than valproate or olanzapine monotherapy .
although patients on olanzapine therapy experienced more adverse side events , none of these side effects seemed to be life - threatening .
therefore , our work presents a safe and novel approach of olanzapine plus valproate combination treatment versus olanzapine or valproate monotherapy to improve clinical outcome in managing bipolar i manic episodes . | background : bipolar disorder ( bp ) is a mental illness that has a high social burden estimated by disability - adjusted life years . in the present study ,
we investigated the efficacy of olanzapine plus valproate combination therapy versus olanzapine or valproate monotherapy in the treatment of bipolar i mania in a chinese population group . subjects and methods : patients aged 19 - 58 years who had had an acute manic episode of bp were enrolled in the present study and randomly assigned to receive 600 mg sodium valproate daily ( group a ) , 10 mg olanzapine daily ( group b ) , or a combination of 10 mg olanzapine and 600 mg sodium valproate daily ( group c ) for 4 weeks .
the primary outcome was reduction in young mania rating scale ( ymrs ) scores .
the secondary outcome was assessed with the clinical global impression bipolar ( cgi - bp ) scale .
adverse reactions , such as weight gain , sleepiness , and dizziness , were also evaluated .
statistical analysis was carried out on a per - protocol basis . results : patients in groups b and c showed significant improvement in ymrs scores compared with those in group a ( p<0.01 ) during weeks 1 - 4 of treatment .
patients in group c showed significant improvement in ymrs scores compared with those in group b ( p<0.01 ) only after 4 weeks of treatment .
furthermore , after 3 - 4 weeks of treatment , patients in groups b and c showed significantly greater improvement in cgi - bp scale scores compared with group a ( p<0.05 ) , while group c demonstrated significantly greater improvement in cgi - bp scale scores than group b ( p<0.01 ) .
no significant difference existed in extrapyramidal reactions among these groups .
adverse reactions , including weight gain , drowsiness , dizziness , and constipation , were more frequent in groups b and c than in group a ( p<0.05 ) . conclusion : the combination therapy with olanzapine and sodium valproate had higher efficacy than monotherapy in patients with bipolar mania , which provides a crucial insight into the treatment regimen in clinical practice . |
As he planned the new opera, he approached Ms. Luna, who had already ventured to a high G as the sprite Ariel in Mr. Adès’s adaptation of “The Tempest” at the Met in 2012.
“I’ve practiced up to a C above high C in the past,” she said in an interview in her dressing room. “So I know it’s in me. But it’s just nothing I’ve performed on any stage before.”
“When I saw Ariel the first time, it was like a dare,” she added, referring to the “Tempest” score. “And this is a double-dog dare.”
In “The Exterminating Angel,” based on the 1962 Luis Buñuel film, Ms. Luna plays Leticia, an opera diva who is part of a blue-bloods dinner party, the guests of which find themselves mysteriously unable to leave at the end of the evening. The vocal demands are a workout for almost every performer onstage.
“The note,” Mr. Adès said, “the range, the tessitura, is a metaphor for the ability to transcend these psychological and invisible boundaries that have grown up around them.”
Adding to the excitement of the high A is its placement in the score. Unlike in other high-flying parts — the imperious Queen of the Night in “The Magic Flute,” the spunky Zerbinetta in “Ariadne auf Naxos,” the long-suffering title role in “Lucia di Lammermoor” — there’s little time for Ms. Luna to warm up: The A is her very first note, sung before she’s even visible onstage. (She sings it again a short time later, as the party guests, in a surreal portent, leave the stage and re-enter.)
“It’s a moment of arrival,” Mr. Adès said. “It had to be on this note.”
Growing up in Oregon, Ms. Luna sang the daunting Queen of the Night when she was still in high school “just because it was fun,” she said. “And I liked the sensation it made in my bones, in my head, in my sinuses. It just gave me a high. It still gives me a high.”
Her topmost register is unusually lucid and effortless. Even in those notes unattainable by most other sopranos, and even when those notes are held far longer than the pecks requested by most other composers, Ms. Luna’s tone is full. She manages to avoid shrillness in what she aptly calls the “Wagnerian coloratura spectacle” that is her final “Exterminating Angel” aria, a flood of sustained superhigh sound up to F.
Even if nothing in previous Met history has equaled her high A, other singers have come close, sometimes adding unwritten interpolations and transpositions to show off their personal stratospheres. A number of the highest notes in Met history have emerged from sopranos singing the title part in “Lucia di Lammermoor”; it’s no coincidence that this is the role Ms. Luna’s character performs just before the dinner party at the start of “The Exterminating Angel.”
Ellen Beach Yaw, born near Buffalo in 1869, sang a G above C as Lucia in her single Met performance in 1908. The review in The New York Times praised her “flutelike Santos-Dumont notes,” comparing her to a Brazilian aviation pioneer, and added, in a reference to a Wild West gunslinger: “She hit that high G as promised, but it is like Bat Masterson hitting a tomato can with a .44 at four paces.”
The celebrated French soprano Lily Pons sustained a high F in the final mad scene from “Lucia” — sung, at her Met debut in 1931, “in legitimate note, not bird whistles or falsetto,” according to The New York Post. At the turn of the 20th century, Sibyl Sanderson, as Massenet’s Manon, hit a G, known as her “Eiffel Tower note.” Mado Robin, a French coloratura, was recorded shrilling up to a B flat, but she never sang at the Met.
More recently, Natalie Dessay was known in New York for her crystal-clear G’s as the mechanical doll Olympia in Offenbach’s “Les Contes d’Hoffmann.” Just this fall, Erin Morley’s Olympia ornaments brought her up to A flat, a feat Rachele Gilmore achieved in the role at the Met in 2009.
The company admits it is possible that an even higher note could have slipped through the archival cracks. “There’s no record keeping of such things, especially of improvised stuff,” said Peter Clark, the Met’s archivist, who remembers hearing Pons’s F on a radio broadcast as a child. “So it’s not to say that in 1908, say, something higher didn’t happen. But I doubt that it wouldn’t be mentioned somewhere.”
“The Exterminating Angel” isn’t the first time Mr. Adès has pushed a singer to extremes, nor was “The Tempest.” In fact, one of his very first works, “Five Eliot Landscapes” for soprano and piano, from 1990, ends on a sustained G flat.
“It’s a certain amount of useful cruelty involved,” he said with a chuckle, before correcting himself. “Not cruelty, but the callowness of youth.”
Callow or not, he’s still at it. Mr. Adès said that in the score of “The Tempest,” which had its premiere in London in 2004, he had placed a high G in brackets, indicating a note that at the time he only dreamed of hearing.
When she sang the part at the Met, Ms. Luna could reach the G. Now, more than a decade later, the score of “The Exterminating Angel” has a high B in brackets — yet another seemingly impossible note, waiting patiently for a soprano who can crack yet another music-stave ceiling. ||||| At the Metropolitan Opera, the memorable performances usually happen with great renditions of the standards: a soprano who rivets as Tosca, or a sumptuous orchestral reading of the Ring.
A bracing premiere of a contemporary work isn’t the company’s specialty, but on Thursday night just such an occasion made for the highlight of the fall season thus far. Thomas Adès’ second premiere at the Met showed once again that he is undoubtedly one of the foremost composers of our time.
Luis Buñuel’s 1962 film The Exterminating Angel has long had a following of its own, as a cinema classic and a touchstone of the surrealist movement. A story of a high-society dinner that no one can ever leave—and whose attendees become more and more savage in their interactions as the ordeal continues—it is at once a Marxist critique of aristocratic values as well as the most exquisite nightmare of the socially anxious.
In its operatic form, The Exterminating Angel is an intense experience. The libretto by Tom Cairns hews closely to its source, following the original almost to the letter. As a party of dinner guests arrive—twice—at the home of Edmundo and Lucía de Nobile, the household staff hurry to leave, offering meager excuses for their absence. As the fête drags on late into the night and through to the next day, the company realize that they are under some sort of spell that prevents them from leaving.
The staging, also by Cairns, is a vivid creation, using stark visuals to make the scene come alive while retaining Buñuel’s impeccable sense of the bizarre. The stage turntable is deftly employed to present the grim scene in the parlor in tandem with the agitation of the curious mob outside. The brilliance of Hildegard Bechtler’s costuming becomes most apparent in the closing tableau, where the splashy colors of the crowd’s chic ‘60s clothes are pitted against the faded grandeur of the dinner guests’ once-opulent evening attire.
What makes Buñuel’s scenario so fascinating is precisely the fact that, as far as any viewer can tell, there is no “forcefield” preventing the dinner guests from leaving. The effect of the barrier, such as it is, is psychological rather than physical: the guests are trapped in the room not because they are thrown back by some unseen force, but because, as they approach the doorway, they realize there is one more coffee spoon that needs to be picked up, one more goodbye to be said. In that regard the only directorial choice that rings hollow is an attempted exit by Julio, the butler, who strides purposefully toward the dining room and then jumps back, as though zapped by an electric current.
There are no real leads as such, but the current run at the Met brings together a superb ensemble cast.
In Thursday night’s American premiere, Joseph Kaiser was the picture of a gracious host, his tenor bright and clean as Edmundo de Nobile. Amanda Echalaz was a glamorous presence, with a blooming, liquid soprano as his wife Lucía. Audrey Luna, now a regular in Adès operas, blazed high above the staff as the opera diva Leticia Maynar. Sally Matthews was deliciously obnoxious as the self-absorbed Silvia de Ávila, singing with piercing brightness, and Iestyn Davies’s countertenor was clear and pealing as ever, finding a bit of slime in his characterization of her incestuous brother, Francisco.
David Adam Moore showed a robust, brassy baritone as the hot-headed Colonel Gómez, while Christian Van Horn maintained a quiet dignity as Julio. Kevin Burdette’s appearance as the elderly Señor Russell was brief but chilling, as he groaned out a dark premonition in his final moments. The preening of Rod Gilfry as Maestro Alberto Roc offered tart moments of comedy as the scene spiraled into chaos. As Raúl, Frédéric Antoun brought a warm, beaming tenor that in another opera could serve for a romantic lead.
If the piece has any kind of moral center, it’s Doctor Conde, ever the voice of reason shouting against the natural slide towards barbarism—here played by John Tomlinson, an admirable, imposing bass who could often take on a more buffo feel.
Even with so strong a cast, the real stars are Adès and the brilliant score that he led from the pit. Much of his music finds novel ways to put the listener on edge: recurring throughout the piece is a quasi-motivic Viennese waltz, resurfacing at the most bizarre moment possible, and always slightly twisted, so as to create acute unease through the distortion of a familiar form.
Adès finds opportunities to show off his gift for lyric writing, as well: towards the end of Act II, as things really unravel, Blanca (Christine Rice, full-voiced and self-assured) sits at the piano and sings a haunting melody, starting mournfully, and growing more maddened as it goes along until eventually she pounds away at a single dissonant chord. When the soon-to-be spouses Beatriz (a rich-voiced Sophie Bevan) and Eduardo (the shining tenor David Portillo) decide to end their torment apart from the rest of the company, they sing a sumptuous duet in their fatal embrace.
Some of the most striking music of all comes in a set-piece aria for Leonora, a middle-aged patient of Dr. Conde with a terminal diagnosis. Reproducing a scene from the film in which she hallucinates Russell’s severed hand coming out of the closet, Adès imagines a poetic song for voice and guitar, with the hand dancing across the doorframe as though plucking out the accompaniment. Alice Coote was marvelous as Leonora; her usual richness of tone and deep connection to music and text made for a truly trance-like experience.
But The Exterminating Angel is in many ways more a theatrical piece than a musical one, and the genius in Adès’ work is his ability to find an ingenious musical solution to every dramatic need.
His imagination begins even before the downbeat: rather than the customary eight-minute bell, an eerie chime summons the audience into the theater, growing ever more insistent as the curtain-time approaches. As the guests sleep between Acts I and II, a blaring, percussive military march accompanies a Rothko-like apparition, a pale wash of green that moves across the front curtain. As the guests begin to understand their predicament, those who truly grasp the situation are only able to express their fears at an excruciatingly slow pace, accompanied by the eerie wailing of an ondes martenot.
Seldom has a dramatic work come to more vivid life in its musical realization. Adès’ latest is a masterpiece in every sense; with two stirring successes under his belt, he’s established an impressive track record of presenting major new works at the Met, albeit after previous showings elsewhere. Sooner or later, the company ought to take a real chance and offer an Adès world premiere.
The Exterminating Angel runs through November 21 at the Metropolitan Opera. metopera.org
| – "When I hear the conventional high C of a soprano, I want to say, 'Show us what else you've got,'" says a British composer whose new opera is currently being performed at the Metropolitan Opera. Soprano Audrey Luna did just that. The New York Times reports she is the only singer on record in the Met's 137-year history to hit the A above high C, something she does twice during Thomas Adès' The Exterminating Angel, the story of a high-society dinner party whose guests are oddly unable to leave. The Times calls hitting the note "a combination of genetic gifts, rigorous training and psychological discipline over two fragile vocal cords." Luna—a Grammy winner, notes the Sacramento Bee—has sung a high G previously at the Met. She was recruited by Adès for the part, says she's practiced as high as the C above high C, so she felt confident she was capable of hitting the note. And she has to hit it in a somewhat incredible way: It's the very first note she sings, as she's coming on stage; she sings it once more shortly after. "It's a moment of arrival," says Adès. "It had to be on this note." New York Classical Review offered its take on Luna's performance, writing she "blazed high." Head to the Times to hear audio of the note. |
since the discovery of the void in boötes ( kirshner et al . 1981 ) , with a diameter of @xmath9 , and subsequent discoveries of voids in larger redshift surveys ( geller & huchra 1989 ; pellegrini , da costa & de carvalho 1989 ; da costa et al .
1994 ; shectman et al .
1996 ; el - ad , piran & da costa 1996 , 1997 ; müller et al .
2000 ; plionis & basilakos 2002 ; hoyle & vogeley 2002 ) , these structures have posed an observational and theoretical challenge . because the characteristic scale of large voids was comparable to the depth of early redshift surveys , few independent structures were detected , making statistical analysis of their properties difficult .
likewise , the limitations of computing technology constrained early cosmological simulations to include only a few voids per simulation . whether voids are empty or not has been a question of recent debate .
peebles ( 2001 ) pointed out the apparent discrepancy between cold dark matter models ( cdm ) and observations .
cdm models predict mass and hence , maybe galaxies , inside the voids ( dekel & silk 1986 ; hoffman , silk & wyse 1992 ) . however , pointed observations toward void regions failed to detect a significant population of faint galaxies ( kuhn , hopp & elsässer 1997 ; popescu , hopp & elsässer 1997 ; mclin et al . ) .
surveys of dwarf galaxies indicate that they trace the same overall structures as larger galaxies ( binggeli 1989 ) .
thuan , gott & schneider ( 1987 ) , babul & postman ( 1990 ) and mo , mcgaugh & bothun ( 1994 ) showed that galaxies had common voids regardless of hubble type .
grogin & geller ( 1999 , 2000 ) identified a sample of 149 galaxies that lie in voids traced by the center for astrophysics survey .
the void galaxies were found in the century and 15r redshift samples .
grogin & geller showed that the void galaxies tended to be bluer and that a significant fraction of them were of late type .
their sample of 149 void galaxies covered a narrow range of absolute magnitude ( @xmath10 ) of which 49 have a low density contrast of @xmath11 . here
we present a sample of @xmath12 void galaxies found in regions of density contrast @xmath13 .
this sample is large enough to allow comparison of void and wall galaxies with the same color , surface brightness profile and luminosity to statistically quantify their differences .
the range of absolute magnitude ( sdss @xmath14-band ) in our sample ( @xmath15 ) is large enough to include faint dwarfs to giants . in this paper
, we introduce a new sample of void galaxies from the sloan digital sky survey ( sdss ) .
the large sky coverage and depth of the sdss provides us with the opportunity to identify for the first time more than 10@xmath16 void galaxies with @xmath17 . in section [ sec : surv ] we discuss the galaxy redshift samples that we use for this analysis . in section [ sec : fvgs ] we describe our method for finding void galaxies . in section [ sec
: props ] we present the results found from the comparison of the photometric properties of void and wall galaxies and in section [ sec : discus ] we interpret these results by comparing them to predictions from semi - analytic modeling of structure formation and properties of different galaxy types . finally , in section [ sec : conc ] we present our conclusions .
the search for void galaxies requires a large 3-dimensional map of the galaxy density field .
we extract a volume - limited sample from the sdss data to map the galaxy density field and look for void galaxies in the full magnitude - limited sample . as the sdss currently has a slice - like geometry , with each slice only @xmath18 thick , large voids of radius @xmath19 ( @xmath20 )
can only be detected at comoving distances of @xmath21 using the sdss data alone .
therefore , to trace the local voids , we also extract a volume - limited sample from the combined updated zwicky catalog ( uzc ; falco et al .
1999 ) and southern sky redshift survey ( ssrs2 ; da costa et al .
it should be noted that nearby void galaxies are not selected from the uzc and ssrs2 surveys .
these surveys are only used to define the density field around sdss galaxies that lie at distances @xmath22 .
to recap , we have two volume - limited samples , one from the sdss and one from the combined uzc+ssrs2 .
these samples are used to define the galaxy density field only .
void galaxies are found from the magnitude - limited sdss sample .
we define the distant sample to be the sdss magnitude - limited sample truncated at @xmath23 .
the nearby sample is the sdss magnitude - limited sample truncated at @xmath24 .
both magnitude - limited samples ( nearby and distant ) are constructed using the sdss @xmath14-band . in this section
we describe each of the surveys and samples in detail .
the sdss is a wide - field photometric and spectroscopic survey .
the completed survey will cover approximately @xmath25 square degrees .
ccd imaging of 10@xmath26 galaxies in five colors and follow - up spectroscopy of 10@xmath27 galaxies with @xmath28 will be obtained .
york et al .
( 2000 ) provides an overview of the sdss and stoughton et al .
( 2002 ) describes the early data release ( edr ) and details about the photometric and spectroscopic measurements made from the data .
abazajian et al . (
2003 ) describes the first data release ( dr1 ) of the sdss .
technical articles providing details of the sdss include descriptions of the photometric camera ( gunn 1998 ) , photometric analysis ( lupton et al . 2002 ) , the photometric system ( fukugita et al .
1996 ; smith et al .
2002 ) , the photometric monitor ( hogg et al .
2001 ) , astrometric calibration ( pier et al .
2002 ) , selection of the galaxy spectroscopic samples ( strauss et al . 2002 ; eisenstein et al .
2001 ) , and spectroscopic tiling ( blanton et al . 2001 ) .
a thorough analysis of possible systematic uncertainties in the galaxy samples is described in scranton et al .
( 2002 ) .
we examine a sample of 155,126 sdss galaxies ( blanton et al . 2002 ; sample10 ) that have both completed imaging and spectroscopy .
the area observed by sample10 is approximately 1.5 times that of the dr1 ( abazajian et al .
2003 ) . to a good approximation ,
the sample we analyze consists of roughly three regions covering a total angular area of 1,986 deg@xmath29 . due to the complicated geometry of the sdss sky coverage
, the survey regions are best described in the sdss coordinate system ( see stoughton et al .
where possible in this section we describe approximate limits in the more familiar equatorial coordinates .
the first region is an equatorial stripe in the north galactic cap ( ngc ) .
this stripe has a maximum extent of @xmath30 in the declination direction over the r.a .
range @xmath31 and maximum length of @xmath32 over the r.a .
range @xmath33 .
the second region is in the south galactic cap ( sgc ) .
there are three stripes , the boundaries of which are defined in the sdss coordinate system .
each stripe is @xmath34 wide in sdss survey coordinates .
one stripe is centered at @xmath35 and covers the r.a .
range @xmath36 .
the other two stripes are above and below the equator and cover similar r.a
. ranges . in survey
coordinates these two stripes cover the range @xmath37 , @xmath38 and @xmath39 , @xmath40 .
the third large region is in the north galactic cap . in sdss survey
coordinates it covers the range @xmath41 , @xmath42 .
there are additional smaller stripes at @xmath43 , @xmath44 and @xmath45 , @xmath46 ( the boundary is an approximation because of the tiling geometry ) .
we correct the velocities of galaxies to the local group frame according to @xmath47\ ] ] where @xmath48 , @xmath49 , and @xmath50 ( karachentsev & makarov 1996 ) .
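the correction to the local group frame can be sketched as below . since the constants in the equation above appear only as placeholders , the apex parameters used here ( 316 km/s toward galactic coordinates l = 93 deg , b = -4 deg , the values usually attributed to karachentsev & makarov 1996 ) are an assumption of this sketch , not necessarily the paper's exact numbers .

```python
import math

# Assumed apex of solar motion relative to the Local Group centroid
# (Karachentsev & Makarov 1996); the symbolic constants in the text
# are placeholders, so these particular values are an assumption.
V_APEX = 316.0                 # km/s
L_APEX = math.radians(93.0)    # galactic longitude of the apex
B_APEX = math.radians(-4.0)    # galactic latitude of the apex

def v_local_group(v_helio, l_deg, b_deg):
    """Convert a heliocentric velocity (km/s) to the Local Group
    frame for a galaxy at galactic coordinates (l, b) in degrees."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    return v_helio + V_APEX * (
        math.sin(b) * math.sin(B_APEX)
        + math.cos(b) * math.cos(B_APEX) * math.cos(l - L_APEX)
    )
```

a galaxy seen exactly toward the apex gains the full 316 km/s , while one toward the antapex loses it .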
the magnitudes of the galaxies are @xmath51-corrected as described in blanton et al .
( 2003 ) and corrections for galactic extinction are made using the schlegel , finkbeiner , & davis ( 1998 ) dust maps . finally , to convert redshifts into comoving distances we adopt an @xmath52 cosmology .
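the redshift - to - comoving - distance conversion can be sketched numerically as below . the cosmology symbol above is a placeholder , so the flat lambda - cdm density parameters omega_m = 0.3 , omega_l = 0.7 used here are illustrative assumptions .

```python
def comoving_distance(z, omega_m=0.3, omega_l=0.7, n_steps=1000):
    """Line-of-sight comoving distance in h^-1 Mpc for a flat LCDM
    cosmology, integrating (c/H0) dz'/E(z') with a trapezoid rule.
    Using H0 = 100 h km/s/Mpc makes the h dependence drop out of the
    h^-1 Mpc units."""
    c_km_s = 299792.458
    E = lambda zp: (omega_m * (1.0 + zp) ** 3 + omega_l) ** 0.5
    dz = z / n_steps
    integral = sum(
        0.5 * dz * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz))
        for i in range(n_steps)
    )
    return (c_km_s / 100.0) * integral
```

with these assumed density parameters , z = 0.089 maps to roughly 260 h^-1 mpc and z = 0.025 to roughly 75 h^-1 mpc .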
the decrease of observed galaxy density with distance in an apparent magnitude - limited galaxy sample might cause us to erroneously detect more voids at large distances .
therefore , we use a volume - limited sub - sample of the sdss to define the density field of galaxies .
this sample consists of galaxies with redshifts less than the redshift limit , @xmath53 , and sdss @xmath14-band absolute magnitudes brighter than @xmath54 , where @xmath55 . here @xmath56 ( used instead of @xmath57 in the construction of the volume - limited catalog , to ensure a uniform limit across all the data , since earlier stripes were only observed to @xmath58 ) is the magnitude limit of the survey and @xmath59 is the luminosity distance in units of @xmath60 at @xmath61 .
we form a volume - limited sample of the sdss with @xmath62 , with corresponding absolute - magnitude limit @xmath63 ( in the sdss @xmath14-band ) .
the redshift limit @xmath62 allows us to construct the largest possible volume - limited sample from the current sdss sample .
this volume - limited sample contains 22,866 galaxies where the mean separation between these galaxies is @xmath64 . for a @xmath65 cosmology ,
the redshift limit of @xmath66=0.089 , corresponds to a comoving distance of @xmath67 .
the lower bound of @xmath68 on the comoving distance is necessary due to the slice - like geometry of the early sdss slices . recall that voids of diameter @xmath69 can only be found at @xmath70 as discussed in section 2 .
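the volume - limited cut described above can be sketched as below . the actual magnitude and redshift limits appear only as placeholders in the text , so the standard distance - modulus form of the limit and the values in the test ( apparent limit 17.5 , z = 0.089 , a toy luminosity - distance relation ) are illustrative assumptions .

```python
import math

def abs_mag_limit(m_lim, d_lum_mpc):
    """Faintest absolute magnitude visible at luminosity distance
    d_lum_mpc (in Mpc) given an apparent magnitude limit m_lim:
    M_lim = m_lim - 5 log10(d_L / 10 pc)."""
    return m_lim - 5.0 * math.log10(d_lum_mpc * 1.0e6 / 10.0)

def volume_limited(galaxies, m_lim, z_lim, d_lum_of_z):
    """Select a volume-limited sample: keep galaxies with z < z_lim
    whose absolute magnitude M is brighter (more negative) than the
    limit imposed by the flux limit at z_lim.  `galaxies` is a list
    of (z, M) pairs; `d_lum_of_z` maps redshift to luminosity
    distance in Mpc and encodes the chosen cosmology."""
    M_lim = abs_mag_limit(m_lim, d_lum_of_z(z_lim))
    return [(z, M) for (z, M) in galaxies if z < z_lim and M < M_lim]
```

every galaxy kept this way is bright enough to have been observed anywhere inside the volume , which is what makes the density field uniform with distance .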
the updated zwicky catalog ( falco et al . 1999 ) includes a re - analysis of data taken from the zwicky catalog and center for astrophysics surveys ( zwicky et al .
1961 - 1968 ; geller & huchra 1989 ; huchra et al .
1990 ; huchra , geller , & corwin 1995 ; huchra , vogeley , & geller 1999 ) together with new spectroscopic redshifts for some galaxies and coordinates from the digitized poss - ii plates .
improvements over the previous catalogs include estimates of the accuracy of the cfa redshifts and uniformly accurate coordinates at the @xmath71 level .
the uzc contains 19,369 galaxies . of the objects with limiting apparent magnitude @xmath72 ,
96% have measured redshifts , giving a total number of 18,633 objects .
the catalog covers two main survey regions : @xmath73 in the north galactic cap and @xmath74 in the south galactic cap .
we correct the velocities of the galaxies with respect to the local group as discussed in section 2.1 .
the magnitudes of the galaxies are corrected for galactic extinction using the schlegel , finkbeiner & davis ( 1998 ) dust maps and the magnitudes are @xmath51-corrected assuming @xmath75 , which is appropriate for the _ b _ filter and the median galaxy morphological type sab ( park et al .
1994 ; pence 1976 ; efstathiou , ellis & peterson 1988 ) .
we construct a volume - limited uzc sample with @xmath76 since this is the redshift at which the largest volume - limited sample can be obtained .
this volume - limited sample contains 4924 galaxies , has a comoving depth of @xmath77 and absolute - magnitude limit of @xmath78 ( @xmath79 ) . to compare this limit to that of the sdss
, we translate a b - band magnitude into an approximate @xmath14-band magnitude of @xmath80 using @xmath81 and @xmath82 from fukugita et al .
the absolute magnitude limit of the uzc sample is therefore , slightly brighter than the sdss limit . to ensure
that this sample and the ssrs2 described below are equally deep , we cut back this sample to @xmath83 . the ssrs2 galaxy sample ( da costa et al .
1998 ) was selected from the list of nonstellar objects in the hubble space telescope guide star catalog ( gsc ) .
the ssrs2 contains 3489 galaxies in the sgc over the angular region : @xmath84 and @xmath85 , covering a total of 1.13 sr with @xmath86 , where the zero - point offset from the zwicky magnitude system used in the uzc is approximately @xmath87 mag ( alonso et al .
1994 ) .
we construct a volume - limited sample with the same redshift limit as for the uzc , @xmath76 ( same reason as discussed in section 2.2 ) and ( after adjustment of the zeropoint ) , @xmath88 . for our chosen cosmology ,
the depth of the sample is @xmath89 which we also cut back to @xmath83 as discussed in the case for the uzc sample .
therefore , both ssrs2 and uzc volume - limited samples have the same comoving depth . as above ( section 2.1 )
, we correct galaxy velocities to the local group frame , apply the galactic dust corrections based on the schlegel , finkbeiner , & davis ( 1998 ) dust maps , and assume @xmath75 to @xmath51-correct the observed magnitudes .
this volume - limited sample includes 725 galaxies .
the ssrs2 provides angular coverage in the south galactic cap .
the combined uzc+ssrs2 sample contains 5649 galaxies and sky coverage of @xmath90 .
the left hand plot of figure [ fig : surveys ] , shows an aitoff projection of the three surveys .
the black points show the sdss galaxies and the gray dots show the uzc+ssrs2 galaxies .
this figure demonstrates that in terms of area , the sdss is almost totally embedded in the uzc+ssrs2 data apart from along the bottom edge of the northern equatorial slice and a small part of the southern most slice .
therefore , the combined uzc+ssrs2 survey is useful for defining the large - scale galaxy density field around the sdss sample out to a distance of approximately @xmath83 .
the right - hand plot in figure 1 shows a cone diagram of the sdss data with @xmath91 .
the inner circle is drawn at @xmath83 , which is the comoving depth of the combined uzc and ssrs2 volume - limited sample .
the outer circle is drawn at @xmath67 , which is the comoving depth of the sdss volume - limited sample . beyond @xmath83 , the selection function
( the number density of observed galaxies as a function of distance ) of these shallower surveys drops and the thickness ( in the declination direction ) of the sdss itself is adequate to define the density field around the sdss galaxies .
we search for void galaxies in the sdss using the nearest neighbor statistic .
the two volume limited samples ( sdss and uzc+ssrs2 ) are used to trace the voids . any magnitude - limited galaxy that lies away from the boundary of the volume - limited sample and
has fewer than 3 volume - limited sample neighbors in a sphere of @xmath92 is considered a void galaxy .
we expand on each of these steps below .
galaxies in the magnitude - limited sdss samples that lie near the boundaries of the volume - limited samples have systematically larger distances to their third nearest neighbors than galaxies that lie deep in the volume - limited samples .
this is because potentially closer neighbors have not been observed / included in the sample .
these galaxies have a higher probability of being selected as void galaxies than the galaxies inside the survey .
we correct for this bias in the following way : we generate a random catalog with the same angular and distance limits as the corresponding volume - limited sample ( sdss and uzc+ssrs2 ) but with no clustering .
we count how many random points lie around each of the magnitude - limited sdss galaxies .
if the density around a galaxy is less than a certain value , we reject it from the sdss samples .
this is explained further below .
we count how many random points ( @xmath93 ) lie in a sphere of size , @xmath94 around each galaxy and compute the number density , @xmath95 , where , @xmath96 .
since we know the number of random points , the solid angle and depth of the sdss and uzc+ssrs2 surveys , we can compute the corresponding average density of random points , @xmath97 .
galaxies with values of @xmath98 , are rejected as it is their proximity to the sample s boundaries which causes a low value of @xmath99 .
we apply the above procedure twice , once when we compare the distant sdss magnitude - limited sample with the sdss random catalog and again when we compare the nearby sdss magnitude - limited sample with the uzc+ssrs2 random catalog .
the distant sdss sample is reduced from 65,186 galaxies to 13,742 galaxies , and the nearby sdss sample is reduced from 3,784 galaxies to 2,450 galaxies .
the nearby sdss sample is cut less drastically as the uzc+ssrs2 sample covers a greater area . because the sdss is not finished
, the angular selection function is complicated ( see figure [ fig : surveys ] ) .
an algorithm to quantify the fraction of galaxies that have been observed in any given region , i.e. the completeness , has been developed and is described in tegmark , hamilton , & xu ( 2002 ) .
the completeness for any given ( @xmath100 ) coordinate is returned , allowing a random catalog with the same angular selection function to be created . for the sdss ,
the completeness within the regions that have been observed is typically @xmath101 .
the angular selection function for the combined uzc+ssrs2 sample is easier as the surveys are finished and the completeness for the uzc is @xmath102 .
we classify galaxies that have a large distance to their _ nth _ nearest neighbor as void galaxies .
we follow the work of el - ad & piran ( 1997 ) and hoyle & vogeley ( 2002 ) and use @xmath103 rather than @xmath104 in the nearest neighbor analysis . because galaxies are clustered , it is not unreasonable to expect that some galaxies in large - scale voids might be found in binaries or triplets .
if we used @xmath104 then a pair of galaxies in an otherwise low density environment would not be classified as void galaxies .
setting @xmath103 allows for a couple of bright neighbors , but excludes galaxies in typical groups .
note that we do not make any corrections for peculiar velocities along the line of sight .
therefore , we might underestimate the density of systems with large velocity dispersions , which may pollute the void galaxy population .
this effect could lead us to slightly underestimate the differences between the void and wall populations . to identify void galaxies in the sdss
, we compute the distance from each galaxy in the apparent magnitude limited sample to the third nearest neighbor in a volume - limited sample . in other words ,
the volume - limited sample is used to define the galaxy density field that traces voids and other structures .
we compute the average distance to the third nearest neighbor , @xmath105 , and the standard deviation , @xmath106 , of this distance .
we fix the critical distance @xmath107 to be @xmath92 , which is approximately equal to @xmath108 found from the two samples , ( the actual values are given in sections [ sec : dvg ] and [ sec : nvg ] ) .
this threshold is consistent with the criterion for defining wall and void galaxies in voidfinder ( hoyle & vogeley 2002 ) .
galaxies in the apparent magnitude limited sample whose third nearest neighbor lies further than @xmath109 are classified as void galaxies . we thereby divide the apparent magnitude limited sdss sample into two mutually - exclusive sub - samples , which we hereafter refer to as void and wall galaxies .
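the nearest - neighbor step above is straightforward to implement with a k - d tree . the sketch below is an illustrative reconstruction rather than the authors' code : the function name , the cartesian comoving coordinates , and the use of scipy are our assumptions , while the third - neighbor choice and the 7@xmath2mpc threshold come from the text .

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_void_galaxies(sample_xyz, tracer_xyz, n_neighbor=3, d_crit=7.0):
    """Flag galaxies whose n-th nearest volume-limited tracer lies
    beyond d_crit (here 7 h^-1 Mpc, as in the text).

    sample_xyz : (N, 3) comoving positions of the flux-limited sample.
    tracer_xyz : (M, 3) comoving positions of the volume-limited sample
                 that defines the density field.
    Returns a boolean array: True = void galaxy, False = wall galaxy.
    """
    tree = cKDTree(tracer_xyz)
    # distances to the 1st..n-th nearest tracers; keep the n-th.
    # if a galaxy is also a tracer, its zero self-distance counts,
    # so one would query k = n_neighbor + 1 and drop column 0.
    d_n = tree.query(sample_xyz, k=n_neighbor)[0][:, -1]
    return d_n > d_crit
```

galaxies within 3.5@xmath2mpc of a survey edge would be discarded before this step , since their neighbor counts are artificially truncated .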
note that the boundary of the void and wall samples is defined by throwing away galaxies that lie within 3.5@xmath2mpc of the survey edge , whereas galaxies are classified as void galaxies if they have less than 3 neighbors in a sphere of 7@xmath2mpc .
if we use 7h@xmath110mpc to mark the boundary then the volume available for finding void galaxies is decreased , especially at the near edge of the distant sample .
we tolerate this inconsistency in order to have overlap between the near and distant samples in terms of the magnitude ranges that each sample probes .
however , this means that near the edges there is a slightly higher probability of a galaxy being flagged as a void galaxy than deep in the survey . to test
what effect this has , we construct 10 mock volume- and flux - limited catalogs from the virgo consortium s hubble volume z=0 @xmath111cdm simulation ( frenk et al . 2000 ; evrard et al .
2002 ) that have the same geometry as region 2 .
following the procedure used in the survey data , we throw away galaxies in the flux - limited mock catalogs that lie within 3.5@xmath2mpc of the region s edge and then find the mock void galaxies .
we then compute the average n(r ) distribution of the mock void and wall samples , where n(r ) is the normalized number of galaxies as a function of comoving distance , and show these in the left hand plot of figure [ fig : nrdata ] .
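the n(r ) comparison amounts to a normalized histogram of comoving distances ; a minimal sketch ( the function name and default binning are ours ) :

```python
import numpy as np

def normalized_nr(distances, bins=25):
    """Normalized n(r): the fraction of galaxies per comoving-distance
    bin, so the histogram sums to one and samples of different size
    (e.g. void vs. wall) can be overplotted directly."""
    counts, edges = np.histogram(distances, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / counts.sum()
```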
it can be seen that within the errors , the two distributions are similar .
the exception is at the near edge where more mock galaxies are classified as void galaxies , as expected . out to 125@xmath2mpc
there are 50% more void galaxies than wall galaxies .
this excess is only 4% of the whole void sample because most of the void galaxies are found at greater distances .
the n(r ) plot for the data ( figure [ fig : nrdata ] right hand side ) indicates a somewhat larger ratio of void / wall galaxies at the near edge of the sample . from the test above , only 4% of the void galaxies might be erroneously flagged .
the rest of the difference is due to large - scale structure within the volume surveyed .
thus we conclude that our procedure for identifying void galaxies and removing objects ( both void and wall ) near the survey boundaries does not produce any significant bias in the redshift distribution of void and wall galaxies .
the difference is insufficient to generate the large observed differences between void and wall galaxies .
in fact , dilution of the void galaxy sample can only decrease the apparent statistical significance of differences between the void and wall galaxy populations i.e. the true differences between void and wall galaxies may be more severe than we find .
the converse is not possible ; this dilution could not cause the population differences that we observe . in section [ sec : discus ]
we discuss the impact of this dilution on our results .
also , tests with smaller `` clean '' samples show , as expected , a higher statistical significance . results from the nearby sample , which suffers less from this effect due to its wider opening angle , and from the distant sample show the same trends , which is further evidence that the dilution of the distant sample is a minor effect . to test that our procedure identifies genuine void galaxies
, we compute the mean , median and upper bound of the density contrast ( @xmath112 ) around void galaxies and compare these values to the emptiness of voids as defined by voidfinder .
the number of galaxies in the sdss volume - limited sample is 22,866 and the respective volume is @xmath113 ; therefore , the mean density is @xmath114 .
the void galaxies contain less than three neighbors in a sphere of @xmath92 , thus , the density around the void galaxies is @xmath115 .
therefore , the density contrast around void galaxies in the distant sample is @xmath116 .
this number is very similar for the nearby sample .
it is an upper bound , as the median third nearest neighbor distance to the void galaxies is closer to @xmath117 , giving values for the density contrast closer to @xmath118 .
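the upper bound quoted above is simple counting : fewer than three volume - limited neighbors inside a sphere of @xmath92 caps the local number density , and dividing by the survey mean density gives the contrast . a minimal sketch ( the mean density is left as an input because it depends on the sample volume , and the names are ours ) :

```python
import math

def local_density_upper_bound(n_gal=3, radius=7.0):
    """Upper bound on the local number density around a void galaxy:
    fewer than n_gal tracers inside a sphere of the given radius
    (h^-1 Mpc) means the density is below n_gal / sphere volume."""
    volume = (4.0 / 3.0) * math.pi * radius**3
    return n_gal / volume

def density_contrast(n_local, n_mean):
    """delta rho / rho = n_local / n_mean - 1."""
    return n_local / n_mean - 1.0
```

substituting the median third - neighbor distance for the threshold radius yields the more typical ( lower ) contrast quoted in the text .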
this value is low , although not as low as that found by voidfinder for the density contrast of the voids .
since we are centered on a galaxy and galaxies are clustered , we expect the density around void galaxies ( @xmath119 ) to be higher than the mean density of a void ( @xmath120 ) . recall that the mean density of a void is about @xmath121 mean density of the universe ( @xmath122 ) and since the correlation length ( @xmath123 ) on spheres of @xmath117 is @xmath124 ( @xmath125 ) , then : @xmath126 .
in addition , void galaxies are typically found near the edge of the void where the density is higher .
it is important to keep in mind that since most of the void galaxies will lie near the edges of voids , the typical density contrast around void galaxies is less extreme than the density contrast of the whole void region ( see figure 11 in benson et al . ) .
the average number of volume - limited galaxies in a sphere of @xmath92 around a wall galaxy is 25 compared to 2 around a void galaxy , demonstrating that void galaxies really are in highly underdense regions . for sdss galaxies that lie in the distant sdss sample , we use the sdss volume - limited sample to define the galaxy density field .
using the third nearest neighbor ( @xmath103 ) , we measure @xmath127 , from which we derive @xmath128 , which we round up to @xmath92 . from the distant sdss sample of 13,742 galaxies we find 1010 void galaxies .
this sample of void galaxies will be referred to as vgd ( as in void galaxy distant ) .
the sample of 12,732 non - void galaxies we label wgd ( as in wall galaxy distant ) .
the fraction of void galaxies in the distant sample is @xmath129 .
this is only slightly higher than the fraction of void galaxies found by voidfinder ( hoyle & vogeley 2002 ) and by el - ad & piran ( 1997 ) .
figure [ fig : vgals ] shows a redshift cone diagram of the sdss wall galaxies ( gray dots ) and the corresponding void galaxies , vgd ( black points ) .
we plot only galaxies with @xmath130 .
note that some of the void galaxies appear to be close to wall galaxies .
this is merely a projection effect .
all the void galaxies have less than three neighbors within a radius of @xmath92 . after obtaining the wgd and vgd samples
, we split each void and corresponding wall galaxy sample into approximately equal halves by applying an absolute magnitude cut . in this case , the magnitude cut is made at @xmath131 , yielding the sub - samples [ wgd_b , vgd_b ] ( @xmath132 , b = bright ) and [ wgd_f , vgd_f ] ( @xmath133 , f = faint ) . the approximate ranges of absolute magnitude covered by the sub - samples are @xmath134 for the bright half and @xmath135 for the faint half .
figure [ fig : vg2 ] shows the distribution of absolute magnitudes for the distant samples . note the terms _ bright _ and _ faint _ in this context are used to describe the sub - samples relative to their parent sample . to find faint void galaxies , which are present in the sdss sample only at small comoving distances
, we use the uzc+ssrs2 volume - limited sample to trace the voids because the slice - like sdss samples are too thin to detect three - dimensional voids in this nearby volume . the number of galaxies in the sdss nearby sample , after applying the boundary corrections , is 2456 .
we measure the distance to the third nearest uzc+ssrs2 volume - limited galaxy and obtain the values @xmath136 , hence the choice of @xmath137 is still applicable . in this case
we find 194 void galaxies .
we refer to this void galaxy sample as vgn ( n for nearby ) and the respective parent wall galaxy sample ( after removing the respective void galaxies ) as wgn .
we again apply an absolute magnitude cut to the vgn and wgn samples . for the nearby sample ,
this cut is done at @xmath138 ( see figure [ fig : vg1 ] ) .
this cut divides the wall and respective void galaxy samples into approximately equal halves which we label [ wgn_b , vgn_b ] ( @xmath139 , b = bright ) and [ wgn_f , vgn_f ] ( @xmath140 , f = faint ) .
the range of absolute magnitudes included in each sub - sample is @xmath141 for the bright half and @xmath142 ( see figure [ fig : vg1 ] ) , for the faint half . in this case
the percentage of void galaxies found is @xmath143 .
to examine whether void and wall galaxies have different photometric properties , we compare their colors ( @xmath144 and @xmath145 ) , concentration indices , and sersic indices .
we compare the properties of wall and void galaxies in both the distant and nearby samples .
we also subdivide each sample by absolute magnitude and compare their properties further .
the samples compared are therefore : ( 1 ) distant ; bright ( @xmath132 ) [ wgd_b , vgd_b ] , faint ( @xmath133 ) [ wgd_f , vgd_f ] , and the full ( undivided ) void vs. wall samples ; and ( 2 ) nearby ; bright ( @xmath139 ) [ wgn_b , vgn_b ] , faint ( @xmath140 ) [ wgn_f , vgn_f ] , and the full ( undivided ) void vs. wall samples .
we compute the means of the distributions and also the error on the mean to see if on average void and wall galaxies have the same colors , concentration indices and sersic indices .
we also use the _ kolmogorov - smirnov _ ( ks ) test to see if the void and wall galaxies could be drawn from the same parent population .
tables 1 and 2 , summarize the results of these tests for the nearby and distant samples respectively .
we present the results for the whole sample , as well as the samples split by absolute magnitude .
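these comparisons are easy to reproduce ; the sketch below uses scipy 's two - sample ks test as a stand - in for whatever implementation the authors used , and the function name is ours .

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_populations(void_vals, wall_vals):
    """Mean, error on the mean, and the KS probability that the two
    samples are drawn from the same parent population."""
    v, w = np.asarray(void_vals), np.asarray(wall_vals)
    return {
        "void_mean": v.mean(),
        "void_err": v.std(ddof=1) / np.sqrt(v.size),
        "wall_mean": w.mean(),
        "wall_err": w.std(ddof=1) / np.sqrt(w.size),
        "p_ks": ks_2samp(v, w).pvalue,
    }
```

a small p_ks , as in tables 1 and 2 , means the void and wall distributions are unlikely to share a parent population .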
the results are considered in detail below . the existence of strong correlations of galaxy type with density ( postman & geller 1984 ; dressler 1980 ) , galaxy type with color ( strateva et al .
2001 ; baldry et al .
2003 ) , and density with luminosity and color ( hogg et al . 2002 ; blanton et al .
2002 ) are well known ; bright red galaxies tend to populate galaxy clusters and tend to be elliptical , while dim blue galaxies are less clustered and tend to be more disk like .
this behavior is shown in an analysis of sdss galaxy photometry by blanton et al .
( 2002 ; see their figures 7 and 8) , in which they find that the distribution of @xmath146 colors at redshift 0.1 is bimodal . of particular interest to us
is the location of the void galaxies in color space . because these galaxies evolve more slowly and interact less with neighboring galaxies than their wall galaxy counterparts
, we might expect void galaxies to be dim , blue , of low mass and have high star formation rates ( benson et al . ) .
we consider two color indices : @xmath144 and @xmath145 .
the reason for these two colors is that @xmath145 measures the slope of the spectrum and @xmath144 is sensitive to the uv flux and the @xmath147 break . since the @xmath148 band magnitudes can be noisy , by looking at @xmath145 and @xmath144 we are able to verify that the results are consistent and not affected by low signal - to - noise ratio . in tables 1 and 2 , we compare the photometric properties of the void galaxy samples to their respective wall galaxy samples . in figures [ fig : color1 ] and [ fig : color2 ] we present normalized histograms of the color distributions .
solid lines correspond to the void galaxy samples and the dotted lines represent the wall galaxies . in all cases ( nearby , distant and the bright and faint sub - samples )
we find that the void galaxy samples are on average bluer than the corresponding wall galaxy samples in both colors .
if we look at the full samples , we find that the mean values of the two samples are significantly different .
the nearby void galaxies have mean @xmath144 and @xmath145 colors that are at least @xmath149 bluer than the wall galaxy samples .
for the distant void galaxies , the differences in the means are about four times greater than for the nearby case . when we split the nearby sample into the bright and faint samples , we see that it is at the faint end where there is the greatest difference between void and wall galaxies .
the significance of the ks test is reduced because of the smaller number of galaxies in each sample .
the nearby bright and faint void galaxies are at least @xmath150 bluer than the wall galaxies .
the differences between the nearby void and wall galaxies are not as pronounced as in the distant samples because we are shot noise limited by how many clusters there are in the small nearby volume . in the distant sample
it is very unlikely that the wall and void galaxies in both the bright and faint sub - samples are drawn from the same parent population ( @xmath151 ) .
we assess the statistical significance of differences in the color distributions using a ks test ( the values of @xmath152 , the probability that the two samples are drawn from the same parent population , are given in the last column of tables 1 and 2 ) .
the probability that the void and wall galaxies are drawn from the same parent population is low : @xmath153 in the nearby case and @xmath151 in the distant case .
to compare morphological properties of void and wall galaxies , we examine the distribution of concentration indices measured by the sdss photometric pipeline ( lupton et al . 2001 ; stoughton et al .
2002 ; pier et al .
2002 ; lupton et al .
the concentration index ( ci ) is defined by the ratio @xmath154 , where @xmath155 and @xmath156 correspond to the radii at which the integrated fluxes are equal to @xmath157 and @xmath158 of the petrosian flux , respectively .
a large value of ci corresponds to a relatively diffuse galaxy and a small value of ci to a highly concentrated galaxy .
the concentration index has been shown to correlate well with galaxy type ( strateva et al .
2001 ; shimasaku et al .
spiral galaxies are usually found to have small concentration indices ( @xmath159 ) whereas ellipticals have larger concentration indices ( @xmath160 ) .
this bimodal behavior of the concentration index can be clearly seen in strateva et al .
( 2001 ; see figure 8) .
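an illustrative computation of the concentration index from a curve of growth ; we assume the ratio is taken as r50 / r90 , and the function and its inputs are our constructions .

```python
import numpy as np

def concentration_index(radii, cumulative_flux):
    """Concentration index r50 / r90 from a cumulative (curve-of-growth)
    flux profile; `cumulative_flux` must increase monotonically, and its
    last entry is taken as the total (Petrosian) flux."""
    frac = cumulative_flux / cumulative_flux[-1]
    r50 = np.interp(0.50, frac, radii)  # radius enclosing 50% of flux
    r90 = np.interp(0.90, frac, radii)  # radius enclosing 90% of flux
    return r50 / r90
```

under this r50 / r90 convention an exponential disk gives roughly 0.43 and a de vaucouleurs profile roughly 0.3 ; the inverse ratio r90 / r50 is also common in the sdss literature , in which case the ordering flips .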
figure [ fig : cin ] shows histograms of ci for void and wall galaxies for both the nearby and distant samples along with the respective bright and faint sub - samples .
tables 1 and 2 show the mean , error on the mean , and the ks statistic found when comparing the wall and void galaxies . in the nearby samples ,
the void and wall galaxies are not distinguished by this morphological parameter . in table 1 , we find that the mean values of ci are very similar . the probability that the distributions of concentration indices of void and wall sub - samples are drawn from the same parent population approaches unity and @xmath161 for the faint and bright sub - samples respectively .
the top row of plots in figure [ fig : cin ] , shows there is indeed little difference between the distributions of ci for the void and wall galaxies in these samples .
we find that void galaxies have on average significantly smaller concentration indices in the bright half of the distant samples .
there are more wall than void galaxies at large values of ci ( @xmath162 ) .
the means differ by more than @xmath163 . in figure
[ fig : cin ] , in the bottom row , all three dotted curves show this behavior . in table 2 , it is clear that in the full sample and in the bright sample , the wall and void galaxies have significantly different ci distributions .
a ks analysis of the bright sub - sample reveals that there is a probability of less than @xmath164 that the void and wall galaxies are drawn from the same parent population . in the case of the distant faint void and wall galaxy samples
the results are consistent . as another measure of morphology of void and wall galaxies
we examine the sersic index ( sersic 1968 ) , found by fitting the functional form @xmath165 , where @xmath166 is the sersic index itself , to each galaxy surface brightness profile ( sbp ) . with this form
, @xmath104 corresponds to a purely exponential profile , while @xmath167 is a de vaucouleurs profile .
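a least - squares fit of this form can be sketched as below ; the parameterization i(r ) = i0 exp [ -(r / r0)^(1/n ) ] follows the description in the text , while the fitting routine , bounds and starting values are our choices .

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic(r, i0, r0, n):
    """Sersic surface-brightness profile I(r) = I0 exp(-(r/r0)^(1/n));
    n = 1 is a pure exponential, n = 4 a de Vaucouleurs profile."""
    return i0 * np.exp(-(r / r0) ** (1.0 / n))

def fit_sersic_index(radii, profile):
    """Fit the Sersic form to a measured profile and return n."""
    popt, _ = curve_fit(sersic, radii, profile,
                        p0=(profile[0], 1.0, 1.0),
                        bounds=(1e-6, np.inf))
    return popt[2]
```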
we use the sersic indices as measured by blanton et al .
( 2002 ) for the sdss galaxies . in figure
[ fig : nser ] , we plot histograms of sersic indices measured for all the samples . statistics of these distributions and the results of comparison of void and wall sub - samples are listed in tables 1 and 2 . in the nearby survey volume , we find @xmath168 for all void and wall galaxy sub - samples and there are no statistically significant differences between the distributions .
the top panels of figure [ fig : nser ] , show histograms of the sersic index ; the distributions of the void ( solid lines ) and wall ( dotted line ) galaxies appear very similar .
we find significant differences between void and wall galaxies in the distant samples .
the lower panels in figure [ fig : nser ] , show the distribution of sersic indices for the void and wall galaxies . for the more distant void galaxies ,
we find in table 2 , that @xmath169 , which is higher than what was found for the nearby void galaxies ( @xmath170 ) .
a ks test reveals that the void galaxies are distinct from the wall galaxies : the probability that the void and wall samples are drawn from the same parent population is @xmath151 for the faint ( @xmath171 ) , bright ( @xmath172 ) and full samples .
the means of the sersic indices of the void and wall galaxies differ by at least @xmath173 .
the above analysis clearly shows that there is a difference in the photometric properties of void and wall galaxies .
void galaxies are fainter and bluer than wall galaxies in all cases .
previous observational studies ( e.g. , vennik et al .
1996 ; pustilnik et al . 2002 ; popescu et al .
1997 ) suggested that isolated galaxies in voids can be distinguished from non - void galaxies based on their color with a large enough sample .
here we provide such a sample and even extend the analysis to compare sub - samples of void and non - void galaxies of similar luminosity and sbp .
in the nearby sample , the question might be raised as to whether it is the faintest void galaxies that are particularly blue and whether these galaxies dominate the statistics . to test for this , the nearby sample is cut at -17.0 , reducing the range of absolute magnitude in each bin ; again the void galaxies are bluer than the wall galaxies .
a further test was made where the galaxies were divided into bins of @xmath174 mag and still the void galaxies are bluer in every bin , thus the differences in color are not dominated by the tail of the distribution .
void galaxies are genuinely bluer than wall galaxies of the same luminosity . in the distant sample
the differences in color are only partly explained by the paucity of luminous red galaxies in voids .
the average galaxy in the distant sample has an absolute magnitude of around -19.5 , which is more than a magnitude fainter than an @xmath175 galaxy in the @xmath14-band ( @xmath176 ; blanton et al . ) .
galaxies that are thought of as bright red cluster ellipticals are typically brighter than @xmath175 . in the full sample , the faint sample and the bright sample ( and in the @xmath174 mag test ) void galaxies
are still bluer than wall galaxies .
in section [ sec : nn ] we noted a small excess of void galaxies near the inner boundary of the volume that encloses the distant samples .
we predicted that this might affect the purity of our void galaxy samples and thereby lower the apparent statistical significance of differences between the void and wall galaxy populations . to test for this effect ,
we redo selected analyses , to compare the photometric properties of void and wall galaxies in the range of comoving coordinate distance from @xmath177 to @xmath178 , far from the region where the excess is observed near the @xmath179 inner boundary .
we find that the differences between the photometric properties of void and wall galaxies are indeed larger for galaxies in this more restricted redshift range .
for example , @xmath144 , @xmath180 , and @xmath145 the differences rise to @xmath181 .
the sense of these differences is the same as for the larger sample ; void galaxies are bluer and of later type than wall galaxies .
we bother to include the more nearby , perhaps slightly diluted , void galaxy sample in our full analysis because it allows us to probe a larger range of absolute magnitude .
in fact , we find consistency of results in the nearby and distant samples over the range of absolute magnitude where these samples overlap .
the statistical significance is comparable , perhaps because the nearby samples are smaller , albeit purer .
we expect that the statistical significance of these comparisons will rise in future , more complete samples from the sdss . one might ask if the observed differences in color are simply the result of the well - known morphology - density relation , extrapolated down to lower densities : blue spiral galaxies are found in low density environments , while red ellipticals are found in clusters .
this explanation seems unlikely in the nearby samples , where the surface brightness profiles of void and wall galaxies are quite similar .
thus , in the nearby samples , the difference in color is not clearly linked to morphology . in the distant samples , however , we see a morphological difference between the void and wall sample ; there are more elliptical type galaxies in the wall sample . to test
if the difference in color is caused simply by the paucity of ellipticals in voids , we divide the distant sample by sersic index .
blanton et al .
( 2002 ) use @xmath182 to represent exponential disks and @xmath183 for the de vaucouleurs profiles whereas vennik et al .
( 1996 ) use @xmath184 for exponential law fits and @xmath185 for early type galaxy profile fitting .
we examine the color distributions of void and wall galaxies with sersic index less than 1.8 and greater than 1.8 to approximately split the sample into spirals and ellipticals .
we find that the void galaxies with both @xmath186 and @xmath187 are bluer than the wall galaxies . in @xmath144 and @xmath145 and for
both @xmath186 and @xmath187 the void galaxies are at least 3@xmath188 bluer than the wall galaxies , and for the @xmath186 , @xmath145 case the difference rises to 7@xmath188 .
again , the samples are divided into bright and faint sub - samples as well as by sersic index .
the void galaxies are always bluer than the wall galaxies , although the significance of the ks test is reduced because of the smaller number of galaxies .
thus , void galaxies are bluer than wall galaxies even when compared at similar sbp and luminosities .
they are also fainter and have surface brightness profiles that more closely resemble spirals than ellipticals .
these findings are consistent with predictions of void galaxy properties from a combination of semi - analytic modeling and n - body simulations of structure formation in cold dark matter models ( benson et al . ) .
one of the reasons why void galaxies are bluer than galaxies in richer environments may be that star formation is an ongoing process in void galaxies .
galaxies in clusters and groups have their supply of fresh gas cut off .
therefore , star formation is suppressed in the wall galaxies . to illustrate the range of luminosities probed by this study , we consider which members of the local group could have been included in our samples at the distances probed by the sdss volume .
not only the brightest members of the local group ( lg ) , but also local group members like m31 and m33 can be detected in the distant sample , and fainter ( @xmath189 ) members of the lg , like the lmc and smc , would be included in the nearby sample .
in the nearby sample we can detect faint dwarf ellipticals ( de s ) , which is to be expected given that about @xmath190 of the known galaxies in the lg are dwarfs ( sung et al .
1998 ; staveley - smith , davies & kinman 1992 ) .
it is well known that while de s have exponential sbp s ( sandage & binggeli 1984 ; binggeli , tammann & sandage 1987 ; caldwell & bothun 1987 ) they exhibit color gradients that redden outward ( jerjen et al .
2000 ; vader et al . 1988 ; bremnes et al .
1998 ) and have a uniform color distribution ( james 1994 ; sung et al . 1998 ) . based on their color and other properties ,
a fraction of the void galaxies resemble a population of dwarf ellipticals ( de s ) , which have a mean @xmath191 ( kniazev et al . 2003 ) . typical de s have sersic indices @xmath192 and @xmath193 , consistent with our sample of nearby void galaxies ( see table 1 ) .
using a nearest neighbor analysis , we identify void galaxies in the sdss .
for the first time we have a sample of @xmath12 void galaxies .
these void galaxies span a wide range of absolute magnitudes , @xmath194 , are found out to distances of @xmath67 , and are found in regions of the universe that have density contrast @xmath195 . in previous studies of the properties of
void galaxies it was suggested ( vennik et al .
1996 ; pustilnik et al . 2002 ; popescu et al .
1997 ) that void galaxies could be distinguished from non - void galaxies based on their color , and a hint of them being bluer was observed ( grogin & geller 1999 , 2000 ) from a small sample of void galaxies . in this paper
we present a definitive result with a sample of @xmath196 void galaxies for which the colors , concentration and sersic indices are compared against wall galaxies .
void galaxies are bluer than wall galaxies of the same intrinsic brightness and redshift distribution down to @xmath197 .
we demonstrate that the difference in colors is not explained by the morphology - density relation .
nearby , void and wall galaxies have very similar surface brightness profiles and still the void and wall galaxies have different colors . in the distant sample
the void and wall galaxies have different surface brightness profiles . however , when we divide the populations further by sersic index , the void galaxies are still bluer . to test that the differences in color are not due to the choice of absolute magnitude range
, we compare the colors within narrow bins of absolute magnitude .
this reveals that void galaxies are genuinely blue and that the differences between the colors are not dominated by extreme objects in the tails of the void and wall galaxy distributions .
analysis of surface brightness profiles indicates that void galaxies are of later type than wall galaxies .
comparison of the sersic indices between the distant void and wall galaxy samples , including sub - samples within a narrow range of luminosities , shows that it is very unlikely ( @xmath151 ) that the two samples are drawn from the same parent population .
however , based on the concentration index , it is only the bright distant void and wall galaxy samples that differ significantly .
our results are in agreement with predictions from semi - analytic models of structure formation that predict void galaxies should be bluer , fainter , and have larger specific star formation rates ( benson et al .
the differences in color are probably best explained in terms of star formation .
void galaxies are probably still undergoing star formation whereas wall galaxies have their supply of gas strangled as they fall into clusters and groups . in a separate paper ( paper ii ; rojas et al .
2004 ) we will discuss analysis of the spectroscopic properties ( @xmath198 , [ oii ] equivalent widths , and specific star formation rates ) of our void galaxies .
work in progress reveals that the specific star formation rate of our void galaxies is considerably higher , consistent with our current findings and predictions .
funding for the creation and distribution of the sdss archive has been provided by the alfred p. sloan foundation , the participating institutions , the national aeronautics and space administration , the national science foundation , the u.s .
department of energy , the japanese monbukagakusho , and the max planck society .
the sdss web site is http://www.sdss.org/. the sdss is managed by the astrophysical research consortium ( arc ) for the participating institutions .
the participating institutions are the university of chicago , fermilab , the institute for advanced study , the japan participation group , the johns hopkins university , los alamos national laboratory , the max - planck - institute for astronomy ( mpia ) , the max - planck - institute for astrophysics ( mpa ) , new mexico state university , university of pittsburgh , princeton university , the united states naval observatory , and the university of washington .
abazajian , k. , et al .
2003 , apj submitted , astro - ph/0305492 alonso , m. v. , da costa , l. , latham , d. , pellegrini , p. s. , & milone , a. e. 1994 , , 108 , 1987 babul , a. , & postman , m. 1990 , , 359 , 280 baldry , i. k. , et al .
2003 , apj submitted , astro - ph/0309710 benson , a. j. , hoyle , f. , torres , f. , & vogeley , m. s. 2003 , mnras , 340 , 160 binggeli , b. , tammann , g. a. , & sandage , a. 1987 , , 94 , 251 binggeli , b. 1989 , large scale structure and motions in the universe ; proceedings of the international meeting , trieste , italy , apr . 6 - 9 , 1988 blanton , m. r. , lin , h. , lupton , r. h. , maley , f. m. , young , n. , zehavi , i. , & loveday , j. 2003 , , 125 , 2276 blanton , m. r. , et al .
2003 , aj , 125 , 2348 blanton , m. r. , et al .
2002 , , 594 , 186 de lapparent , v. , geller , m. j. , & huchra , j.p .
1991 , , 369 , 273 bremnes , t. , binggeli , b. , & prugniel , p. 1998
, a&as , 129 , 313 caldwell , n. , & bothun , g. d. 1987 , , 94 , 1126 da costa , l. n. , et al .
1998 , , 116 , 1 dekel , a. , & silk , j. 1986 , , 303 , 39 dressler , a. 1980 , , 236 , 351 efstathiou , g. , ellis , r. s. , & peterson , b. s. 1988 , mnras , 233 , 431 eisenstein , d. j. , et al .
2001 , aj , 122 , 2267 el - ad , h. , & piran , t. 1997 , , 491 , 421 el - ad , h. , piran , t. , & da costa , l. n. 1996 , , 462 , 13 el - ad h. , piran , t. , & da costa , l. n. 1997 , mnras , 287 , 790 evrard , a. e. et al . , 2002 , apj , 573 , 7 falco , e. e. , et al . 1999 , pasp , 111 , 438 frenk , c. s. , et al .
2000 , astro - ph/0007362 fukugita , m. , shimasaku , k. , ichikawa , t. 1995 , pasp , 107 , 945 fukugita , m. , ichikawa , t. , gunn , j. e. , doi , m. , shimasaku , k. , & schneider , d. p. 1996 , aj , 111 , 1748 grogin , n. a. , & geller , m. j. 1999 , , 118 , 2561 grogin , n. a. , & geller , m. j. 2000 , , 119 , 32 gunn , j. e. , et al .
1998 , , 116 , 3040 geller , m. j. , & huchra , j. p. 1989 , science , 246 , 857 hoffman , y. , silk , j. , & wyse , r. f. g. 1992 , , 388 , l13 hogg , d. w. , finkbeiner , d. p. , schlegel , d. j. , & gunn , j. e. 2001 , , 122 , 2129 hogg , d. w. , et al .
2002 , , 124 , 646 hogg , d. w. , et al .
2002 , apj submitted , astro - ph/0212085 hoyle , f. & vogeley , m. s. 2002 , , 566 , 641 huchra , j. p. , geller , m. j. , de lapparent , v. , & corwin , h. 1990 , apjs , 72 , 433 huchra , j. p. , geller , m. j. , & corwin , h. 1995 , apjs , 99 , 391 huchra , j. p. , vogeley , m. s. , & geller , m. j. 1999 , apjs , 121 , 287 jerjen , h. , binggeli , b. , & freeman , k. c. 2000 , , 119 , 593 karachentsev , i. d. , & makarov , d. a. 1996 , , 111 , 794 kirshner , r. p. , oemler , a. jr . ,
schechter , p. l. , shectman , s. a. 1981 , , 248 , 57 kniazev , a. y. et al .
2003 , in preparation kuhn , b. , hopp , u. , & elasser , h. 1997 , a&a , 318 , 405 lanzetta , k. m. , bowen , d. v. , tytler , d. , & webb , j. k. 1995 , apj , 442 , 538 lupton , r. h. , gunn , j. e. , ivezi , . , knapp , g. r. , kent , s. , & yasuda , n. 2001 , in asp conf .
238 , astronomical data analysis software and systems x , ed .
f. r. harnden , jr . , f. a. primini & h. e. payne ( san francisco : asp ) , 269 lupton , r. h. , et al .
2002 , in preparation mclin , k. m. , stocke , j. t. , weymann , r. j. , penton , s. v. , shull , j. m. 2002 , , 574 , 115 mo , h. j. , mcgaugh , s. s. , & bothun , g. d. 1994 , mnras , 267 , 129 morris , a. l. , weymann , r. j. , dressler , a. , mccarthy , p. j. , smith , b. a. , terrile , r. j. , giovanelli , r. , & irwin , m. 1993 , , 419 , 524 mller , v. , arbabi - bidgoli , s. , einasto , j. , tucker , d. 2000 , mnras , 318 , 280 park , c. , vogeley , s. m. , geller , m. j. , & huchra , j. p. 1994
, , 431 , 569 peebles , p. j. e. 2001 , , 557 , 495 pellegrini , p. s. , da costa , l. n. , & de carvalho , r. r. 1989 , , 339 , 595 pence , w. 1976 , , 203 , 39 pier , j. r. , munn , j. a. , hindsley , r. b. , hennessy , g. s. , kent , s. m. , lupton , r. h. , & ivezi , .
2003 , , 125 , 1559 plionis , m. , & basilakos , s. 2002 , mnras , 330 , 399 popescu , c. , hopp , u. , & elasser , h. 1997 , a&a , 325 , 881 postman , m. , & geller , m. j. 1984 , , 281 , 95 pustilnik , s. a. , martin , j. -m . ,
huchtmeier , w. k. , brosch , n. , lipovetsky , v. a. , richter , g. m. 2002 , a&as 389 , 405 schlegel , d. j. , finkbeiner , d. p. , & davis , m. 1998 , , 500 , 525 sandage , a. , & binggeli , b. 1984 , , 89 , 919 scranton , r. et al . 2002 , apjs , 579 , 48 sersic j. l. 1968 , atlas de galaxias australes .
observatorio astronmico , cordoba .
shectman , s. a. , et al .
1996 , , 470 , 172 shimasaku , k. , et al .
2001 , , 122 , 1238 smith , j. a. , et al .
2002 , aj , 123 , 2121 staveley - smith , l. , davies , r. , & kinman , t. d. 1992 , mnras , 258 , 334 stoughton , c. et al .
2002 , aj , 123 , 485 strauss , m. a. , et al .
2002 , , 124 , 1810s strateva , i. , et al .
2001 , , 122 , 1874 sung , e. , han c. , ryden , b. s. , chun , m. , kim , h. 1998 , , 499 , 140 tegmark , m. , hamilton , a. j. s. , & xu , y. 2002 , mnras , 335 , 887 thuan , t. x. , gott , j. r. iii , & schneider , s. e. 1987 , , 315 , l93 vader , j.p . , vigroux , l. , lachieze - rey , m. , & souviron , j. 1988 , a&a , 203 , 217 vennik , j. , hopp , u. , kovachev , b. , kuhn , b. , & elssser , h. 1996 , a&as , 117 , 261 york , d. g. et al .
2000 , , 120 , 1579 zwicky , f. , herzog , e. , & wild , p. 1961 , catalogue of galaxies and clusters of galaxies , ( pasadena : california institute of technology ) vol .
i zwicky , f. , & herzog , e. 1962 - 1965 , catalogue of galaxies and clusters of galaxies , ( pasadena : california institute of technology ) vol .
ii - iv zwicky , f. , karpowicz , m. , & kowal , c. 1965 , catalogue of galaxies and clusters of galaxies , ( pasadena : california institute of technology ) vol . v zwicky , f. , & kowal , c. 1968 , catalogue of galaxies and clusters of galaxies , ( pasadena : california institute of technology ) vol .
vi .means , errors on the means and ks test probabilities that the void and wall galaxies are drawn from the same parent population for the photometric properties of void and wall galaxies in the nearby sample ( @xmath199 ) .
the number of galaxies ( void and wall ) in each sample and sub - sample are listed next to the magnitude range heading as [ @xmath200 ( void ) , @xmath201 ( wall ) ] .
small values of @xmath152 correspond to a low probability that the two samples are drawn from the same parent population .
the ks test shows that void galaxies appear to have different colors from wall galaxies .
the void galaxies appear bluer than the respective wall galaxies in all cases , where the average difference between the means of the colors is about @xmath150 .
however , the concentration and sersic indices are not significantly different . | using a nearest neighbor analysis , we construct a sample of void galaxies from the sloan digital sky survey ( sdss ) and compare the photometric properties of these galaxies to the population of non - void ( wall ) galaxies .
we trace the density field of galaxies using a volume - limited sample with @xmath0 .
galaxies from the flux - limited sdss with @xmath1 and fewer than three volume - limited neighbors within 7 @xmath2 mpc are classified as void galaxies .
this criterion implies a density contrast @xmath3 around void galaxies . from 155,000 galaxies ,
we obtain a sub - sample of 13,742 galaxies with @xmath1 , from which we identify 1,010 galaxies as void galaxies . to identify an additional 194 faint void galaxies from the sdss in the nearby universe , @xmath4
, we employ volume - limited samples extracted from the updated zwicky catalog and the southern sky redshift survey with @xmath5 to trace the galaxy distribution .
our void galaxies span a range of absolute magnitude from @xmath6 to @xmath7 . using sdss photometry
, we compare the colors , concentration indices , and sersic indices of the void and wall samples .
void galaxies are significantly bluer than galaxies lying at higher density .
the population of void galaxies with @xmath8 and brighter is on average bluer and less concentrated ( later type ) than galaxies outside of voids .
the latter behavior is only partly explained by the paucity of luminous red galaxies in voids .
these results generally agree with the predictions of semi - analytic models for galaxy formation in cold dark matter models , which indicate that void galaxies should be relatively bluer , more disklike , and have higher specific star formation rates . |
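The abstract above classifies a galaxy as a void galaxy when it has fewer than three volume-limited tracer neighbors within a fixed radius (7 @xmath2 Mpc). A minimal sketch of that neighbor-count criterion follows; the positions are synthetic, the distances are computed by brute force, and the function names are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def count_neighbors(targets, tracers, radius):
    """For each target position, count tracers within `radius` (brute force)."""
    # pairwise separations: shape (n_targets, n_tracers, 3)
    diff = targets[:, None, :] - tracers[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return (dist < radius).sum(axis=1)

def classify_void(targets, tracers, radius=7.0, min_neighbors=3):
    """True where a target has fewer than `min_neighbors` tracers
    within `radius`, i.e. the abstract's void-galaxy criterion."""
    return count_neighbors(targets, tracers, radius) < min_neighbors

# Synthetic illustration: a volume-limited tracer sample and
# flux-limited candidates in a 100-unit box (arbitrary units).
rng = np.random.default_rng(0)
tracers = rng.uniform(0, 100, size=(500, 3))
targets = rng.uniform(0, 100, size=(50, 3))
is_void = classify_void(targets, tracers)
print(is_void.sum(), "of", len(targets), "candidates classified as void")
```

For survey-scale samples (~155,000 galaxies) the O(N·M) distance matrix would be replaced by a spatial index such as a KD-tree, but the classification rule itself is unchanged.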
[Image: Parts of the city of Salisbury were sealed off after the nerve agent attack]
The US has said it will impose fresh sanctions on Russia after determining it used nerve agent against a former Russian double agent living in the UK.
Sergei Skripal and his daughter Yulia were left seriously ill after being poisoned with Novichok in Salisbury in March, though they have now recovered.
A UK investigation blamed Russia for the attack, but the Kremlin has strongly denied any involvement.
Russia has criticised the new sanctions as "draconian".
In a statement released on Wednesday, the US State Department confirmed it was implementing measures against Russia over the incident.
Spokeswoman Heather Nauert said it had been determined that the country "has used chemical or biological weapons in violation of international law, or has used lethal chemical or biological weapons against its own nationals".
The British government has welcomed the move.
"The strong international response to the use of a chemical weapon on the streets of Salisbury sends an unequivocal message to Russia that its provocative, reckless behaviour will not go unchallenged," a UK Foreign Office statement said.
The Russian embassy in the US hit back on Thursday morning, criticising what it called "far-fetched accusations" from the US that Russia was behind the attack.
Russia had become "accustomed to not hearing any facts or evidence", it said, adding: "We continue to strongly stand for an open and transparent investigation of the crime committed in Salisbury."
What are the sanctions?
The new sanctions will take effect on or around 22 August, and relate to the exports of sensitive electronic components and other technologies.
The State Department said "more draconian" sanctions will follow within 90 days if Russia fails to give reliable assurances it will no longer use chemical weapons and allow on-site inspections by the United Nations.
[Image: Sergei and Yulia Skripal were found unconscious on a bench in the city of Salisbury]
An official said it was only the third time that the US had determined a country had used chemical or biological weapons against its own nationals.
Previous occasions were against Syria and against North Korea for the assassination of Kim Jong-nam, the half brother of leader Kim Jong-un, who died when highly toxic VX nerve agent was rubbed on his face at Kuala Lumpur airport.
Are these the only US sanctions against Russia?
No. In June the US imposed sanctions on five Russian companies and three Russian individuals in response to alleged Russian cyber-attacks on the US.
All are prohibited from any transactions involving the US financial system.
Treasury Secretary Steven Mnuchin said the measures were to counter "malicious actors" working to "increase Russia's offensive cyber-capabilities".
Russia likely to resist
Analysis by Gary O'Donoghue, BBC News, Washington
After pressure from Republican members of Congress, the State Department has determined Moscow broke international law by using a military grade chemical weapon on the Skripals.
While the US expelled some five dozen diplomats shortly after the poisoning, the administration stopped short of making a formal determination that Russia had broken international law.
But Congress has been pushing for such a decision and now the State Department has confirmed Russia's actions contravened 1991 US legislation on the use of chemical weapons. That breach automatically triggers the imposition of sanctions and places requirements on Russia to avert further restrictions in three months' time.
Those requirements could include opening up sites in Russia for inspection - a move Moscow would probably resist.
So far President Donald Trump has been silent on this latest move - which could well derail his attempts to develop a new, warmer relationship with Vladimir Putin.
What was the nerve agent?
Following the incident, the British government said the military-grade nerve agent Novichok, of a type developed by Russia, had been used in the attack.
[Video: Laura Foster explains how the Novichok nerve agent works]
Relations between Russia and the West hit a new low. More than 20 countries expelled Russian envoys in solidarity with the UK, including the US. Washington ordered 60 diplomats to leave and closed the Russian consulate general in Seattle.
Three months after the Salisbury attack, two other people fell ill at a house in Amesbury, about eight miles from the city. Dawn Sturgess later died while her partner, Charlie Rowley, spent three weeks recovering in hospital.
After tests, scientists at the UK's military research lab, Porton Down, found the couple had also been exposed to Novichok.
Mr Rowley told ITV News he had earlier found a sealed bottle of perfume and given it to Ms Sturgess, who sprayed the substance on her wrists. ||||| FILE - In this March 13, 2018, file photo, police officers guard a cordon around a police tent covering a supermarket car park pay machine near the spot where former Russian spy Sergei Skripal and his... (Associated Press)
WASHINGTON (AP) — The United States announced Wednesday it will impose new sanctions on Russia for illegally using a chemical weapon in an attempt to kill a former spy and his daughter in Britain earlier this year.
The new sanctions, to be imposed later this month, come despite President Donald Trump's efforts to improve relations with Russia and its leader Vladimir Putin, and amid the ongoing probe into Russian interference in the 2016 U.S. election.
The State Department said the U.S. this week made the determination that Russia had used the Novichok nerve agent to poison Sergei Skripal and his daughter, Yulia, and that sanctions would follow. It said Congress is being notified of the Aug. 6 determination and that the sanctions would take effect on or around Aug. 22, when the finding is to be published in the Federal Register.
Those sanctions will include the presumed denial of export licenses for Russia to purchase many items with national security implications, according to a senior State Department official who briefed reporters on condition of anonymity as he was not authorized to do so by name.
The U.S. made a similar determination in February when it found that North Korea used a chemical weapon to assassinate North Korean leader Kim Jong Un's half brother at the airport in Kuala Lumpur, Malaysia, in 2017.
Skripal and his daughter were poisoned by the Novichok military-grade nerve agent in the English town of Salisbury in March. Britain has accused Russia of being behind the attack, which the Kremlin vehemently denies.
Months later, two residents of a nearby town with no ties to Russia were also poisoned by the deadly toxin. Police believe the couple accidentally found a bottle containing Novichok.
The U.S. had joined Britain in condemning Russia for the Skripal poisoning and joined with European nations in expelling Russian diplomats in response, but it had yet to make the formal determination that the Russian government had "used chemical or biological weapons in violation of international law or has used lethal chemical or biological weapons against its own nationals."
Several members of Congress had expressed concern that the Trump administration was dragging its feet on the determination and had missed a deadline to publish its findings.
In March, the Trump administration ordered 60 Russian diplomats — all of whom it said were spies — to leave the United States and closed down Russia's consulate in Seattle in response to the Skripal case. The U.S. said at the time it was the largest expulsion of Russian spies in American history. | – The US is poised to impose sanctions on Russia in retaliation for a March nerve agent attack on a former Russian double agent and his daughter. Per BBC, US investigators have determined the country is behind the March 4 poisoning of Sergei Skripal and his daughter Yulia Skripal, who were found unconscious on a bench in Salisbury, England. The British government revealed earlier that the nerve agent Novichok, a type developed in Cold War-era Russia, was used in the attack that failed to kill the Skripals. They pulled through after weeks in a hospital. "The government of the Russian Federation has used chemical or biological weapons in violation of international law," US State Department spokeswoman Heather Nauert said in a statement. Nauert said the sanctions will go into effect later this month. Russia has forcefully denied they were behind the poisonings. However, the US was among some 20 countries who took a unified stance with the UK against the Kremlin by expelling Russian diplomats. Per the AP, the US alone kicked out 60 officials the Trump administration referred to as spies for the Russian government. The US also shut down Russia's consulate in Seattle. In the months since, two other cases of nerve agent poisoning made headlines not far from where the Skripals were discovered near death. In late June, Dawn Sturgess and her partner, Charles Rowley, fell ill 10 miles from Salisbury in the town of Amesbury after police believe the couple accidentally came into contact with a bottle containing Novichok. While Rowley survived, Sturgess was not as lucky. |
SECTION 1. SCHEDULING COMMITTEES, DISCUSSIONS, AND AGREEMENTS.
(a) In General.--Chapter 401 of title 49, United States Code, is
amended by adding at the end the following:
``Sec. 40129. Air carrier discussions and agreements relating to flight
scheduling
``(a) Discussions To Reduce Delays.--
``(1) Request.--An air carrier may file with the Secretary
of Transportation a request for authority to discuss with one
or more other air carriers or foreign air carriers agreements
or cooperative arrangements relating to limiting flights at an
airport during a time period that the Secretary determines that
scheduled air transportation exceeds the capacity of the
airport. The purpose of the discussion shall be to reduce
delays at the airport during such time period.
``(2) Approval.--The Secretary shall approve a request
filed under this subsection if the Secretary finds that the
discussions requested will facilitate voluntary adjustments in
air carrier schedules that could lead to a substantial
reduction in travel delays and improvement of air
transportation service to the public. The Secretary may impose
such terms and conditions to an approval under this subsection
as the Secretary determines are necessary to protect the public
interest and to carry out the objectives of this subsection.
``(3) Notice.--Before a discussion may be held under this
subsection, the Secretary shall provide at least 3 days notice
of the proposed discussion to all air carriers and foreign air
carriers that are providing service to the airport that will be
the subject of such discussion.
``(4) Monitoring.--The Secretary or a representative of the
Secretary shall attend and monitor any discussion or other
effort to enter into an agreement or cooperative arrangement
under this subsection.
``(5) Discussions open to public.--A discussion held under
this subsection shall be open to the public.
``(b) Agreements.--
``(1) Request.--An air carrier may file with the Secretary
a request for approval of an agreement or cooperative
arrangement relating to interstate air transportation, and any
modification of such an agreement or arrangement, reached as a
result of a discussion held under subsection (a).
``(2) Approval.--The Secretary shall approve an agreement,
arrangement, or modification for which a request is filed under
this subsection if the Secretary finds that the agreement,
arrangement, or modification is not adverse to the public
interest and is necessary to reduce air travel delays and that
a substantial reduction in such delays cannot be achieved by
any other immediately available means.
``(3) Secretarial imposed terms and conditions.--The
Secretary may impose such terms and conditions on an agreement,
arrangement, or modification for which a request is filed under
this subsection as the Secretary determines are necessary to
protect the public interest and air service to an airport that
has less than .25 percent of the total annual boardings in the
United States.
``(c) Limitations.--
``(1) Rates, fares, charges, and in-flight services.--The
participants in a discussion approved under subsection (a) may
not discuss or enter into an agreement or cooperative
arrangement regarding rates, fares, charges, or in-flight
services.
``(2) City pairs.--The participants in a discussion
approved under subsection (a) may not discuss particular city
pairs or submit to another air carrier or foreign air carrier
information concerning their proposed service or schedules in a
fashion that indicates the city pairs involved.
``(d) Termination.--This section shall cease to be in effect after
September 30, 2003; except that an agreement, cooperative arrangement,
or modification approved by the Secretary in accordance with this
section may continue in effect after such date at the discretion of the
Secretary.''.
(b) Conforming Amendment.--The analysis for such chapter is amended
by adding at the end the following:
``40129. Air carrier discussions and agreements relating to flight
scheduling.''.
SEC. 2. LIMITED EXEMPTION FROM ANTITRUST LAWS.
Section 41308 of title 49, United States Code, is amended--
(1) in subsection (b) by striking ``41309'' and inserting
``40129, 41309,''; and
(2) in subsection (c)--
(A) by inserting ``40129 or'' before ``41309'' the
first place it appears; and
(B) by striking ``41309(b)(1),'' and inserting
``40129(b) or 41309(b)(1), as the case may be,''.
SECTION 1. AIR CARRIER DISCUSSIONS RELATING TO FLIGHT SCHEDULING TO
REDUCE DELAYS.
(a) Request.--An air carrier may file with the Attorney General a
request for authority to discuss with one or more other air carriers or
foreign air carriers agreements or cooperative arrangements relating to
limiting flights at an airport during a time period that the Attorney
General determines that scheduled air transportation exceeds the
capacity of the airport. The purpose of the discussion shall be to
reduce delays at the airport during such time period.
(b) Approval.--Notwithstanding the antitrust laws, the Attorney
General shall approve a request filed under this section if the
Attorney General finds that the discussions requested will facilitate
voluntary adjustments in air carrier schedules that could lead to a
substantial reduction in travel delays and improvement of air
transportation service to the public and will not substantially lessen
competition or tend to create a monopoly. The Attorney General may
impose such terms and conditions to an approval under this section as
the Attorney General determines are necessary to protect the public
interest and to carry out the objectives of this section.
(c) Notice.--Before a discussion may be held under this section,
the Attorney General shall provide at least 3 days notice of the
proposed discussion to all air carriers and foreign air carriers that
are providing service or seeking to provide service to the airport that
will be the subject of such discussion.
(d) Monitoring.--The Attorney General or a representative of the
Attorney General shall attend and monitor any discussion or other
effort to enter into an agreement or cooperative arrangement under this
section.
(e) Discussions Open to Public.--A discussion held under this
section shall be open to the public.
SEC. 2. AIR CARRIER AGREEMENTS RELATING TO FLIGHT SCHEDULING.
(a) Request.--An air carrier may file with the Attorney General a
request for approval of an agreement or cooperative arrangement
relating to interstate air transportation, and any modification of such
an agreement or arrangement, reached as a result of a discussion held
under section 1.
(b) Approval.--Notwithstanding the antitrust laws, and subject to
subsection (c), the Attorney General shall approve an agreement,
arrangement, or modification for which a request is filed under this
section if the Attorney General finds that the agreement, arrangement,
or modification is not adverse to the public interest, is necessary to
reduce air travel delays, and will not substantially lessen competition
or tend to create a monopoly and that a substantial reduction in such
delays cannot be achieved by any other immediately available means.
(c) Unanimous Agreement Among Carriers Required.--The Attorney
General may approve an agreement, arrangement, or modification for
which a request is filed under this section only if the Attorney
General finds that each air carrier and foreign air carrier providing
service or seeking to provide service to the airport that is the
subject of the agreement, arrangement, or modification has agreed to
the agreement, arrangement, or modification.
(d) Terms and Conditions.--The Attorney General may impose such
terms and conditions on an agreement, arrangement, or modification for
which a request is filed under this section as the Attorney General
determines are necessary to protect the public interest and air service
to an airport that has less than .25 percent of the total annual
boardings in the United States.
SEC. 3. LIMITATIONS.
(a) Rates, Fares, Charges, and In-Flight Services.--The
participants in a discussion approved under section 1 may not discuss
or enter into an agreement or cooperative arrangement regarding rates,
fares, charges, or in-flight services.
(b) City Pairs.--The participants in a discussion approved under
section 1 may not discuss particular city pairs or submit to another
air carrier or foreign air carrier information concerning their
proposed service or schedules in a fashion that indicates the city
pairs involved.
SEC. 4. CONSULTATION WITH SECRETARY OF TRANSPORTATION.
In making a determination whether to approve a request under
section 1, or an agreement, arrangement, or modification under section
2, the Attorney General shall consider any comments of the Secretary of
Transportation.
SEC. 5. DEFINITIONS.
In this Act, the following definitions apply:
(1) Air carrier, airport, air transportation, foreign air
carrier, and interstate air transportation.--The terms ``air
carrier'', ``airport'', ``air transportation'', ``foreign air
carrier'', and ``interstate air transportation'' have the
meanings such terms have under section 40102 of title 49,
United States Code.
(2) Antitrust laws.--The term ``antitrust laws'' has the
meaning such term has under section 41308(a) of title 49,
United States Code.
SEC. 6. TERMINATION.
(a) Approval of Agreements.--The Attorney General may not approve
an agreement, arrangement, or modification under section 2 after
October 26, 2003.
(b) Expiration of Agreements.--An agreement, arrangement, or
modification approved by the Attorney General under section 2 may
continue in effect until October 26, 2004, or an earlier date
determined by the Attorney General.
Amend the title so as to read: ``A bill to permit air
carriers to meet and discuss their schedules in order to reduce
flight delays, and for other purposes.''. | Amends Federal aviation law to authorize an air carrier to file with the Attorney General a request for: (1) authority to discuss with one or more other air carriers or foreign air carriers agreements or cooperative arrangements limiting flights at an airport during a time period when scheduled air transportation exceeds airport capacity; and (2) approval of such agreements or cooperative arrangements with respect to such limits on interstate air transportation. Directs the Attorney General, notwithstanding U.S. antitrust laws, to approve such requests if: (1) such discussions and resulting agreements are not adverse to the public interest; (2) they will facilitate voluntary adjustments in air carrier schedules that could lead to a substantial reduction in travel delays and improvement of air transportation service to the public; (3) they will not substantially lessen competition or tend to create a monopoly; and (4) reduction in delays cannot be achieved by any other immediately available means. Authorizes the Attorney General to: (1) approve such agreements and cooperative arrangements only if each air carrier or foreign air carrier providing service or seeking to provide service to an airport under such an agreement or cooperative arrangement has agreed to it; and (2) impose any terms or conditions on any approved agreement that are needed to protect the public interest and to protect air service to an airport that has less than .25 percent of the total annual boardings in the United States (non-hub and small hub airports). Prohibits participants in approved discussions from: (1) discussing or entering into agreements regarding rates, fares, charges, or in-flight services; or (2) discussing particular city pairs, or submitting to other air carriers or foreign air carriers information on their proposed service or schedules in a fashion that indicates the involvement of city pairs. |
LIMA, Peru (AP) — Greenpeace said Wednesday that its executive director will travel to Peru to personally apologize for the environmental group's stunt at the world-famous Nazca lines, which Peruvian authorities say harmed the archaeological marvel.
Greenpeace activists walk towards the historic landmark of the hummingbird in Nazca in Peru, Monday, Dec. 8, 2014. Greenpeace activists from Brazil, Argentina, Chile, Spain, Germany, Italy and Austria... (Associated Press)
Greenpeace activists stand next to massive letters delivering the message "Time for Change: The Future is Renewable" next to the hummingbird geoglyph in Nazca in Peru, Monday, Dec. 8, 2014. Greenpeace... (Associated Press)
The geoglyph of the condor is seen from a plane in Nazca, Peru, Monday, Dec. 8, 2014. Greenpeace activists from Brazil, Argentina, Chile, Spain, Germany, Italy and Austria displayed the message, "Time... (Associated Press)
The geoglyph of the astronaut is seen from a plane in Nazca, Peru, Monday, Dec. 8, 2014. Greenpeace activists from Brazil, Argentina, Chile, Spain, Germany, Italy and Austria displayed the message, "Time... (Associated Press)
Greenpeace activists arrange the letters delivering the message "Time for Change: The Future is Renewable" next to the hummingbird geoglyph in Nazca, Peru, Monday, Dec. 8, 2014. Greenpeace activists from... (Associated Press)
The group said it was willing to accept the consequences. A senior Peruvian official told The Associated Press on Tuesday evening that his government would seek criminal charges against Greenpeace activists who allegedly damaged the lines by leaving footprints in the adjacent desert.
"We fully understand that this looks bad," Greenpeace said in a statement Wednesday. "We came across as careless and crass."
Greenpeace regularly riles governments and corporations it deems environmental scofflaws. Monday's action was intended to promote clean energy to delegates from 190 countries at the U.N. climate talks in nearby Lima.
But the group signaled in the second of two emails Wednesday that it recognized it had deeply offended many Peruvians.
It said Greenpeace's executive director, Kumi Naidoo, would travel to Lima this week to apologize. Greenpeace will fully cooperate with any investigation and is "willing to face fair and reasonable consequences," the statement said.
In the stunt at the U.N. World Heritage site in Peru's coastal desert, activists laid a message promoting clean energy beside the famed figure of a hummingbird comprised of black rocks on a white background.
Deputy Culture Minister Luis Jaime Castillo called it a "slap in the face at everything Peruvians consider sacred."
He said the government would seek to prevent those responsible from leaving the country and ask prosecutors to file charges of "attacking archaeological monuments," a crime punishable by up to six years in prison.
The activists entered a "strictly prohibited" area where they laid big yellow cloth letters reading: "Time for Change; The Future is Renewable." They said after initial criticism that they were "absolutely careful" not to disturb anything.
Castillo said no one, not even presidents and Cabinet ministers, is allowed without authorization where the activists trod, and those who do have permission must wear special shoes.
The Nazca lines are huge figures depicting living creatures, stylized plants and imaginary figures scratched on the surface of the ground between 1,500 and 2,000 years ago. They are believed to have had ritual astronomical functions.
The Greenpeace delegation chief to the climate talks, Martin Kaiser, said none of the people involved in the action had been arrested.
"I think activists are always taking responsibility for what they are doing," he said. "We clearly underestimated the sensitivity of the situation."
He would not say whether any activists face internal sanction for the action. ||||| Located in the arid Peruvian coastal plain, some 400 km south of Lima, the geoglyphs of Nasca and the pampas of Jumana cover about 450 km². These lines, which were scratched on the surface of the ground between 500 B.C. and A.D. 500, are among archaeology's greatest enigmas because of their quantity, nature, size and continuity. The geoglyphs depict living creatures, stylized plants and imaginary beings, as well as geometric figures several kilometres long. They are believed to have had ritual astronomical functions.
Outstanding Universal Value
Brief Synthesis
Located in the arid Peruvian coastal plain, some 400 km south of Lima, the Lines and Geoglyphs of Nasca and Pampas de Jumana are one of the most impressive archaeological areas in the world and an extraordinary example of the traditional and millenary magical-religious world of the ancient pre-Hispanic societies that flourished on the Peruvian south coast between the 8th century BC and the 8th century AD. Situated in the desert plains of the Rio Grande de Nasca river basin, the archaeological site covers an area of approximately 75,358.47 ha, where for nearly 2,000 uninterrupted years the region's ancient inhabitants drew on the arid ground thousands of large-scale zoomorphic and anthropomorphic figures and lines of outstanding geometric precision, transforming the vast land into a highly symbolic, ritual and social cultural landscape that remains to this day. They represent a remarkable manifestation of a common religion and social homogeneity that lasted a considerable period of time.
They are the most outstanding group of geoglyphs anywhere in the world, unmatched in extent, magnitude, quantity, size, diversity and ancient tradition by any similar work. The concentration and juxtaposition of the lines, as well as their cultural continuity, demonstrate that this was an important and long-lasting activity, spanning approximately one thousand years. Intensive study of the geoglyphs and comparison with other contemporary art forms suggest that they can be divided chronologically from the Middle and Late Formative (500 BC – 200 AD) to the Regional Development Period (200 – 500 AD), highlighting the Paracas phase (400 – 200 BC) and the Nasca phase (200 BC – 500 AD). There are two categories of glyphs. The first group is representational, depicting in schematic form a variety of natural forms including animals, birds, insects and other living creatures, flowers, plants and trees, deformed or fantastic figures, and objects of everyday life; there are very few anthropomorphic figures. The second group comprises the lines, generally straight lines that criss-cross certain parts of the pampas in all directions. Some are several kilometres in length and form designs of many different geometrical figures: triangles, spirals, rectangles, wavy lines, etc. Others radiate from a central promontory or encircle it. Yet another group consists of so-called 'tracks', which appear to have been laid out to accommodate large numbers of people.
Criterion (i): The Nasca Lines and Geoglyphs form a unique and magnificent artistic achievement of the Andean culture that is unrivalled in its extension, dimensions and diversity and long existence anywhere in the prehistoric world.
Criterion (iii): The Nasca and Pampas de Jumana Lines and Geoglyphs, through their unique form of land use, are an exceptional testimony to the culture and magical-religious tradition and beliefs of the societies that developed in pre-Columbian South America between the 8th century BC and the 8th century AD.
Criterion (iv): The system of lines and geoglyphs, which has survived intact for more than two millennia, evidences an unusual way of using the land and the natural environment that represent a highly symbolic cultural landscape, using a construction technology that allowed them to design large-scale figures with outstanding geometric precision.
Integrity
The Lines and Geoglyphs of Nasca and Pampas de Jumana, with their protection area that extends over 75,358.47 Ha, are well defined and include all physical aspects that convey the Outstanding Universal Value of the property, including its surrounding landscape with which they make up an indivisible unit in a harmonious relationship that has survived virtually unaltered over the centuries.
The Pleistocene alluvial terrace, currently with occasional water activity (only during the El Niño Southern Oscillation - ENSO) and the low rainfall rates (the lowest in the world), determine desert climate characteristics and extreme aridity that have favoured the preservation of the Lines and Geoglyphs of Nasca and Pampas de Jumana. Likewise, harmful human activity has caused no severe impact on the property, so the geoglyphs and cultural landscape have remained intact for nearly two millennia, from their design in the 8th century BC to nowadays. The cleaning and preservation works performed have not affected the property’s integrity and have promoted their conservation.
The construction of the South Pan-American Highway, which directly crosses the property, has caused damages in some lines and figures sectors. However, most of the lines and figures are in fair condition.
Authenticity
The authenticity of the Lines and Geoglyphs of Nasca and Pampas de Jumana is indisputable. The method of their formation, by removing the overlying weathered gravels to reveal the lighter bedrock, is such that their authenticity is assured. The creation, design, morphology, size and variety of the geoglyphs and lines correspond to the original designs produced during the historic evolution of the region and have remained unchanged. The ideology, symbolism and sacred and ritual character of the geoglyphs and the landscape are clearly represented, and their significance remains intact even today.
The concentration and overlapping of lines and figures provide clear evidence of long and intense activity in the territory, reflecting the millenary magical-religious tradition of pre-Hispanic societies and the historic continuity in the Rio Grande de Nasca river basin. The property also shows different stages of social process. Several historic sources and studies confirm the property's originality and that its original landscape surroundings are still preserved in pristine, unaltered condition.
Even though there have been some impacts caused by natural and human factors, these have been minimal and the geoglyphs maintain their authenticity and express their high symbolic and historic value even today.
Protection and management requirements
The National Constitution (Art. 36) and Law Nº 28296, the General Law for National Cultural Heritage, are the main legal protection tools for the Lines and Geoglyphs of Nasca and Pampas de Jumana.
The protection area boundaries are established by Resolution No. 421/INC as an Archaeological Reserve. However, it has been recommended to redefine those boundaries according to the lines and geoglyphs’ real distribution in the field and submit a new proposal to the World Heritage Committee.
Since 1941 foreign scientists (notably Dr. Maria Reiche) and the Ministry of Culture have carried out archaeological investigation, conservation, permanent protection and maintenance measures.
The management and protection of the Lines and Geoglyphs of Nasca and Pampas de Jumana is the responsibility of the Peruvian Government represented by the Ministry of Culture. Documentation, research, protection and dissemination activities are being performed through the implementation of national and international research projects and civil associations, in the territory of Nasca and Palpa provinces.
A management plan, "Sistema de Gestión para el Patrimonio Cultural y Natural del territorio de Nasca y Palpa", covering the entire area and fundamental to the protection of the Lines and Geoglyphs, has been formulated and is being implemented.

Culture ministry says it will press charges against activists for damage to world heritage site as UN climate talks began in Lima
Greenpeace has apologised to the people of Peru after the government accused the environmentalists of damaging ancient earth markings in the country’s coastal desert by leaving footprints in the ground during a publicity stunt meant to send a message to the UN climate talks delegates in Lima.
A spokesman for Greenpeace said: “Without reservation Greenpeace apologises to the people of Peru for the offence caused by our recent activity laying a message of hope at the site of the historic Nazca lines. We are deeply sorry for this.
“Rather than relay an urgent message of hope and possibility to the leaders gathering at the Lima UN climate talks, we came across as careless and crass.”
Earlier Peru’s vice-minister for culture Luis Jaime Castillo had accused Greenpeace of “extreme environmentalism” and ignoring what the Peruvian people “consider to be sacred” after the protest at the world renowned Nazca lines, a Unesco world heritage site.
He said the government was seeking to prevent those responsible from leaving the country while it asked prosecutors to file charges of attacking archaeological monuments, a crime punishable by up to six years in prison.
The activists had entered a strictly prohibited area beside the figure of a hummingbird among the lines, the culture ministry said, and they had laid down big yellow cloth letters reading “Time for Change! The Future is Renewable” as the UN climate talks began in Peru’s capital.
“This has been done without any respect for our laws. It was done in the middle of the night. They went ahead and stepped on our hummingbird, and looking at the pictures we can see there’s very severe damage,” Castillo said. “Nobody can go on these lines without permission – not even the president of Peru!”
Peruvian authorities are also seeking the identity of the archaeologist who led the activists to the site and the plane from which the photos of the stunt were taken, he said. “It was thoughtless, insensitive, illegal, irresponsible and absolutely pre-meditated. Greenpeace has said it was planning this action for months.”
Tina Loeffelbein, a Greenpeace spokeswoman at the summit, said she was not aware of any legal proceedings being brought against the group. She said Greenpeace was cooperating with the Peruvian authorities and seeking to clarify what took place.
In a statement Greenpeace said it was concerned that it could have caused “moral offence to the Peruvian people”.
The statement read: “Our history of more than 40 years of peaceful activism clearly shows that we have always been most respectful with people around the world and their diverse cultural legacies.”
Castillo responded: “Disrespecting humanity’s cultural heritage – I don’t think that’s the message this summit or Greenpeace is trying to spread to the world! Most of us in the cultural sector agree with the message. But the means don’t justify the ends.”
“We took every care we could to try and avoid any damage. We have 40 years of experience of doing peaceful protests,” Kyle Ash, Greenpeace spokesman, told the Guardian. “The surprise to us was that this resulted in some kind of moral offense. We definitely regret that and we want to figure out a way to resolve it. We are very remorseful for any offense that we’ve caused and we’re very remorseful for that.”
He said Greenpeace met on Wednesday with Peru’s minister of culture, Diana Alvarez. He said the organization hoped to maintain a dialogue with the Peruvian government. He added Greenpeace would take “total responsibility” if any permanent damage had been caused to the archaeological site.
“It’s not a matter of money. The destruction is irreparable,” Ana Maria Cogorno, President of the Maria Reiche Association named after the German archaeologist whose groundbreaking research on the Nazca Lines from 1940 onwards saw them gain recognition and protection, told the Guardian.
The hummingbird etching on which the Greenpeace stunt was laid was the “only one of the lines which was completely untouched and perfectly conserved”, she said. “It’s one of the symbols of Peru,” she added.
Last week Greenpeace projected a message promoting solar energy on to Huayna Picchu, the mountain that overlooks the Inca citadel of Machu Picchu, another protected archaeological site in Peru.
Greenpeace activists stand next to massive letters delivering the message "Time for Change: The Future is Renewable" next to the hummingbird geoglyph in Nazca in Peru on Dec. 8. Greenpeace activists from Brazil, Argentina, Chile, Spain, Germany, Italy and Austria displayed the message, leading to castigation of the group on social media and outrage from the Peruvian government, which said the activists' stunt caused "irreversible" damage to the delicate site. (Rodrigo Abd/AP)
When the stunt-planners at Greenpeace sent teams of activists to trespass this week at Peru's Nazca archeological site, they must have thought their bumper-sticker messaging would look good on a Facebook page next to the 2,000-year-old geodesic drawings.
After all, the group is known for stringing banners from bridges and skyscrapers to draw attention to its environmental campaigns, and with U.N. climate talks taking place in Lima this week, the activists clearly wanted to make an impact.
And so they have. The impact of their footprints on the fragile desert site, in fact, will last "hundreds or thousands of years," according to outraged Peruvian officials.
So furious is the Peruvian government that it has barred the Greenpeace activists from leaving the country and is preparing criminal charges for "attacking archeological monuments," punishable by up to eight years in prison.
On Tuesday, culture ministry officials showed reporters aerial photographs of the damage, and said that when the Greenpeace trespassers snuck into the U.N. World Heritage site in the middle of the night, they marched single-file across the delicate volcanic rocks and white sand, leaving a path that has introduced a new line to the iconic Hummingbird-shaped figure.
The damage is "irreversible," Peruvian officials say, explaining that the rainless desert landscape is so delicate that visitors are required to obtain government permission and use special shoes to approach the site.
"What they have done is an attack on a site that is one of the most fragile in the world," cultural official Luis Jaime Castillo told reporters Tuesday.
Greenpeace issued an apology Wednesday, saying it was "deeply concerned about any offense Peruvians may have taken."
A statement on the group's Facebook page earlier in the week insisted that "absolutely NO damage was done" by the stunt, and that "no trace was left behind." The activists laid out yellow cloth lettering next to the hummingbird with the group's logo and the message: "Time for Change! The Future is Renewable."
The Greenpeace members who participated were from Germany, Argentina, Chile, Brazil, Spain and Austria, according to the group, insisting that an archeologist was on hand at the site during the episode.
The Nazca Lines are one of South America's most storied archeological wonders, a mysterious series of huge animal, human and plant symbols that were carefully etched into the ground between 1,500 and 2,000 years ago. Tourists typically view them from the air.
But by treating the sacred site like a Manhattan skyscraper, a European train terminal or some other eye-catching advertising space up north, Greenpeace seems to have trampled more than the desert.
Anger at the group from Peruvians and others on social media has been directed more at the cultural condescension of their act, and the attitude that they could barge their way into one of Peru's most sensitive places for the sake of publicity.
Of course, one of the biggest challenges for climate change activists is to convince developing countries in the southern hemisphere that they should not aspire to enjoy the same material comforts — cars, airplanes, air conditioning, et al — that have enlarged the carbon footprint of wealthier nations.
This is a delicate moral argument to make. It looks especially hollow coming from activists who are willing to break your laws and stomp all over one of your most sacred places because they think they walk on higher ground. | – An organization that bills itself as an environmental watchdog can't claim it's an archaeological one after an incident in Peru that's left that country steaming. Earlier this week, 20 Greenpeace activists with a clean-energy message for officials meeting in Lima for UN climate talks spread huge yellow cloths spelling out "Time for Change; The Future Is Renewable" across a plot of land, the AP reports. That plot of land, unfortunately, was a section of the Nazca lines, a revered archaeological wonder and UNESCO World Heritage Site that features giant geoglyphs between 1,500 and 2,000 years old etched into the earth. A Greenpeace Facebook post from earlier in the week stated that "absolutely NO damage was done," but Peruvian officials say footprints left behind in the desert as the group set up their protest materials could hang around for "hundreds or thousands of years." Even country leaders have to secure permission to walk on Nazca ground and must wear special footwear, the Washington Post reports. "It's a true slap in the face at everything Peruvians consider sacred," a Peruvian culture minister told news agencies, the BBC notes. Greenpeace has apologized for the "moral offense" and issued a statement saying it "came across as careless and crass." The statement went on to say Greenpeace will "co-operate fully with any investigation" and will accept any "fair and reasonable consequences" imposed. The government is looking to prosecute the offenders for "attacking archaeological monuments"; those convicted could face up to six years in prison, notes the AP. (Greenpeace had some trouble in Russia not too long ago.) |
A fifty-year-old woman was admitted with a known atrial septal defect (ASD) that had been diagnosed 1 year earlier. Transthoracic echocardiography demonstrated a 1.8 cm secundum ASD with a large left-to-right shunt (Qp/Qs = 2.0) seen on color flow Doppler. Retrospectively, it appeared that the anterosuperior rim of the defect was not well developed (Fig.). Percutaneous transcatheter septal closure (PTSC) was then scheduled. In the cardiac catheterization laboratory, the ASD was measured to be 18 mm in diameter with a balloon under fluoroscopic guidance, using a 6-French (6F) sheath through the right femoral vein after induction of general anesthesia. An Amplatzer septal occluder (18 mm) was delivered through a 10F delivery sheath to the left atrium and deployed successfully. The position of the device was confirmed by transesophageal echocardiography (TEE) and fluoroscopic imaging (Fig.). The follow-up echocardiogram on the day after PTSC revealed disappearance of the device from the atrial septal region, and shunt flow through the ASD was noted again. The device was observed at the left ventricular outflow tract (LVOT), but fortunately there was no significant disturbance of blood flow (Fig.). At emergency operation, right atriotomy was performed and the secundum ASD was found to be 21.5 mm in size. The Amplatzer septal occluder was located in the LVOT, beneath the anterior leaflet of the mitral valve. The device was successfully retrieved through the mitral opening without damage to the valvular apparatus.
Although morbidity and mortality are extremely low in surgical repair of ASD, the advantages of PTSC (including avoidance of cardiopulmonary bypass, reduced blood transfusion, shorter hospital stay, and early return to daily life) have led PTSC to become the primary treatment option for most patent foramen ovale (PFO) and secundum ASD in many centers. Since its first implantation in 1997, the Amplatzer septal occluder has been the most commonly used device for percutaneous closure of ASD. In spite of the progressive evolution of techniques and devices, the procedure has attendant failures and complications, and not all secundum ASDs are amenable to device closure. The reported complications of ASD devices include residual shunt, device malposition, caval thrombosis, systemic or pulmonary embolization, erosion and perforation of the heart, thromboembolism, and atrial arrhythmia. Device dislodgement can occur if the size of the defect greatly exceeds the waist diameter of the device. On the other hand, implantation of an overly large device may cause erosion and perforation, especially when there is a deficient anterosuperior rim. One of the largest series, of 2,800 secundum ASD closures using the Amplatzer septal occluder, reported 5 cases of cardiac erosion and 7 cases of device embolization. In a meta-analysis of PFO closure, major complications occurred in 1.5% of patients. The risk factors for device embolization are a large defect, a large device, undersizing of the device relative to the defect, an inadequate defect rim to hold the device, and mobility of the device or of the atrial rim of tissue after implantation. Moreover, inaccurate deployment and failure to button the ASD, or unbuttoning of the occluder, can also result in device embolization. The most commonly reported causes of device embolization are an inadequate rim and an undersized device.
Generally accepted criteria for device closure include a defect size of less than 32 mm and the presence of at least a 4 mm rim of atrial septal tissue surrounding the defect. In this case, the device embolization was likely due to the device being less than securely positioned because of the deficient anterosuperior septal rim. One report described late embolization of the device occurring 7 weeks after implantation and recommended avoidance of strenuous exercise for 6 months and close echocardiographic surveillance. Application of the device to a secundum ASD with a deficient anterosuperior rim can be associated with other serious complications, such as erosion and cardiac perforation. The bruises on the aortic root found during the surgical procedure were regarded as having been caused by device abrasion. If the device had been left at the atrial septum for longer without embolizing, it could have resulted in an aorta-to-atrium fistula or free-wall perforation of the atria, resulting in tamponade. A short or deficient anterosuperior rim should therefore be considered a risk factor for device embolization as well as for aortic perforation in PTSC. In order to prevent these complications associated with PTSC,
proper selection of patient and device is mandatory, and surgical repair should remain the standard management for this variant of secundum ASD.

In summary, percutaneous transcatheter closure of secundum ASD has recently become an increasingly widespread alternative to surgical closure in many centers. Although the immediate, short-, and intermediate-term results of PTSC are promising, the procedure is not free of inherent complications, some of which can be lethal. We have reported a case of device embolization necessitating emergent surgical retrieval.
Weyl semimetals are a new quantum state of matter in which a three-dimensional (3D) gapless system exhibits a nontrivial topology @xcite. Materials that host Weyl fermions in three dimensions must, however, break either time-reversal (@xmath5) or inversion (@xmath6) symmetry @xcite. This guarantees that two Weyl points separated in momentum space are topologically stable @xcite: they can only annihilate each other. For isolated Weyl points, the low-energy Hamiltonian is governed by the massless Dirac equation @xcite @xmath7, where @xmath8 is the triplet of Pauli matrices, @xmath9 is a 3-component Brillouin-zone momentum, and @xmath10 are the locations of the Weyl points with chirality @xmath11. Weyl points of this form are robust to external perturbations, since all three Pauli matrices are used up in @xmath12. The chirality of the Weyl points is related to the topological charges of the system: the points act as monopole and anti-monopole of the Berry curvature in the Brillouin zone (BZ), with Fermi-arc surface states @xcite.

In recent years, several theoretical proposals for Weyl semimetals have been studied, ranging from pyrochlore iridates @xcite, topological insulator (TI) multilayers @xcite, and magnetically doped topological band insulators @xcite to tight-binding models @xcite. Recently, a Weyl semimetal has been discovered experimentally in photonic crystals @xcite, and the experimental realization of a Weyl semimetal in TaAs has been reported using angle-resolved photoemission spectroscopy @xcite.

In a lattice model it is possible to generate massless Dirac fermions with chirality in 2 dimensions. Such systems have been dubbed 2D Weyl semimetals @xcite. They appear as chiral relativistic fermions @xcite and exhibit an additional hidden discrete symmetry represented by an anti-unitary operator. The degeneracy of the resulting Weyl nodes is protected provided there exists an anti-unitary operator that commutes with the Hamiltonian and whose square equals @xmath13 at the degenerate points. This is reminiscent of the time-reversal-symmetry-protected Dirac points in graphene. In this paper
we study two ultra-thin-film models. First, we study an ultra-thin film of a TI multilayer by utilizing the explicit expression of the conventional 2D TI ultra-thin-film Hamiltonian @xcite, which contains quadratic corrections in its low-energy Hamiltonian, with tunneling parameters @xmath14. As is customary, we construct a 3D version of this model by sandwiching a normal insulator between layers of TI thin film, with tunneling parameter @xmath2 and a magnetic field along the @xmath3-direction. The resulting 3D model exhibits topological properties similar to the Burkov-Balents model @xcite. However, in the present model we compute the explicit expressions for the Chern numbers in all the topological phases and show that, when @xmath4, the tunneling parameter @xmath1 changes sign as the system transits from the Weyl semi-metallic phase to the insulating phases. We further study the low-temperature dependence of the chiral magnetic effect (CME) by computing the explicit expressions for the response function in the presence of a time-dependent magnetic field. In this case the model does not possess an analytical solution; we show numerically that the chiral magnetic conductivity exhibits plateaus that separate three distinct phases of the system, even though it is not an integer-quantized quantity.

Second, we study a simple lattice model based on layers of porphyrin thin films @xcite, an organic material that can be synthesized in the laboratory. We present a detailed analysis of this model in both 2 and 3 dimensions. In particular, we show that this lattice model captures a 2D Weyl semi-metallic phase whose nodes are protected by an anti-unitary operator. In addition, our model captures a 3D Weyl semi-metallic phase, which appears as an intermediate phase between a 3D quantum anomalous Hall (QAH) insulator and a normal insulator (NI). It is also shown that the porphyrin lattice model can be used as a tight-binding model for a topological insulator thin-film multilayer. We use this model to simulate the chiral edge states of the 2D system and the surface states (Fermi arcs) of the 3D system in all the nontrivial topological phases of the system.
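Before turning to the models, the monopole picture invoked above (Weyl nodes as sources and sinks of Berry curvature) can be checked with a short numerical sketch. The code below is illustrative only, not the paper's model: it takes an isolated node of the generic form H = χ v σ·k, discretizes a small sphere around it, and accumulates the lattice Berry flux of the lower band over the sphere. The function names, grid size, and radius are our own choices.

```python
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band_state(k, chi, v=1.0):
    """Lower-band eigenvector of the generic Weyl Hamiltonian H = chi*v*sigma.k."""
    H = chi * v * (k[0] * SX + k[1] * SY + k[2] * SZ)
    _, vecs = np.linalg.eigh(H)   # eigenvalues in ascending order
    return vecs[:, 0]

def monopole_charge(chi, radius=0.1, n=40):
    """Berry flux (in units of 2*pi) of the lower band through a small sphere
    enclosing the node, computed from discrete plaquette link variables."""
    thetas = np.linspace(0.0, np.pi, n + 1)
    phis = 2 * np.pi * np.arange(n) / n
    # States on the (theta, phi) grid; at the poles H is phi-independent,
    # so eigh returns the same vector for every phi there.
    u = np.empty((n + 1, n, 2), dtype=complex)
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            k = radius * np.array([np.sin(th) * np.cos(ph),
                                   np.sin(th) * np.sin(ph),
                                   np.cos(th)])
            u[i, j] = lower_band_state(k, chi)
    flux = 0.0
    for i in range(n):
        for j in range(n):
            jp = (j + 1) % n
            # The phase of the product of overlaps around one plaquette is the
            # lattice Berry curvature through that patch of the sphere.
            loop = (np.vdot(u[i, j], u[i + 1, j]) *
                    np.vdot(u[i + 1, j], u[i + 1, jp]) *
                    np.vdot(u[i + 1, jp], u[i, jp]) *
                    np.vdot(u[i, jp], u[i, j]))
            flux += np.angle(loop)
    return round(flux / (2 * np.pi))
```

The total flux is quantized to a charge of magnitude one whose sign flips with the chirality χ (the overall sign depends on the orientation convention), matching the monopole/anti-monopole picture of the Weyl nodes.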
In a 2D topological insulator ultra-thin film, hybridization between the top and bottom surfaces gives rise to a massive Dirac fermion @xcite. Here we start from this 2D low-energy Hamiltonian and construct a 3D model for a Weyl semimetal by inserting insulating spacer layers between TI thin films and introducing a tunneling parameter that couples neighbouring layers. The Hamiltonian for this multilayer is given by @xmath15 where @xmath16. The difference between this Hamiltonian and that of Burkov and Balents @xcite is that Eq. [2] is quadratic in the momentum variables and the couplings are diagonal in the pseudospin space. It also has the advantage that the surface states can be simulated through a lattice model and the Chern numbers can be obtained explicitly in all the topological phases of the system. The Pauli matrices @xmath8 act in real spin space and @xmath17 are the "which surface" pseudospins; @xmath18 is a 2D momentum vector in the BZ. The indices @xmath19 label distinct thin-film layers and @xmath20 is the Fermi velocity; @xmath0 and @xmath1 are the tunneling parameters that couple the top and bottom surfaces of the same thin-film layer at small @xmath21 and large @xmath21, respectively, and @xmath22 is the Zeeman splitting, which can be induced by magnetic doping or by directly applying a magnetic field; @xmath2 is the tunneling parameter that couples the top and bottom surfaces of neighbouring thin-film layers along the growth (@xmath3) direction. The parameters @xmath23, @xmath1, @xmath0, and @xmath2 depend on the thickness of the thin film; @xmath1 and @xmath0 have been determined both numerically @xcite and experimentally @xcite. The new parameter @xmath2 can likewise be determined by growing the multilayer described above. In the 2D model, the energy gap in the TI ultra-thin film can be enhanced by using a thinner film; thus, the thickness of the film can change the topology of the system. In the present model, a smaller thickness should similarly enhance the Weyl semimetallic state induced by the interlayer coupling @xmath2 and the magnetic field. Without loss of generality we take all the parameters to be positive, @xmath24. However, as will be shown in the subsequent sections, @xmath1 can be positive or negative when moving from the Weyl semi-metallic phase to the other phases of the system.
It is expedient to Fourier transform the Hamiltonian along the growth @xmath3-direction. We obtain @xmath25\sigma_z (Eq. [fullti1]), where @xmath26\tau_z. The Hamiltonian (Eq. [fullti1]) breaks @xmath27-symmetry due to the magnetic field, but inversion symmetry is preserved, @xmath28. The eigenvalues of @xmath29 are @xmath30, where @xmath31, and the corresponding eigenspinors are @xmath32. Hence, the Hamiltonian can be written as a @xmath33 massless Dirac (Weyl) equation, @xmath34, with @xmath35 and @xmath36 or @xmath37. The form of the @xmath38 function in this model determines the phases that emerge when @xmath39. For the present model, Eq. [par] with @xmath39 describes a 3D Dirac semimetal that possesses both time-reversal and inversion symmetries. It exhibits a phase with two Dirac nodes along the @xmath40-direction when @xmath41 and an insulating phase (3D QSH phase) for @xmath42. In the insulating phase, the @xmath43 topological number is @xmath44, where @xmath45 characterizes the nontrivial phase. The semi-metallic phase and the insulating phase are separated by a saddle point at @xmath46, with energy @xmath47. To obtain a nontrivial Weyl semi-metallic phase, @xmath5- or @xmath6-symmetry must be broken, as mentioned above; this requires @xmath48. The corresponding energy eigenvalues of Eq. [par] are @xmath49, where @xmath50 labels the conduction and valence bands, respectively, and the eigenvectors are @xmath51, where @xmath52. Hence, the eigenspinors of the complete system are @xmath53, where @xmath54. Two Weyl nodes are realized in the @xmath55 block of the Dirac equation (Eq. [par]). This corresponds to the solutions of @xmath56, where @xmath57 never changes sign. The Weyl nodes are located at @xmath58, @xmath59, where @xmath60 with @xmath61 and @xmath62.
[Figure [pha]: phase diagram of the multilayer plotted along the @xmath23 axis of the Brillouin zone; two marked points at @xmath63 and @xmath64 separate three regions, labeled @xmath73, @xmath75 and @xmath74, with additional annotations @xmath65–@xmath72.]

The phase diagram in Fig. [pha] comprises an ordinary insulator phase for @xmath76 and a 3D QAH phase for @xmath77.
a 3d weyl semimetal with two weyl nodes appears in the regime @xmath78 , and a pair annihilation occurs exactly at the boundaries . as in other theoretical models , a 3d weyl semimetal phase always appears as an intermediate state between an ordinary insulator and a 3d quantum anomalous hall insulator .
the hall conductivity is given by @xmath79 in the present model , we can calculate the chern number explicitly by treating @xmath40 as a parameter , thus reducing the problem to an effective 2d model .
hence , the chern number is computed with the same formula @xcite @xmath80 where @xmath81 the @xmath82 block realizes weyl nodes , therefore the chern number is defined only for the occupied band of this block . using eqs .
[ thou3 ] and [ dee ] we obtain @xmath83 , \label{chern}\ ] ] where @xmath84 in the weyl semi - metallic phase @xmath78 , @xmath40 must take values in - between the nodes , i.e. , @xmath85 , hence @xmath86 .
a nonzero chern number then requires @xmath87 .
the chern number only changes when the gap closes and reopens at the boundaries @xmath88 . once the gap closes and reopens we must have @xmath89 to get a normal insulator phase and a nontrivial 3d qah phase at @xmath90 and @xmath91 respectively ( see fig .
[ pha ] ) .
this can be explicitly shown by evaluating the chern number at @xmath91 where the gap closes and reopens for @xmath77 .
we obtain @xmath92 .
\label{qah}\ ] ] a similar situation occurs at @xmath90 for @xmath76 , and the chern number is given by@xmath93 .
\label{ni}\ ] ] note that eqs .
[ qah ] and [ ni ] reduce to the 2d chern number @xcite when @xmath94 . in this case
, the band inversion requires that @xmath95 for a nontrivial topological phase to exist . for @xmath48
, the present model requires that @xmath89 as mentioned above .
this guarantees that the first chern number , eq .
[ qah ] , is integer quantized and describes a 3d qah phase and the second chern number , eq .
[ ni ] , is zero which describes a normal insulator phase .
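the chern - number calculation above can be reproduced numerically . since the concrete hamiltonian and parameters are hidden behind the @xmath placeholders , the sketch below uses a generic two - band stand - in h(k) = d(k)·σ with d = ( sin kx , sin ky , m + cos kx + cos ky ) , where the mass m plays the role of the @xmath40-dependent gap parameter ; the chern number of each effective 2d slice is then evaluated with the standard lattice field - strength ( fukui - hatsuda - suzuki ) method :

```python
import numpy as np

# pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def chern_number(m, N=40):
    """chern number of h(k) = d(k).sigma, d = (sin kx, sin ky, m + cos kx + cos ky),
    from plaquette berry phases on an N x N k-grid (fukui-hatsuda-suzuki)."""
    ks = np.linspace(0.0, 2*np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            h = (np.sin(kx)*sx + np.sin(ky)*sy
                 + (m + np.cos(kx) + np.cos(ky))*sz)
            _, vecs = np.linalg.eigh(h)
            u[i, j] = vecs[:, 0]            # occupied (lower) band
    flux = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # gauge-invariant plaquette product of link overlaps
            prod = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            flux += np.angle(prod)
    return int(round(flux / (2*np.pi)))
```

slices whose mass lies between the gap - closing values carry |c| = 1 while the others are trivial , mirroring the structure of eq . [ chern ] : a nonzero chern number only for momenta in - between the weyl nodes .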
as mentioned above , the topological property of weyl semimetal is also manifested as monopoles and anti - monopoles of the berry curvature .
this is evident by expanding eq . [ chern ] near the weyl nodes ; we obtain @xmath96 where @xmath97 , and @xmath11 is the chirality of the weyl nodes .
this expression explicitly shows a monopole and anti - monopole at @xmath98 with chirality @xmath99 respectively .
the fermi arcs in the vicinity of the weyl nodes are a special feature of weyl semimetals .
these arcs are exactly the edge states of the effective 2d system for fixed @xmath40 , and exist for any surface not perpendicular to the @xmath3-axis @xcite .
we can explicitly solve for these edge states by considering a slab geometry occupying the half - plane @xmath100 with open boundary condition along @xmath101-direction and translational invariant in the @xmath102-@xmath3 plane .
thus , @xmath103 and @xmath40 are good quantum numbers and @xmath104 is replaced by @xmath105 .
the hamiltonian can be written as @xmath106 we first consider @xmath107 and solve for the zero energy solution of the schrödinger equation @xmath108 , @xmath109\phi(k_z , x)=0 , \label{xx1}\ ] ] where @xmath110 is a 2-component spinor and we have multiplied through by @xmath111 .
we seek a solution of the form @xmath112 where @xmath113 , ( @xmath114 ) and @xmath115 solves the equation @xmath116 the allowed solution that obeys the boundary conditions of the wavefunction [ @xmath117 is given by @xmath118 where @xmath119 is a normalization constant , and @xmath120 are the positive solutions of eq .
the surface hamiltonian is obtained by projecting eq .
[ bulk ] onto the surface states @xmath121
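the zero - energy surface solution above can also be checked by direct diagonalization . as a hedged illustration ( the model below is a generic two - band chern insulator , not the paper's exact @xmath hamiltonian ) , we build a slab that is open along one direction and periodic along the other , and verify that an in - gap edge branch crossing zero energy appears only in the topological phase :

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def slab_spectrum(ky, m=-1.0, L=40):
    """energies of a slab of the toy model
    h = sin kx sx + sin ky sy + (m + cos kx + cos ky) sz,
    open along x (L sites), with ky a good quantum number."""
    onsite = np.sin(ky)*sy + (m + np.cos(ky))*sz
    hop = 0.5*(sz - 1j*sx)                 # encodes cos kx sz + sin kx sx
    shift = np.diag(np.ones(L - 1), k=1)   # one-site translation along x
    H = (np.kron(np.eye(L), onsite)
         + np.kron(shift, hop) + np.kron(shift.T, hop.conj().T))
    return np.linalg.eigvalsh(H)

def min_gap(m):
    """smallest |E| over the edge brillouin zone."""
    return min(np.abs(slab_spectrum(ky, m)).min()
               for ky in np.linspace(-np.pi, np.pi, 41))
```

in the topological phase the chiral edge branch crosses zero energy inside the bulk gap , while in the trivial phase the slab stays fully gapped .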
in the previous section , we derived the phase diagram , anomalous hall conductivity , and surface states of an ultra - thin film of ti hamiltonian with quadratic momentum corrections . in this section ,
we study the response of the system to an orbital magnetic field through the vector potential , @xmath122 , which corresponds to a magnetic field along the growth @xmath3-direction .
the hamiltonian is given by @xmath123 we introduce the operator @xmath124 , and define the creation and annihilation operators : @xmath125 where @xmath126 is the magnetic length . in terms of @xmath127 and @xmath128 the hamiltonian becomes @xmath129 where @xmath130 and @xmath131 , where @xmath132 is given by @xmath133.\ ] ] here , @xmath134 is the magnetic frequency and @xmath135 is the harmonic oscillator frequency .
the eigenvector of each @xmath33 block may be written as @xmath136 where @xmath137 are constants to be determined .
the operators satisfy @xmath138 ; @xmath139 . hence , eq . [ hamm ] yields a @xmath33 eigenvalue equation for @xmath140 .
the hamiltonian yields @xmath141 where @xmath142 is an identity matrix , @xmath143 , and @xmath144 the eigenvalues of eq .
[ hamm1 ] are given by @xmath150 .
[ figure caption ( fig . [ ll ] ) : landau levels ; the parameters are @xmath145 t ; @xmath146 mev , @xmath147 mev , @xmath148 mev ; @xmath149 mev . ]
the corresponding eigenvectors are @xmath151 where @xmath152 .
the eigenspinors of the complete system are @xmath153 , where @xmath154 for @xmath39 , the zero landau levels cross at @xmath155 , which vanishes at the dirac nodes @xmath156 . at the transition point
@xmath90 , @xmath157 . for @xmath42 ,
the regime @xmath158 corresponds to a 3d qsh phase and @xmath159 corresponds to trivial phase . the landau level for @xmath48 is shown in fig .
[ ll ] , which evidently captures the appearance of two weyl nodes in the vicinity of the bulk gap .
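for orientation , the landau levels of an isolated weyl node take a standard , model - independent form : dispersive levels e_n = ± sqrt( 2 b |n| + k_z^2 ) for n ≥ 1 , plus a single chiral n = 0 branch e_0 = -χ k_z that connects valence and conduction bands . the sketch below is this textbook spectrum in units v = ħ = e = 1 , not the paper's exact @xmath expression :

```python
import numpy as np

def weyl_landau_levels(kz, B, nmax=3, chi=+1):
    """landau levels of an isolated weyl node H = chi * v k.sigma in a field
    B along z (units v = hbar = e = 1).  the n = 0 level is chiral: it
    disperses as E0 = -chi * kz (sign convention tied to the field direction),
    connecting the valence and conduction bands."""
    levels = [-chi * kz]                      # chiral zeroth landau level
    for n in range(1, nmax + 1):
        E = np.sqrt(2.0 * B * n + kz**2)      # dispersive n >= 1 levels
        levels += [E, -E]
    return sorted(levels)
```

two nodes of opposite chirality contribute counter - propagating zeroth levels , which is how the landau spectrum in fig . [ ll ] captures the two weyl nodes .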
the chiral magnetic effect is the response of a system to a time - dependent magnetic field .
this phenomenon is well - known in high energy physics as the chiral magnetic conductivity .
for instance , gluon field configurations with nonzero topological charges exhibit this effect @xcite .
it has been shown to occur in weyl semimetals @xcite . in this subsection
, we investigate the low - temperature dependence of the chiral magnetic conductivity on the ti ultra - thin film hamiltonian .
we will derive the expressions for our model , which do not admit analytical solutions .
we also show that the chiral magnetic conductivity captures the appearance of the three distinct phases of the system though it is not integer quantized like the quantum anomalous hall conductivity . in the linear response theory ,
the current operator is given by @xmath160 where @xmath161 is the current - current correlation function .
the chiral magnetic effect ( or conductivity ) arises in the presence of a time - dependent magnetic field along the @xmath3-direction . in the landau gauge
we adopt here , the magnetic field is only related to the @xmath162 component of the gauge field , that is @xmath163 .
assuming @xmath164 , we have @xmath165 .
the response of the system to a time - dependent magnetic field gives rise to an induced current given by @xmath166 thus , the chiral magnetic conductivity is @xmath167 the response function @xmath168 is in general antisymmetric .
the most convenient way to calculate the response function is from the imaginary time path integral of eq .
[ par1 ] minimally coupled to a vector potential , @xmath169 \psi(\bold r , \tau).\ ] ] after integrating out the fermion degree of freedom , the response function is given by @xcite @xmath170-f[\xi_{s\lambda}(\bold{k+q})]}{i\omega + \xi_{s^\prime\lambda^\prime}(\bold{k})-\xi_{s\lambda}(\bold{k+q})}\nonumber\\&\times\braket{\psi^{s\lambda}_{\bold{k+q}}|\psi^{s^\prime\lambda^\prime}_{\bold{k}}}\braket{\psi^{s^\prime\lambda^\prime}_{\bold{k}}|\boldsymbol{\sigma}\cdot\bold{q}|\psi^{s\lambda}_{\bold{k+q } } } , \label{res } \end{aligned}\ ] ] where @xmath171 and @xmath172=[e^{\xi_{s\lambda}(\bold{k})/t}+1]^{-1}$ ] is the fermi function , with @xmath173 . without loss of generality we assume @xmath174 .
the spatial contribution only comes from the landau gauge choice , thus we take @xmath175 .
there are two contributions to the response function : the interband with @xmath176 and the intraband with @xmath177 .
we are interested in the low - frequency and long wavelength limits @xmath178 and @xmath179 .
however , the two limits do not commute , so the order in which they are taken is crucial .
the former limit is the direct current ( dc ) limit of a transport coefficient , while the latter is the static limit . for the interband case ,
both orders of limits contribute to the response function , so we can start with @xmath180 . in this case
all other terms in eq .
[ res ] are finite as @xmath181 except @xmath182 .
hence , we will expand this term to first order in @xmath183 .
since the pseudo spin scalar product produces a term @xmath184 , we have @xmath185,\label{res1}\\\braket{\psi^{s^\prime\pm}_{\bold{k}}|\psi^{s\mp}_{\bold{k}}}&=\delta_{ss^\prime}\frac{1}{k_\perp}[-k_y\frac{\epsilon_s(\bold k ) } { m_s(\bold k)}\pm
\label{res2}\end{aligned}\ ] ] plugging eqs .
[ res1 ] and [ res2 ] into eq .
[ res ] , the terms containing @xmath186 vanish by angular integration , we obtain @xmath187}{\epsilon_{s}^3(\bold{k})}m_s(\bold k)\label{res3},\ ] ] where @xmath188 performing the angular integration , we obtain @xmath189\nonumber\\&\times[1-f(\sqrt{x+m_s^2(x , k_z)}-\epsilon_f)],\end{aligned}\ ] ] where @xmath190 and @xmath191^{3/2}},\nonumber\\ \omega_{2z}^s(x , k_z)&= \frac{\tilde{t}_\perp m_s^2(x , k_z)}{2[x + m_s^2(x , k_z)]^{3/2}},\\\nonumber m_s(x , k_z)&=\gamma+s[\frac{t_s}{2}-\tilde{t}_\perp x + \frac{t_d}{2}\cos(k_zd)],\end{aligned}\ ] ] with @xmath192 .
now for the intraband case @xmath193 , the response function vanishes in the dc limit @xmath194 , i.e. , if we take long - wavelength limit first .
however , in the static limit @xmath195 , it is nonzero . in this case
, we have @xmath196=f[\xi_{s\lambda}(\bold{k } ) ] + \bold{q}\frac{\partial \xi_{s\lambda}(\bold{k } ) } { \partial \bold{q } } \frac{\partial f[\xi_{s\lambda}(\bold{k } ) ] } { \partial \xi_{s\lambda}(\bold{k})}+\cdots\\&\xi_{s\lambda}(\bold{k+q})=\xi_{s\lambda}(\bold{k})+\bold{q}\frac{\partial \xi_{s\lambda}(\bold{k } ) } { \partial \bold{q}}+\cdots\end{aligned}\ ] ] the intraband response function is given by @xmath197 } { \partial \xi_{s\lambda}(\bold{k})}\rb\frac{m_s(\bold k)}{\epsilon^2_s(\bold k)}. \label{intra}\ ] ] in the present model , the integrations [ eqs . [ res4 ] and [ intra ] ] can not be done analytically .
we can reduce the problem in a way that is amenable to numerical integration by performing the angular integration .
we obtain @xmath198\label{res5}\nonumber\\ & \times\big[4t\cosh^2\lb\sqrt{x+m_s^2(x , k_z)}-\epsilon_f\rb\big]^{-1},\end{aligned}\ ] ] where @xmath199};~ \tilde{\omega}_{2z}^s(x , k_z)= \frac{\tilde{t}_\perp m_s^2(x , k_z)}{2[x + m_s^2(x , k_z)]}.\ ] ] the conductivity is given by eq .
[ condd ] . in the two non - commutative limits we obtain two conductivities given by @xmath200 .
\label{th}\end{aligned}\ ] ] the first limit [ eq . [ cme ] ] is the chiral magnetic effect ( cme ) . as mentioned above , this is nothing but the chiral magnetic conductivity , a phenomenon well - studied in high energy physics @xcite .
the second limit [ eq . [ th ] ] is a thermodynamic equilibrium quantity corresponding to the static limit ; we will focus on eq .
[ cme ] .
[ figure caption ( fig . [ cond1 ] ) : chiral magnetic conductivity at @xmath201 and @xmath202 in units of @xmath0 . the parameters are @xmath203 and @xmath204 . the sign of @xmath1 is unimportant because @xmath205 contribute to @xmath206 . the different regimes separated by plateaus are ordinary insulator @xmath76 ( @xmath207 ) ; weyl semimetal @xmath78 ( @xmath208 ) ; and quantum anomalous hall insulator @xmath77 ( @xmath208 ) . ]
[ figure caption ( fig . [ cond ] ) : chiral magnetic conductivity at @xmath201 ( solid ) ; @xmath209 ( dashed ) ; @xmath210 ( dotted ) . the parameters are in units of @xmath0 with @xmath211 ; @xmath212 , @xmath213 ; @xmath214 . ]
figure [ cond1 ] shows the plot of the chiral magnetic conductivity against the magnetic field @xmath23 .
note that the sign of @xmath1 is irrelevant because the two masses @xmath215 contribute to the computation of the chiral magnetic conductivity .
interestingly , the chiral magnetic conductivity captures the appearance of the three phases of the system .
the plateaus of @xmath206 correspond to phase transitions from ordinary insulator @xmath76 ( @xmath207 ) to weyl semimetal @xmath78 ( @xmath208 ) , and from weyl semimetal to quantum anomalous hall insulator @xmath77 ( @xmath208 ) .
also notice from fig .
[ cond1 ] that the chiral magnetic conductivity is not a quantized quantity unlike the quantum anomalous hall conductivity , eq . [ qahc ] .
figure [ cond ] shows the chiral magnetic conductivity as a function of the fermi energy . in this case
, a step peak occurs at @xmath201 at low temperatures .
in this section , we propose and analyze a lattice model for weyl semimetals from a porphyrin thin film layer . we will also show the connection of this model to that of ti thin film studied above . to construct a 3d lattice model
, it is customary to stack layers of porphyrin thin film on top of each other along the @xmath3-direction .
the 2d hamiltonian of a porphyrin thin film is given by @xcite @xmath216 \label{genbhz } \nonumber\\&+j_\perp\sum_{m}[a^\dagger_{m}a_{m+\hat{x}(\hat{y})}-b^\dagger_{m}b_{m+\hat{x}(\hat{y})}+h.c . ]
+ \mu_{xy}\sum_{m}[a^\dagger_{m } a_{m}-b^\dagger_{m } b_{m } ] .
\end{aligned}\ ] ] the nearest neighbour ( nn ) sites are along the diagonals with coordinates @xmath217 and @xmath218 , and complex hopping parameters , @xmath219 , where @xmath220 ; @xmath221 and @xmath222 ; @xmath223 is a phase factor , which can be regarded as a magnetic flux treading the lattice .
the total flux on a square plaquette vanishes just like in haldane model @xcite .
the next nearest neighbour ( nnn ) sites are along the horizontal and vertical axes with real hopping parameter @xmath224 . the last term in eq .
[ genbhz ] is the staggered onsite potential with a tuneable parameter @xmath225 .
next , we introduce an interlayer coupling between the porphyrin thin film layers along the @xmath3-direction . the hamiltonian is given by @xmath226 + \mu_z\sum_{m}[a^\dagger_{m } a_{m}-b^\dagger_{m } b_{m } ] .
\label{genbhz1}\ ] ] here , the staggered onsite potential is along the @xmath3-direction with tuneable parameter @xmath227 , and @xmath228 is a real coupling constant . performing the fourier transform of the lattice model we obtain @xmath229 , where @xmath230\sigma_x \nonumber\\&-[\tilde{\rho}_1\cos\lb k_+
-{\phi}\rb+\tilde{\rho}_2\cos\lb k_- + { \phi}\rb]\sigma_y\nonumber\\&+[\mu_{xy}-2t_{\perp}\lb\cos ( k_+ + k_-)+\cos ( k_+- k_-)\rb]\sigma_z\nonumber\\ & -\frac { t_d}{2}[\cos(k_z d)+\cos(k_w d)]\sigma_z ; \label{fullti}\end{aligned}\ ] ] the above hamiltonian eq . [ fullti ] is obtained with the rescaled parameters @xmath231 , @xmath232 , and we have fine - tuned the staggered potential to @xmath233 .
we also set the lattice constants @xmath234 , and @xmath235 , where @xmath236 is the separation of the porphyrin thin film layers .
@xmath237 , @xmath238 , @xmath239 ; @xmath240 , and @xmath241 , where @xmath242 and @xmath243 denote real and imaginary parts of the complex hopping terms @xmath244 .
the model eq .
[ fullti ] can be simplified by taking @xmath245 , which implies that @xmath246 and @xmath247 .
this is a reasonable simplification and will be adopted throughout our analysis .
as mentioned above , 2d weyl semi - metals can be constructed from a lattice model @xcite . in this section ,
we show how it emerges from the porphyrin thin film layer . in the 2d limit @xmath248 , the hamiltonian eq .
[ fullti ] has the form @xmath249\sigma_x
\nonumber\\&- \rho[\cos\lb k_+ -{\phi}\rb-\cos\lb k_- + { \phi}\rb]\sigma_y .
\label{2d}\end{aligned}\ ] ] for @xmath250 or @xmath251 , eq . [ 2d ] can be written as @xmath252 where @xmath253 .
[ figure caption ( fig . [ band1 ] ) : energy bands in units of @xmath254 ; there are four degenerate points in the bz with each pair located at @xmath255 and @xmath256 . ]
[ figure caption ( fig . [ 2dege ] ) : bulk and edge spectrum along the @xmath103 direction . ]
[ figure caption ( fig . [ band ] ) : energy bands at @xmath257 with @xmath258 ; the four regimes are : the insulating phase @xmath259 , the weyl semi - metallic phase @xmath260 , the phase transition point @xmath261 , and the 3d qah phase @xmath262 . ]
[ figure caption ( fig . [ band2 ] ) : energy bands at @xmath250 or @xmath263 with @xmath264 ; the parameters are the same as in fig . [ band ] . ]
as shown in fig .
[ band1 ] , the energy band has four degenerate points located at @xmath255 and @xmath256 .
however , the degeneracy of an energy band does not guarantee a weyl semi - metallic phase . to obtain a nontrivial topological semimetal ,
symmetry consideration must be taken into account .
for the hamiltonian in eq .
[ 2dd ] , time - reversal symmetry ( @xmath265 ) is broken but inversion symmetry ( @xmath266 ) is preserved . for 2d systems , however , there is an additional hidden discrete symmetry with an anti - unitary operator @xcite . more generally ,
if a system is invariant under the action of an anti - unitary operator and the square of the operator is not equal to 1 , there must be a degeneracy protected by this anti - unitary operator @xcite . in the present model , there is an anti - unitary operator for which the hamiltonian ( eq . [ 2dd ] )
is invariant .
it is given by @xmath267 , where @xmath268 is complex conjugation and @xmath269 translates the lattice by @xmath270 and @xmath271 along the @xmath101- and @xmath102-directions .
it is easy to check that @xmath272 .
it follows that @xmath273 at @xmath274 and @xmath275 .
note that @xmath276 at various points in the bz , e.g. , @xmath277 .
however , the energy does not vanish at these points , the reason being that they are not @xmath278-invariant points .
thus , the theorem stated above is only valid at the @xmath278-invariant points .
the four degenerate points in the energy spectrum are consistent with the nielsen - ninomiya theorem @xcite , which states that weyl points must occur in pair(s ) with opposite helicity in a lattice model . near these points ,
the hamiltonian is linearized as @xmath279,\label{2dn}~ \mathcal{h}_2(\bold{q})=v_f[\mp q_x\sigma_x\pm q_y\sigma_y],\ ] ] where @xmath280 and @xmath281 .
the hamiltonian has the general form @xmath282 , where @xmath283 form a @xmath33 matrix .
the chirality of the weyl points is given by @xmath284\rb\label{chi}.\ ] ] from eqs .
[ 2dn ] and [ chi ] we obtain @xmath285 for the whole system , which signifies the topological nature of the system . as a massless dirac fermion with chirality
, the system above can be regarded as a 2d weyl semi - metal which hosts a 2d weyl fermion . in fig .
[ band1 ] , opposite chirality is assigned to neighbouring weyl points in cyclic order . moreover , in 2d weyl semimetal there is a chiral edge state propagating in the intermediate region between the weyl points .
this can be explicitly shown by considering a semi - infinite system with periodic boundary conditions along the @xmath103 direction and open boundary condition along the @xmath104 direction @xcite .
the bulk band is shown in fig .
[ 2dege ] along the @xmath103 direction .
the bulk gap vanishes at the locations of the weyl points along the @xmath103-axis consistent with fig .
[ band1 ] .
however , topologically protected flat - band chiral edge states emerge in - between the weyl nodes .
these chiral edge states connect the weyl points with opposite chirality along the @xmath103-direction .
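the chirality formula of eq . [ chi ] , i.e. the sign of the determinant of the velocity matrix at a node , can be evaluated by finite differences . the two linearized 2d nodes below are illustrative stand - ins for eq . [ 2dn ] , since the actual @xmath parameters are not reproduced here :

```python
import numpy as np

def chirality(dvec, k0, eps=1e-5):
    """chirality of a weyl/dirac point at k0: sign of det(d d_i / d k_j),
    with the jacobian evaluated by central finite differences."""
    k0 = np.asarray(k0, dtype=float)
    n = len(k0)
    J = np.zeros((n, n))
    for j in range(n):
        dk = np.zeros(n)
        dk[j] = eps
        J[:, j] = (np.asarray(dvec(k0 + dk)) - np.asarray(dvec(k0 - dk))) / (2 * eps)
    return int(np.sign(np.linalg.det(J)))

# two illustrative 2d nodes with opposite chirality (cf. eq. [2dn]):
h_plus = lambda k: (np.sin(k[0]), np.sin(k[1]))      # d ~ (+qx, +qy) near k = 0
h_minus = lambda k: (-np.sin(k[0]), np.sin(k[1]))    # d ~ (-qx, +qy) near k = 0
```

neighbouring nodes of a lattice model carry opposite signs of det(J) , which is the numerical counterpart of the cyclic chirality assignment in fig . [ band1 ] and of the nielsen - ninomiya constraint that the chiralities sum to zero .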
in this section , we study the possibility of 3d weyl semi - metallic phase in the proposed lattice model .
the goal is to utilize this model to simulate the ti multilayer surface states ( fermi arc ) . in 3 dimensions we must have @xmath286 ;
a 3d weyl semimetal can be obtained with a judicious choice of @xmath223 . in particular , for @xmath257 and @xmath245 , eq .
[ fullti ] has the form @xmath287 , where @xmath288 @xmath289\sigma_z .
\label{3d}\ ] ] it is easy to see that with a fine - tuned @xmath258 , the partial continuum limit of eqs . [ 3dd ] and [ 3d ] is exactly the inner @xmath33 block of eq .
[ par ] and the weyl nodes are located at the same points @xmath290 , with @xmath291 . thus , the porphyrin thin film multilayer lattice model recovers that of ti thin film multilayer in the partial continuum limit . the evolution of the energy bands in the bz are shown in fig . [ band ] . near
the weyl points the hamiltonian is given by @xmath292 where @xmath293 , and @xmath294 .
the hamiltonian still has the general form @xmath282 , only that @xmath283 is now a @xmath295 matrix with components @xmath296 .
the chirality of the weyl points is the same @xmath297 . in this case , the nontrivial topology of eq .
[ fullti ] stems from the fact that eq .
[ fullti ] preserves inversion symmetry but breaks time - reversal symmetry , when @xmath257 .
another judicious choice of @xmath223 is @xmath250 or @xmath251 .
the resulting hamiltonian in this case is given by @xmath298 , but it is different from that of ti thin film .
however , the system still preserves inversion symmetry and breaks time - reversal symmetry ; thus a weyl semi - metallic phase can be obtained .
there are four weyl points in the bz , each pair is located at @xmath299 and @xmath300 , where @xmath301 and @xmath302 $ ] , with @xmath303 .
the energy bands are shown in fig .
[ band2 ] .
the hamiltonian near the weyl points is a combination of eq .
[ 2dn ] and the last term in eq .
[ 3dn ] .
now , we study the surface states evolution of the weyl semi - metallic phases above .
this is an important feature of 3d weyl semimetals @xcite and it is what is observed in most experiments @xcite . in our lattice model
, these states can be solved explicitly for any surface not perpendicular to the @xmath3-axis .
in fact , they are nothing but the edge states of the effective 2d model for fixed values of @xmath40 . we have shown the evolution of the states for @xmath257 in fig .
[ full_ti_edge ] ( a)(f ) , which corresponds exactly to the ultra - thin film of ti multilayer studied above .
the top panel describes the weyl semi - metallic phase bounded by two gapless bulk bands at the location of the weyl points . for @xmath85
, there exist dispersive surface states propagating in the vicinity of the bulk gap only when @xmath87 .
they are gapless at @xmath107 exactly at zero energy . in the bottom panel
we show the insulating phases after the weyl nodes annihilate and a gap opens at @xmath91 or @xmath90 . in this case
, the surface states still capture the appearance of the two insulating phases
3d qah and ni only when @xmath89 .
these results are consistent with our previous analysis and the energy dispersion in fig .
[ band ] . for other choices of @xmath223 such as @xmath304 ,
the situation is a little bit different .
the gapless surface states only occur at @xmath305 , when @xmath306 , but @xmath107 is gapped in this case and we observe that there exist gapped surface states propagating in this vicinity ( not shown ) .
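the statement that the fermi - arc surface states are just the edge states of the effective 2d model at fixed @xmath40 can be checked numerically . the sketch below uses a generic toy weyl model ( not the paper's exact @xmath hamiltonian ) with a @xmath40-dependent mass , and shows that a zero - energy surface state exists only for momenta in - between the two weyl nodes :

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def surface_gap(kz, L=30, m0=-2.0):
    """minimum |E| of a slab (open along x with L sites; ky and kz good
    quantum numbers) for the toy weyl model
        h(k) = sin kx sx + sin ky sy + (m0 + cos kx + cos ky + cos kz) sz,
    which hosts weyl nodes at kz = +-pi/2 for m0 = -2.  a near-zero result
    signals a fermi-arc surface state at this kz."""
    hop = 0.5 * (sz - 1j * sx)               # encodes cos kx sz + sin kx sx
    shift = np.diag(np.ones(L - 1), k=1)     # one-site translation along x
    best = np.inf
    for ky in np.linspace(-np.pi, np.pi, 61):
        onsite = np.sin(ky) * sy + (m0 + np.cos(ky) + np.cos(kz)) * sz
        H = (np.kron(np.eye(L), onsite)
             + np.kron(shift, hop) + np.kron(shift.T, hop.conj().T))
        best = min(best, np.abs(np.linalg.eigvalsh(H)).min())
    return best
```

scanning kz across the surface brillouin zone , the slices between the weyl nodes host a zero - energy edge state ( the fermi arc ) while the slices outside are gapped , which is exactly the picture described for fig . [ full_ti_edge ] .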
in this paper , we have presented a detailed analysis of two thin film models of weyl semimetals .
we showed that in an ultra - thin film of topological insulator multilayer the parameters of the system can change sign as the system transitions from one topological phase to another . in this model
, we presented the low - temperature dependence of the chiral magnetic conductivity , induced by a time - dependent magnetic field .
we showed that the topological phases of the system can , indeed , be captured by the plateaus of the chiral magnetic conductivity .
we also proposed and studied a simple lattice model of porphyrin thin film .
we showed that this model embodies many weyl semi - metallic phases for a specific gauge choice , which acts as a magnetic flux treading the lattice .
we obtained a 2d weyl semi - metallic phase in the @xmath307-@xmath308 space .
we showed that the degeneracy of the weyl nodes is protected by an anti - unitary operator .
our model also realized a 3d weyl semi - metallic phase , which can be regarded as the lattice model for an ultra - thin film of topological insulator ( ti ) multilayer .
thus , it paved the way to numerically study the surface states of the ti multilayer .
we obtained the edge states and the surface states in two and three dimensions respectively , as well as in all the nontrivial topological phases of the ti multilayer in three dimensions .
as the porphyrin thin film is an organic material that can be grown in the laboratory , the proposed model can perhaps be studied experimentally or in 2d optical lattices . as shown in this paper ,
the porphyrin thin film is also a candidate to search for chiral relativistic fermions in two dimensions .
the author would like to thank j. -m .
hou for enlightening discussions .
the author would also like to thank african institute for mathematical sciences for hospitality .
research at perimeter institute is supported by the government of canada through industry canada and by the province of ontario through the ministry of research and innovation .
a. a. burkov and l. balents , _ phys . rev . lett . _ * 107 * , 127205 ( 2011 ) .
a. a. burkov , m. d. hook , and l. balents , _ phys . rev . b _ * 84 * , 235126 ( 2011 ) .
g. b. halasz and l. balents , _ phys . rev . b _ * 85 * , 035103 ( 2012 ) .
a. a. zyuzin , m. d. hook , and a. a. burkov , _ phys . rev . b _ * 83 * , 245428 ( 2011 ) .
a. a. zyuzin , s. wu , and a. a. burkov , _ phys . rev . b _ * 85 * , 165110 ( 2012 ) .
f. r. klinkhamer and g. e. volovik , _ int . j. mod . phys . _ * a20 * , 2795 ( 2005 ) ; g. e. volovik , _ the universe in a helium droplet _ , oxford university press ( 2003 ) .
x. wan et al . , _ phys . rev . b _ * 83 * , 205101 ( 2011 ) .
s. murakami , _ new j. phys . _ * 9 * , 356 ( 2007 ) .
w. witczak - krempa and y. b. kim , _ phys . rev . b _ * 85 * , 045124 ( 2012 ) .
liu , p. ye , and x. -l . qi , _ phys . rev . b _ * 87 * , 235306 ( 2013 ) ; g. y. cho , arxiv:1110.1939 .
et al . , _ phys . rev . b _ * 84 * , 075129 ( 2011 ) .
c. -z . et al . , _ phys . rev . lett . _ * 115 * , 246603 ( 2015 ) .
lu , s. -b . zhang , and s. -q . shen , _ phys . rev . b _ * 92 * , 045203 ( 2015 ) .
p. delplace , j. li , and d. carpentier , _ epl _ * 97 * , 67004 ( 2012 ) .
jiang , _ phys . rev . a _ * 85 * , 033640 ( 2012 ) .
slager et al . , arxiv:1509.07705 ( 2015 ) .
l. lu et al . , _ science _ * 349 * , 622 ( 2015 ) .
et al . , _ science _ * 349 * , 613 ( 2015 ) .
b. q. lv et al . , _ phys . rev . x _ * 5 * , 031013 ( 2015 ) .
b. q. lv et al . , _ nature physics _ * 11 * , 724 ( 2015 ) .
hou , _ phys . rev . lett . _ * 111 * , 130403 ( 2013 ) .
joel yuen - zhou et al . , _ nature materials _ * 13 * , 1026 ( 2014 ) .
h. -z . lu et al . , _ phys . rev . b _ * 81 * , 115407 ( 2010 ) .
h. li et al . , _ phys . rev . b _ * 82 * , 165104 ( 2010 ) .
h. li , l. sheng , and d. y. xing , _ phys . rev . b _ * 85 * , 045118 ( 2012 ) ; _ phys . rev . b _ * 84 * , 035310 ( 2012 ) .
shan , h. -z . lu , and s. -q . shen , _ new j. phys . _ * 12 * , 043048 ( 2010 ) . | we investigate an ultra - thin film of topological insulator ( ti ) multilayer as a model for a three - dimensional ( 3d ) weyl semimetal .
we introduce tunneling parameters @xmath0 , @xmath1 , and @xmath2 , where the former two parameters couple layers of the same thin film at small and large momenta , and the latter parameter couples neighbouring thin film layers along the @xmath3-direction .
the chern number is computed in each topological phase of the system and we find that for @xmath4 , the tunneling parameter @xmath1 changes from positive to negative as the system transitions from the weyl semi - metallic phase to the insulating phases .
we further study the chiral magnetic effect ( cme ) of the system in the presence of a time dependent magnetic field .
we compute the low - temperature dependence of the chiral magnetic conductivity and show that it captures three distinct phases of the system separated by plateaus .
furthermore , we propose and study a 3d lattice model of porphyrin thin film , an organic material known to support topological frenkel exciton edge states .
we show that this model exhibits a 3d weyl semi - metallic phase and also supports a 2d weyl semi - metallic phase .
we further show that this model recovers that of the 3d weyl semimetal in topological insulator thin film multilayer , thus paving the way for simulating such a semimetal in this system .
we obtain the surface states ( fermi arcs ) in the 3d model and the chiral edge states in the 2d model and analyze their topological properties .
keywords : weyl semimetals , quantum anomalous conductivity , chiral magnetic conductivity , topological insulator thin film , porphyrin thin film .
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Wireless Privacy Enhancement Act of
1998''.
SEC. 2. COMMERCE IN ELECTRONIC EAVESDROPPING DEVICES.
(a) Prohibition on Modification.--Section 302(b) of the
Communications Act of 1934 (47 U.S.C. 302a(b)) is amended by inserting
before the period at the end thereof the following: ``, or modify any
such device, equipment, or system in any manner that causes such
device, equipment, or system to fail to comply with such regulations''.
(b) Prohibition on Commerce in Scanning Receivers.--Section 302(d)
of such Act (47 U.S.C. 302a(d)) is amended to read as follows:
``(d) Equipment Authorization Regulations.--
``(1) Privacy protections required.--The Commission shall
prescribe regulations, and review and revise such regulations
as necessary in response to subsequent changes in technology or
behavior, denying equipment authorization (under part 15 of
title 47, Code of Federal Regulations, or any other part of
that title) for any scanning receiver that is capable of--
``(A) receiving transmissions in the frequencies
that are allocated to the domestic cellular radio
telecommunications service or the personal
communications service;
``(B) readily being altered to receive
transmissions in such frequencies;
``(C) being equipped with decoders that--
``(i) convert digital domestic cellular
radio telecommunications service, personal
communications service, or protected
specialized mobile radio service transmissions
to analog voice audio; or
``(ii) convert protected paging service
transmissions to alphanumeric text; or
``(D) being equipped with devices that otherwise
decode encrypted radio transmissions for the purposes
of unauthorized interception.
``(2) Privacy protections for shared frequencies.--The
Commission shall, with respect to scanning receivers capable of
receiving transmissions in frequencies that are used by
commercial mobile services and that are shared by public safety
users, examine methods, and may prescribe such regulations as
may be necessary, to enhance the privacy of users of such
frequencies.
``(3) Tampering prevention.--In prescribing regulations
pursuant to paragraph (1), the Commission shall consider
defining `capable of readily being altered' to require scanning
receivers to be manufactured in a manner that effectively
precludes alteration of equipment features and functions as
necessary to prevent commerce in devices that may be used
unlawfully to intercept or divulge radio communication.
``(4) Warning labels.--In prescribing regulations under
paragraph (1), the Commission shall consider requiring labels
on scanning receivers warning of the prohibitions in Federal
law on intentionally intercepting or divulging radio
communications.
``(5) Definitions.--As used in this subsection, the term
`protected' means secured by an electronic method that is not
published or disclosed except to authorized users, as further
defined by Commission regulation.''.
(c) Implementing Regulations.--Within 90 days after the date of
enactment of this Act, the Federal Communications Commission shall
prescribe amendments to its regulations for the purposes of
implementing the amendments made by this section.
SEC. 3. UNAUTHORIZED INTERCEPTION OR PUBLICATION OF COMMUNICATIONS.
Section 705 of the Communications Act of 1934 (47 U.S.C. 605) is
amended--
(1) in the heading of such section, by inserting
``interception or'' after ``unauthorized'';
(2) in the first sentence of subsection (a), by striking
``Except as authorized by chapter 119, title 18, United States
Code, no person'' and inserting ``No person'';
(3) in the second sentence of subsection (a)--
(A) by inserting ``intentionally'' before
``intercept''; and
(B) by striking ``and divulge'' and inserting ``or
divulge'';
(4) by striking the last sentence of subsection (a) and
inserting the following: ``Nothing in this subsection prohibits
an interception or disclosure of a communication as authorized
by chapter 119 of title 18, United States Code.'';
(5) in subsection (e)(1)--
(A) by striking ``fined not more than $2,000 or'';
and
(B) by inserting ``or fined under title 18, United
States Code,'' after ``6 months,''; and
(6) in subsection (e)(3), by striking ``any violation'' and
inserting ``any receipt, interception, divulgence, publication,
or utilization of any communication in violation'';
(7) in subsection (e)(4), by striking ``any other activity
prohibited by subsection (a)'' and inserting ``any receipt,
interception, divulgence, publication, or utilization of any
communication in violation of subsection (a)''; and
(8) by adding at the end of subsection (e) the following
new paragraph:
``(7) Notwithstanding any other investigative or enforcement
activities of any other Federal agency, the Commission shall
investigate alleged violations of this section and may proceed to
initiate action under section 503 of this Act to impose forfeiture
penalties with respect to such violation upon conclusion of the
Commission's investigation.''.
Passed the House of Representatives March 5, 1998.
Attest:
ROBIN H. CARLE,
Clerk. | Wireless Privacy Enhancement Act of 1998 - Amends the Communications Act of 1934 to prohibit modifying any electronic communication device, equipment, or system in a manner which causes it to fail to comply with regulations governing electronic eavesdropping devices. Directs the Federal Communications Commission (FCC) to prescribe regulations (and review and revise them when necessary in response to changes in technology and behavior) denying equipment authorization for any scanning receiver capable of: (1) receiving transmissions in frequencies allocated to the domestic cellular or personal communications service; (2) being readily altered to receive such transmissions; (3) being equipped with decoders that convert domestic cellular or personal communications service or protected specialized mobile radio service transmissions to analog voice audio, or which convert protected paging service transmissions to alphanumeric text; or (4) being equipped with devices that otherwise decode encrypted radio transmissions for purposes of unauthorized interception. Directs the FCC, with respect to scanning receivers capable of receiving transmissions in frequencies used by commercial mobile services and that are shared by public safety users, to examine methods and prescribe regulations to enhance the privacy of users of such frequencies. Requires tampering prevention measures and warning labels to be considered by the FCC in prescribing such regulations. Applies penalties for the unauthorized publication or use of electronic communications to the unauthorized receipt, intentional interception, or divulgence of any such communication. Directs the FCC to investigate alleged violations and proceed to initiate action to impose forfeiture penalties.
we are interested in producing estimates from a sequence of probability distributions .
the aim is to quickly report these estimates with a user - specified bound on the monte carlo error .
we assume that it is possible to use mcmc methods to draw samples from the target distributions .
for example , the sequence can be the posterior distributions of parameters from a bayesian model as additional data becomes available , with the aim of reporting the posterior means with the variance of the monte carlo error being less than @xmath0 .
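as a concrete toy instance of such a sequence of posteriors , consider a conjugate normal model with known observation variance ; the class below is an illustrative sketch ( the model , names and prior values are our own assumptions , not taken from the paper ) :

```python
class NormalMeanPosterior:
    # conjugate model: data ~ normal(theta, s2) with s2 known,
    # prior theta ~ normal(m0, v0); each batch update yields the
    # next posterior in the sequence of target distributions
    def __init__(self, m0=0.0, v0=100.0, s2=1.0):
        self.m, self.v, self.s2 = m0, v0, s2

    def update(self, batch):
        # batches may have varying sizes, as in the framework
        for y in batch:
            prec = 1.0 / self.v + 1.0 / self.s2
            self.m = (self.m / self.v + y / self.s2) / prec
            self.v = 1.0 / prec
        return self.m  # current posterior mean to report
```

here the posterior is available in closed form , whereas the system described in this paper targets sequences where mcmc sampling is needed ; the point is only that each batch of data defines the next target distribution .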
we present a general system that addresses this problem .
our system involves saving the samples produced from the mcmc sampler in a database .
the samples are updated each time there is a change of sample space .
the update involves reweighting or transitioning the samples , depending on whether the sample space changes or not . in order to control the accuracy of the estimates ,
the samples in the database are maintained .
this maintenance involves increasing or decreasing the number of samples in the database .
this maintenance also involves monitoring the quality of the samples using their effective sample size .
see table [ tab : control_variates ] for a summary of the control variables .
another feature of our system is that the mcmc sampler is paused whenever the estimate is accurate enough .
the mcmc sampler can later be resumed if a more accurate estimate is required .
therefore , it may be the case that no new samples are generated for some targets .
hence the system is efficient , as it reuses samples and only generates new samples when necessary .
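the maintenance loop can be sketched as follows ( an illustrative stdlib - python sketch under our own naming ; the real system stores samples in a database and uses a resumable mcmc sampler , for which `draw` merely stands in ) :

```python
import random

def ess(weights):
    # effective sample size of the weights: (sum w)^2 / (sum w^2)
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return s1 * s1 / s2

def weighted_mean(samples, weights):
    return sum(w * x for w, x in zip(weights, samples)) / sum(weights)

def mc_variance(samples, weights):
    # crude monte carlo variance of the weighted-mean estimate:
    # weighted sample variance divided by the effective sample size
    m = weighted_mean(samples, weights)
    var = sum(w * (x - m) ** 2 for w, x in zip(weights, samples)) / sum(weights)
    return var / ess(weights)

def report_mean(draw, samples, weights, eps2):
    # grow the sample store until the estimate is accurate enough,
    # then pause; `draw` stands in for one step of a resumable sampler
    while mc_variance(samples, weights) > eps2:
        samples.append(draw())
        weights.append(1.0)
    return weighted_mean(samples, weights)
```

`report_mean` pauses as soon as the estimated monte carlo variance of the weighted mean drops below the user - specified bound `eps2` , and can be called again later if a tighter bound is requested .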
our approach has similar steps to those used in sequential monte carlo ( smc ) methods @xcite , such as an update ( or transition ) step and re - weighting of the samples . despite the similarities , smc methods are unable to achieve the desired aims considered in this paper .
specifically , even though smc methods are able to produce estimates from a sequence of distributions , it is unclear how to control the accuracy of this estimate without restarting the whole procedure .
for example , consider the simulations in @xcite where the bootstrap particle filter , a particular smc method , is introduced . in these simulations
the posterior mean is reported with the interval between @xmath1 and @xmath2 percentile points .
as these percentile points are fixed , there is no way to reduce the length of the interval .
the only hope of reducing the interval is to rerun the particle filter with more particles , although there is no guarantee .
this conflicts with the aim of reporting the estimates quickly . in practice ,
most smc methods are concerned with models where only one observation is revealed at a time ( see simulations in e.g. @xcite , @xcite , @xcite ) .
our framework allows for observations to be revealed in batches of varying sizes ; see the application presented in [ sec : football ] .
n. metropolis , a. w. rosenbluth , m. n. rosenbluth , a. h. teller , and e. teller .
equation of state calculations by fast computing machines . _ the journal of chemical physics _ , 21 ( 6 ) : 1087 - 1092 , 1953 . | a system to update estimates from a sequence of probability distributions is presented .
the aim of the system is to quickly produce estimates with a user - specified bound on the monte carlo error .
the estimates are based upon weighted samples stored in a database .
the stored samples are maintained such that the accuracy of the estimates and quality of the samples is satisfactory .
this maintenance involves varying the number of samples in the database and updating their weights .
new samples are generated , when required , by a markov chain monte carlo algorithm .
the system is demonstrated using a football league model that is used to predict the end of season table .
correctness of the estimates and their accuracy is shown in a simulation using a linear gaussian model .
* key words : * importance sampling ; markov chain monte carlo methods ; monte carlo techniques ; streaming data ; sports modelling |
research in mimo radar has been growing as evidenced by an increasing body of literature @xcite-@xcite .
generally speaking , mimo radar systems employ multiple antennas to transmit multiple waveforms and engage in joint processing of the received echoes from the target .
two main mimo radar architectures have evolved : with colocated antennas and with distributed antennas .
mimo radar with colocated antennas makes use of waveform diversity @xcite , while mimo radar with distributed antennas takes advantage of the spatial diversity supported by the system configuration @xcite .
mimo radar systems have been shown to offer considerable advantages over traditional radars in various aspects of radar operation such as the detection of slow moving targets @xcite , the ability to identify and separate multiple targets @xcite , and in the estimation of target parameters such as direction - of - arrival ( doa ) @xcite , and range - based target localization @xcite .
in particular , @xcite studies target localization with mimo radar systems utilizing sensors distributed over a wide area .
conventional localization techniques include time - of - arrival ( toa ) , time - difference - of - arrival ( tdoa ) , and direction - of - arrival ( doa ) based schemes .
mimo radar systems with colocated antennas can perform doa estimation of targets in the far - field , in which case the received signal has a planar wavefront . in this class of systems
, extensive research has focused on waveform optimization . in @xcite the signal vector transmitted by a mimo radar system is designed to minimize the cross - correlation of the signals bounced from various targets to improve the parameter estimation accuracy in multiple target schemes .
some of the waveform optimization techniques suggested in @xcite are based on the cramer - rao lower bound ( crlb ) matrix @xcite,@xcite .
the crlb is known to provide a tight bound on parameter estimation for high signal - to - noise ratio ( snr ) .
several design criteria are considered , such as minimizing the trace , determinant , and the largest eigenvalue of the crlb matrix , concluding that minimizing the trace of the crlb gives a good overall performance in terms of lowering the crlb . in @xcite , a crlb evaluation of the achievable angular accuracy
is derived for linear arrays with orthogonal signals .
the use of orthogonal signals is shown to provide better accuracy than correlated signals . for low snr scenarios ,
the barankin bound is derived in @xcite , demonstrating that the use of orthogonal signals results in a lower snr threshold for transitioning into the region of higher estimation error .
mimo radar systems with widely spread antennas take advantage of the geographical spread of the deployed sensors .
the multiple propagation paths , created by the transmitted waveforms and echoes from scatterers in their paths support target localization through either direct or indirect multilateration . with direct multilateration ,
the observations collected by the sensors are jointly processed to produce the localization estimate . with indirect multilateration ,
the toas are estimated first , and the target location is subsequently estimated from them . the observations and processing can also be classified as either non - coherent or coherent .
the distinction between the two modes relies on the need for mere time synchronization between the transmitting and receiving radars in the non - coherent case , versus the need for both time and phase synchronization in the coherent case .
note that our coherent / non - coherent terminology is limited to the processing for localization .
thus , a transmitted signal may have in - phase and quadrature components , yet the localization processing is non - coherent if it utilizes only information in the signal envelope . in the sequel , we evaluate the performance of localization utilizing both coherent and non - coherent processing .
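indirect multilateration from estimated toas can be sketched as a nonlinear least - squares fit ; the grid search below is an illustrative stand - in for a proper solver ( names , geometry and resolution are our own assumptions ) :

```python
import math

C = 3.0e8  # assumed propagation speed (m/s)

def bistatic_delay(tx, rx, p):
    # time for a signal to travel transmitter -> target -> receiver
    return (math.dist(tx, p) + math.dist(p, rx)) / C

def locate_indirect(txs, rxs, toas, span=1000.0, step=20.0):
    # indirect multilateration: choose the grid point whose predicted
    # bistatic delays best match the estimated toas (least squares)
    best, best_cost = None, float("inf")
    x = -span
    while x <= span:
        y = -span
        while y <= span:
            cost, i = 0.0, 0
            for tx in txs:
                for rx in rxs:
                    r = bistatic_delay(tx, rx, (x, y)) - toas[i]
                    cost += r * r
                    i += 1
            if cost < best_cost:
                best, best_cost = (x, y), cost
            y += step
        x += step
    return best
```

in practice a gauss - newton refinement around the best grid point would replace the exhaustive search ; the sketch only illustrates how the toas from every transmitter / receiver pair jointly pin down the target position .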
mimo radar systems belong to the class of active localization systems , where the signal usually travels a round trip , i.e. the signal transmitted by one sensor in a radar system is reflected by the target and measured by the same or a different sensor .
traditional single - antenna radar systems , performing active range - based measurements , are well known in literature @xcite-@xcite .
the target range is computed from the time it takes for the transmitted signal to get to the target plus the travelling time of the reflected signal back to the sensor .
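in code , the round - trip relation is just a scaling ( a trivial sketch ; the constant and name are ours ) :

```python
C = 3.0e8  # speed of light (m/s)

def range_from_round_trip(t_round: float) -> float:
    # the signal travels to the target and back, so range = c * t / 2
    return C * t_round / 2.0
```

it follows directly that the standard deviation of the range estimate is c / 2 times the standard deviation of the delay estimate , which is why delay - estimation mse maps straight onto range mse .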
the range estimation accuracy is directly proportional to the mean squared error ( mse ) of the time delay estimation and is shown to be inversely proportional to the signal effective bandwidth @xcite . a first study of the localization accuracy capability of mimo radar systems is provided in @xcite , where the fisher information matrix ( fim ) is derived for the case of orthogonal signals with coherent processing and widely separated antennas .
the crlb is analyzed numerically , pointing out the dependency of the accuracy on the signal carrier frequency in the coherent case , and its reliance on the relative locations of the target and sensors . in @xcite , it is observed that the crlb is a function of the number of transmitting and receiving sensors ; however , an analytical relation is not developed .
the high accuracy capability of coherent processing is illustrated by the use of the ambiguity function ( af ) .
active range - based target localization techniques are also used in multistatic radar systems , proposed in @xcite .
the toa of a signal transmitted by a single transmit radar , reflected by the target and received at multiple receive antennas is used in the localization process .
it is observed that increasing the number of sensors improves localization performance , yet an exact relation is not specified .
this paper addresses deficiencies in the literature by obtaining closed - form expressions of the crlb for both coherent and non - coherent cases .
geolocation techniques have been the subject of extensive research .
geolocation belongs to the class of passive localization systems , where the signal travels one - way .
since these passive measurement systems employ multiple sensors , further evaluation of existing results for geolocation systems might prove insightful for the active case . in wireless communication ,
passive measurements are used by multiple base stations for localization of a radiating mobile phone .
the localization accuracy performance is evaluated in @xcite .
it is shown that the localization accuracy is inversely proportional to the signal effective bandwidth , as in the active localization case . moreover ,
the estimation accuracy is shown to depend on the sensor / base station locations . in navigation systems ,
the target makes use of time synchronized transmission from multiple global positioning systems ( gps ) to establish its location . in @xcite ,
the relation between the transmitting sensors location and the target localization performance is analyzed .
gdop ( geometric dilution of precision ) plots are used to demonstrate the dependency of the attainable accuracy on the location of the gps systems with respect to the target . in an optimal setting of the gps systems relative to the target position
, the best achievable accuracy is shown to be inversely proportional to the square root of the number of participating gps . in the sequel
, we apply the gdop metric to evaluate the localization performance of mimo radar .
the main contributions of this paper are :

1 . the crlb of the target localization estimation error is developed for the general case of mimo radar with multiple waveforms transmission .

2 . the analytical expressions of the crlb are derived for the case of orthogonal waveforms with non - coherent and coherent observations . the non - coherent case is used as a benchmark for evaluating the performance of the system with coherent observations .

3 . it is shown that the crlb expressions for both the non - coherent and coherent cases can be factored into two terms : a term incorporating the effect of bandwidth and snr , and another term accounting for the effect of sensor placement . the crlb on the standard deviation of the localization estimate with non - coherent observations is shown to be inversely proportional to the signals ' averaged effective bandwidth . dramatically higher accuracy can be obtained from processing coherent observations ; in this case , the crlb is inversely proportional to the carrier frequency . this gain is due to the exploitation of phase information , and is referred to as the _ coherency gain _ .

4 . formulating a convex optimization problem , it is shown that symmetric deployment of transmitting and receiving sensors around a target is optimal with respect to minimizing the crlb . the closed - form solution of the optimization problem also reveals that optimally placed @xmath0 transmitters and @xmath1 receivers reduce the crlb on the variance of the estimate by a factor @xmath2 ; this is referred to as the _ mimo radar gain _ .

5 . a closed - form solution is developed for the blue of target localization for coherent mimo radars , along with a comprehensive evaluation of the estimator 's mse . this estimator provides insight into the relation between sensor locations , target location , and localization accuracy through the use of the gdop metric . contour maps of the gdop , presented in this paper , provide a clear understanding of the mutual relation between a given deployment of sensors and the achievable accuracy at various target locations .
the rest of the paper is organized as follows : the system model is introduced in section [ section : mimoradarconcept ] . in section [ section : crlb ] , the crlb is derived for the general case of multiple transmitted waveforms .
analytical expressions are obtained for the cases of non - coherent and coherent observations with orthogonal signals .
optimization of the crlb as a function of sensor location is provided in section [ section : optimizationoverall ] .
the performance of two localization estimators is evaluated in section [ section : targetest ] . to establish a better understanding of the relations between the radar geographical spread and the target location ,
the gdop metric is introduced in section [ section : gdop ] .
finally , section [ section : conclusions ] concludes the paper . a comment on notation : vectors are denoted by lower - case bold , while matrices use upper - case bold letters . the superscripts t and h denote the transpose and hermitian operators , respectively .
complex conjugate is denoted @xmath3 .
points in the x - y plane are denoted in upper - case @xmath4
we consider a widely distributed mimo radar system with @xmath0 transmitting radars and @xmath1 receiving radars .
the receiving radars may be colocated with the transmitting ones or individually positioned . the transmitting and receiving radars
are located in a two dimensional plane @xmath5 .
the @xmath0 transmitters are arbitrarily located at coordinates @xmath6 @xmath7 , and the @xmath1 receivers are similarly arbitrarily located at coordinates @xmath8 @xmath9 the set of transmitted waveforms in lowpass equivalent form is @xmath10 @xmath11 where @xmath12 and @xmath13 is the common duration of all transmitted waveforms .
the power of the transmitted waveforms is normalized such that the aggregate power transmitted by the sensors is constant , irrespective of the number of transmit sensors . to simplify the notation ,
the signal power term is embedded in the noise variance term such that the snr at the transmitter , denoted snr@xmath14 and defined as the transmitted power by a sensor divided by the noise power at a receiving sensor , is set at a desired level .
let all transmitted waveforms be narrowband signals with individual effective bandwidth @xmath15 , defined as @xmath16 , where the integration is over the range of frequencies with non - zero signal content @xmath17 @xcite .
we further define the signals ' averaged effective bandwidth , or rms bandwidth , as @xmath18 and the normalized bandwidth terms as @xmath19 .
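a discrete approximation of the effective ( rms ) bandwidth can be computed from the signal spectrum ; the sketch below uses a naive dft and the common normalization by the total spectral energy — an assumption on our part , since the paper 's exact definition appears in the ( unrendered ) formula :

```python
import cmath

def dft(x):
    # naive discrete fourier transform, adequate for a short sketch
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def effective_bandwidth(samples, fs):
    # beta^2 ~= sum f^2 |S(f)|^2 / sum |S(f)|^2, with the frequency
    # axis wrapped onto (-fs/2, fs/2] so the spectrum is centred at 0
    n = len(samples)
    spec = dft(samples)
    num = den = 0.0
    for k, s in enumerate(spec):
        f = (k - n) * fs / n if k > n // 2 else k * fs / n
        p = abs(s) ** 2
        num += f * f * p
        den += p
    return (num / den) ** 0.5
```

a pure complex tone at 8 hz sampled at 64 hz yields an effective bandwidth of about 8 hz , matching the intuition that the measure reports where the spectral energy sits .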
the signals are narrowband in the sense that for a carrier frequency of @xmath20 the narrowband signal assumption implies @xmath21 @xmath22 and @xmath23 @xmath24 . the target model developed here generalizes the model in @xcite to a near - field scenario and distributed sensors . in skolnik 's model
@xcite , the returns of individual point scatterers have fixed amplitude and phase , and are independent of angle . for a moving target , the composite return fluctuates in amplitude and phase due to the relative motion of the scatterers .
when the motion is slow , and the composite target return is assumed to be constant over the observation time , the target conforms to the classical swerling case i model .
we now proceed to generalize this model to a target observed by a mimo radar with distributed sensors .
assume an extended target , composed of a collection of @xmath25 individual point scatterers located at coordinates @xmath26 @xmath27 .
the amplitudes @xmath28 of the point scatterers are assumed to be mutually independent .
the pathloss and phase of a signal reflected by a scatterer , when measured with respect to a transmitted signal @xmath10 are functions of the path transmitter - scatterer - receiver .
let @xmath29 denote the propagation time from transmitter @xmath30 to scatterer @xmath31 to receiver @xmath32 @xmath33 where @xmath34 is the speed of light .
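the propagation time from transmitter to scatterer to receiver defined above , and the carrier phase it induces , can be sketched directly ( the coordinate values and names below are illustrative ) :

```python
import math

C = 3.0e8  # speed of light (m/s)

def propagation_time(tx, scatterer, rx):
    # one-way path transmitter -> scatterer -> receiver, per the model
    return (math.dist(tx, scatterer) + math.dist(scatterer, rx)) / C

def carrier_phase(tau, fc):
    # phase rotation accumulated over the path at carrier frequency fc;
    # this is the term that coherent processing exploits
    return (2.0 * math.pi * fc * tau) % (2.0 * math.pi)
```

the phase term is what coherent processing exploits ; non - coherent processing uses only the delay information carried by the signal envelope .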
our signal model assumes that the sensors are located such that variations in the signal strength due to different target to sensor distances can be neglected , i.e. , the model accounts for the effect of the sensors / target localizations only through time delays ( or phase shifts ) of the signals .
the common path loss term is embedded in @xmath35 the baseband representation for the signal received at sensor @xmath36 is : @xmath37 where the term @xmath38is the phase of a signal transmitted by sensor @xmath30 reflected by scatterer @xmath39 located at @xmath40 and received by sensor @xmath41 phases are measured relative to a common phase reference assumed to be available at the transmitters and receivers .
the term @xmath42 is circularly symmetric , zero - mean , complex gaussian noise , spatially and temporally white with autocorrelation function @xmath43 .
the noise term is set @xmath44snr@xmath14 , where snr@xmath14 is measured at the transmitter .
snr@xmath14 is normalized such that the aggregate transmitted power is independent of the number of transmitting sensors .
the snr at the receiver , due to a scatterer with amplitude @xmath28 , is snr@xmath45snr@xmath46 . signals reflected from the target combine at each of the receive antennas .
for example , the resultant signal at receive antenna @xmath36 is given by @xmath47 where @xmath48 and @xmath49 are respectively the amplitude and phase given by @xmath50 ^{1/2},\ ] ] and @xmath51 in obtaining ( [ e : target ] ) , we invoked the narrowband assumption @xmath52 , for all scatterers , namely that the change in the lowpass equivalent signals across the target is negligible .
it follows from this discussion that the extended target is represented by a point scatterer of amplitude @xmath48 and time delays @xmath53 where all the quantities are unknown . while this target model is completely adequate for our needs , it is possible to extend it slightly , at little cost .
assume a constant time offset error @xmath54 at the receivers .
further , assume that the error is small such that it does not impact the signal envelope , but it does impact the phase . then we can write the time delays as @xmath55 for some location @xmath56 . the target model ( [ e : target ] ) can now be expressed as @xmath57 , where @xmath58 , and the narrowband assumption was invoked once more . the composite target of ( [ e : target ] ) is then equivalent to a point scatterer of complex amplitude @xmath59 and time delays @xmath60 . for simplicity , the following notation is used : @xmath61 .
the signal model ( [ e : r ] ) becomes @xmath62 , and we define the vector of received signals as @xmath63 for later use .
the radar system s goal is to estimate the target location @xmath4 the target location can be estimated directly , for example by formulating the maximum likelihood estimate ( mle ) associated with ( [ e : r1 ] ) .
alternatively , an indirect method is to estimate first the time delays @xmath64 subsequently , the target location can be computed from the solution to a set of equations of the form ( [ e : tau_vq ] ) , viz .
, @xmath65 . the unknown complex amplitude @xmath59 is treated as a nuisance parameter in the estimation problem .
let the unknown target location be @xmath66 , the unknown time delays @xmath67 , and the unknown target complex amplitude @xmath68 , where the notation specifies the real and imaginary components of @xmath69 . we refer to the processing for estimating the target location as _ non - coherent _ or _
coherent_. the received signal introduced in ( [ e : r1 ] ) is adequate for the coherent case , where the transmitting and receiving radars are assumed to be both time and phase synchronized . as such , the time delays information , @xmath67 , embedded in the phase terms may be exploited in the estimation process by matching both amplitude and phase at the receiver end . in contrast ,
non - coherent processing estimates the time delays @xmath67 from variations in the envelope of the transmitted signals @xmath70 a common time reference is required for all the sensors in the system . in this case , the transmitting radars are not phase synchronized and therefore the received signal model is of the form : @xmath71 where the complex amplitude terms @xmath72 integrate the effect of the phase offsets between the transmitting and receiving sources and the target impact on the phase and amplitude of the transmitted signals .
these elements are treated as unknown complex amplitudes , where @xmath73 .
we define the following vector notations :

$$\mathbf{\alpha} = [ \alpha_{11} , \alpha_{12} , \ldots , \alpha_{\ell k} , \ldots , \alpha_{mn} ]^{t} , \qquad \mathbf{\alpha}^{r} = \operatorname{re}\left( \mathbf{\alpha}\right) , \qquad \mathbf{\alpha}^{i} = \operatorname{im}\left( \mathbf{\alpha}\right) , \label{eq : alpha_nc}$$

where @xmath75 and @xmath76 denote the real and imaginary parts of a complex - valued vector / matrix .
the crlb provides a lower bound on the mse of any unbiased estimator of an unknown parameter vector .
given a vector parameter @xmath77 constituted of elements @xmath78 , the unbiased estimate @xmath79 satisfies the following inequality @xcite : @xmath80 , i=1,2,\ldots \label{eq : crlbbasic} where @xmath81 are the diagonal elements of the fisher information matrix ( fim ) @xmath82 .
the fim is given by @xmath83 , \label{eq : fimdef} where @xmath84 is the joint probability density function ( pdf ) of @xmath85 conditioned on @xmath86 .
the crlb is then defined as @xmath87 ^{-1} . \label{eq : crlb_fim} sometimes , it is easier to compute the fim with respect to another vector @xmath88 and apply the chain rule to recover the original @xmath89 . in our case , since the received signals in both ( [ e : r1 ] ) and ( [ e : r1nc ] ) are functions of the time delays @xmath90 and the complex amplitudes , by the chain rule , @xmath82 can be expressed in the alternative form @xcite : @xmath91 , where @xmath92 is a vector of unknown parameters that incorporates the time delays .
matrix @xmath93 is the fim with respect to @xmath88 , and matrix @xmath94 is the jacobian @xmath95 . from this point onward , we develop the crlb for the cases of non - coherent and coherent processing separately . for non - coherent processing
, there is no common phase reference among the sensors .
consequently , the complex - valued terms @xmath96 incorporate phase offsets among sensors and the effect of the target on the phase and complex amplitude , following the definitions in ( [ eq : alpha_nc ] ) .
the vector of unknown parameters is defined as @xmath97 ^{t} . \label{eq : tetha_nc} the process of localization by non - coherent processing depends on time delay estimation of the signals observed at the receive sensors and also on the location of the sensors . to gain insight into how each of these factors affects the performance of localization
, we utilize the form of the fim given in ( [ eq : chainrule ] ) .
we define the vector of unknown parameters @xmath98 ^{t} , \label{eq : psi_nc} where @xmath99 is given in ( [ eq : alpha_nc ] ) and @xmath100 @xmath101 .
we are interested only in the estimation of @xmath102 and @xmath103 , while @xmath104 @xmath105 act as nuisance parameters in the estimation problem . given a set of known transmitted waveforms @xmath106 parameterized by the unknown time delays @xmath90 , which in turn are a function of the unknown target location @xmath107 , the conditional joint pdf of the observations at the receive sensors , given by ( [ e : r1nc ] ) , is @xmath108 . the matrix @xmath109 for ( [ eq : tetha_nc ] ) and ( [ eq : psi_nc ] ) , to be used in ( [ eq : chainrule ] ) , is defined as

$$\left[ \begin{array}{ccc} \frac{\partial}{\partial x}\mathbf{\tau}^{t} & \frac{\partial}{\partial x}\left( \mathbf{\alpha}^{r}\right)^{t} & \frac{\partial}{\partial x}\left( \mathbf{\alpha}^{i}\right)^{t}\\ \frac{\partial}{\partial y}\mathbf{\tau}^{t} & \frac{\partial}{\partial y}\left( \mathbf{\alpha}^{r}\right)^{t} & \frac{\partial}{\partial y}\left( \mathbf{\alpha}^{i}\right)^{t}\\ \frac{\partial\mathbf{\tau}}{\partial\mathbf{\alpha}^{r}} & \frac{\partial\mathbf{\alpha}^{r}}{\partial\mathbf{\alpha}^{r}} & \frac{\partial\mathbf{\alpha}^{i}}{\partial\mathbf{\alpha}^{r}}\\ \frac{\partial\mathbf{\tau}}{\partial\mathbf{\alpha}^{i}} & \frac{\partial\mathbf{\alpha}^{r}}{\partial\mathbf{\alpha}^{i}} & \frac{\partial\mathbf{\alpha}^{i}}{\partial\mathbf{\alpha}^{i}} \end{array} \right]_{\left( 2mn+2\right) \times 3mn} , \label{eq : p_nc}$$

where @xmath111 is standard notation for taking the derivative with respect to @xmath102 of each element of @xmath112 , and @xmath113 denotes the jacobian of the vector @xmath114 with respect to the vector @xmath115 . the subscript denotes the matrix dimensions . it is not too difficult to show that , using ( [ e : td ] ) , the matrix @xmath109 can be expressed in the form

$$\left[ \begin{array}{cc} \mathbf{h}_{2\times mn} & \mathbf{0}_{2\times 2mn}\\ \mathbf{0}_{2mn\times mn} & \mathbf{i}_{2mn\times 2mn} \end{array} \right] , \label{eq : pvalue}$$

where @xmath117 is the all - zero matrix , @xmath118 is the identity matrix , and @xmath119 incorporates the derivatives of the time delays in ( [ e : td ] ) with respect to the @xmath102 and @xmath103 parameters . these derivatives result in cosine and sine functions of the angles the transmitting and receiving radars create with respect to the target , incorporating information on the sensor and target locations as follows :

$$\mathbf{h} = \left[ \begin{array}{cccc} a_{tx_{1}}+a_{rx_{1}} & a_{tx_{1}}+a_{rx_{2}} & \cdots & a_{tx_{m}}+a_{rx_{n}}\\ b_{tx_{1}}+b_{rx_{1}} & b_{tx_{1}}+b_{rx_{2}} & \cdots & b_{tx_{m}}+b_{rx_{n}} \end{array} \right] . \label{eq : hdef}$$

the elements of @xmath121 are given by

$$\begin{array}{ccc} a_{tx_{k}}=\cos\phi_{k} , & b_{tx_{k}}=\sin\phi_{k} , & k=1,\ldots,m ,\\ a_{rx_{\ell}}=\cos\varphi_{\ell} , & b_{rx_{\ell}}=\sin\varphi_{\ell} , & \ell=1,\ldots,n , \end{array}$$

with

$$\phi_{k}=\tan^{-1}\left( \frac{y - y_{tk}}{x - x_{tk}}\right) , \qquad \varphi_{\ell}=\tan^{-1}\left( \frac{y - y_{r\ell}}{x - x_{r\ell}}\right) , $$

where the phase @xmath122 is the bearing angle of the transmitting sensor @xmath123 to the target measured with respect to the @xmath102 axis , and the phase @xmath124 is the bearing angle of the receiving radar @xmath36 to the target measured with respect to the @xmath102 axis
. see illustration in figure [ fig:1 ] . for later use
, we apply the following definitions : @xmath125 , @xmath126 , @xmath127 , @xmath128 , @xmath129 and @xmath130 .
an expression for the fim @xmath131 is derived in appendix [ section : appendixa ] , yielding

$$\left[ \begin{array}{cc} \mathbf{s}_{nc} & \mathbf{v}_{nc}\\ \mathbf{v}_{nc}^{t} & \mathbf{\lambda}_{\alpha} \end{array} \right]_{\left( 3mn\right) \times \left( 3mn\right)} , \label{eq : fimgennc}$$

with the block matrices @xmath133 , @xmath134 and @xmath135 defined in appendix [ section : appendixa ] in ( [ e : snc ] ) , ( [ e : lambdanc ] ) , and ( [ e : vnc ] ) , respectively . in order to determine the value of @xmath136 , we use ( [ eq : fimgennc ] ) and ( [ eq : pvalue ] ) in ( [ eq : chainrule ] ) to obtain the following crlb matrix :

$$\left[ \begin{array}{cc} \mathbf{h}\mathbf{s}_{nc}\mathbf{h}^{t} & \mathbf{h}\mathbf{v}_{nc}\\ \mathbf{v}_{nc}^{t}\mathbf{h}^{t} & \mathbf{\lambda}_{\alpha} \end{array} \right]^{-1} . \label{eq : fimlocnc}$$

the crlb matrix is related to the sensor and target locations through the matrix @xmath138 , and to the received waveforms ' correlation functions and their derivatives through the @xmath139 and @xmath135 matrices .
when the waveforms are orthogonal , ( [ e : snc ] ) , ( [ e : lambdanc ] ) , and ( [ e : vnc ] ) simplify to ( [ eq : app_a9 ] ) in appendix [ section : appendixa ] .
this simplification makes it possible to compute the crlb ( [ eq : fimlocnc ] ) in closed form .
we perform this calculation next . while the crlb expresses the lower bound on the variance of the estimate of @xmath140 ^{t}$ ] , we are really interested only in the estimation of @xmath141 and @xmath142 the amplitude terms @xmath143 and @xmath144 serve as nuisance parameters . for the variances of the estimates of @xmath102 and @xmath145
it is sufficient to derive the @xmath146 upper left submatrix @xmath147 _ { 2\times2}=\left [ \left ( \mathbf{j}\left ( \mathbf{\theta}_{nc}\right )
\right ) ^{-1}\right ] _
{ 2\times2}.$ ] the crlb submatrix @xmath147 _ { 2\times2}$ ] for target localization in the _ non - coherent _ case with orthogonal signals is:@xmath148 _ { 2\times2}{\normalsize = } \frac{c^{2}}{2/\sigma_{w}^{2}}\left ( \mathbf{h\mathbf{s}}_{nc}\mathbf{h}^{t}\right ) ^{-1}.\label{eq : crlbexpnoncoh}\ ] ] from ( [ eq : app_a9 ] ) in appendix [ section : appendixa ] , we have for terms of ( [ eq : fimlocnc]):@xmath149 , \label{eq : fimncorto}\\ \mathbf{v}_{nc } & = \mathbf{0},\nonumber\\ \mathbf{\lambda}_{\alpha } & = \mathbf{i}_{2mn\times2mn}.\nonumber\end{aligned}\ ] ] in ( [ eq : fimncorto ] ) , @xmath150 denotes a diagonal matrix with the elements of vector @xmath99 . matrix @xmath151
\right ) $ ] , with @xmath152 denoting the normalized elements @xmath153 , and @xmath154 ^{t}$ ] , @xmath155 . using ( [ eq : fimncorto ] ) in ( [ eq : fimlocnc ] ) , it is easy to see that @xmath156 _ { 2\times2 } & = \frac{c^{2}}{2/\sigma_{w}^{2}}\left ( \mathbf{h\mathbf{s}}_{nc}\mathbf{h}^{t}\right ) ^{-1}\label{eq : crlbnoncohsubmatrix}\\ & = \frac{\eta_{nc}}{g_{x_{nc}}g_{y_{nc}}-h_{nc}^{2}}\left [ \begin{array } [ c]{cc}g_{x_{nc } } & h_{nc}\\ h_{nc } & g_{y_{nc}}\end{array } \right ] , \nonumber\end{aligned}\ ] ] where:@xmath157{c}\eta_{nc}=\frac{c^{2}}{8\pi^{2}\beta^{2}/\sigma_{w}^{2}},\\ g_{x_{nc}}=\overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum}}\left\vert \alpha_{\ell k}\right\vert ^{2}\beta_{r_{k}}^{2}\left ( b_{tx_{k}}+b_{rx_{\ell}}\right ) ^{2},\\ g_{y_{nc}}=\overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum}}\left\vert \alpha_{\ell k}\right\vert ^{2}\beta_{r_{k}}^{2}\left ( a_{tx_{k}}+a_{rx_{\ell}}\right ) ^{2},\\ h_{nc}=-\overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum}}\left\vert \alpha_{\ell k}\right\vert ^{2}\beta_{r_{k}}^{2}\left ( a_{tx_{k}}+a_{rx_{\ell}}\right ) \left ( b_{tx_{k}}+b_{rx_{\ell}}\right ) . \end{array } \label{eq : crlbnccoef}\ ] ] this concludes the proof of the proposition .
it follows that the lower bound on the variance for estimating the @xmath102 coordinate of the target is given by @xmath158 similarly , for the @xmath103 coordinate,@xmath159 the terms @xmath160 , @xmath161 , and @xmath162 are summations of @xmath163 , @xmath164 , @xmath165 and @xmath166 terms that represent sine and cosine expressions of the angles @xmath167 and @xmath168 and therefore relate to the radars and target geometric layout .
it is apparent that for the non - coherent case , the lower bounds on the variances ( [ e : varx_nc ] ) and ( [ e : vary_nc ] ) are inversely proportional to the averaged effective bandwidth @xmath169 , and @xmath170 ( see expression for @xmath171 in ( [ eq : crlbnccoef ] ) ) .
it is interesting to note that @xmath171 is actually the crlb for range estimation in a single antenna radar , based on the one - way time delay between the radar and the target ( see for example @xcite ) .
the other terms in ( [ e : varx_nc ] ) and ( [ e : vary_nc ] ) incorporate the effect of the sensors locations .
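to make the structure of ( [ eq : crlbnoncohsubmatrix ] ) and ( [ eq : crlbnccoef ] ) concrete , the following sketch evaluates the non - coherent 2x2 crlb submatrix for an assumed symmetric layout ; the channel gains , effective bandwidths and the scalar @xmath171 are placeholder values , not values from the paper :

```python
import numpy as np

def crlb_noncoherent(phi_tx, varphi_rx, alpha2, beta2, eta_nc):
    # g_x, g_y, h as in (eq. crlbnccoef): weighted sums of squared
    # sine/cosine terms of the transmit and receive bearing angles
    a = np.add.outer(np.cos(phi_tx), np.cos(varphi_rx))   # a_tx_k + a_rx_l
    b = np.add.outer(np.sin(phi_tx), np.sin(varphi_rx))   # b_tx_k + b_rx_l
    w = alpha2 * beta2                                    # |alpha_lk|^2 * beta_rk^2
    gx = np.sum(w * b**2)
    gy = np.sum(w * a**2)
    h = -np.sum(w * a * b)
    # 2x2 crlb submatrix eta/(gx*gy - h^2) * [[gx, h], [h, gy]]
    return eta_nc / (gx * gy - h**2) * np.array([[gx, h], [h, gy]])

# three transmitters and three receivers symmetrically placed around the target
phi = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
crlb = crlb_noncoherent(phi, phi, alpha2=np.ones((3, 3)),
                        beta2=np.ones((3, 3)), eta_nc=1.0)
```

for this symmetric layout the cross term vanishes and the bound is the same in both coordinates , consistent with the geometric interpretation of the g and h terms .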
we recall that in the section on the signal model , we defined the complex amplitude @xmath72 associated with the path transmitter @xmath172 target @xmath173 receiver @xmath41 . in the non - coherent case , the complex amplitude is a nuisance parameter in estimating the target location @xmath174 @xmath103 . in the coherent case ,
the transmitting and receiving radars are assumed to be phase synchronized . by eliminating the phase offsets , the signal model in ( [ e : r1 ] )
applies , and the nuisance parameter role is left to the complex target amplitude @xmath175 .
the coherent approach to localization seeks to exploit the target location information embedded in the phase terms @xmath176 that depend on the delays @xmath177 which in turn are function of the target coordinates @xmath174 @xmath103 .
define the vector of unknown parameters : @xmath178 ^{t}.\label{eq : theta_c}\ ] ] as before , define a second vector of unknown parameters in terms of the time delays @xmath114 ( rather than the target location ) , @xmath179 ^{t},\label{eq : psi_c}\ ] ] to be used in ( [ eq : chainrule ] ) to derive the crlb .
in comparing the coherent case in ( [ eq : psi_c ] ) with the non - coherent counterpart in ( [ eq : psi_nc ] ) , we note that @xmath180 incorporates the vectors @xmath143 and @xmath181 while @xmath182 is a function of the scalars @xmath183 and @xmath184 the reduction in the number of unknown parameters is made possible through the measurement of the phase terms of @xmath143 and @xmath105 . for coherent observations ,
the conditional , joint pdf of the observations at the receive sensors , given by ( [ e : r1 ] ) , is of the form : @xmath185 we follow the same process used in section [ section : crlbnoncoherent ] , to develop the crlb for the coherent case based on the relation in ( [ eq : chainrule ] ) .
the matrix @xmath186 takes the form : @xmath187{cc}\mathbf{h } & \mathbf{0}_{mn\times2}\\ \mathbf{0}_{2\times mn } & \mathbf{i}_{2\times2}\end{array } \right ] _ { 4\times\left ( mn+2\right ) } { \small , } \label{eq : p_c}\ ] ] where matrix @xmath121 has the same form as in ( [ eq : hdef ] ) , since it is independent of the nuisance parameters in both cases .
an expression for the fim matrix , @xmath188 is derived in appendix [ section : appendixb ] , yielding : @xmath189{cc}\mathbf{s}_{c } & \mathbf{v}_{c}\\ \mathbf{v}_{c}^{t } & \mathbf{\lambda}_{\alpha c}\end{array } \right ] _ { \left ( mn+2\right ) \times\left ( mn+2\right ) } , \label{eq : fimgenc}\ ] ] where the submatrices are found in appendix [ section : appendixb ] as follows : @xmath190 in ( [ e : sc ] ) , @xmath191 in ( [ e : lambdac ] ) , and @xmath192 in ( [ e : vc ] ) . the crlb matrix for the coherent case
is then found substituting ( [ eq : p_c ] ) and ( [ eq : fimgenc ] ) in ( [ eq : chainrule ] ) and ( [ eq : crlb_fim ] ) , obtaining : @xmath193{cc}\mathbf{hs}_{c}\mathbf{h}^{t } & \mathbf{hv}_{c}\\ \mathbf{v}_{c}^{t}\mathbf{h}^{t } & \mathbf{\lambda}_{\alpha c}\end{array } \right ] ^{-1}.\label{eq : fimlocc}\ ] ] as in section [ section : crlbnoncoherent ] , we develop the closed form solution to the crlb matrix in ( [ eq : fimlocc ] ) for the case of orthogonal waveforms . since we are interested only in the lower bound on the variances of the estimates of @xmath102 and @xmath103 , the submatrix @xmath194 _ { 2\times2}=\left [ \left ( \mathbf{j}_{c}\left ( \mathbf{\theta } \right ) \right ) ^{-1}\right ] _ { 2\times2}$ ] is derived and evaluated next . the crlb @xmath146 submatrix for the _ coherent _ case and orthogonal waveforms is:@xmath195 _ { 2\times2}{\normalsize = } \frac{c^{2}}{2/\sigma_{w}^{2}}\left ( \mathbf{h\mathbf{s}}_{c}\mathbf{h}^{t}-\mathbf{h\mathbf{v}}_{c}\mathbf{\mathbf{\lambda}}_{\alpha c}^{-1}\mathbf{v}_{c}^{t}\mathbf{h}^{t}\right ) ^{-1}.\label{eq : crlbexpcoh}\ ] ] from ( [ eq : app_b10 ] ) in appendix [ section : appendixb ] we have the values of the matrices @xmath196 @xmath197 and @xmath198 for orthogonal waveforms . using this and
@xmath121 defined in ( [ eq : hdef ] ) in ( [ eq : fimlocc ] ) , the crlb matrix @xmath199 is obtained .
consequently , the submatrix @xmath194 _ { 2\times2 } $ ] is computed in appendix [ section : appendixc ] resulting in the form given in ( [ eq : crlbexpcoh ] ) .
this completes the proof of the proposition . from ( [ eq : crlbexpcoh ] ) and ( [ eq : app_b10 ] ) , it can be shown that @xmath194 _ { 2\times2}$ ] can be expressed as : @xmath195 _ { 2\times2}=\frac{\eta_{c}}{g_{x_{c}}g_{y_{c}}-h_{c}^{2}}\left [ \begin{array } [ c]{cc}g_{x_{c } } & h_{c}\\ h_{c } & g_{y_{c}}\end{array } \right ] , \label{eq : crlbexpcohmm}\ ] ] where the various quantities are as follows : @xmath157{c}\eta_{c}=\frac{c^{2}}{8\pi^{2}f_{c}^{2}\left ( \left\vert \zeta\right\vert ^{2}/\sigma_{w}^{2}\right ) } , \\
g_{x_{c}}=\overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum } } f_{r_{k}}\left ( b_{tx_{k}}+b_{rx_{\ell}}\right ) ^{2}-\frac{1}{mn}\left ( \overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum}}\left ( b_{tx_{k}}+b_{rx_{\ell}}\right ) \right ) ^{2},\\ g_{y_{c}}=\overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum } } f_{r_{k}}\left ( a_{tx_{k}}+a_{rx_{\ell}}\right ) ^{2}-\frac{1}{mn}\left ( \overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum}}\left ( a_{tx_{k}}+a_{rx_{\ell}}\right ) \right ) ^{2},\\ h_{c}=-\overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum}}f_{r_{k}}\left ( a_{tx_{k}}+a_{rx_{\ell}}\right ) \left ( b_{tx_{k}}+b_{rx_{\ell}}\right ) \\
+ \frac{1}{mn}\overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell = 1}{\sum}}\left ( a_{tx_{k}}+a_{rx_{\ell}}\right ) \overset{m}{\underset { k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum}}\left ( b_{tx_{k}}+b_{rx_{\ell}}\right ) . \end{array } \label{eq : crlbccoef}\ ] ] the lower bound on the error variance is provided by the diagonal elements of the @xmath200 _ { 2\times2}$ ] submatrix and are of the form : @xmath201 the terms @xmath202 , @xmath203 , and @xmath204 are summations of @xmath163 , @xmath164 , @xmath165 and @xmath166 that represent sine and cosine expressions of the angles @xmath167 and @xmath205 and therefore relate to the radars and target geometric layout , multiplied by the ratio terms @xmath206 . invoking the narrowband signals assumption @xmath207 it follows that @xmath208 .
these terms have some additional elements when compared with the non - coherent case .
it is apparent that for the coherent case , the variances of the target location estimates in ( [ eq : msec ] ) are inversely proportional to the carrier frequency @xmath209 .
we make the following observations : * the lower bound on the variance in the non - coherent case is inversely proportional to the averaged effective bandwidth @xmath210 . for the coherent case , with narrowband signals , where @xmath207 , the localization accuracy is inversely proportional to the carrier frequency @xmath211 and independent of the signal individual effective bandwidth , due to the use of the phase information across the different paths .
it is apparent that coherent processing offers a target localization precision gain ( i.e. , reduction of the localization root mean - square error ) of the order of @xmath212 , which we refer to as _ coherency gain_. designing the ratio @xmath212 to be in the range 100 - 1000 leads to dramatic gains . * the term @xmath213 in ( [ eq : crlbccoef ] ) is the crlb for range estimation based on one - way time delay with coherent observations for a radar with a single antenna @xcite . *
the crlb terms are strongly dependent on the geographical spread of the radar systems relative to the target location .
this dependency is incorporated in the terms @xmath214 @xmath215 and @xmath216 .
it is apparent from ( [ eq : msec ] ) , ( [ e : varx_nc ] ) and ( [ e : vary_nc ] ) that there is a trade - off between the variances of the target location computed horizontally and vertically .
a set of sensor locations that minimizes the horizontal error , may result in a high vertical error .
for example , spreading the transmitting and receiving radars in an angular range of @xmath217 to @xmath218 radians with respect to the target , will result in high horizontal error while providing low vertical error , as we would expect intuitively .
this is caused by the fact that the terms @xmath219 are summations of sine functions and @xmath220 are summations of cosine functions of the same set of angles . in order to truly determine the minimum achievable localization accuracy in both the @xmath102 and @xmath103 axes , we need to minimize the _ over - all _ accuracy , defined as the total variance @xmath221 . *
the message of dramatic improvement in localization accuracy needs to be moderated with the observation that the crlb is a bound of _ small errors_. as such , it ignores effects that could lead to _ large errors . _ for example , mimo radar with distributed sensors and coherent observations is subject to high sidelobes @xcite .
additionally , a phase coherent system is sensitive to phase errors .
these topics are outside the scope of this paper , but they should be kept in perspective . *
the lower bound as expressed by the crlb , provides a tight bound at high snr , while at low snr , the crlb is not tight .
as stated in @xcite , the mle is asymptotically unbiased and its error variance approaches the crlb arbitrarily closely for sufficiently long observation time , provided that the mle is not subject to ambiguities . as the mle of the time delays
is based on matched filters at the receiver end , the ambiguity features of the signal waveforms arise in low snr conditions and dominate the estimation performance , causing erroneous time estimates . as the ambiguity problems are usually addressed through the signal waveform design , a more suitable bound needs to be found for the localization variance in the low snr case .
the crlb for target localization with coherent mimo radar shows a gain , i.e. , reduction in the standard deviation of the localization estimate , of @xmath212 compared to non - coherent localization .
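a back - of - the - envelope illustration of this coherency gain ; the carrier frequency and effective bandwidth below are assumed values , not values from the paper :

```python
# coherent localization rmse scales with c/f_c while non-coherent scales
# with c/beta, so the rmse reduction ("coherency gain") is of order f_c/beta
c = 3e8            # propagation speed [m/s]
fc = 1e9           # assumed carrier frequency [Hz]
beta = 1e6         # assumed effective bandwidth [Hz]
gain = fc / beta   # coherency gain: reduction in localization rmse
```

for these assumed numbers the gain is three orders of magnitude , which is the regime described above .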
yet , the crlb is strongly dependent on the locations of the transmitting and receiving sensors relative to the target location , through the terms @xmath222 @xmath215 and @xmath216 . to gain a better understanding of these relations , and set a lower bound on the crlb over all possible sensor placements ,
further analysis is developed in this section .
we introduce the following general notation : for any given set of vectors @xmath223 and @xmath224 : @xmath157{c}t\left ( \mathbf{\xi}\right ) = \frac{1}{l}\overset{l}{\underset{i=1}{\sum}}\xi_{i}\\ t\left ( \mathbf{\xi}^{2}\right ) = \frac{1}{l}\overset{l}{\underset{i=1}{\sum } } \xi_{i}^{2}\\ t\left ( \mathbf{\xi\kappa}\right ) = \frac{1}{l}\overset{l}{\underset { i=1}{\sum}}\xi_{i}\kappa_{i}. \end{array } \label{eq : generaloperators}\ ] ] the terms @xmath202 and @xmath203 in ( [ eq : crlbnccoef ] ) can be expressed using the conventions defined in ( [ eq : generaloperators ] ) and terms defined in section [ section : crlbcoherent ] , viz . :
^{2}-\left [ t\left ( \mathbf{b}_{rx}\right ) \right ] ^{2}\right ] , \label{eq : gxcoh2}\ ] ] and @xmath226 ^{2}-\left [ t\left ( \mathbf{a}_{rx}\right ) \right ] ^{2}\right ] , \label{eq : gycoh2}\ ] ] where the narrowband signals assumption is applied .
similarly , the term @xmath204 in ( [ eq : crlbccoef ] ) can be expressed : @xmath227 .\nonumber\end{aligned}\ ] ] since @xmath228 and @xmath229 the following conditions apply : @xmath157{c}t\left ( \mathbf{a}_{tx}^{2}\right ) + t\left ( \mathbf{b}_{tx}^{2}\right ) = 1\\ t\left ( \mathbf{a}_{rx}^{2}\right ) + t\left ( \mathbf{b}_{rx}^{2}\right ) = 1\\ 0\leq\left [ t\left ( \mathbf{a}_{tx}\right ) \right ] ^{2}\leq1;\text { } 0\leq\left [ t\left ( \mathbf{a}_{rx}\right ) \right ] ^{2}\leq1\\ 0\leq\left [ t\left ( \mathbf{b}_{tx}\right ) \right ] ^{2}\leq1;\text { } 0\leq\left [ t\left ( \mathbf{b}_{rx}\right ) \right ] ^{2}\leq1\\ 0\leq t\left ( \mathbf{a}_{tx}^{2}\right ) \leq1;\text { } 0\leq t\left ( \mathbf{a}_{rx}^{2}\right ) \leq1\\ 0\leq t\left ( \mathbf{b}_{tx}^{2}\right ) \leq1;\text { } 0\leq t\left ( \mathbf{b}_{rx}^{2}\right ) \leq1 . \end{array } \label{eq : rel2}\ ] ] we seek to find sets of angles @xmath230 and @xmath231 that yield sets of cosine and sine expressions @xmath232 for which the values of the cramer - rao bounds for localization along the @xmath102 and @xmath103 axes ( @xmath233 and @xmath234 respectively ) are jointly minimized , that is : @xmath157{c}\underset{\mathbf{a}_{tx},\mathbf{a}_{rx},\mathbf{b}_{tx},\mathbf{b}_{rx}}{\text{minimize}}\left ( \sigma_{x_{c}crb}^{2}+\sigma_{y_{c}crb}^{2}\right ) . \end{array } \label{eq : min1}\ ] ] this is equivalent to minimizing the trace of the crlb submatrix @xmath235 _ { 2\times2}$ ] .
the explicit minimization problem is formulated introducing the objective function @xmath236:@xmath157{cc}\underset{\mathbf{a}_{tx},\mathbf{a}_{rx},\mathbf{b}_{tx},\mathbf{b}_{rx}}{\text{minimize } } & f_{0}\left ( \mathbf{a}_{tx},\mathbf{a}_{rx},\mathbf{b}_{tx},\mathbf{b}_{rx}\right ) = \eta_{c}\frac{g_{x_{c}}+g_{y_{c}}}{g_{x_{c}}g_{y_{c}}-h_{c}^{2}}\\ \text{\ } & \text{subject to constraints ( \ref{eq : rel2}).}\end{array } \label{eq : min2}\ ] ] this representation of the problem is not a convex optimization problem . the next steps are undertaken in order to formulate a convex optimization problem equivalent to ( [ eq : min2 ] ) , i.e. , a convex optimization problem that can be solved through routine techniques and from whose solution it is readily possible to find the solution to ( [ eq : min2 ] ) . in @xcite
, it is shown that for a given positive definite matrix , in our case @xmath194 _ { 2\times2}$ ] , and its inverse matrix @xmath237 , in this case : @xmath238{cc}g_{yc } & -h_{c}\\ -h_{c } & g_{xc}\end{array } \right ] , \label{eq : f}\ ] ] the following relation exists between the diagonal elements of these matrices : @xmath195 _ { ii}\geq\frac{1}{\left [ \mathbf{f}\right ] _ { ii}};\text { } i=1,2.\label{eq : ineq1}\ ] ] equality conditions apply for all @xmath239 iff @xmath237 is a diagonal matrix , i.e. , @xmath240 .
now , observe that the inverses of the elements on the diagonal of @xmath237 lower bound the elements on the diagonal of the matrix @xmath241 for any @xmath242 . we then define the objective function @xmath243 and the optimization problem @xmath244 the new objective function and the original objective function are related as @xmath245 @xmath246 , with equality for @xmath240 .
substitute the values of @xmath202 and @xmath203 from ( [ eq : gxcoh2 ] ) and ( [ eq : gycoh2 ] ) in the objective function of ( [ eq : min21 ] ) to obtain @xmath247 ^{2}-\left [ t\left ( \mathbf{a}_{rx}\right ) \right ] ^{2}}\label{eq : obj1}\\ & + \frac{1/\left ( \eta_{c}mn\right ) } { t\left ( \mathbf{b}_{tx}^{2}\right ) + t\left ( \mathbf{b}_{rx}^{2}\right ) -\left [ t\left ( \mathbf{b}_{tx}\right ) \right ] ^{2}-\left [ t\left ( \mathbf{b}_{rx}\right ) \right ] ^{2}}.\nonumber\end{aligned}\ ] ] it is apparent that the denominator of the first summand is bounded by : @xmath248 ^{2}-\left [ t\left ( \mathbf{a}_{rx}\right ) \right ] ^{2}\leq2-t\left ( \mathbf{b}_{tx}^{2}\right ) -t\left ( \mathbf{b}_{rx}^{2}\right ) , \label{eq : ineq22}\ ] ] and the denominator of the second summand is bounded by : @xmath249 ^{2}-\left [ t\left ( \mathbf{b}_{rx}\right ) \right ] ^{2}\leq t\left ( \mathbf{b}_{tx}^{2}\right ) + t\left ( \mathbf{b}_{rx}^{2}\right ) .\label{eq : ineq23}\ ] ] denote @xmath250 , and let @xmath251 then , from ( [ eq : obj1])-([eq : ineq23 ] ) and ( [ eq : min21 ] ) , we obtain the following problem:@xmath157{cc}\underset{\mathbf{\mu}}{\text{minimize } } & \overline{f_{0}}\left ( \mu\right ) = \dfrac{1}{2-\mu}+\dfrac{1}{\mu}\\ \text{subject to } & \mu-2\leq0\\ & -\mu\leq0 . \end{array } \label{eq : obj21}\ ] ] the objective function @xmath252 is still not convex .
the epigraph form is a way to introduce a linear ( and hence convex ) objective @xmath253 , while the original objective @xmath254 is incorporated into a new constraint @xmath255 the key point here is that while @xmath256 is not convex , the constraint @xmath257 can be transformed to a convex form .
after some simple algebraic manipulations , the epigraph form turns into the following convex problem : @xmath157{cc}\underset{\mu , t}{\text{minimize } } &
t\\ \text{subject to\ } & \\ & \begin{array } [ c]{c}t\mu^{2}-2t\mu+2\leq0\\ \mu-2\leq0\\ -\mu\leq0\\ -t\leq0 \end{array } .
\end{array } \label{eq : min4}\ ] ] a convenient way to solve this convex optimization problem is to employ the concept of lagrange duality and exploit the sufficiency of the _ karush - kuhn - tucker _ ( kkt )
conditions @xcite .
the lagrangian of the problem in ( [ eq : min4 ] ) is given by : @xmath258 where @xmath259 , @xmath260 is the _ lagrange multiplier _ associated with the @xmath239th inequality constraint @xmath261 .
the kkt conditions state that the optimal solution for the primal problem ( minimization of @xmath253 in ( [ eq : min4 ] ) ) is given by the solution to the set of equations:@xmath262 applied to ( [ eq : min4 ] ) and ( [ eq : lagra ] ) , these equations specialize to@xmath263 it is not difficult to show that the solution to this system is given by @xmath157{c}\mu^{\ast}=1\\ t^{\ast}=2\\ \lambda_{1}^{\ast}=1\\ \lambda_{2}^{\ast}=\lambda_{3}^{\ast}=\lambda_{4}^{\ast}=0 \end{array } .\label{eq : opt}\ ] ] recalling that @xmath264 the optimal solution can be rewritten as : @xmath265 in addition to ( [ eq : optimalset2 ] ) , @xmath266 have to satisfy the relations ( [ eq : rel2 ] ) , and the equality conditions for ( [ eq : ineq1 ] ) , ( [ eq : ineq22 ] ) and ( [ eq : ineq23 ] ) , viz . ,
@xmath157{c}t\left ( \mathbf{a}_{tx}^{\ast2}\right ) + t\left ( \mathbf{a}_{rx}^{\ast 2}\right ) = 1\\ t\left ( \mathbf{b}_{tx}^{\ast}\right ) = 0;\text { } t\left ( \mathbf{b}_{rx}^{\ast}\right ) = 0\\ t\left ( \mathbf{a}_{tx}^{\ast}\right ) = 0;\text { } t\left ( \mathbf{a}_{rx}^{\ast}\right ) = 0\\ t\left ( \mathbf{a}_{tx}^{\ast}\mathbf{b}_{tx}^{\ast}\right ) + t\left ( \mathbf{a}_{rx}^{\ast}\mathbf{b}_{rx}^{\ast}\right ) = 0 .
\end{array } \label{eq : optimalset3}\ ] ] substituting these results in ( [ eq : gxcoh2 ] ) and ( [ eq : gycoh2 ] ) , we compute the optimal @xmath267 and @xmath268 @xmath269 it follows that the minimum value of the trace of the cramer rao matrix @xmath200 _ { _ { 2\times2}},$ ] @xmath236 in ( [ eq : min2 ] ) , is given by : @xmath270 the final step in determining the effect of sensor locations on the localization crlb is to recall that the multivariable argument of @xmath236 in ( [ eq : opttrace ] ) is actually a function of the transmitting sensors angles @xmath271 @xmath11 and receiving sensors angles @xmath272 @xmath273 ( see definitions in the previous section ) .
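the kkt solution above is easy to check numerically : the reduced objective of ( [ eq : obj21 ] ) , 1/(2 - mu) + 1/mu on the interval ( 0 , 2 ) , indeed attains its minimum value 2 at mu = 1 . a minimal sketch :

```python
import numpy as np

# evaluate the reduced objective on a fine grid over the open interval (0, 2)
mu = np.linspace(0.01, 1.99, 10_000)
f = 1.0 / (2.0 - mu) + 1.0 / mu

mu_star = mu[np.argmin(f)]   # numerical minimizer (should be ~1)
f_star = f.min()             # minimum value (should be ~2)
```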
what are then the optimal sets @xmath230 and @xmath274 that minimize the variance of the localization error ? the optimal angles can be found from the relations ( [ eq : optimalset3 ] ) .
for example , for the cosine of the transmitters bearings @xmath275 means @xmath276 a symmetrical set of angles of the form @xmath277 , is a solution to ( [ e : sum1 ] ) for any arbitrary @xmath278 the same solution is obtained for the sines , @xmath279 the relations @xmath280 @xmath281 lead to a solution constituted by a symmetrical set of angles @xmath274 of the same form as @xmath282 the relation @xmath283 expressed in terms of angles is @xmath284 it can be shown that ( [ e : sum2 ] ) is met by angles @xmath285 and @xmath286 symmetrically distributed around the unit circle , but the number of sensors has to meet @xmath287 @xmath288 the condition @xmath289 in ( [ eq : optimalset3 ] ) , expressed in its explicit form , is @xmath290 the symmetrical set of angles that meet ( [ e : sum1 ] ) and ( [ e : sum2 ] ) provide @xmath291 @xmath292 and therefore meet the requirement of ( [ e : sum3 ] ) .
the same applies to @xmath293 , where we have @xmath294 @xmath295 .
we conclude that @xmath296 transmitting and @xmath297 receiving sensors , symmetrically placed on a circle around the target at angular spacings of @xmath298 and @xmath299 respectively , lead to the lowest value of the localization crlb .
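the optimality conditions ( [ eq : optimalset3 ] ) can be verified numerically for uniform circular placements ; the sketch below checks that the sums of cosines , sines and cosine - sine products vanish for symmetric sets of 3 and 5 sensors with arbitrary initial rotations :

```python
import numpy as np

def placement_sums(m, rotation=0.0):
    # m >= 3 bearing angles uniformly spaced by 2*pi/m around the target
    ang = rotation + 2.0 * np.pi * np.arange(m) / m
    a, b = np.cos(ang), np.sin(ang)
    # sums that must all vanish for an optimal symmetric placement
    return a.sum(), b.sum(), (a * b).sum()

sums3 = placement_sums(3, rotation=0.7)    # arbitrary initial rotation
sums5 = placement_sums(5, rotation=-1.2)
```

the cosine - sine cross sum is half the sum of sin(2 * angle) over the set , which vanishes for any uniform set of 3 or more angles , in line with the requirement that the number of sensors in each symmetric set be at least 3 .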
this result can be extended by noticing that relations ( [ eq : optimalset3 ] ) also hold for any _ superposition _ of symmetrical sets containing no less than @xmath300 transmitting and/or receiving sensors .
therefore , the complete set of optimal points is given by : @xmath157{c}\mathbf{\phi}^{\ast}=\left\ { \phi_{k}^{\ast}\left\vert \left .
\left ( \phi_{k}^{\ast}=\phi_{_{v}}+\frac{2\pi\left ( z-1\right ) } { z_{v}}\right ) \right\vert _ { z=1, .. ,z_{v}}\right . ; z_{v}\geq3;\underset{v=1}{\overset{v}{{\displaystyle\sum } } } z_{v}=m\text { } \ \right\ } \\
\mathbf{\varphi}^{\ast}=\left\ { \varphi_{\ell}^{\ast}\left\vert \left .
\left ( \varphi_{\ell}^{\ast}=\varphi_{_{u}}+\frac{2\pi\left ( z-1\right ) } { z_{u}}\right ) \right\vert _ { z=1, .. ,z_{u}}\right . ; z_{u}\geq3;\underset { u=1}{\overset{u}{{\displaystyle\sum } } } z_{u}=n\text { } \right\ } , \end{array } \label{e : optset}\ ] ] where the total number of transmitting ( @xmath0 ) and receiving ( @xmath1 ) radars may be divided into @xmath301 and @xmath302 sets of symmetrically placed radars , each set consists of @xmath303 and @xmath304 radars , respectively .
the angles @xmath305 and @xmath306 are an arbitrary initial rotation of the symmetric sets @xmath303 and @xmath304 , respectively . as a special case , it is interesting to evaluate the crlb in ( [ eq : crlbexpcohmm ] ) with @xmath307 transmitter and @xmath308 receivers , i.e. , a single - input multiple - output ( simo ) system .
this scheme makes use of @xmath309 radars instead of the @xmath310 radars used in a mimo system with @xmath0 transmitters and @xmath1 receivers . from ( [ e : optset ] ) it is apparent that this case does not provide optimality since the number of transmitters is smaller than @xmath300 . to evaluate @xmath311 for this setting we assume @xmath307 transmitter is located at an arbitrary angle @xmath312 with respect to the target , and a set of @xmath308 receivers
are located symmetrically around the target , at angles @xmath274 that follow the condition in ( [ e : optset ] ) .
the expressions in ( [ eq : gxcoh2 ] ) , ( [ eq : gycoh2 ] ) , and ( [ eq : hcoh2 ] ) reduce to the form : @xmath313 ^{2}\right ] = \frac{1}{2}mn,\label{eq : simocoef}\\ g_{y_{c } } & = mn\left [ t\left ( \mathbf{a}_{rx}^{2}\right ) -\left [ t\left ( \mathbf{a}_{rx}\right ) \right ] ^{2}\right ] = \frac{1}{2}mn,\nonumber\\ h_{c } & = mn\left [ t\left ( \mathbf{a}_{rx}\mathbf{b}_{rx}\right ) -t\left ( \mathbf{a}_{rx}\right ) e\left ( \mathbf{b}_{rx}\right ) \right ] = 0,\nonumber\end{aligned}\ ] ] and the trace of the crlb submatrix @xmath194 _
{ 2\times2}$ ] , defined by @xmath314 , is @xmath315 this result expresses an increase in the estimation error by a factor of @xmath316 when compared with @xmath0 transmitters and @xmath1 receivers given in ( [ eq : opttrace ] ) .
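the factor - of - two gap between the symmetric mimo layout and the simo layout can be reproduced from the narrowband forms of the g and h terms ; in the sketch below the constant @xmath171-type factor is set to 1 and the layouts are assumptions of the example :

```python
import numpy as np

def crlb_trace(phi_tx, varphi_rx):
    # trace of the coherent 2x2 crlb submatrix (narrowband case, eta_c = 1):
    # g terms are sums of squared cos/sin sums minus mean corrections
    m, n = len(phi_tx), len(varphi_rx)
    a = np.add.outer(np.cos(phi_tx), np.cos(varphi_rx))
    b = np.add.outer(np.sin(phi_tx), np.sin(varphi_rx))
    gx = np.sum(b**2) - np.sum(b)**2 / (m * n)
    gy = np.sum(a**2) - np.sum(a)**2 / (m * n)
    h = -np.sum(a * b) + np.sum(a) * np.sum(b) / (m * n)
    return (gx + gy) / (gx * gy - h**2)

sym = lambda m: 2.0 * np.pi * np.arange(m) / m   # symmetric angular placement
mimo = crlb_trace(sym(3), sym(3))                # 3 tx + 3 rx, symmetric
simo = crlb_trace(np.array([0.3]), sym(9))       # 1 tx + 9 rx, symmetric
ratio = simo / mimo
```

with mn = 9 in both configurations , the simo trace comes out exactly twice the mimo trace , matching the comparison made above .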
the following comments are intended to provide further insight into the results obtained in this section . * from ( [ eq : opttrace ] ) , the lowest crlb for target localization utilizing phase information is given by @xmath317 .
we interpret the reduction of the crlb by the factor @xmath318 compared to a single antenna range estimation given by @xmath213 as a _ mimo radar gain .
_ this gain reflects two effects : ( 1 ) the gain due to the system footprint ; ( 2 ) the advantage of using @xmath0 transmitters and @xmath1 receivers , rather than , for example , @xmath307 transmitter and @xmath308 receivers .
the latter gain is apparent when @xmath319 . *
the crlb obtained through the use of a single transmit antenna and @xmath308 receive antennas in ( [ eq : simotrace ] ) is @xmath320 .
it follows that mimo radar , with a total of @xmath321 sensors , has twice the performance ( from the point of view of localization crlb ) of a system with a single transmit antenna and @xmath308 receive antennas . *
the best accuracy is obtained when the transmitting and receiving radars are located on a virtual circle , centered at the target position , with uniform angular spacings of @xmath298 and @xmath322 , respectively , or any _ superposition _ of such sets . * the optimization analysis presented in this section is intended to provide insight into the effect the sensors locations have on the crlb .
naturally , in practice , it is not possible to control in real time the location of the sensors relative to a target .
however , the results here teach us that selecting among the sensors those that are most symmetrically placed with respect to the target may lead to the most accurate localization .
so far we have focused on the theoretical lower bound of the localization error . in the next section ,
we discuss specific techniques for target localization and their performance as a function of sensor locations . for this purpose , the gdop
metric and gdop contour mapping tools are introduced .
in section [ section : crlb ] , the lower bound on the variance of any localization estimate was formulated . here , it is of interest to discuss some specific target localization estimators .
in particular , two estimators are presented : the mle and the blue . the mle is motivated by its asymptotic optimality , while the blue by its closed form expression .
the mle is a practical estimator in the sense that its application to a problem of observations in white gaussian noise is relatively straightforward .
moreover , under mild conditions on the probability density function of the observations , the mle of the unknown parameters is asymptotically unbiased , and it asymptotically attains the crlb @xcite . for the case of coherent mimo radar , the signal waveform received by radar @xmath36 is given in ( [ e : r ] ) .
the mle of the unknown parameter vector @xmath323 ^{t}$ ] given the observation vector @xmath85 is given by @xcite : @xmath324 \right\ } , \label{eq:12}\ ] ] where @xmath325 is given by ( [ eq : pdf_c ] ) noting that the time delays @xmath67 are known functions of @xmath102 and @xmath103 .
to jointly maximize @xmath326 with respect to @xmath327 ^{t},$ ] we start by maximizing it with respect to @xmath59 : @xmath328 using ( [ eq : pdf_c ] ) in ( [ eq:13 ] ) , the estimate @xmath329 can be found , and it is a function of @xmath102 and @xmath142 by substituting it back into ( [ eq:12 ] ) , it is said to _ compress _ the log - likelihood function @xcite to @xmath330 .
the mle of the target location is then given by @xmath331 since a closed - form expression cannot be found for the mle in ( [ e : mle ] ) , numerical methods need to be applied . a grid search or an iterative maximization of the likelihood function needs to be performed to determine @xmath332 and @xmath333 .
this might involve a significant computational effort . in practice
, we can limit the search grid for high resolution target localization estimation to an area around a coarse initial estimate obtained by the non - coherent approach .
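as a simplified stand - in for this grid search , the sketch below least - squares fits noise - free bistatic delays over a grid around a coarse initial estimate , rather than maximizing the compressed waveform likelihood ; the sensor layout , grid limits and noise - free observations are assumptions of the example :

```python
import numpy as np

C = 3e8  # propagation speed [m/s]

def delays(tx, rx, p):
    # bistatic delay tau_lk = (|t_k - p| + |p - r_l|) / c for every tx/rx pair
    dt = np.linalg.norm(np.asarray(tx) - p, axis=1)
    dr = np.linalg.norm(np.asarray(rx) - p, axis=1)
    return (dt[:, None] + dr[None, :]) / C

tx = [(0.0, 5000.0), (5000.0, 0.0), (-5000.0, 0.0)]
rx = [(0.0, -5000.0), (3500.0, 3500.0), (-3500.0, 3500.0)]
target = np.array([120.0, -80.0])
tau = delays(tx, rx, target)          # "observed" (noise-free) delays

# grid around a coarse initial estimate at the origin
xs = np.linspace(-500.0, 500.0, 101)
ys = np.linspace(-500.0, 500.0, 101)
cost = np.array([[np.sum((delays(tx, rx, np.array([x, y])) - tau) ** 2)
                  for x in xs] for y in ys])
iy, ix = np.unravel_index(np.argmin(cost), cost.shape)
estimate = np.array([xs[ix], ys[iy]])
```

under gaussian delay errors this least - squares fit coincides with the mle on the delay observations , which is why it serves as a reasonable illustration of the search procedure .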
the mle presented in section ( [ section : targetest mle ] ) does not lend itself to a closed form expression , and numerical methods need to be used to solve it . a closed form solution to the target localization can be obtained by application of the blue . to formulate the blue , it is necessary to have an observation model in which the observations change linearly with the target location coordinates . that is because it is inherent to the blue that the estimate is _ linear _ . to this end , we formulate a model in which the time delays are observable .
let the observed time delay associated with a transmitter - receiver pair be @xmath334 then @xmath335 where @xmath336 is the observation noise . in practice ,
the time delays are not directly observable .
rather , they are estimated , for example by maximum likelihood , from the received signals .
then , the term @xmath336 is the time delay estimation error .
our blue estimation problem of the target location should not be confused with the estimation of the time delays .
the estimation of the time delays is just a preparatory step in setting up the observations of the blue model .
once the observation model has been set up , it is necessary to ensure that the model between the time delays and the target location is linear . setting the origin of the coordinate system at some nominal estimate of the target location , and preserving only the linear terms of the taylor expansion of expressions such as in ( [ e : tau_vq ] ) , we can express the time delays as linear functions of @xmath102 and @xmath145 @xmath337 where the angles @xmath122 and @xmath338 are the bearings that the transmitting sensor @xmath123 and the receiving sensor @xmath32 , respectively , subtend with the reference axis ( with the origin at the nominal estimate of the target location ) . note that the definitions of the angles here , also denoted @xmath339 and @xmath340 , are slightly different from the angles defined in section [ section : crlb ] : here , the vertex of the angles is an arbitrary point in the neighborhood of the true target location , while in section [ section : crlb ] the vertex is at the true target location . since only the vertex is different , we preserved the same notation for simplicity 's sake .
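as a quick numerical check of this linearization ( the sensor and target coordinates below are hypothetical , and the sign convention is illustrative ) , the gradient of the bistatic delay with respect to the target position equals the sum of the direction cosines of the transmitter and receiver bearings , divided by the propagation speed :

```python
import numpy as np

c = 3e8  # propagation speed (m/s)
tx, rx = np.array([-5000.0, 0.0]), np.array([0.0, 4000.0])
p0 = np.array([1000.0, 2000.0])  # nominal target estimate (origin of the expansion)

def delay(p):
    # bistatic propagation delay: transmitter -> target -> receiver
    return (np.linalg.norm(p - tx) + np.linalg.norm(p - rx)) / c

# analytic gradient from the bearings of tx and rx as seen from p0
phi_tx = np.arctan2(p0[1] - tx[1], p0[0] - tx[0])
phi_rx = np.arctan2(p0[1] - rx[1], p0[0] - rx[0])
grad_analytic = np.array([np.cos(phi_tx) + np.cos(phi_rx),
                          np.sin(phi_tx) + np.sin(phi_rx)]) / c

# numerical gradient by central differences
h = 1e-3
grad_num = np.array([(delay(p0 + h * e) - delay(p0 - h * e)) / (2 * h)
                     for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))])
```

the agreement of the two gradients is what justifies keeping only the linear terms of the taylor expansion in a neighborhood of the nominal target estimate .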
utilizing definitions ( [ eq : abdef ] ) , we can express the linear model in the following simplified form : @xmath341 letting , @xmath342 ^{t}$ ] and the vector of unknowns @xmath343^{t}$ ] , we write ( [ eq:17 ] ) in vector notation as follows : @xmath344 where the angle dependent matrix @xmath345 is defined as : @xmath346{ccc}a_{tx_{1}}+a_{rx_{1 } } & b_{tx_{1}}+b_{rx_{1 } } & 1\\ ... & ... & ... \\
a_{tx_{m}}+a_{rx_{n } } & b_{tx_{m}}+b_{rx_{n } } & 1 \end{array } \right ] _ { mn\times3}.\label{eq:19}\ ] ] the observation model ( [ e : mu ] ) can then be expressed as @xmath347 where @xmath348 ^{t},$ ] and @xmath349 ^{t}$ ] is the @xmath350 observation noise vector . to reiterate , a key difference between the mle and blue models is that the mle target localization is carried out utilizing signal observations ( which are not linear in @xmath174 @xmath351 , while according to ( [ eq:20 ] ) , the blue 's observations are in the form of time delays . so an intermediate step of time delay estimation is implied .
the time delay estimates used as observations @xmath352 can be derived for example by mle as follows:@xmath353 , \label{eq:21}\ ] ] where @xmath354 is a dummy variable for the time delay . we still need some characterization of the noise terms @xmath355 . it is shown in appendix [ section : appendixd ] that the maximum likelihood time delay estimates are unbiased with error covariance matrix @xmath356 where the previous definitions of the various quantities apply . for the linear and gaussian model in ( [ eq:20 ] ) ,
the blue is computed from the gauss - markov theorem @xcite that states the blue of the unknown vector @xmath86 is given by the expression : @xmath357 the theorem also establishes that the error covariance matrix is @xmath358 using the time error covariance matrix @xmath359 and the linear transformation matrix @xmath345 in ( [ eq:19 ] ) , the following estimate for the target localization is obtained : @xmath360{c}\widehat{x}\\ \widehat{y}\end{array } \right ] = \left [ \widehat{\mathbf{\theta}}_{b}\right ] _ { 2\times1}={\small -}c\mathbf{g}_{b}\left [ \begin{array } [ c]{c}\overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum}}\left ( a_{tx_{k}}+a_{rx_{\ell}}\right ) \mu_{\ell k}\\ \overset{m}{\underset{k=1}{\sum}}\overset{n}{\underset{\ell=1}{\sum}}\left ( b_{tx_{k}}+b_{rx_{\ell}}\right ) \mu_{\ell k}\end{array } \right ] , \label{eq : blue}\ ] ] where @xmath352 are the time observations , and the matrix @xmath361 is of the form:@xmath362{cc}g_{_{1b } } & h_{_{b}}\\ h_{_{b } } & g_{_{2b}}\end{array } \right ] .\ ] ] the elements of matrix @xmath363 are : @xmath364 using these results in ( [ eq : covblue ] ) provides the mse for the blue as follows : @xmath365 for the estimation of the @xmath102 coordinate , and @xmath366 for the estimation of the @xmath103 coordinate .
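the closed form blue above can be sketched as a weighted least squares computation . the bearing values , the direction - cosine coefficients , the common bias column , and the scaling by the propagation speed below are illustrative assumptions standing in for ( [ eq : abdef ] ) and ( [ eq:19 ] ) ; the noisy delay observations are simulated from the same linear model , so the sketch is self - consistent rather than a definitive implementation :

```python
import numpy as np

rng = np.random.default_rng(0)
c = 3e8  # propagation speed (m/s)

# hypothetical bearings (radians) of M transmitters and N receivers,
# seen from the nominal estimate of the target location
tx_bearings = np.array([0.0, 2.1, 4.2])
rx_bearings = np.array([1.0, 3.1, 5.2])

# assumed linearization coefficients: direction cosines of each bearing
a_tx, b_tx = np.cos(tx_bearings), np.sin(tx_bearings)
a_rx, b_rx = np.cos(rx_bearings), np.sin(rx_bearings)

# build the MN-by-3 matrix of the linear model: rows (a_tx+a_rx, b_tx+b_rx, 1)
rows = [(a_tx[k] + a_rx[l], b_tx[k] + b_rx[l], 1.0)
        for l in range(len(rx_bearings)) for k in range(len(tx_bearings))]
A = np.array(rows)

# simulate noisy time-delay observations from the linear model
theta_true = np.array([40.0, -25.0, 1e-6])   # x (m), y (m), common delay bias (s)
C = (1e-9) ** 2 * np.eye(A.shape[0])          # delay-error covariance (1 ns std)
mu = A @ np.array([theta_true[0] / c, theta_true[1] / c, theta_true[2]])
mu = mu + rng.multivariate_normal(np.zeros(A.shape[0]), C)

# Gauss-Markov theorem: theta = (A^T C^-1 A)^-1 A^T C^-1 mu
Cinv = np.linalg.inv(C)
theta_hat = np.linalg.solve(A.T @ Cinv @ A, A.T @ Cinv @ mu)
x_hat, y_hat = c * theta_hat[0], c * theta_hat[1]
```

the weighting by the inverse delay - error covariance is what makes the estimator "best" ( minimum variance ) among linear unbiased estimators .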
the following points are worth noting : * the blue estimator in ( [ eq:23 ] ) and its variance in ( [ eq : xmseblue ] ) and ( [ eq : ymseblue ] ) are provided in closed form .
this enables analysis without extensive numerical computations .
* in general , the variances ( [ eq : xmseblue ] ) and ( [ eq : ymseblue ] ) have functional dependencies on the carrier frequency and on the sensor deployment similar to those of the crlb ( [ eq : msec ] ) . the terms @xmath367 @xmath368 @xmath165 and @xmath166 embedded in ( [ eq : xmseblue ] ) and ( [ eq : ymseblue ] ) relate the sensor layout to the variance of the blue . from the expressions for the variance of the blue , one cannot readily visualize the effect of the sensor layout . a mapping method , acting as a design and decision making tool for mimo radar systems , is proposed and evaluated in the next subsection . in section [ section : optimizationoverall ] , we discussed optimal sensor locations for minimizing the crlb . in practice , we are faced with a specific deployment of sensors , and we ask what the localization accuracy is for a given target location .
gdop is a metric that addresses this question .
the gdop is commonly used in gps systems for mapping the attainable localization accuracy for a given layout of gps satellites positions @xcite .
the gdop metric emphasizes the effect of sensors locations by normalizing the localization error with the term contributed by the range estimate .
the gdop metric for the two dimensional case is defined as : @xmath369 where @xmath370 and @xmath371 are the variances of localization along the @xmath102 and @xmath103 axes , respectively , and @xmath372 is the standard deviation of the time delay estimation error , assumed to be the same for all sensors .
inherently , the gdop provides a normalized value that measures the relative contribution of the radars ' locations to the overall accuracy . when the blue is used , and the linearity conditions hold , @xmath370 and @xmath371 are given by ( [ eq : xmseblue ] ) and ( [ eq : ymseblue ] ) , respectively .
using the result in ( [ eq:22 ] ) , @xmath373 for the time delay variance , we get the following gdop expression : @xmath374 the gdop reduces the combined effect of the locations of the sensors to a single metric .
once the values are mapped , the actual localization error is easily derived by multiplying the gdop value by @xmath375 .
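a minimal sketch of such a gdop computation for a single candidate target point , assuming the linearized delay model with equal delay - error variances at all sensors ( the sensor positions and the sign convention for the bearings are illustrative ) ; with this normalization the range - error factor cancels , so the value depends only on the geometry :

```python
import numpy as np

def gdop(target, tx_pos, rx_pos):
    # bearings from the candidate target point to each transmitter and receiver
    tx_ang = np.arctan2(tx_pos[:, 1] - target[1], tx_pos[:, 0] - target[0])
    rx_ang = np.arctan2(rx_pos[:, 1] - target[1], rx_pos[:, 0] - target[0])
    # linearized delay model: one row per tx/rx pair
    # (direction-cosine sums plus a common bias column)
    rows = [(np.cos(a) + np.cos(b), np.sin(a) + np.sin(b), 1.0)
            for a in tx_ang for b in rx_ang]
    A = np.array(rows)
    # position-error covariance up to the factor (c * sigma_tau)^2; the
    # normalization in the gdop definition cancels that factor
    cov = np.linalg.inv(A.T @ A)
    return np.sqrt(cov[0, 0] + cov[1, 1])

# a symmetric 2x2 deployment on a circle around the origin
tx = 1000.0 * np.array([[np.cos(a), np.sin(a)] for a in (0.0, np.pi)])
rx = 1000.0 * np.array([[np.cos(a), np.sin(a)] for a in (np.pi / 2, 3 * np.pi / 2)])

center = gdop(np.array([0.0, 0.0]), tx, rx)          # inside the footprint
outside = gdop(np.array([3000.0, 3000.0]), tx, rx)   # outside the footprint
```

evaluating `gdop` over a grid of candidate target points yields a contour map of the kind discussed below : the symmetric deployment gives its lowest value at the center , and the value grows for points outside the footprint .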
figures [ fig:2 ] and [ fig:3 ] present contour plots of the gdop values for @xmath376 and @xmath377 mimo radar systems , respectively .
the sensors are positioned symmetrically around the origin . in figure
[ fig:2 ] , the transmitting sensors are located at bearings @xmath378 , $ ] and the receiving sensors are positioned at bearings @xmath379 $ ] . in figure
[ fig:3 ] , the @xmath380 transmitting sensors are positioned as a superposition of two symmetrical constellations : the first set includes three radars and the second four .
the sets are located at bearings @xmath381 @xmath382 $ ] .
the receiving radars , for this case , are set in a single symmetrical constellation with bearings @xmath383 $ ] .
the first noticeable factor in the comparison of the two plots is the higher accuracy obtained with seven radars compared to four radars . for example , the lowest gdop value in figure [ fig:2 ] , for the @xmath376 system is @xmath384 while with seven radars ( see figure [ fig:3 ] ) , the lowest gdop is @xmath385 corresponding to a @xmath386 reduction .
when a target is located inside the virtual @xmath387-sided system footprint , a higher localization accuracy is obtained than when a target is outside the footprint of the system . in particular ,
the best localization is obtained for a target at the center of the system .
the increase in gdop values from the center to the footprint boundaries is slow .
outside the footprint , the gdop values increase rather rapidly . in figures [ fig:4 ] and [ fig:5 ] , contours for seven non - symmetrically positioned radars are drawn .
when the radars are relatively widely spread , as in figure [ fig:4 ] , there are still some areas with good measurement accuracy , though the coverage is reduced compared to the case with symmetrical deployment of sensors in figure [ fig:3 ] .
when the viewing angle of the target is very restricted , as in figure [ fig:5 ] , there is a marked degradation of gdop values .
these examples demonstrate the main theoretical result of section iv , namely that a symmetrical deployment of sensors around the target yields the lowest gdop values .
furthermore , calculating the lowest attainable gdop value using the optimal results in ( [ eq : opttrace ] ) for a @xmath388 mimo radar , we obtain a gdop value of @xmath389 , and for @xmath390 it is equal to @xmath391 . as a numerical example , the lowest gdops in figures [ fig:2 ] and [ fig:3 ] are @xmath392 and @xmath393 , respectively . comparing this with the results obtained in @xcite for the case of passive gps based systems , with @xmath1 satellites optimally positioned around the target , for which the lowest achievable gdop value is @xmath394 , the mimo system advantage is clearly manifested .
in this paper , we have developed analytical expressions for the estimation errors of coherent and non - coherent mimo radar using the crlb .
it was shown that when the processing is coherent and the phase is processed , there is a reduction in the crlb values ( standard deviation of the estimates ) by a factor of @xmath212 over the case when the observations are non - coherent .
we referred to this gain as coherency gain .
the expressions for the crlb also capture the impact of the sensor geometry .
further minimization of the localization error reveals a mimo radar gain directly proportional to the product of the number of transmitting and receiving radars .
the smallest crlb is achieved when the transmitting and receiving sensors are arrayed symmetrically around the target , or in any superposition of such symmetrical sets .
the gdop metric and mapping were introduced as a general tool for the analysis of the localization accuracy with respect to the given radars and target locations .
these plots could serve as a tool for choosing favorable radar locations to cover a given target area .
while localization by coherent mimo radar provides significantly better performance than non - coherent processing , it faces the challenge of phase synchronization across multisite systems , and it needs to deal with the ambiguities stemming from the large separation between sensors .
in this appendix , we develop the fim for the unknown parameter vector @xmath180 , based on the conditional pdf in ( [ eq : pdf_nc ] ) .
the expression for @xmath395 = -e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}\right ) } { \partial^{2}\mathbf{\psi}}\right ] $ ] is derived using : @xmath74{l}\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ii^{\prime}}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{nc}\right ) } { \partial\tau_{\ell k}\partial\tau_{\ell^{\prime}k^{\prime}}}\right ] , \\
\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ( mn+i),(mn+i^{\prime})}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{nc}\right ) } { \partial\alpha_{\ell k}^{r}\ \partial\alpha_{\ell^{\prime}k^{\prime}}^{r}}\right ] , \\
\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ( 2mn+i),(2mn+i^{\prime})}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{nc}\right ) } { \partial\alpha_{\ell k}^{i}\ \partial\alpha_{\ell^{\prime}k^{\prime}}^{i}}\right ] ,
\\ \left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ( mn+i),(2mn+i^{\prime})}=\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ( 2mn+i),(mn+i^{\prime})}=-e\left [ \frac{\partial ^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{nc}\right ) } { \partial\alpha_{\ell k}^{r}\ \partial\alpha_{\ell^{\prime}k^{\prime}}^{i}}\right ] , \\
\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { i,\left ( mn+i^{\prime}\right ) } = \left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { \left ( mn+i\right ) , i^{\prime}}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{nc}\right ) } { \partial\tau_{\ell k}\ \partial\alpha_{\ell k}^{r}}\right ] , \\
\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { i,\left ( 2mn+i^{\prime}\right ) } = \left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { \left ( 2mn+i\right ) , i^{\prime}}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{nc}\right ) } { \partial\tau_{\ell k}\ \partial\alpha_{\ell k}^{i}}\right ] , \end{array } \label{eq : app_a1}\\ & \begin{array } [ c]{cc}i=(\ell-1)\ast m+k , & i^{\prime}=(\ell^{\prime}-1)\ast m+k^{\prime},\\ \ell,\ell^{\prime}=1, .. ,n ; & k , k^{\prime}=1, .. ,m ; \end{array } \text { \ \ } \nonumber\end{aligned}\ ] ] the first derivative of @xmath396 with respect to the elements of @xmath114 is:@xmath397 } { \partial\tau_{\ell k}}= & \text{$\frac{1}{\sigma_{w}^{2}}$}{\textstyle\int } \left\ { \left [ { \small r}_{\ell}{\small ( t)-}\overset{m}{\underset { k^{\prime}=1}{\sum}}\alpha_{\ell k^{\prime}}{\small s}_{k^{\prime}}\left ( t-\tau_{\ell k^{\prime}}\right ) \right ] \cdot\alpha_{\ell k}^{\ast}\frac{{\small \partial}\left [ { \small s}_{k}^{\ast}\left ( t-\tau_{\ell k}\right ) \right ] } { { \small \partial\tau}_{\ell k}}\right .
\label{eq : app_a3}\\ & \left .
+ \left [ { \small r}_{\ell}{\small ( t)-}\overset{m}{\underset { k^{\prime}=1}{\sum}}\alpha_{\ell k^{\prime}}{\small s}_{k^{\prime}}\left ( t-\tau_{\ell k^{\prime}}\right ) \right ] ^{\ast}\cdot\alpha_{\ell k}\frac{{\small \partial}\left [ { \small s}_{k}\left ( t-\tau_{\ell k}\right ) \right ] } { { \small \partial\tau}_{\ell k}}\right\ } dt.\nonumber\end{aligned}\ ] ] applying the second derivative to ( [ eq : app_a3 ] ) , define a matrix @xmath139 with the following elements:@xmath398_{ii^{\prime } } & = \frac{\sigma_{w}^{2}}{2}[{\small \mathbf{j}\left ( \mathbf{\psi}\right ) } ] _ { ii^{\prime}}=\label{eq : app_a4}\\ & = e\left\ { \frac{\partial^{2}}{\partial{\tau_{\ell k}}\partial{\tau _ { \ell^{\prime}k^{\prime}}}}\int\left [ \alpha_{\ell k}{\small s}_{k}\left ( t-{\tau_{\ell k}}\right ) \alpha_{\ell k^{\prime}}^{\ast}{\small s}_{k^{\prime}}^{\ast}\left ( t-{\tau_{\ell k^{\prime}}}\right ) \right . \right .
\nonumber\\ & \left
. + \left .
\alpha_{\ell k}^{\ast}{\small s}_{k}^{\ast}\left ( t-{\tau_{\ell k}}\right ) \alpha_{\ell k^{\prime}}{\small s}_{k^{\prime}}\left ( t-{\tau_{\ell k^{\prime}}}\right ) \right ] dt\right\ } \nonumber\\ & = \operatorname{re}\left\ { \alpha_{\ell k}\alpha_{\ell^{\prime}k^{\prime}}^{\ast}\left [ \frac{\partial^{2}}{\partial{\tau_{\ell k}}\partial{\tau _ { \ell^{\prime}k^{\prime}}}}\int{\small s}_{k}\left ( t-{\tau_{\ell k}}\right ) { \small s}_{k^{\prime}}^{\ast}\left
( t-{\tau_{\ell k^{\prime}}}\right ) dt\right ] \right\ } .\nonumber\end{aligned}\ ] ] using matrix notation for compactness , @xmath399 , \label{e : snc}\ ] ] where @xmath400 denotes a diagonal matrix , @xmath401 was defined in ( [ eq : alpha_nc ] ) , and we abuse the notation and let @xmath402 _ { ii^{\prime}}\equiv\frac{\partial}{\partial{\tau_{\ell k}}\partial{\tau _ { \ell^{\prime}k^{\prime}}}}\left [ \mathbf{r}_{s}\right ] _ { ii^{\prime}}.\label{e : d_sqr_rs}\ ] ] the elements of matrix @xmath403 are defined as : @xmath404 _ { ii^{\prime}}\equiv\left\ { \begin{array } [ c]{cc}\int{\small s}_{k}\left ( t-{\tau_{\ell k}}\right ) { \small s}_{k^{\prime}}^{\ast}\left ( t-{\tau_{\ell k^{\prime}}}\right ) dt & \ell=\ell^{\prime}\\ 0 & \ell\neq\ell^{\prime}\end{array } \right . .\label{e
: rs}\ ] ] the second and third terms in ( [ eq : app_a1 ] ) define a matrix @xmath405 with the following elements:@xmath406_{ii^{\prime } } & = [ \mathbf{\lambda}_{\alpha } ] _ { \left ( mn+i\right ) , \left ( mn+i^{\prime}\right ) } = \frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ( mn+i),(mn+i^{\prime})}=\frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ( 2mn+i),(2mn+i^{\prime})}\label{eq : app_a5}\\ & = e\left\ { \frac{\partial}{\partial\alpha_{\ell^{\prime}k^{\prime}}^{r}}\int\left [ \overset{m}{\underset{k^{\prime}=1}{\sum}}{\small s}_{k}\left ( t-{\tau_{\ell k}}\right ) \alpha_{\ell k^{\prime}}^{\ast}{\small s}_{k^{\prime}}^{\ast}\left ( t-{\tau_{\ell k^{\prime}}}\right ) \right . \right . \nonumber\\ & \left .
+ \overset{m}{\underset{k^{\prime}=1}{\sum}}{\small s}_{k}^{\ast}\left
( t-{\tau_{\ell k}}\right ) \alpha_{\ell k^{\prime}}{\small s}_{k^{\prime}}\left
( t-{\tau_{\ell k^{\prime}}}\right ) \right ] dt\right\ } \nonumber\\ & = \operatorname{re}\left\ { \left [ \mathbf{r}_{s}\right ] _ { ii^{\prime}}\right\ } , \nonumber\end{aligned}\ ] ] and @xmath406_{i,\left ( mn+i^{\prime}\right ) } & = [ \mathbf{\lambda}_{\alpha}]_{\left ( mn+i\right ) , i^{\prime}}=\frac { \sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ( mn+i),(2mn+i^{\prime})}=\frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ( 2mn+i),(mn+i^{\prime } ) } \label{eq : app_a6}\\ & = e\left\ { \frac{\partial}{\partial\alpha_{\ell^{\prime}k^{\prime}}^{i}}\int\left [ \overset{m}{\underset{k^{\prime}=1}{\sum}}\left ( j\right ) { \small s}_{k}\left
( t-{\tau_{\ell k}}\right ) \alpha_{\ell k^{\prime}}^{\ast}{\small s}_{k^{\prime}}^{\ast}\left
( t-{\tau_{\ell k^{\prime}}}\right ) \right .
\nonumber\\ & \left .
+ \overset{m}{\underset{k^{\prime}=1}{\sum}}\left ( -j\right ) { \small s}_{k}^{\ast}\left ( t-{\tau_{\ell k}}\right ) \alpha_{\ell k^{\prime}}{\small s}_{k^{\prime}}\left ( t-{\tau_{\ell k^{\prime}}}\right ) \right ] dt\right\ } \nonumber\\ & = -\operatorname{im}\left\ { \left [ \mathbf{r}_{s}\right ] _ { ii^{\prime}}\right\ } .\nonumber\end{aligned}\ ] ] in matrix notation , @xmath407{cc}\operatorname{re}\left [ \mathbf{r}_{s}\right ] & -\operatorname{im}\left [ \mathbf{r}_{s}\right ] \\ -\operatorname{im}\left [ \mathbf{r}_{s}\right ] & \operatorname{re}\left [ \mathbf{r}_{s}\right ] \end{array } \right ] .\label{e : lambdanc}\ ] ] the fourth and fifth terms in ( [ eq : app_a1 ] ) define the matrix @xmath135 with the following elements:@xmath408_{ii^{\prime } } & = \frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ( mn+i),i^{\prime}}=\frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { i,(mn+i^{\prime})}\label{eq : app_a7}\\ & = e\left\ { \frac{{\small \partial}}{{\small \partial\tau}_{\ell k}}\frac{\partial}{\partial\alpha^{r}{_{\ell^{\prime}k^{\prime}}}}\int\left [ \alpha_{\ell k}{\small s}_{k}\left ( t-\tau_{\ell k}\right ) \alpha_{\ell k^{\prime}}^{\ast}{\small s}_{k^{\prime}}^{\ast}\left ( t-\tau_{\ell k^{\prime}}\right ) \right . \right .
\nonumber\\ & \left .
+ \alpha_{\ell k}^{\ast}{\small s}_{k}^{\ast}\left ( t-\tau_{\ell k}\right ) \alpha_{\ell k^{\prime}}{\small s}_{k^{\prime}}\left ( t-\tau_{\ell k^{\prime}}\right ) \right ] dt\right\ } \nonumber\\ & = \operatorname{re}\left\ { \alpha_{\ell k}\frac{{\small \partial}}{{\small \partial\tau}_{\ell k}}\left [ \mathbf{r}_{s}\right ] _ { ii^{\prime}}\right\ } , \nonumber\end{aligned}\ ] ] and@xmath408_{i,\left ( mn+i^{\prime}\right ) } & = \frac { \sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { ( 2mn+i),i^{\prime}}=\frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{nc}\right ) \right ] _ { i,(2mn+i^{\prime})}\label{eq : app_a8}\\ & = e\left\ { \frac{{\small \partial}}{{\small \partial\tau}_{\ell k}}\frac{\partial}{\partial\alpha^{i}{_{\ell^{\prime}k^{\prime}}}}\int\left [ \alpha_{\ell k}{\small s}_{k}\left ( t-\tau_{\ell k}\right ) \alpha_{\ell k^{\prime}}^{\ast}{\small s}_{k^{\prime}}^{\ast}\left ( t-\tau_{\ell k^{\prime}}\right ) \right . \right .
\nonumber\\ & \left .
+ \alpha_{\ell k}^{\ast}{\small s}_{k}^{\ast}\left ( t-\tau_{\ell k}\right ) \alpha_{\ell k^{\prime}}{\small s}_{k^{\prime}}\left ( t-\tau_{\ell k^{\prime}}\right ) \right ] dt\right\ } \nonumber\\ & = -\operatorname{im}\left\ { \alpha_{\ell k}\frac{{\small \partial}}{{\small \partial\tau}_{\ell k}}\left [ \mathbf{r}_{s}\right ] _ { ii^{\prime}}\right\ } .\nonumber\end{aligned}\ ] ] in matrix notation : @xmath409{cc}\frac{\partial}{\partial\mathbf{\tau}}\operatorname{re}\left [ \operatorname*{diag}(\mathbf{\alpha})\mathbf{r}_{s}\right ] ; & -\frac { \partial}{\partial\mathbf{\tau}}\operatorname{im}\left [ \operatorname*{diag}(\mathbf{\alpha})\mathbf{r}_{s}\right ] \end{array } \right ] .\label{e : vnc}\ ] ] _ orthogonal waveforms _
orthogonality implies that all cross elements @xmath410 for @xmath411 and @xmath412 , and after some algebra , the matrices defined by ( [ eq : app_a4])-([eq : app_a8 ] ) take the following form:@xmath157{c}\left [ \mathbf{s}_{nc}\right ] _ { ii^{\prime}}=\left\ { \begin{array } [ c]{cc}4\pi^{2}\beta^{2}\left [ \left\vert \alpha_{lk}\right\vert ^{2}\beta_{r_{k}}^{2}\right ] & i = i^{\prime}\\ 0 & i\neq i^{\prime}\end{array } \right . \\ \lbrack\mathbf{\lambda}_{\alpha}]_{ii^{\prime}}=[\mathbf{\lambda}_{\alpha } ] _ { \left ( mn+i\right ) , \left ( mn+i^{\prime}\right ) } = \left\ { \begin{array } [ c]{cc}1 & i = i^{\prime}\\ 0 & i\neq i^{\prime}\end{array } \right . \\ \lbrack\mathbf{\lambda}_{\alpha}]_{i,\left ( mn+i^{\prime}\right ) } = [ \mathbf{\lambda}_{\alpha}]_{\left ( mn+i\right ) , i^{\prime}}=0\\ \lbrack\mathbf{v}_{nc}]_{ii^{\prime}}=0\\ \lbrack\mathbf{v}_{nc}]_{i,\left ( mn+i^{\prime}\right ) } = 0 .
\end{array } \label{eq : app_a9}\ ] ]
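the simplifications above hinge on the cross - correlation integrals between distinct waveforms vanishing . a minimal numerical check , using complex exponentials on distinct frequency bins as a hypothetical orthogonal waveform set ( the specific waveforms and normalization are illustrative , not the paper 's ) :

```python
import numpy as np

# hypothetical orthogonal baseband waveforms on [0, T): complex exponentials
# at distinct integer frequency bins are mutually orthogonal over one period
T, fs = 1.0, 1000.0
t = np.arange(0, T, 1 / fs)
waveforms = [np.exp(2j * np.pi * k * t / T) / np.sqrt(T) for k in (1, 4, 9)]

def xcorr(s1, s2):
    # cross-correlation integral approximated by a Riemann sum
    return np.sum(s1 * np.conj(s2)) / fs

# Gram matrix of the waveform set: identity up to numerical precision,
# i.e. unit auto-correlations and vanishing cross terms
gram = np.array([[xcorr(a, b) for b in waveforms] for a in waveforms])
```

the vanishing off - diagonal entries are what zero out the cross blocks of the fim , leaving only the diagonal terms in the orthogonal - waveform expressions above .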
in this appendix , we develop the fim for the unknown parameter vector @xmath182 , based on the conditional pdf in ( [ eq : pdf_c ] ) .
the expression for @xmath413 $ ] is derived using : @xmath74{l}\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ii^{\prime}}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{c}\right ) } { \partial\tau_{\ell k}\partial\tau_{\ell^{\prime}k^{\prime}}}\right ] , \\
\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ( mn+1),(mn+1)}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{c}\right ) } { \left ( \partial\zeta^{r}\right ) ^{2}\ } \right ] , \\
\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ( mn+2),(mn+2)}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{c}\right ) } { \left ( \partial\zeta^{i}\right ) ^{2}}\right ] , \\
\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ( mn+1),(mn+2)}=\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ( mn+2),(mn+1)}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{c}\right ) } { \partial\zeta^{r}\ \partial\zeta^{i}}\right ] ,
\\ \left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { i,\left ( mn+1\right ) } = \left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { \left ( mn+1\right ) , i^{\prime}}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{c}\right ) } { \partial\tau_{\ell k}\ \partial \zeta^{r}\ } \right ] , \\
\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { i,\left ( mn+2\right ) } = \left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { \left ( mn+2\right ) , i^{\prime}}=-e\left [ \frac{\partial^{2}\log p\left ( \mathbf{r}|\mathbf{\psi}_{c}\right ) } { \partial\tau_{\ell k}\ \partial \zeta^{i}}\right ] , \end{array } \label{eq : app_b1}\\ & \begin{array } [ c]{cc}i=(\ell-1)\ast m+k , & i^{\prime}=(\ell^{\prime}-1)\ast m+k^{\prime},\\ \ell,\ell^{\prime}=1, .. ,n ; & k , k^{\prime}=1, .. ,m . \end{array } \text { \ .\ } \nonumber\end{aligned}\ ] ] the first derivative of @xmath414 with respect to the elements of @xmath114 is:@xmath415 } { \partial\tau_{\ell k}}= & \text{$\frac{1}{\sigma_{w}^{2}}$}{\textstyle\int } \left\ { \left [ { \small r}_{\ell}{\small ( t)-}\overset{m}{\underset { k^{\prime}=1}{\sum}}\zeta\exp\left ( -j2\pi f_{c}\tau_{\ell k^{\prime}}\right ) { \small s}_{k^{\prime}}\left ( t-\tau_{\ell k^{\prime}}\right ) \right ] \cdot\zeta^{\ast}\frac{{\small \partial}\left [ \exp\left ( j2\pi f_{c}\tau_{\ell k}\right ) { \small s}_{k}^{\ast}\left ( t-\tau_{\ell k}\right ) \right ] } { { \small \partial\tau}_{\ell k}}\right .
\label{eq : app_b3}\\ & \left .
+ \left [ { \small r}_{\ell}{\small ( t)-}\overset{m}{\underset { k^{\prime}=1}{\sum}}\zeta\exp\left ( -j2\pi f_{c}\tau_{\ell k^{\prime}}\right ) { \small s}_{k^{\prime}}\left ( t-\tau_{\ell k^{\prime}}\right ) \right ] ^{\ast}\cdot\zeta\frac{{\small \partial}\left [ \exp\left ( -j2\pi f_{c}\tau_{\ell k}\right ) { \small s}_{k}\left ( t-\tau_{\ell k}\right ) \right ] } { { \small \partial\tau}_{\ell k}}\right\ } dt.\nonumber\end{aligned}\ ] ] applying the second derivative to ( [ eq : app_b3 ] ) define a matrix @xmath139 with the following elements:@xmath416_{ii^{\prime } } & = \frac{\sigma_{w}^{2}}{2}[{\small \mathbf{j}\left ( \mathbf{\psi}\right ) } ] _ { ii^{\prime}}=\label{eq : app_b4}\\ & = e\left\ { \frac{\partial^{2}}{\partial{\tau_{\ell k}}\partial{\tau _ { \ell^{\prime}k^{\prime}}}}\int\left [ \zeta\zeta^{\ast}\exp\left ( j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell^{\prime}k^{\prime}}\right ) \right ) { \small s}_{k^{\prime}}\left ( t-{\tau_{\ell k^{\prime}}}\right ) { \small s}_{k}^{\ast}\left ( t-{\tau_{\ell k}}\right ) \right
. \right .
\nonumber\\ & \left
. + \left .
\zeta^{\ast}\zeta\exp\left ( -j2\pi\left ( \tau_{\ell k}-\tau_{\ell k^{\prime}}\right ) \right ) { \small s}_{k^{\prime}}^{\ast } \left ( t-{\tau_{\ell k^{\prime}}}\right ) { \small s}_{k}\left ( t-{\tau_{\ell k}}\right ) \right ] dt\right\ } \nonumber\\ & = \operatorname{re}\left\ { \left\vert \zeta\right\vert ^{2}\left [ \frac{\partial^{2}}{\partial{\tau_{\ell k}}\partial{\tau_{\ell^{\prime } k^{\prime}}}}\left ( \exp\left ( -j2\pi f_{c}\left ( \tau_{\ell k}-\tau _ { \ell^{\prime}k^{\prime}}\right ) \right ) \left [ \mathbf{r}_{s}\right ] _ { ii^{\prime}}\right ) \right ] \right\ } .\nonumber\end{aligned}\ ] ] in matrix form , @xmath417 where the operator @xmath418 and the matrix @xmath419 were defined in appendix [ section : appendixa ] , @xmath420 @xmath421 $ ] .
the second and third terms in ( [ eq : app_b1 ] ) define a matrix @xmath191 with the following elements:@xmath422_{11 } & = [ \mathbf{\lambda}_{\alpha c}]_{22}=\frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ( mn+1),(mn+1)}=\frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ( mn+2),(mn+2)}\label{eq : app_b6}\\ & = e\left\ { \overset{n}{\underset{\ell=1}{\sum}}\overset{m}{\underset { k=1}{\sum}}\int\left [ \overset{m}{\underset{k^{\prime}=1}{\sum}}\exp\left ( -j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell k^{\prime}}\right ) \right ) { \small s}_{k}\left
( t-{\tau_{\ell k}}\right ) { \small s}_{k^{\prime}}^{\ast } \left
( t-{\tau_{\ell k^{\prime}}}\right ) \right .
\nonumber\\ & \left .
+ \overset{m}{\underset{k^{\prime}=1}{\sum}}\exp\left ( j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell k^{\prime}}\right ) \right ) { \small s}_{k}^{\ast}\left ( t-{\tau_{\ell k}}\right ) { \small s}_{k^{\prime}}\left ( t-{\tau_{\ell k^{\prime}}}\right ) \right ] dt\right\ } \nonumber\\ & = \operatorname{re}\left\ { \overset{n}{\underset{\ell=1}{\sum}}\overset { n}{\underset{\ell^{\prime}=1}{\sum}}\overset{m}{\underset{k=1}{\sum}}\overset{m}{\underset{k^{\prime}=1}{\sum}}\exp\left ( -j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell^{\prime}k^{\prime}}\right ) \right ) \left [ \mathbf{r}_{s}\right ] _ { ii^{\prime}}\right\ } , \nonumber\end{aligned}\ ] ] and @xmath422_{12 } & = [ \mathbf{\lambda}_{\alpha c}]_{21}=\frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ( mn+1)(mn+2)}=\frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ( mn+2)(mn+1)}=\label{eq : app_b7}\\ & = e\left\ { \overset{n}{\underset{\ell=1}{\sum}}\overset{m}{\underset { k=1}{\sum}}\int\left [ \overset{m}{\underset{k^{\prime}=1}{\sum}}\left ( j\right ) ^{\ast}\exp\left ( -j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell k^{\prime}}\right ) \right ) { \small s}_{k}\left ( t-{\tau_{\ell k}}\right ) { \small s}_{k^{\prime}}^{\ast}\left ( t-{\tau_{\ell k^{\prime}}}\right ) \right .
\nonumber\\ & \left .
+ \overset{m}{\underset{k^{\prime}=1}{\sum}}\left ( j\right ) \exp\left ( j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell k^{\prime}}\right ) \right ) { \small s}_{k}^{\ast}\left ( t-{\tau_{\ell k}}\right ) { \small s}_{k^{\prime}}\left ( t-{\tau_{\ell k^{\prime}}}\right ) \right ] dt\right\ } \nonumber\\ & = -\operatorname{im}\left\ { \overset{n}{\underset{\ell=1}{\sum}}\overset { n}{\underset{\ell^{\prime}=1}{\sum}}\overset{m}{\underset{k=1}{\sum}}\overset{m}{\underset{k^{\prime}=1}{\sum}}\exp\left ( -j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell^{\prime}k^{\prime}}\right ) \right ) \left [ \mathbf{r}_{s}\right ] _ { ii^{\prime}}\right\ } .\nonumber\end{aligned}\ ] ] in matrix form , @xmath423{cc}\operatorname{re}\left [ \mathbf{er}_{s}\mathbf{e}^{h}\right ] & -\operatorname{im}\left [ \mathbf{er}_{s}\mathbf{e}^{h}\right ] \\ -\operatorname{im}\left [ \mathbf{er}_{s}\mathbf{e}^{h}\right ] & \operatorname{re}\left [ \mathbf{er}_{s}\mathbf{e}^{h}\right ] \end{array } \right ] .\label{e : lambdac}\ ] ] the fourth and fifth terms in ( [ eq : app_b1 ] ) define the matrix @xmath192 with the following elements:@xmath424_{i1 } & = \frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { i,(mn+1)}=\frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ( mn+1),i^{\prime}}\label{eq : app_b8}\\ & = e\left\ { \frac{\partial}{\partial{\tau_{\ell k}}}\int\left [ \zeta\overset{m}{\underset{k^{\prime}=1}{\sum}}\exp\left ( -j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell k^{\prime}}\right ) \right ) { \small s}_{k}\left ( t-\tau_{\ell k}\right ) { \small s}_{k^{\prime}}^{\ast}\left ( t-\tau_{\ell k^{\prime}}\right ) \right
. \right .
\nonumber\\ & \left .
+ \zeta^{\ast}\overset{m}{\underset{k^{\prime}=1}{\sum}}\exp\left ( j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell k^{\prime}}\right ) \right ) { \small s}_{k}^{\ast}\left ( t-\tau_{\ell k}\right ) { \small s}_{k^{\prime}}\left ( t-\tau_{\ell k^{\prime}}\right ) \right ] dt\right\ } \nonumber\\ & = \frac{{\small \partial}}{{\small \partial\tau}_{\ell k}}\operatorname{re}\left\ { \overset{n}{\underset{\ell^{\prime}=1}{\sum}}\overset{m}{\underset{k^{\prime}=1}{\sum}}\zeta\exp\left ( -j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell^{\prime}k^{\prime}}\right ) \right ) \left [ \mathbf{r}_{s}\right ] _ { ii^{\prime}}dt\right\ } , \nonumber\end{aligned}\ ] ] and@xmath424_{i2 } & = \frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { i,(mn+2)}=\frac{\sigma_{w}^{2}}{2}\left [ \mathbf{j}\left ( \mathbf{\psi}_{c}\right ) \right ] _ { ( mn+2),i^{\prime}}\label{eq : app_b9}\\ & = e\left\ { \frac{\partial}{\partial{\tau_{\ell k}}}\int\left [ \left ( j\zeta\right ) \overset{m}{\underset{k^{\prime}=1}{\sum}}\exp\left ( -j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell k^{\prime}}\right ) \right ) { \small s}_{k}\left
( t-\tau_{\ell k}\right ) { \small s}_{k^{\prime}}^{\ast } \left ( t-\tau_{\ell k^{\prime}}\right ) \right
. \right .
\nonumber\\ & \left .
\left . + \left ( j\zeta\right ) ^{\ast}\overset{m}{\underset { k^{\prime}=1}{\sum}}\exp\left ( j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell k^{\prime}}\right ) \right ) { \small s}_{k}^{\ast}\left
( t-\tau_{\ell k}\right ) { \small s}_{k^{\prime}}\left ( t-\tau_{\ell k^{\prime}}\right ) \right ] dt\right\ } \nonumber\\ & = -\frac{{\small \partial}}{{\small \partial\tau}_{\ell k}}\operatorname{im}\left\ { \overset{n}{\underset{\ell^{\prime}=1}{\sum}}\overset{m}{\underset{k^{\prime}=1}{\sum}}\zeta\exp\left ( -j2\pi f_{c}\left ( \tau_{\ell k}-\tau_{\ell^{\prime}k^{\prime}}\right ) \right ) \left [ \mathbf{r}_{s}\right ] _ { ii^{\prime}}dt\right\ } .\nonumber\end{aligned}\ ] ] in matrix form , @xmath425{cc}\frac{\partial}{\partial\mathbf{\tau}}\operatorname{re}\left\ { \zeta\left [ \operatorname*{diag}(\mathbf{e)r}_{s}\right ] \mathbf{e}^{h}\right\ } ; & -\frac{\partial}{\partial\mathbf{\tau}}\operatorname{im}\left\ { \zeta\left [ \operatorname*{diag}(\mathbf{e)r}_{s}\right ] \mathbf{e}^{h}\right\ } \end{array } \right ] .\label{e : vc}\ ] ] _ orthogonal waveforms _
Orthogonality implies that all cross elements @xmath426 for @xmath411 and @xmath427.
therefore , the matrices defined by ( [ eq : app_b4])-([eq : app_b9 ] ) take the following form:@xmath157{c}\left [ \mathbf{s}_{c_{or}}\right ] _ { ii^{\prime}}=\left\ { \begin{array } [ c]{cc}4\pi^{2}\left\vert \zeta\right\vert ^{2}f_{c}^{2}f_{r_{k } } & i = i^{\prime}\\ 0 & i\neq i^{\prime}\end{array } \right .
\\ \lbrack\mathbf{\lambda}_{\alpha c_{or}}]_{11}=[\mathbf{\lambda}_{\alpha_{or}}]_{22}=\left\ { \begin{array } [ c]{cc}\frac{1}{mn } & i = i^{\prime}\\ 0 & i\neq i^{\prime}\end{array } \right . \\
\lbrack\mathbf{\lambda}_{\alpha c_{or}}]_{21}=[\mathbf{\lambda}_{\alpha_{or}}]_{12}=0\\ \lbrack\mathbf{v}_{c_{or}}]_{i1}=2\pi\zeta^{i}f_{c}\\ \lbrack\mathbf{v}_{c_{or}}]_{i2}=-2\pi\zeta^{r}f_{c}. \end{array } \label{eq : app_b10}\ ] ] where @xmath206 .
When we invoke the narrowband assumption @xmath207, it follows that @xmath208.
the submatrix @xmath194 _ { 2\times2}$ ] is defined as : @xmath195 _ { 2\times2}=\left [ \mathbf{j}\left ( \mathbf{\theta}_{c}\right ) \right ] _ { 2\times2}^{-1}.\label{eq : a1eq1}\ ] ] for a given matrix of the form : @xmath428{cc}\mathbf{hs}_{c}\mathbf{h}^{t } & \mathbf{hv}_{c}\\ \mathbf{v}_{c}^{t}\mathbf{h}^{t } & \mathbf{\lambda}_{{\normalsize \alpha c}}\end{array } \right ] , \label{eq : a1eq2}\ ] ] where @xmath429 is a diagonal matrix of the form @xmath430 , and @xmath431 is some constant . by definition , the value of @xmath432 _ { 1,1}^{-1}$ ] is obtained by : @xmath433 _ { 1,1}^{-1}=\frac{\left\vert \widetilde{\mathbf{j}}\left ( \mathbf{\theta}_{c}\right ) _ { ex\left ( 1,1\right ) } \right\vert } { \left\vert \mathbf{j}\left ( \mathbf{\theta}_{c}\right ) \right\vert } , \label{eq : appcr1a}\ ] ] where @xmath434 denotes the determinant , and
@xmath435 is a submatrix , obtained by removing the first row and the first column of the @xmath436 matrix .
the determinant of @xmath436 , using the property that the determinant of a matrix does not change under linear operations , is : @xmath437{cc}\mathbf{h\mathbf{s}}_{c}\mathbf{h}^{t}-\mathbf{v}_{c}^{t}\mathbf{h}^{t}\mathbf{\lambda}_{{\normalsize \zeta}}^{-1}\mathbf{hv}_{c } & \mathbf{0}\\ \mathbf{v}_{c}^{t}\mathbf{h}^{t } & \mathbf{\lambda}_{{\normalsize \alpha c}}\end{array } \right\vert .\label{eq : appcr4a}\ ] ] this can be calculated and expressed as : @xmath438 repeating the same for the matrix @xmath439 : @xmath440{cc}\widetilde{\mathbf{h\mathbf{\mathbf{s}}}_{c}\mathbf{h}^{t}}_{ex\left ( 1,1\right ) } & \widetilde{\mathbf{hv}_{c}}_{ex\left ( 1,\right ) } \\
\widetilde{\mathbf{v}_{c}^{t}\mathbf{h}^{t}}_{ex\left ( , 1\right ) } & \mathbf{\lambda}_{{\normalsize \alpha c}}\end{array } \right ] .\label{eq : appcr7a}\ ] ] using the same matrix manipulation , we get : @xmath441 and using terms ( [ eq : appcr6a ] ) and ( [ eq : appcr8 ] ) in ( [ eq : appcr1a ] ) yields : @xmath433 _ { 1,1}^{-1}=\frac{\left\vert \widetilde{\mathbf{h\mathbf{s}}_{c}\mathbf{h}^{t}}-\widetilde{\mathbf{v}_{c}^{t}\mathbf{h}^{t}}\mathbf{\lambda}_{{\normalsize \alpha c}}^{-1}\widetilde{\mathbf{hv}_{c}}\right\vert } { \left\vert
\mathbf{h\mathbf{s}}_{c}\mathbf{h}^{t}-\mathbf{v}_{c}^{t}\mathbf{h}^{t}\mathbf{\lambda}_{{\normalsize \alpha c}}^{-1}\mathbf{hv}_{c}\right\vert } .\label{eq : appcr9a}\ ] ] by definition , this expression is identical to : @xmath433 _ { 1,1}^{-1}{\normalsize = } \left [ \left ( \mathbf{h\mathbf{s}}_{c}\mathbf{h}^{t}-\mathbf{v}_{c}^{t}\mathbf{h}^{t}\mathbf{\lambda}_{{\normalsize \alpha c}}^{-1}\mathbf{hv}_{c}\right ) ^{-1}\right ] _ { 1,1}.\label{eq : appcr10a}\ ] ] repeating the process for term located at @xmath442 , @xmath443 , and @xmath444 , results in : @xmath445 _ { 2\times 2}{\normalsize = } \left ( \mathbf{h\mathbf{s}}_{c}\mathbf{h}^{t}-\mathbf{v}_{c}^{t}\mathbf{h}^{t}\mathbf{\lambda}_{{\normalsize \alpha c}}^{-1}\mathbf{hv}_{c}\right ) ^{-1}.\label{eq : app_c2}\ ] ]
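The inversion step leading to ([eq:app_c2]) is the standard Schur-complement identity: for a partitioned matrix [[A, B], [B^T, D]], the leading block of the inverse equals (A - B D^{-1} B^T)^{-1}. A minimal NumPy check on an arbitrary positive-definite matrix (block sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric positive-definite matrix, partitioned as
#   J = [[A, B],
#        [B.T, D]]
# mirroring the structure of the Fisher matrix in the derivation.
M = rng.standard_normal((5, 5))
J = M @ M.T + 5 * np.eye(5)               # SPD, so every inverse below exists
A, B, D = J[:2, :2], J[:2, 2:], J[2:, 2:]

# Leading 2x2 block of J^{-1}, computed directly ...
direct = np.linalg.inv(J)[:2, :2]

# ... equals the inverse of the Schur complement A - B D^{-1} B.T
schur = np.linalg.inv(A - B @ np.linalg.inv(D) @ B.T)

assert np.allclose(direct, schur)
```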
For a set of received waveforms @xmath446
@xmath447 the time delay estimates @xmath448 ^{t}$ ] are determined by maximizing the following statistic : @xmath449 .\label{eq : apptoa1}\ ] ] equivalently , @xmath450 _ { v=\mu_{\ell k}}=0.\label{e : tde}\ ] ] the time delay estimates are expressed in ( [ e : mu ] ) .
The properties of the noise @xmath451 can be computed from ([e:td]) and ([e:r]).
it is not difficult to show that the following relation holds:@xmath452 where @xmath453 s_{k}\left ( t-\tau_{\ell k}\right ) s_{k}^{\ast}\left ( t - v\right ) dt,\label{eq : apptoa4}\ ] ] and@xmath454 we wish to write ( [ eq : apptoa5 ] ) in the form of ( [ e : mu ] ) . with a few algebraic manipulations , including expanding @xmath455 in a taylor series around @xmath90 and neglecting terms @xmath456 ,
$ ] it can be shown that @xmath457 comparing this with ( [ e : mu ] ) , and invoking the narrowband assumption @xmath207 , we have for the error term @xmath458 to find the first and second order statistics of @xmath459 we need the statistical characterization of @xmath460 .
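The estimate above is the peak of a matched-filter correlation. A minimal discrete-time sketch of that peak search, for a single transmit-receive pair (the probing sequence, sample rate, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1e6                                    # sample rate, Hz (assumed)
s = rng.choice([-1.0, 1.0], size=200)       # wideband probing waveform (illustrative)

true_lag = 123                              # delay tau_{lk}, in samples
r = np.zeros(1000)
r[true_lag:true_lag + s.size] = s           # delayed echo of the waveform
r += 0.05 * rng.standard_normal(r.size)     # additive receiver noise

# Matched filter: evaluate the statistic integral r(t) s*(t - v) dt over
# candidate delays v, and take the maximizer as the delay estimate.
corr = np.correlate(r, s, mode="valid")
lag = int(np.argmax(corr))
tau_hat = lag / fs

assert lag == true_lag
```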
As previously stated, we assume the receiver noise @xmath461 is a Gaussian random process with zero mean and autocorrelation function @xmath462. Since @xmath463 is a linear transformation of the process @xmath464, it is itself a Gaussian random variable.
since the mean @xmath461 is zero , @xmath465 = 0.$ ] similarly , it can be shown that @xmath466 = \left\ { \begin{array } [ c]{lc}0 & \forall\ell k\neq nm\\ 2\pi^{2}\sigma_{w}^{2}f_{c}^{2 } & \forall\ell k = nm\text { } \end{array } \right . .\label{eq : apptoa14}\ ] ] using these results , we finally get @xmath467 & = \frac{e\left [ n_{\ell k}{}n_{nm}\right ] } { 16\pi^{4}\left\vert \zeta\right\vert ^{2}f_{c}^{4}}\label{eq : apptoa15}\\ & = \left\ { \begin{array } [ c]{lc}0 & \forall\ell k\neq nm\\ \frac{1}{8\pi^{2}f_{c}^{2}\left ( \left\vert \zeta\right\vert ^{2}/\sigma _ { w}^{2}\right ) } & \forall\ell k = nm\text { } \end{array } \right . , \nonumber\end{aligned}\ ] ] concluding that the covariance matrix of the terms @xmath451 is given by : @xmath468
This paper presents an analysis of target localization accuracy attainable by the use of MIMO (multiple-input multiple-output) radar systems configured with multiple transmit and receive sensors widely distributed over a given area.
The Cramér-Rao lower bound (CRLB) for target localization accuracy is developed for both coherent and non-coherent processing. Coherent processing requires a common phase reference for all transmit and receive sensors. The CRLB is shown to be inversely proportional to the signal effective bandwidth in the non-coherent case, but approximately inversely proportional to the carrier frequency in the coherent case.
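This scaling matches the classical delay-estimation rule of thumb var(tau) >= 1 / (8 pi^2 f_eff^2 SNR), where f_eff is the effective bandwidth in the non-coherent case and roughly the carrier frequency in the coherent case. A back-of-the-envelope comparison (the SNR, bandwidth, and carrier values below are illustrative assumptions):

```python
import math

c = 3e8        # propagation speed, m/s
snr = 100      # post-processing SNR (assumed)
beta = 10e6    # effective signal bandwidth, Hz (assumed)
fc = 1e9       # carrier frequency, Hz (assumed)

# Rule-of-thumb delay CRLB: var(tau) >= 1 / (8 * pi^2 * f_eff^2 * SNR),
# with f_eff = beta (non-coherent) or ~fc (coherent), as stated above.
def range_std(f_eff):
    return c * math.sqrt(1.0 / (8 * math.pi ** 2 * f_eff ** 2 * snr))

sigma_noncoherent = range_std(beta)
sigma_coherent = range_std(fc)
assert sigma_coherent < sigma_noncoherent   # finer by a factor of fc / beta
```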
We further prove that optimization over the sensors' positions lowers the CRLB by a factor equal to the product of the number of transmitting and receiving sensors. The best linear unbiased estimator (BLUE) is derived for the MIMO target localization problem. The BLUE's utility is in providing a closed-form localization estimate that facilitates the analysis of the relations among sensor locations, target location, and localization accuracy.
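For a linearized observation model x = H theta + w with noise covariance C, the BLUE is the Gauss-Markov estimator theta_hat = (H^T C^-1 H)^-1 H^T C^-1 x, whose covariance (H^T C^-1 H)^-1 is the kind of closed-form accuracy expression referred to here. A generic sketch (the observation matrix and noise covariance below are arbitrary illustrations, not the paper's radar geometry):

```python
import numpy as np

rng = np.random.default_rng(2)

theta = np.array([50.0, -20.0])             # unknown target position (illustrative)
H = rng.standard_normal((8, 2))             # linearized observation matrix (assumed)
C = np.diag(rng.uniform(0.5, 2.0, size=8))  # measurement noise covariance (assumed)

x = H @ theta + rng.multivariate_normal(np.zeros(8), C)

# BLUE / Gauss-Markov estimator: theta_hat = (H^T C^-1 H)^-1 H^T C^-1 x
Ci = np.linalg.inv(C)
theta_hat = np.linalg.solve(H.T @ Ci @ H, H.T @ Ci @ x)

# Its covariance, (H^T C^-1 H)^-1, is the closed-form accuracy expression
cov = np.linalg.inv(H.T @ Ci @ H)
```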
Geometric dilution of precision (GDOP) contours are used to map the relative performance accuracy for a given layout of radars over a given geographic area.
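A common scalar form of GDOP for position-only estimation is sqrt(trace((G^T G)^-1)), where the rows of G are unit line-of-sight vectors from the sensors to the target; contour maps come from evaluating it over a grid of candidate target positions. A minimal sketch (the sensor layout is an arbitrary illustration):

```python
import numpy as np

def gdop(sensors, target):
    """GDOP = sqrt(trace((G^T G)^-1)), rows of G = unit vectors sensor -> target."""
    d = target - sensors                              # (n, 2) displacement vectors
    G = d / np.linalg.norm(d, axis=1, keepdims=True)  # unit line-of-sight vectors
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

# Four sensors at the corners of a square (illustrative layout)
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

good = gdop(sensors, np.array([5.0, 5.0]))    # target surrounded by sensors
bad = gdop(sensors, np.array([100.0, 5.0]))   # sensors all to one side

assert good < bad
```

Evaluating `gdop` over a grid of target points yields exactly the kind of contour map described above.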
Keywords: MIMO radar, spatial processing, adaptive array.
FILE - In this March 21, 2016 file photo, the Flint Water Plant water tower is seen in Flint, Mich. The U.S. Environmental Protection Agency said on March 27, 2017, that a $100 million grant to address drinking water issues in the city was approved after a formal application from Michigan state officials. (Associated Press)
DETROIT (AP) — Michigan and the city of Flint agreed Monday to replace thousands of home water lines under a sweeping deal to settle a lawsuit by residents over lead-contaminated water in the struggling community.
Flint will replace at least 18,000 lead or galvanized-steel water lines by 2020, and the state will pick up the bill with state and federal money, according to the settlement filed in federal court. It will be presented Tuesday to U.S. District Judge David Lawson for likely approval.
More than 700 water lines already have been replaced and work is ongoing, but the agreement would rid Flint's roughly 100,000 residents of uncertainty over how to pay for the enormous task. Under the settlement, the state will set aside $87 million and keep another $10 million in reserve if necessary.
"The proposed agreement is a win for the people of Flint," said Dimple Chaudhary, an attorney with the Natural Resources Defense Council, which is working with the American Civil Liberties Union of Michigan to represent Flint residents.
"It provides a comprehensive framework to address lead in Flint tap water and covers a number of critical issues related to water safety," Chaudhary told The Associated Press.
Despite the development, some residents still feel discouraged. Reneta Richard, a 38-year-old teacher, said another few years for new pipes "compiles the despair that I see and feel." She heats bottled water for kitchen use because hot tap water can damage the filter.
"When I see someone on TV just turn on the water and wash their hands — I haven't been able to do that for years," Richard said.
Flint's water was tainted with lead for at least 18 months, starting in spring 2014. The city, under the control of state-appointed financial managers, tapped the Flint River while a new pipeline was being built to Lake Huron, but the water wasn't treated to reduce corrosion. As a result, lead leached from old pipes and fixtures.
Gov. Rick Snyder finally acknowledged the disaster in fall 2015 after elevated lead levels were found in children. Water quality has improved since Flint returned to the Detroit regional system, but residents still are advised to use filters.
The agreement filed Monday was the result of negotiations involving a court-appointed mediator. In November, Lawson ordered the state to deliver bottled water to residents who have trouble with filters, although the state said that remedy would be extremely difficult to meet.
Residents who get new water lines will be urged to continue using a filter for six months. There will be no cost for replacement cartridges or household testing kits.
There will be tests for lead in the Flint system every six months until one year after the replacement of water lines. An independent monitor also will check household water samples for lead, and the results will be posted online.
The agreement also includes ways for the state to begin closing the nine Flint water distribution sites, starting May 1, depending on demand. They all could be closed after Sept. 1, depending on tap water quality.
|||||
Detroit — The state will spend an additional $47 million to help ensure safe drinking water in Flint by replacing lead pipes and providing free bottled water under a proposed settlement announced Monday.
The money is in addition to $40 million previously budgeted to address Flint’s widespread lead-contamination crisis. The state also will set aside $10 million to cover unexpected costs, bringing the total to $97 million.
The settlement was revealed in a lawsuit filed last year by a coalition of religious, environmental and civil rights activists. The coalition alleged Flint water was not safe to drink because state and city officials were violating the Safe Drinking Water Act.
“We think this proposed agreement provides a comprehensive framework to address lead contamination in Flint’s tap water,” said Dimple Chaudhary, a senior attorney for the Natural Resources Defense Council and lead plaintiffs’ counsel on the case. “It covers a number of critical issues related to water safety.”
The deal provides more money to repair the city’s water lines but also gives the state an opportunity in the future to stop providing free bottled water to residents.
U.S. District Judge David Lawson will review the settlement during a 1 p.m. hearing Tuesday in Detroit.
A spokeswoman for Gov. Rick Snyder declined comment Monday, citing the pending court hearing. A state Treasury spokeswoman declined comment.
The lawsuit was filed last year by a group led by the Natural Resources Defense Council, the ACLU of Michigan, Concerned Pastors for Social Action and Flint resident Melissa Mays, who also declined comment Monday.
The proposed deal covers a four-year period and comes 10 days after the Environmental Protection Agency awarded a $100 million emergency grant to Michigan to fund infrastructure upgrades in Flint, where lead-contaminated water damaged service lines.
The funding was approved by Congress in December and signed into law by President Barack Obama, but the EPA had to review and approve a formal request from state officials detailing how the city intends to use the grant money.
Under terms of Monday’s deal, the city will replace lead and galvanized steel service lines at homes served by the Flint’s municipal water system.
Also, the state will deliver free bottled water to homebound residents and continue operating at least nine water distribution centers Monday through Saturday. The state will provide bottled water, free filters, cartridges and water-testing kits at each of the centers.
The agreement does not provide door-to-door bottled water delivery, as sought by the plaintiffs, but residents can continue to call the 211 water response service and receive free bottled water within 24 hours.
The water-distribution centers can be closed after May 1 if 20 people or fewer pick up supplies at a particular location. The centers could be shut down as early as Sept. 1 depending on water-quality tests.
The city, meanwhile, will identify what type of service lines are in place at a minimum of 18,000 homes and pay to replace any lead or galvanized steel lines with copper service lines. The figure of 18,000 homes is an estimate of how many lead pipes exist in Flint.
An evaluation will be conducted in approximately one year and if there are more than 18,000 lines, the state will spend the $10 million in reserve funds to replace those pipes.
If costs rise beyond $97 million, the state is obliged to use its best efforts to secure additional funding, possibly from the Legislature, according to the proposed settlement.
The proposed agreement also includes the state paying $895,000 in litigation costs to the plaintiffs.
State Rep. Sheldon Neeley, D-Flint, said Monday that Flint can use all the help it can get.
“Right now, we’re still at a point where residents are still drinking bottled water, some are still cooking with bottled water and every little bit that has been allocated for getting people back to a normal standard quality of life is important.”
The proposed settlement comes 14 months after the coalition sued several parties, including state Treasurer Nick Khouri, the five-member Flint Receivership Transition Advisory Board, Flint City Administrator Natasha Henderson and the city of Flint.
The government also agreed to monitor lead levels in Flint’s tap water for one year — more time than required under the law. The settlement is retroactive to Jan. 1.
According to the settlement, an independent program will be created for additional monitoring. The monitor will not be affiliated with the state or Flint.
Additionally, Flint residents can continue to have their tap water tested for free for the next four years. Residents can have their water tested up to four times each year.
The state also will strengthen its water filter installation and education program, according to the deal.
Teams will go door-to-door to educate residents and ensure filters are installed correctly and provide replacement filter cartridges through next year.
Senate Minority Leader Jim Ananich, D-Flint, called it “a very fair settlement” and said that he’s been “given assurances” that Flint will have enough money to completely replace the city’s lead pipes.
“So I’m gonna hold them to that,” he said. “We’ll make sure that the resources are there.”
Staff Writer Michael Gerstein contributed.
58 of 67 Skip in Skip x Embed x Share CLOSE FLINT WATER CRISIS Lions donate 100,000 bottles of water to Flint | 1:18 Detroit Lions defensive end Ziggy Ansah and his teammates led a drive that donated more than 100,000 bottles to community centers in Flint. By Dave Birkett, DFP. 59 of 67 Skip in Skip x Embed x Share CLOSE FLINT WATER CRISIS Flint resident speaks on water crisis | 0:44 Michigan State Police and National Guard members gave Flint residents bottles water, filters and lead testing kits Jan. 21. Katrease Stafford Detroit Free Press 60 of 67 Skip in Skip x Embed x Share CLOSE FLINT WATER CRISIS Michael Moore talks in Flint on the water crisis | 2:15 Flint native and director talked to a crowd of 150 about the ongoing Flint water crisis that is affecting thousands of people. Eric Seals/DFP 61 of 67 Skip in Skip x Embed x Share CLOSE FLINT WATER CRISIS Snyder answers questions during auto show tour | 2:57 Gov. Rick Snyder tours the Detroit auto show, but gets questions about the Flint water crisis and Detroit Public Schools. Kathleen Gray, Detroit Free Press 62 of 67 Skip in Skip x Embed x Share CLOSE FLINT WATER CRISIS Flint water emergency press conference | 1:26 Department of Health and Human Services Chief Medical Executive Eden Wells discusses Flint water emergency during press conference at State Emergency Operations Center in Lansing. Ryan Garza/Detroit Free Press 63 of 67 Skip in Skip x Embed x Share CLOSE FLINT WATER CRISIS Gov. Rick Snyder meets with Flint mayor on water crisis | 2:13 Michigan Gov. Rick Snyder and Flint Mayor Karen Weaver said they had a constructive meeting Thursday on the drinking water crisis. Paul Egan, Detroit Free Press 64 of 67 Skip in Skip x Embed x Share CLOSE FLINT WATER CRISIS Mich. Gov. Rick Snyder on the Flint water crisis | 2:23 Michigan governor Rick Snyder talks about the Flint water situation with the editorial board of the Detroit Free Press on Monday, December 14, 2015. 
Romain Blanquart Detroit Free Press 65 of 67 Skip in Skip x Embed x Share CLOSE FLINT WATER CRISIS Ann Arbor resident describes anger toward Gov. Snyder | 2:13 Susan Fecteau said she's upset with the way Gov. Rick Snyder has handled the Flint water crisis. Katrease Stafford, Detroit Free Press 66 of 67 Skip in Skip x Embed x Share CLOSE FLINT WATER CRISIS Matt Cartwright questions Gov. Rick Snyder | 1:12 Rep. Matt Cartwright questions Gov. Rick Snyder 67 of 67 Last VideoNext Video Spokesman says Gov. Snyder stands by his story on learning about Legionnaires'
Study: Flint birth rates down, fetal deaths up after water change
Flint water crisis: Nick Lyon heads to court
Frank Kelley: Drop Flint charges against Lyon
What 4 Michigan gubernatorial candidates are saying about Flint
AG Bill Schuette on investigation into Gov. Snyder
Who is facing involuntary manslaughter charges in the Flint water crisis?
See who gave Flint more than $1B
5 officials charged with involuntary manslaughter over Flint water crisis
Flint water might have made people sick again, but not from lead
See what's the latest on the Flint Water Crisis
Flint mayor wants meeting with governor
The lead levels in Flint's water are finally going down
Flint's Michael Moore rips Donald Trump in D.C. speech
Gov. Rick Snyder talks charges in Flint water crisis
Don't Forget About Flint
Documentary on Flint water crisis to debut next year
Flint resident describes struggles dealing with Flint water crisis
AG Bill Schuette announces charges in Flint water crisis
Flint Mayor Karen Weaver speaks at Democratic National Convention
Bill Schuette announces Flint water crisis lawsuits
Flint to Get Lake Water When System is Ready
Church Aids Spanish Speakers in Flint Crisis
NAACP sues Michigan officials For negligence in Flint water crisis
Snoop, Mo-Pete and hoops to help Flint
Snyder, Weaver urge residents to flush taps
Obama: 'Corrosive attitude' led to Flint crisis
Obama drinks filtered Flint water in Michigan
Obama greets 'Little Miss Flint' with massive bear hug
8-year-old girl's letter inspires Obama to visit Flint
Flint resident reacts to criminal charges announced today
Schuette announces charges in Flint water crisis
Gov. Rick Snyder reacts to criminal charges in Flint water crisis
Snyder says he'll drink Flint water for 30 days
Snyder discusses aid for Flint
Flint Mayor Karen Weaver updates reporters on lead lines
Legionnaires' disease: 5 things you need to know
Flint man describes living without water during Flint water crisis
Mica calls for McCarthy's resignation
Mark Ruffalo puts spotlight on Flint water crisis
Elijah E. Cumming's opening statement at Flint water hearing
Gerry Connolly questions Rick Snyder
Snyder on what has changed after Flint crisis
Snyder can only take so much at Flint hearing
Gov. Snyder: Government 'failed the families of Flint'
Flint water crisis: 5 things you need to know
Flint resident begins gathering info for recall of Snyder
New trouble in Flint: Adults poisoned, too
Michigan Gov. Rick Snyder addresses reporters in Flint
Snyder says feds had multiple failures on Flint
Flint Water Contamination Could Lead To Manslaughter Charges
Flint water main breaks causes boil water advisory
AG Schuette's Flint water Investigation
Flint doctor weighs in on water crisis
Warehouse fire dominates Highland Park skyline
Pistons' Stan Van Gundy talks Flint water crisis
Flint activist reacts to rejection of Snyder recall petition.
Flint OHL team says its water is safe
Lions donate 100,000 bottles of water to Flint
Flint resident speaks on water crisis
Michael Moore talks in Flint on the water crisis
Snyder answers questions during auto show tour
Flint water emergency press conference
Gov. Rick Snyder meets with Flint mayor on water crisis
Mich. Gov. Rick Snyder on the Flint water crisis
Ann Arbor resident describes anger toward Gov. Snyder
Matt Cartwright questions Gov. Rick Snyder
A proposed settlement has been reached in a major lawsuit over the Flint drinking water crisis. (Photo: Ryan Garza/Detroit Free Press)
LANSING -- The state will allocate $87 million for the City of Flint to identify and replace at least 18,000 unsafe water lines in Flint by 2020 under a proposed settlement of a federal lawsuit that also provides the state with a road map to end free distribution of bottled water later this year.
The proposed settlement also requires the state to pay $895,000 to the plaintiffs who brought the 2016 lawsuit, to cover their litigation costs.
Concerned Pastors for Social Justice, the Natural Resources Defense Council, the Michigan ACLU and Flint resident Melissa Mays don't get the door-to-door delivery of bottled water they had been seeking in recent months. But the plaintiffs get a schedule for water line replacements while the state gets a schedule for weaning Flint off the community resource stations where bottled water, water filters and filter replacement cartridges are now distributed free of charge. The centers could close as early as Sept. 1, subject to test results on Flint tap water.
U.S. District Judge David Lawson is to hold a hearing at 1 p.m. Tuesday to consider the agreement, which was the result of mediation. Lawson is expected to approve the agreement, subject to his oversight of its enforcement. A community meeting to discuss the settlement is planned for 6 p.m. Thursday at New Jerusalem Full Gospel Baptist Church, 1035 E. Carpenter Road in Flint.
Some — but not all — of the money the state allocates can come from funding approved by the federal government.
Flint's drinking water became contaminated with lead in April 2014, when a state-appointed emergency manager, as a short-term cost-cutting measure, switched the city's drinking water supply from Lake Huron water treated in Detroit to Flint River water treated at the Flint Water Treatment Plant. The Michigan Department of Environmental Quality has acknowledged a mistake in failing to require the use of corrosion-control chemicals as part of the treatment process. Corrosive water caused lead to leach from joints, pipes and fixtures, causing a spike in toxic lead levels in the blood of Flint children and other residents.
Flint switched back to Detroit water in October 2015, but some risk remains because of damage to the city's water distribution infrastructure.
Under the proposed settlement, set out in 92 pages, not including exhibits:
• The city, compensated by the state, agrees to determine the composition of lines running from the street into at least 18,000 households and properties, and replace with copper those made of lead or galvanized steel, at no cost to the homeowners.
• The agreement calls for replacement of 6,000 lines by Jan. 1, 2018, and at least 6,000 more lines each of the two following years, with all lines covered by the agreement replaced by Jan. 1, 2020.
• The agreement does not call for door-to-door bottled water delivery, which the plaintiffs had sought, but calls for residents to be able to call the 211 city phone number and receive free water deliveries within 24 hours. The service can be discontinued if water monitoring for the six-month period ending June 30 is below the U.S. Environmental Protection Agency's "action level" for lead, the agreement says.
• The agreement requires the state and city to continue to operate at least nine community water resource sites, where residents can pick up bottled water, water filters and cartridges, until May 1. It permits the state to close three centers between May 1 and June 1, but only if demand has dropped off, and close up to two additional centers between June 1 and July 1, again if demand has dropped off.
• The state won't be required to operate any water distribution centers after Sept. 1, provided water monitoring for the six-month period ending June 30 is below the U.S. Environmental Protection Agency's "action level" for lead.
• The state will expand its program of water filter education, installation and maintenance. The state will make its best efforts to have at least 90 filter education specialists at work throughout the city, eight hours per day, Monday through Saturday, with specialists also available on Sundays by appointment and for follow-up.
• The state will advertise the work of the filter specialists on TV, radio and other media, including ads in Spanish.
• The state will provide the city with filter replacement cartridges so that residents will have free filter cartridges to use for one year after the replacement of their lead or galvanized steel water lines.
• The state will continue its Medicaid expansion for Flint residents — covering pregnant women and children younger than 21 up to 400% of the poverty level — through March 2021.
• The state will continue elevated blood level case management services, for children with elevated blood levels, plus other services for children and nutrition services, through September 2018.
• Abandoned households are not covered by the agreement, though any household with an active water account on the effective date of the agreement is covered, even if the water bill is overdue.
• The agreement calls for extensive water monitoring following the line replacements to ensure the water is safe to drink, including the use of a third-party independent monitor. It also calls for extensive public reporting.
Contact Paul Egan: 517-372-8660 or [email protected]. Follow him on Twitter @paulegan4.
| – Flint residents battling tainted water may finally get some resolution on Tuesday. A federal judge is expected to approve a sweeping settlement that spells out a plan to replace water lines for thousands of homes, reports the AP. Under the deal, Flint would replace at least 18,000 lead or galvanized-steel water lines by 2020, and the state would pick up the bill with state and federal money. The cost is expected to be about $100 million. The settlement calls for the replacement of 6,000 lines in each of the next three years, with all the work done by Jan. 1, 2020, reports the Detroit Free Press. More than 700 water lines already have been replaced and work is ongoing, but the agreement would end the uncertainty over how to pay for the enormous task. The state will set aside $87 million and keep another $10 million in reserve if necessary. Another part of the deal revolves around bottled water. The state would deliver it for free to homebound residents and continue running at least nine distribution centers, reports the Detroit News. But if demand peters out and tests improve, the state can begin shutting down the distribution centers later this year. Plaintiffs wanted door-to-door delivery of bottled water, but the agreement rejected that. Instead, residents can continue to call the city's 211 service and get free bottled water within 24 hours. |
In a bombshell that might seem like an April Fools' joke, we believe Harley-Davidson will be building an electric-powered production motorcycle named “Livewire.” Harley filed a trademark application for the name on Nov. 1, 2012 with the USPTO, then also filed for a trademark in the EU on Nov. 2, 2012, which was approved March 13, 2013. This is potentially game-changing news, and if it indeed turns out to be true, it will give a whole new meaning to “The Motor Company.”
UPDATE: Harley-Davidson released a new teaser video that supports our theory:
Motorcycle.com has acquired images of this new motorcycle, which were first published by British tabloid the Daily Mail from the set of the next Avengers movie, currently filming in Seoul, South Korea. While this initially appears to be a far cry from anything H-D has built in the past, these photos clearly show a lack of an exhaust pipe on either side; a big, square shape resembling a battery where an engine would be; no clutch or shift lever; and most telling of all – the Harley-Davidson logo across the faux gas tank.
While we don’t know details about the bike, it’s not uncommon for one-off vehicles to be used in Hollywood. However, the fit, finish, and near production-level quality of this bike lend itself towards mass production rather than a life only on the big screen.
Remember too that the Harley-Davidson Street was featured in a motorcycle chase scene in Marvel’s Captain America: The Winter Soldier movie, appearing in the film’s trailer a week and a half before Harley-Davidson announced the motorcycle existed. While the Street was hiding in plain sight, there is no hiding the Livewire.
It’s unlikely Harley has produced its own battery and motor system; it has likely instead chosen to source those components from more established players in this field. Zero, Brammo, or Mission Motors are the ones that come to mind. Judging by how substantial the battery looks in these shots, we’re guessing its capacity to be anywhere from 9kWh to 15kWh. The guts of the Livewire are hidden in bodywork, so it’s difficult to determine which motor it is using, but we suspect it is liquid-cooled based on the pipe and fitting near the leading edge of the bodywork.
Upon closer inspection, you can see the bar and shield logo on the brake calipers, plus a fork and brake rotor which appear very similar to that used on the Night Rod Special. Judging from the right profile photo, the Livewire has a very steep rake and a fairly modest wheelbase, which should provide relatively sharp handling for what could turn out to be a heavy motorcycle. Styling cues are unlike most other Harley models, but do perhaps pay some homage to the V-Rod and former Buell lines with the short tail, sculpted swingarm, and all-encompassing rear fender. We can also see a linkage-less single shock, right-side belt drive and ten-spoke wheels.
Motorcycle.com’s Editorial Director, Sean Alexander chimes-in with the following mini-analysis: “If these new Livewire rumors are valid, it signals an intelligent move that will allow Harley to dip their toe into the eVehicle pond, stake a claim so to speak and then take their time with an actual saleable product roll-out as they continue to evaluate the long-term sales potential of the segment. If the electric market continues to strengthen and they do indeed launch a full-fledged street legal electric motorcycle, Harley could use this early Livewire project to help them lay claim to being the first major motorcycle manufacturer to go beyond the scooter level with an electric. It’s also a pretty smart move to complement their existing efforts to appeal to Millennials.”
That’s all we know for now. Keep it here as we expect to hear something official soon. This is potentially major news from America’s motorcycle company. ||||| This Wednesday, June 18, 2014 photo shows the control screen on Harley-Davidson's new electric motorcycle, at the company's research facility in Wauwatosa, Wis. The company plans to unveil the LiveWire... (Associated Press)
MILWAUKEE (AP) — Harley-Davidson will unveil its first electric motorcycle next week, and President Matt Levatich said he expects the company known for its big touring bikes and iconic brand to become a leader in developing technology and standards for electric vehicles.
Harley will show handmade demonstration models Monday at an invitation-only event in New York. The company will then take about two dozen bikes on a 30-city tour for riders to test drive and provide feedback. Harley will use the information it gathers to refine the motorcycle, which might not hit the market for several more years.
The venture is a risk for Harley because there's currently almost no market for full-size electric motorcycles. The millions of two-wheeled electric vehicles sold each year are almost exclusively scooters and low-powered bikes that appeal to Chinese commuters. But one analyst said investment by a major manufacturer could help create demand, and Levatich emphasized in an interview with The Associated Press that Harley is interested in the long-term potential, regardless of immediate demand.
"We think that the trends in both EV technology and customer openness to EV products, both automotive and motorcycles, is only going to increase, and when you think about sustainability and environmental trends, we just see that being an increasing part of the lifestyle and the requirements of riders," Levatich said. "So, nobody can predict right now how big that industry will be or how significant it will be."
At the same time, Levatich and others involved in creating the sleek, futuristic LiveWire predicted it would sell based on performance, not environmental awareness. With no need to shift gears, the slim, sporty bike can go from 0 to 60 mph in about 4 seconds. The engine is silent, but the meshing of gears emits a hum like a jet airplane taking off.
"Some people may get on it thinking, 'golf cart,'" lead engineer Jeff Richlen said. "And they get off thinking, 'rocket ship.'"
One hurdle the company has yet to address is the limited range offered by electric motorcycles. The batteries typically must be recharged after about 130 miles, and that can take 30 minutes to an hour.
San Jose State University police Capt. Alan Cavallo helped his department buy two bikes from Zero Motorcycles, the current top-selling brand, and said officers have been "super happy" with the quiet, environmentally friendly bikes made nearby in Scotts Valley, California. But he said American riders who like to hit the highway would likely lose patience with the technology.
"That's the deal with the cars; you can't jump in a Tesla and drive to LA, it won't make it," Cavallo said, adding later, "People want the convenience of 'I pull into a gas station, I pour some gas in my tank and I go.'"
Zero Motorcycles introduced its first full-size motorcycle in 2010 and expects to sell about 2,400 bikes this year, said Scot Harden, the company's vice president of global marketing. That would give it about half of the global market for full-size, high-powered electric motorcycles.
In comparison, Harley-Davidson alone sold more than 260,000 conventional motorcycles last year.
Outsiders focused on electric vehicle development predicted Harley would help boost sales for all electric motorcycle makers by creating greater awareness of and demand for electric bikes. Yamaha also has shown an electric motorcycle.
"It's the old 'a rising tide raises all boats,'" said Gary Gauthier, business and technology adviser for NextEnergy, a Detroit-based nonprofit focused on energy development.
John Gartner, a research director for the consulting firm Navigant Research, noted the major automakers helped drive sales for hybrid and electric cars.
"Their marketing budgets are much larger and they have dealerships set up everywhere, and so it's much easier for companies like Ford, BMW and Honda to advertise about their electric vehicles," Gartner said.
Levatich said true growth will require common standards for rapid charging and other features, as well as places for people to plug in. Harley expects to play a key role in developing electric vehicle standards, and its dealership network could provide charging stations to serve all drivers, he said.
"We've been very silent up to this point about our investment in EV technology," Levatich said. "... but now that we're public, and we're in this space, we expect to be involved and a part of leading the development of the standards, and the technology and the infrastructure necessary to further the acceptance and the utility of electric vehicles." | – When you think Harley-Davidson you probably think of rumbling engines, massive tailpipes, and long trips down dusty highways. But the company's newest project doesn't involve any of that. Instead, the company on Monday will unveil LiveWire, its first fully electric zero-emission bike, the AP reports. After an invitation-only event in New York, the company will take about two dozen LiveWires on a 30-city tour to let aficionados test-ride them. Harley is confident they'll enjoy the experience, too. "Some people may get on it thinking, 'golf cart,'" the lead engineer predicted. "And they get off thinking, 'rocket ship.'" The bike has also been spotted on the set of the upcoming Avengers movie, Motorcycle.com reports. It will probably be a few years before the bikes are actually available for public purchase, which Motorcycle.com's editorial director thinks is a smart move, allowing the company to dip its toe into the market while evaluating demand. Still, Gizmodo thinks the performance specs on the early prototypes are "distinctly unimpressive." With a top speed of 92 mph and a range of 53 miles, LiveWire is "left in the dust of other electric motorcycles currently on sale." |
here we generalize the above rg transformation and mera to lattice quantum double models ( see @xcite ) . local degrees of freedom are associated with _
oriented _ bonds of a lattice @xmath15 and identified with the group algebra of a discrete , in general non - abelian , group @xmath41 , i.e. , the hilbert space spanned by an orthonormal basis @xmath42 . a change in the orientation of a bond corresponds to the map @xmath43 .
the hamiltonian is a sum of mutually commuting projectors over vertices and plaquettes , @xmath44 where vertex projector @xmath45 acts on edges incoming to vertex @xmath46 by simultaneous right multiplication by each group element , @xmath47 right multiplication acts as @xmath48 , and projector @xmath49 selects configurations where the ordered product of group elements taken along an oriented circuit @xmath50 around @xmath51 is the unit element of @xmath41 , @xmath52 elementary moves are analogous to their counterparts for the toric code . the operations generalising cnots are controlled multiplications by the control element ( cms ) .
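The projector structure just defined is easy to verify numerically. The sketch below is an illustration only: it uses a small cyclic group in place of a general (possibly non-abelian) @xmath41, and treats a 4-edge vertex and a 4-edge plaquette in isolation rather than on a lattice. It checks that the average of simultaneous right multiplications and the trivial-holonomy selector are both projectors.

```python
import numpy as np
from itertools import product

# Cyclic group Z_n as an illustrative stand-in for the group G.
n = 3
G = list(range(n))
mul = lambda g, h: (g + h) % n  # group multiplication; identity is 0

def vertex_projector(k):
    """A_v = (1/|G|) sum_g R_g x ... x R_g: simultaneous right
    multiplication by g on all k edges incoming to the vertex."""
    dim = n ** k
    A = np.zeros((dim, dim))
    for g in G:
        for cfg in product(G, repeat=k):
            src = int(np.ravel_multi_index(cfg, (n,) * k))
            dst = int(np.ravel_multi_index(tuple(mul(c, g) for c in cfg), (n,) * k))
            A[dst, src] += 1.0 / n
    return A

def plaquette_projector(k):
    """B_p projects onto configurations whose ordered product of edge
    elements around the plaquette is the identity."""
    dim = n ** k
    B = np.zeros((dim, dim))
    for cfg in product(G, repeat=k):
        prod_g = 0
        for c in cfg:
            prod_g = mul(prod_g, c)
        if prod_g == 0:  # trivial holonomy
            idx = int(np.ravel_multi_index(cfg, (n,) * k))
            B[idx, idx] = 1.0
    return B

A = vertex_projector(4)
B = plaquette_projector(4)
assert np.allclose(A @ A, A)  # A_v is a projector
assert np.allclose(B @ B, B)  # B_p is a projector
```

Both traces come out to n^(k-1) = 27 here: the vertex projector keeps one state per orbit of the simultaneous right multiplication, and exactly one edge value per configuration is fixed by the trivial-holonomy constraint.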
figure [ figure : qdelemmoves ] shows how to create plaquettes and vertices using the controlled right multiplication @xmath53 where the first element is the control and the second element is the target . to cover the case of different bond orientations
, we also consider the transformations @xmath54 , @xmath55 , and @xmath56 ; explicitly : @xmath57 by means of these operations , new edges initialised in states @xmath58 and @xmath59 are incorporated into the code , creating new plaquettes and vertices .
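For the simplest case the controlled multiplication reduces to a familiar gate. The sketch below (hedged: only the controlled right multiplication on a cyclic group is shown, not the four orientation-dependent variants) builds it as a permutation matrix and confirms that for the two-element group it is exactly CNOT.

```python
import numpy as np

def controlled_mult(n):
    """Controlled right multiplication on Z_n: |g>|h> -> |g>|h g>.
    The first register is the control, the second the target."""
    U = np.zeros((n * n, n * n))
    for g in range(n):
        for h in range(n):
            U[g * n + (h + g) % n, g * n + h] = 1.0
    return U

# For the two-element group this is exactly the CNOT gate.
CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])
assert np.allclose(controlled_mult(2), CNOT)

# It is a permutation matrix, hence unitary, for any n.
U3 = controlled_mult(3)
assert np.allclose(U3 @ U3.T, np.eye(9))
```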
of course , the inverse elementary moves _ removing _ plaquettes and vertices from the code , needed for the mera construction , are in general not identical to those adding plaquettes and vertices .
note that operations leading to plaquette addition ( or removal ) can not be performed simultaneously for non - abelian groups , since the order of multiplication of the elements is important . the rg transformation corresponding to a quantum double model associated with group @xmath41 and defined on a square lattice proceeds along the same lines as for the toric code , but there are qualitative differences . to fix the setting ,
we work with a fiducial orientation of the bonds : horizontal bonds are oriented from left to right and vertical bonds are oriented upwards . then : * operations within a plaquette can not be performed simultaneously and must be applied in a certain order .
hence , disentanglers must be applied in three steps , while isometries demand another step with respect to the toric code rg . * which of the controlled operations @xmath60 , @xmath61 , @xmath62 , @xmath63 is needed at each step depends on the bond orientations .
the explicit form of the rg leading to a mera description of the quantum double model is shown in figure [ figure : qdmera ] .
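The order-dependence for non-abelian groups noted above is just non-commutativity of group multiplication; a minimal check with the symmetric group S3 (chosen purely for illustration) makes it concrete.

```python
# Elements of S3 represented as tuples p with p[i] = image of i.
def compose(p, q):
    """(p o q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[i] for i in q)

a = (1, 0, 2)  # swap 0 and 1
b = (0, 2, 1)  # swap 1 and 2

# Plaquette operations multiply edge elements in a fixed order; for a
# non-abelian group the two orders give different results, so the
# corresponding controlled multiplications cannot be applied simultaneously.
assert compose(a, b) != compose(b, a)
```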
the basic properties of the toric code mera ( bounded causal cone , topological degrees of freedom at the top of the tensor network , er fixed point in the infinite lattice limit ) generalise to the quantum double setting . in a @xmath63 dimensional lattice
the ground state typically obeys a boundary law @xmath64 for the entanglement entropy of a block of @xmath65 sites . in this case
the dimension @xmath9 for a site of @xmath1 must at least scale doubly exponentially in @xmath8 , @xmath66 . indeed , on the one hand the dimension of an effective space for a block of @xmath65 sites must be at least @xmath67 . on the other , @xmath68
after @xmath8 iterations of the rg transformation , where @xmath69 is the number of sites of @xmath3 effectively described by a single site of @xmath1 .
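The doubly exponential scaling can be seen with back-of-the-envelope counting. In the sketch below the spatial dimension is fixed to 2 and the boundary-law coefficient `c` is a hypothetical value chosen for illustration: a block of side 2^s carries boundary entropy proportional to its side, so log2 of the required effective site dimension itself grows exponentially with the RG step.

```python
# Boundary law in 2D: S(block of side L) = c * L. Since the effective site
# must support this entropy, log2(chi) >= c * L, and L doubles each RG step.
c = 1  # hypothetical boundary-law coefficient (illustrative assumption)

def min_log2_chi(s):
    L = 2 ** s   # block side after s coarse-graining steps
    return c * L # minimum log2 of the effective site dimension

bounds = [min_log2_chi(s) for s in range(1, 6)]
print(bounds)  # log2(chi) itself grows exponentially in s
```

This is exactly the accumulation of boundary entanglement that entanglement renormalization is designed to remove before each coarse-graining step.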
a. hamma , r. ionicioiu , p. zanardi , phys . rev . * a71 * , 022315 ( 2005 ) , ` arxiv : quant - ph/0409073v2 ` .
a. kitaev , j. preskill , phys . rev . lett . * 96 * , 110404 ( 2006 ) , ` arxiv : hep - th/0510092v2 ` .
m. levin , x .- g. wen , phys . rev . lett . * 96 * , 110405 ( 2006 ) , ` arxiv : cond - mat/0510613v2 [ cond-mat.str-el ] ` .
notice that the qubits being removed from the lattice during the rg transformation are _ exactly _ in a product state . in all previous examples
@xcite it was only possible to _ approximately _ disentangle the subsystems before their removal .
this points to a more general result , to be discussed elsewhere : under the present rg transformation based on entanglement renormalization , systems with topological order flow toward the string - net models of levin and wen @xcite , substantiating ideas already advocated by these authors . | the multi - scale entanglement renormalisation ansatz ( mera ) is argued to provide a natural description for topological states of matter .
the case of kitaev s toric code is analyzed in detail and shown to possess a remarkably simple mera description leading to distillation of the topological degrees of freedom at the top of the tensor network .
kitaev states on an infinite lattice are also shown to be a fixed point of the rg flow associated with entanglement renormalization .
all these results generalize to arbitrary quantum double models .
renormalization group ( rg ) transformations aim to obtain an effective description of the large distance behavior of extended systems @xcite . in the case of a system defined on a lattice ,
this can be achieved by constructing a sequence of increasingly coarse - grained lattices @xmath0 , where a single site of lattice @xmath1 effectively describes a block of an increasingly large number @xmath2 of sites in the original lattice @xmath3 @xcite .
real - space rg methods can , in particular , be applied to study quantum systems at zero temperature , in which case each site of @xmath1 is represented by a hilbert space @xmath4 @xcite . there
the goal is to identify the local degrees of freedom relevant to the physics of the ground state and to retain them in the hilbert space @xmath5 , whose dimension @xmath6 must be large enough to describe this physics .
a severe problem of such an approach is that in @xmath7 dimensions , @xmath6 must grow ( doubly ) exponentially in @xmath8 @xcite as a result of the accumulation of short - range entanglement at the boundary of the block .
_ entanglement renormalization _
@xcite is a novel real - space rg transformation recently proposed in order to solve the above difficulties .
its defining feature is the use of _ disentanglers _ prior to the coarse - graining step .
these are unitary operations , acting on the interface of the blocks defined by the rg procedure , that reduce the amount of entanglement in the system , see figure [ figure:2dmera ] .
a major achievement of the approach is that , when applied to a large class of ground states in both one @xcite and two @xcite spatial dimensions , the dimension @xmath9 is seen not to grow with @xmath8 .
a steady @xmath9 is made possible by the disentangling step and has several implications @xcite .
it means that , in principle , the resulting rg transformation can be iterated indefinitely at a constant computational cost , allowing for the exploration of arbitrarily large length scales .
in addition , the system can be compared with itself at different length scales , and thus we can study rg flows in the space of ground state or hamiltonian couplings . finally , a constant @xmath9 also leads to an efficient representation of the system s ground state in terms of a tensor network , the _ multi - scale entanglement renormalization ansatz _ ( mera ) @xcite .

[ figure caption : the rg transformation with disentanglers , shown for a tilted square lattice in preparation for the toric code , where each site contains four qubits . ]

at zero temperature , strongly correlated quantum systems appear organized in a plethora of _ phases _ or _ orders _ , including _ local symmetry breaking _ orders and _ topological _ orders @xcite .
local symmetry breaking phases are described by a symmetry group and a local order parameter , and they are associated with the physical mechanism of condensation of point - like objects .
transitions between two such phases or orders involve a change in the symmetry , as described by landau s theory .
a simple picture emerges from the perspective of entanglement renormalization @xcite : under successive iterations of the rg transformation , ground states with local symmetry breaking order progressively lose their entanglement and eventually converge to a trivial fixed point , namely an unentangled ground state . on the other hand ,
critical ground states describing transitions between these phases are non - trivial , that is , entangled fixed points of the rg transformation . in either case
, the mera provides an efficient , accurate representation of the ground state .
topological phases are fundamentally different from local symmetry breaking phases @xcite .
they do not stem from ( the breakdown of ) local group symmetries , but their _ topological order _ is linked to more complex mathematical objects , like tensor categories , topological quantum field theory , and quantum groups .
physically , topological phases exhibit gapped ground levels with robust degeneracy dependent only on the topology of the underlying space .
this , and the fact that excitations above the ground level possess anyonic statistics , boosts the interest of these phases as scenarios for topological quantum information storage and processing .
condensation of string - like objects ( in the so - called string - net models , see @xcite ) has been proposed as a general mechanism controlling topological phases .
as may be expected , such profound differences are also reflected in the way the ground state is entangled .
specifically , the notion of topological entanglement entropy @xcite ( the subleading term in a large - perimeter expansion of the entanglement entropy of a system ) has arisen as a quantitative measure of the ground state entanglement due to topological effects .
systems with topological order thus provide an unexplored scenario for entanglement renormalization techniques .
the purpose of this letter is to establish entanglement renormalization and the mera as valid tools also for the description and investigation of topological phases of matter . for simplicity ,
we analyze in detail kitaev s toric code @xcite , a four - fold degenerate ground state widely discussed in the context of quantum computation and closely related to @xmath10 lattice gauge theory @xcite and to the simplest of levin - wen s models for string - net condensation @xcite .
we show the following : ( @xmath11 ) a mera with finite , constant @xmath9 can represent the toric code _
exactly _ ; ( @xmath12 ) at each iteration of the rg transformation , entanglement renormalization factors out local degrees of freedom from the lattice , while leaving the topological degrees of freedom untouched ; ( @xmath13 )
the mera representation of the four ground states is identical except in its top tensor , which stores the topological degrees of freedom ; and ( @xmath14 ) in an infinite system , the toric code is the fixed point of this rg transformation . all these results also hold for more complicated models , such as quantum double lattice models , that we discuss in the appendix .
we conclude that the mera is naturally fitted to represent states with topological order , and the entanglement renormalization offers a new , useful framework for further studies .
following @xcite , we consider a square lattice @xmath15 on the torus , with spin-@xmath16 ( qubit ) degrees of freedom attached to each link .
the hamiltonian @xmath17 is a sum of constraint operators associated with vertices ` @xmath18 ' and plaquettes ` @xmath19 , ' namely @xmath20 stabilizers @xmath21 act as a simultaneous spin flip in all four qubits adjacent to a given vertex .
stabilizers @xmath22 yield the product of group assignments @xmath23 at the four qubits around a plaquette . all stabilizers commute with each other and have eigenvalues @xmath23 .
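these commutation properties are easy to verify on a small example . the python sketch below builds the stabilizers of a 2 x 2 toric lattice ( the edge - indexing convention is ours , not the paper 's ) and checks that stars and plaquettes always overlap on an even number of qubits , and that the two global relations reducing the number of independent stabilizers hold .

```python
import itertools
from collections import Counter

L = 2  # a 2 x 2 torus: one qubit per edge, 2 * L * L = 8 qubits in total

# edge indexing on the periodic lattice (illustrative labels, not from the paper)
def h(i, j):  # horizontal edge leaving vertex (i, j)
    return 2 * ((i % L) * L + (j % L))

def v(i, j):  # vertical edge leaving vertex (i, j)
    return 2 * ((i % L) * L + (j % L)) + 1

# each stabilizer is recorded as the set of edges (qubits) it acts on:
# A_v applies X to the four edges meeting at vertex v,
# B_p applies Z to the four edges bounding plaquette p.
A = [{h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)}
     for i in range(L) for j in range(L)]
B = [{h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)}
     for i in range(L) for j in range(L)]

# an X-string and a Z-string commute iff they overlap on an even number of
# qubits; a vertex star and a plaquette always share 0 or 2 edges.
assert all(len(a & b) % 2 == 0 for a, b in itertools.product(A, B))

# on the torus the stabilizers are not independent: every qubit sits in
# exactly two stars and two plaquettes, so the product of all A_v (and of
# all B_p) is the identity, leaving two encoded logical qubits.
assert all(n == 2 for n in Counter(q for a in A for q in a).values())
assert all(n == 2 for n in Counter(q for b in B for q in b).values())
```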
hamiltonian ( [ kitaevhamiltonian ] ) is gapped , and states in the ground level ( kitaev states ) are simultaneous eigenstates of all @xmath21 , @xmath22 with eigenvalue @xmath24 .
the degeneracy of the ground level ( i.e. , the number of kitaev states ) depends on the topology of the manifold underlying the lattice .
if this manifold is a topologically nontrivial riemann surface , information is encoded in nontrivial cycles , since operators @xmath25 , where @xmath26 are nontrivial cycles along bonds of the lattice , commute with all stabilizers . besides , such operators along homologically equivalent nontrivial cycles @xmath27 , @xmath28 have the same action on kitaev states .
hence , for a torus , two logical qubits are encoded in the action of these operators .
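the logical operators can be checked on the same kind of toy lattice : a z - string wound around a nontrivial cycle crosses every vertex star on an even number of edges , so it commutes with all stabilizers without being generated by the plaquettes . the sketch below uses our own edge indexing for a 2 x 2 torus .

```python
L = 2  # toy 2 x 2 torus; the edge-indexing convention is ours

def h(i, j):  # horizontal edge leaving vertex (i, j)
    return 2 * ((i % L) * L + (j % L))

def v(i, j):  # vertical edge leaving vertex (i, j)
    return 2 * ((i % L) * L + (j % L)) + 1

# vertex stabilizers A_v: X on the four edges meeting at each vertex
stars = [{h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)}
         for i in range(L) for j in range(L)]

# a Z-string along a nontrivial horizontal cycle of the torus:
# Z on every horizontal edge of row 0
logical_z = {h(0, j) for j in range(L)}

# it crosses every star on an even number of edges, hence commutes with all
# stabilizers, yet it winds around the torus and acts on a logical qubit
assert all(len(logical_z & s) % 2 == 0 for s in stars)
```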
kitaev states are efficiently written in terms of their stabilizers .
the stabilizer formalism @xcite also provides us with a useful language to analyse the action of operators on kitaev states , and has proved instrumental in finding an exact mera .
the key observation to this purpose is that there exist ` elementary moves ' @xcite , minimal deformations of the lattice and its kitaev states , that respect the topological characteristics of the code .
these moves consist of addition or removal of faces and vertices together with qubits , and can be written in terms of controlled - not ( cnot ) operators , whose adjoint action has a very simple expression in terms of stabilizers : @xmath29 figure [ figure : elemmoves ] depicts the construction of elementary moves .
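the adjoint action of a cnot on pauli operators , which underlies these elementary moves , can be verified in the binary ( symplectic ) picture of the stabilizer formalism . the sketch below uses our own encoding of two - qubit paulis and ignores overall phases .

```python
# symplectic (binary) encoding of a two-qubit Pauli operator, phases dropped:
# (x1, x2, z1, z2) stands for X^x1 Z^z1 on the control and X^x2 Z^z2 on the
# target qubit.
def cnot_conjugate(x1, x2, z1, z2):
    """Adjoint action of a CNOT (control = 1, target = 2): X propagates
    from control to target, Z propagates from target to control."""
    return (x1, (x1 + x2) % 2, (z1 + z2) % 2, z2)

X_ctrl = (1, 0, 0, 0)
X_targ = (0, 1, 0, 0)
Z_ctrl = (0, 0, 1, 0)
Z_targ = (0, 0, 0, 1)

assert cnot_conjugate(*X_ctrl) == (1, 1, 0, 0)  # X_c -> X_c X_t
assert cnot_conjugate(*Z_targ) == (0, 0, 1, 1)  # Z_t -> Z_c Z_t
assert cnot_conjugate(*X_targ) == X_targ        # X on the target is untouched
assert cnot_conjugate(*Z_ctrl) == Z_ctrl        # Z on the control is untouched
```

since the cnot is self - inverse , applying the map twice returns any pauli to itself , which is one more quick consistency check on the encoding .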
the creation of a face is achieved by introducing a new spin in a plaquette .
arrows stand for cnot operators from control qubits ( all qubits in one of the semiplaquettes ) to the target qubit ( the new qubit , introduced in state @xmath30 ) .
the following transformation of stabilizers holds ( the new site is denoted as @xmath31 ) : @xmath32 which ensures plaquette constraints are obeyed .
similarly , the two relevant vertex constraints are extended to the new qubit .
the creation of a new vertex is achieved instead by introducing a new qubit in state @xmath33 .
this qubit now plays the role of control for cnots acting on the qubits adjacent to one of the split vertices .
stabilizers transform as @xmath34 which is again compatible with the code constraints .
both final sets of operators are the correct stabilizers for the code in the modified lattice ( remember that @xmath35 . )
similarly , the two relevant plaquette constraints are extended to the new qubit .
these operations can be inverted to _ decouple _ qubits in states @xmath30 and @xmath33 from the rest of the system .
the disentanglers and isometries , defining both the rg transformation and the mera for the kitaev states , are made of several of these decoupling moves .
we regard the original square lattice @xmath15 , on which the toric code is defined , as a ( tilted ) square lattice @xmath3 where each site contains four qubits .
then both disentanglers and isometries act on blocks of four sites of @xmath3 as in figure [ figure:2dmera ] or , equivalently , on blocks of 16 qubits in @xmath15 .
they consist of a series of cnots as specified in figures [ figure : meraa ] and [ figure : merab ] .
upon applying the rg transformation , we obtain a coarse - grained lattice @xmath36 which is locally identical to @xmath37 and where , by construction , the toric code constraints are still satisfied .
this is quite remarkable . on the one hand , it is the first non - trivial example , in the context of entanglement renormalization , where the rg transformation is _ exact _
@xcite , leading to the first non - trivial model that can be _ exactly _ described with the mera . on the other hand ,
if we consider an infinite lattice , the above observation implies that kitaev states are an explicit fixed point of the rg flow in the space of ground states , as induced by the present rg transformation @xcite .
let us now consider a finite lattice @xmath37 on the torus .
the coarse - grained state carries exactly the same topological information ( values of @xmath38 along nontrivial cycles ) as the original state , since the elementary moves preserve such information at each intermediate step .
that is , different kitaev states are not mixed during the rg transformation . by iteration
, we obtain a sequence of increasingly coarse - grained lattices @xmath39 for ever smaller tori .
the top lattice @xmath40 will contain only a few qubits . recall that the mera is made of all the disentanglers and isometries used in the rg transformations , together with a top tensor describing the state of @xmath40 @xcite .
it follows that the meras for different states of the toric code will contain identical disentanglers and isometries , and will only differ in their top tensor , where all the topological information is stored .
all the above results automatically extend to the loop model considered by levin and wen as the simplest of their family of string - net models @xcite .
indeed , the toric code on a square lattice can be locally transformed , using the decoupling moves depicted in figure [ figure : merabis ] , into a toric code on a triangular lattice , which is equivalent to the ground state of the loop model defined on the dual ( hexagonal ) lattice .
this local transformation shows that the topological orders of both models are identical , a fact already pointed out in @xcite and which can also be understood in terms of the projected entangled - pair state ansatz ( peps ) @xcite .
finally , our construction generalizes almost straightforwardly to quantum double models ( see , e.g. , @xcite ) , both for abelian and non - abelian groups .
this is achieved by replacing cnots with controlled group multiplication operators and by paying due attention to the order of the operations ( see appendix ) . in conclusion , we have shown that several models with topological order can be exactly represented with the mera , where topological degrees of freedom are naturally isolated in its top tensor .
we have also seen that such models are fixed points of the rg flow induced by entanglement renormalization .
our results are an unambiguous sign that entanglement renormalization and the mera , originally developed to efficiently simulate systems with local symmetry - breaking phases , provide also a most natural framework to study topological phases .
* acknowledgements : * we thank j. i. cirac , a. kitaev , d. prez - garca , j. preskill and f. verstraete for related discussions .
m. a. thanks the university of queensland for hospitality and a stimulating working atmosphere during his visit .
g. v. acknowledges financial support from the australian research council , ff0668731 . |
Ms. Badger was taken to Stamford Hospital; a supervisor there said that she was released Sunday night. Her friend, a contractor who was doing work on the house, was also taken to the hospital; his condition was not disclosed.
Ms. Badger’s parents, Lomer and Pauline Johnson, who died in the fire, were to celebrate their 49th anniversary on Monday, according to a family member, who asked not to be named.
Mr. Johnson, 71, spent his last day working at his dream job: as Santa Claus on the ninth floor of the Saks Fifth Avenue flagship store in Manhattan, the family member said. He was known for his real, long white beard.
“That’s all he ever wanted to be,” the family member said. “He stopped shaving the day he retired.”
Mr. Johnson had spent decades as safety director for the Brown-Forman Corporation, the parent company of Jack Daniels, working on, among other things, fire code for distilleries, according to the family member.
Known professionally as “Happy Santa,” he advertised his act through Gigmasters.com, but initially found work only in a Connecticut mall. But the jobs proved rife with anecdotes. Once, when a cashier was late to work, and a line of disappointed children were told they would have to wait an hour or more for their photo with Santa, Mr. Johnson took it upon himself to open the gate and declare that pictures that morning would be free — as long as visitors had their own cameras.
This year, he successfully auditioned to be Saks’s Santa, and on Christmas Eve he worked there, giving out candy canes and posing for photos, while his wife watched and updated the family on the phone about the scene, the family member said. Ms. Johnson, 69, was a retired electrical contractor who had owned John Waters Inc., a heating and cooling company in Louisville, Ky., which she purchased almost 30 years ago, unusual for a woman in that region at that time.
Five years ago, they moved to the New York area to be near their grandchildren.
Property records show that Ms. Badger bought the three-story, 19th-century house in December 2010 for $1.725 million. The property is surrounded by other old and large houses in an affluent neighborhood of Stamford, 35 miles northeast of Midtown Manhattan. The house, neighbors and officials said, had been undergoing renovations in the past six months.
“It did not appear that the renovations were part of the cause; they might have been part of the spread,” Timothy Conroy, deputy fire chief, said in a telephone interview, adding that the cause was under investigation. A total of 46 firefighters were dispatched to the scene, he said.
“We have not had a loss of life like this since back in the ’80s, where there was also the loss of five people,” he said. “I can’t remember anything like this.”
The heat and the height of the flames made it impossible to rescue the people remaining in the house, Antonio Conte, the acting Stamford fire chief, said at a news conference, according to The Associated Press.
The man who had been able to escape was screaming, “Help me, help me!” Mr. Mangano said. The man was led away, wearing only a T-shirt and boxer shorts, by firefighters.
“His hands were limp in front of him,” Mr. Mangano said.
Mr. Mangano, who lives around the corner from Ms. Badger’s home, said he did not know the family. When he arrived there, he said, “Flames were shooting out of every window — it was like a movie set.”
Another neighbor, Sam Cingari, 71, said he was awakened at 5 a.m. by the piercing, anguished voice of a woman. “I heard someone screaming at the top of their lungs,” Mr. Cingari said in a telephone interview. “The flames were coming through the top floor, and I thought, ‘Nobody could possibly survive this.’ ”
A woman who answered the phone in Ms. Badger’s hospital room said she did not wish to talk.
Ms. Badger worked for Calvin Klein in the early 1990s, developing the popular underwear campaign with Mark Wahlberg. She founded her own company, specializing in beauty and luxury brands, in 1994; it is now called Badger & Winters Group. Ms. Badger had initiated divorce proceedings with her husband, Matthew, but they had an amicable relationship, the relative said.
Firefighters were still on the block in the afternoon on Sunday. The roof caved in on the house; all that remained visible from a distance were two chimneys.
Mary Abbazia, a neighbor who awoke to sirens, said that she did not know the family, but that news of the fire had spread across Facebook on Sunday morning. “There’s no words,” she said, as neighborhood children played soccer behind her, one block from the destroyed home. “It’s always sad, especially on Christmas.”
At a news conference, the mayor of Stamford, Michael Pavia, said, “There probably has not been a worse Christmas Day in the city of Stamford,” The Associated Press reported.
By evening, flowers were left on the blackened porch, in front of a swing, still fully intact.

||||| A house severely damaged in a Christmas morning fire that killed three children and two grandparents, one of whom worked as Santa Claus at Saks Fifth Avenue, has been torn down.
[Associated Press photos: firefighters at the Stamford, Conn., home where the Christmas morning fire left five people dead; officials at a news conference; rubble, yellow tape and flowers at the scene the next day; a 1998 file photo of Madonna Badger.]
The building department determined that the $1.7 million house was unsafe and ordered it razed, Stamford fire chief Antonio Conte said.
The home's owner, advertising executive Madonna Badger, and her male acquaintance escaped from the fire. But Badger's three daughters (a 10-year-old and 7-year-old twins) and her parents, who were visiting for the holiday, died, police said.
Neighbors said they awoke to the sound of screaming shortly before 5 a.m. Sunday and rushed outside to help, but could do nothing as flames devoured the large, turreted home.
Police said the male acquaintance who escaped the blaze with Badger was a contractor working on the home. He was also hospitalized but his condition was not released.
Interviews with them will be finished Monday, Conte said. He had no details on the investigation.
A spokeswoman for Saks Fifth Avenue confirmed in a statement that Badger's father, Lomer Johnson, had worked as a Santa this year at its flagship store in Manhattan.
"Mr. Johnson was Saks Fifth Avenue's beloved Santa, and we are heartbroken about this terrible tragedy," spokeswoman Julia Bently said.
Badger, an ad executive in the fashion industry, is the founder of New York City-based Badger & Winters Group. A supervisor at Stamford Hospital said she was treated and discharged by Sunday evening. Her whereabouts Monday was unknown.
Property records show Badger bought the five-bedroom, waterfront home for $1.7 million last year. The house was situated in Shippan Point, a wealthy neighborhood that juts into Long Island Sound.
The lot where the house once stood was covered with charred debris and cordoned off by police with tape on Monday. Passers-by left bouquets, stuffed animals and candles nearby.
Connecticut Gov. Dannel P. Malloy, a former mayor of Stamford, offered his condolences to Badger and her family in a statement and said her loss "defies explanation."
The fire was Stamford's deadliest since a 1987 blaze that also killed five people, Conte said.

||||| The fatal Connecticut fire that killed a fashion-marketing exec’s three children and parents was possibly sparked by embers from disposed fireplace ashes, The Post has learned.
The ashes from the family’s Christmas Eve yule log may have been still smoldering when they were left outside the 100-year-old, $1.7 million, Long Island Sound-view Victorian, said a source.
The wind may have blown the embers into the old, wooden building, sparking the blaze.
The already tragic story took another heartbreaking twist today as details emerged about how the oldest victim, Lomer Johnson, of Southbury, Conn. — a retiree who worked as jolly St. Nick — tried in vain to save his granddaughter.
“He had the little girl with him,” Stamford Fire Chief Antonio Conte told reporters yesterday.
Johnson, 71, was outside, face down on a small, jutting roof. The child was just inside the window.
“I think he had his granddaughter and he tried to get her out,” the chief said.
It was not clear which of the three little girls who perished in the flames he’d been trying to rescue.
Johnson and his wife, Pauline, were staying in the turreted, under-renovation home visiting their daughter, former Calvin Klein art director Madonna Badger, and Badger’s three daughters, 10-year-old Lily and 7-year-old twins Sarah and Grace.
The other two girls were found on the second floor, one floor down from all their bedrooms, the chief told reporters.
Pauline, the grandmother, was found on a staircase hallway between the second and third floors.
Of the seven in the house, only Badger, a founding partner at the top-tier branding firm Badger & Winters, and her companion, contractor Michael Borcina, survived the 5 a.m. blaze.
Badger, in the early 1990s, became a top fashion marketer with her spicy Marky Mark underwear ads and sultry Kate Moss Obsession ads for Calvin Klein.
Borcina, who was doing the renovation work on the house, remains in stable condition at a local hospital.
Firefighters told the chief that Borcina and Badger were both trying desperately to re-enter the house when they arrived and had to be restrained.
Badger had moved to the mansion with her girls from Manhattan at Thanksgiving 2010 and is estranged from her husband, the girls’ father, Matthew Badger.
The mansion’s charred remains — deemed a safety hazard — were razed by the city today at the conclusion of an on-site investigation and the removal of the bodies.
Stamford officials are expected to reveal the cause of the fire and other details — including whether the family had working smoke detectors, or whether the ongoing renovations somehow abetted the flames — at a press conference tomorrow.
At Saks, where Johnson played Santa, shocked friends shared fond memories.
“He was a great guy, always joking,” an eighth-floor Saks security guard told The Post.
He and his wife were to celebrate their 49th wedding anniversary the day after the fire.
As for the girls, they were “just incredibly sweet and really magical,” said Sam Badger, a nephew of their father.
“Lily was a little more quiet, a little more reserved than her sisters,” he remembered.
“The twins just kind of seemed to bounce off each other, I suppose. They just seemed to be like one person, almost,” he said. “It’s really so sad to see them all suffer this fate. This is the most tragic thing.”
Additional reporting by Christina Carrega, Yoav Gonen, Laurel Babcock, Daniel Gold and Mitchel Maddux
The mom was later released and transferred to an undisclosed location. She looked devastated as she briefly emerged from the hospital.
A relative of her estranged husband, Matthew Badger, said he is “absolutely distraught.”
He was at home in New York when the blaze broke out, and rushed to Stamford, according to cops.
“They were an amazing family,” the relative said.
Stamford Mayor Michael Pavia called it “a terrible, terrible day.”
With AP

| – Survivors are trying to pick up the pieces after the furious blaze that engulfed a $1.7 million Connecticut home yesterday, killing the parents and children of fashion marketing executive Madonna Badger, the New York Post reports. Her father, Lomer Johnson, had just landed his dream holiday-season job—one befitting his real white beard—as Santa Claus at Saks Fifth Avenue in New York. "That’s all he ever wanted to be," a relative told the New York Times. "He stopped shaving the day he retired." The relative added that Johnson and his wife's 49th anniversary would have been today. Badger's badly damaged house was torn down today after building department officials determined it was unsafe—little wonder considering reports emerging from the fire. One neighbor awoke to see “a ball of flames in the sky. ... The velocity of the flames was unlike anything I’ve ever seen. It was just all over the house." Witnesses say Badger and her male friend, who the AP describes as a contractor working on the home, were led from the house, dazed, supported on both sides. Today, the Post notes that Badger looked "devastated" as she was released from the hospital. A relative of her estranged husband, Matthew Badger, says he is "absolutely distraught."
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Know Before You Owe Federal Student
Loan Act of 2015''.
SEC. 2. REQUIRED PERIODIC DISCLOSURES DURING PERIODS WHEN LOAN PAYMENTS
ARE NOT REQUIRED.
Section 433 of the Higher Education Act of 1965 (20 U.S.C. 1083) is
amended by adding at the end the following:
``(f) Required Periodic Disclosures During Periods When Loan
Payments Are Not Required.--During any period of time when a borrower
of one or more loans, made, insured, or guaranteed under this part or
part D is not required to make a payment to an eligible lender on the
borrower's loan from that eligible lender, such eligible lender shall
provide such borrower with a statement that corresponds to each payment
installment time period in which a payment would be due if payments
were required to be made, and that includes, in simple and
understandable terms--
``(1) the original principal amount of each of the
borrower's loans, and the original principal amount of those
loans in the aggregate;
``(2) the borrower's current balance, as of the time of the
statement, as applicable;
``(3) the interest rate on each loan;
``(4) the total amount the borrower has paid in interest on
each loan;
``(5) the aggregate amount the borrower has paid for each
loan, including the amount the borrower has paid in interest,
the amount the borrower has paid in fees, and the amount the
borrower has paid against the balance;
``(6) the lender's or loan servicer's address and toll-free
phone number for payment and billing error purposes;
``(7) an explanation--
``(A) that the borrower has the option to pay the
interest that accrues on each loan while the borrower
is a student at an institution of higher education or
during a period of deferment or forbearance, if
applicable; and
``(B) if the borrower does not pay such interest
while attending an institution or during a period of
deferment or forbearance, any accumulated interest on
the loan will be capitalized when the loan goes into
repayment, resulting in more interest being paid over
the life of the loan;
``(8) the amount of interest that has accumulated since the
last statement based on the typical installment time period and
the aggregate interest accrued to date; and
``(9) a suggested payment amount equal to the interest
charged since the last installment time period.''.
SEC. 3. PRE-LOAN COUNSELING AND CERTIFICATION OF LOAN AMOUNT.
Section 485(l) of the Higher Education Act of 1965 (20 U.S.C.
1092(l)) is amended--
(1) in the subsection heading, by striking ``Entrance
Counseling'' and inserting ``Pre-Loan Counseling'';
(2) in paragraph (1)--
(A) in subparagraph (A)--
(i) in the matter preceding clause (i), by
striking ``a disbursement to a first-time
borrower of a loan'' and inserting ``the first
disbursement of each new loan (or the first
disbursement in each award year if more than
one new loan is obtained in the same award
year)''; and
(B) in clause (ii)(I), by striking ``an entrance
counseling'' and inserting ``a counseling'';
(3) in paragraph (2)--
(A) by striking clause (i) of subparagraph (G) and
inserting the following:
``(i) an estimate of the borrower's
projected loan debt-to-income ratio upon
graduation, calculated using--
``(I) the best available data on
starting wages for the borrower's
program of study; and
``(II) the estimated total student
loan debt, including Federal debt and,
to the best of the institution's
knowledge, private loan debt already
incurred, and the estimated future debt
required to complete the program of
study; and''; and
(B) by adding at the end the following:
``(L) A statement that the borrower should borrow
the minimum amount necessary to cover expenses and that
the borrower does not have to accept the full amount of
loans for which the borrower is eligible.
``(M) A warning that the higher the borrower's
debt-to-income ratio is, the more difficulty the
borrower is likely to experience in repaying the loan.
``(N) Options for reducing borrowing through
scholarships, reduced expenses, work-study, or other
work opportunities.
``(O) An explanation of the importance of
graduating on time to avoid additional borrowing, what
course load is necessary to graduate on time, and
information on how adding an additional year of study
impacts total indebtedness.''; and
(4) by adding at the end the following:
``(3) In addition to the other requirements of this
subsection, each eligible institution shall, prior to
certifying a Federal direct loan under part D for disbursement
to a student (other than a Federal Direct Consolidation Loan or
a Federal Direct PLUS loan made on behalf of a student), ensure
that the student manually enter, either in writing or through
electronic means, the exact dollar amount of Federal direct
loan funding under part D that such student desires to
borrow.''.
SEC. 4. CONFORMING AMENDMENTS.
(a) Program Participation Agreements.--Section 487(e)(2)(B)(ii)(IV)
of the Higher Education Act of 1965 (20 U.S.C. 1094(e)(2)(B)(ii)(IV))
is amended--
(1) by striking ``Entrance and exit counseling'' and
inserting ``Pre-loan and exit counseling''; and
(2) by striking ``entrance and exit counseling'' and
inserting ``pre-loan and exit counseling''.
(b) Regulatory Relief and Improvement.--Section 487A of the Higher
Education Act of 1965 (20 U.S.C. 1094a) is amended by striking
``entrance and exit interviews'' and inserting ``pre-loan and exit
interviews'' each place the term appears. | Know Before You Owe Federal Student Loan Act of 2015 This bill amends title IV (Student Assistance) of the Higher Education Act of 1965 to expand lender disclosure requirements. A lender must provide a statement to a Federal Family Education Loan or Direct Loan borrower during a period when loan payments are not required. Such statement must include the current loan balance, original principal loan amount, interest rate, total interest paid, aggregate payments, lender or servicer contact information, and accumulated interest amount. It must also explain the option to pay accrued interest before it capitalizes and suggest a payment amount based on interest charged. Additionally, the legislation modifies loan counseling requirements for an institution of higher education (IHE) that participates in federal student aid programs. Currently, an IHE must provide one-time entrance counseling to a student who is a first-time federal student loan borrower. This bill requires an IHE to provide pre-loan counseling to a student borrower of a federal student loan at or prior to the first disbursement of each new loan. It revises and expands required elements of pre-loan counseling to include a borrower's estimated debt-to-income ratio at graduation, a statement to borrow the minimum necessary amount, a warning that high debt-to-income ratio makes repayment more difficult, options to reduce borrowing, and an explanation of the importance of on-time graduation. Prior to certifying a Federal Direct Loan disbursement to a student, an IHE must ensure that such student manually enters the exact dollar amount of the loan. |
SECTION 1. SHORT TITLE.
This Act may be cited as the ``Investing in Innovation for
Education Act of 2011''.
SEC. 2. INVESTING IN INNOVATION.
(a) In General.--Title IV of the Elementary and Secondary Education
Act of 1965 (20 U.S.C. 7101 et seq.) is amended by adding at the end
the following:
``PART D--INVESTING IN INNOVATION
``SEC. 4401. PURPOSES.
``The purposes of this part are to--
``(1) fund the identification, development, evaluation, and
expansion of innovative, evidence-based practices, programs,
and strategies in order to significantly--
``(A) increase student academic achievement and
decrease achievement gaps;
``(B) increase high school graduation rates;
``(C) increase college enrollment rates and rates
of college persistence;
``(D) improve teacher and school leader
effectiveness; and
``(E) increase the identification of innovative
educational strategies in rural areas; and
``(2) support the rapid development, expansion, and
adoption of tools and resources that improve the efficiency,
effectiveness, or pace of adoption of such educational
practices, programs, and strategies.
``SEC. 4402. NATIONAL ACTIVITIES.
``The Secretary may reserve not more than 10 percent of the funds
appropriated under section 4408 for each fiscal year to carry out
activities of national significance, which activities may include--
``(1) capacity building;
``(2) technical assistance, including to applicants from
rural areas;
``(3) pre-application workshops and web-based seminars for
potential applicants, including applicants from rural areas;
``(4) the recruitment of peer-reviewers, including
individuals with a background in rural education, to
participate in the review of applications submitted under
section 4404;
``(5) dissemination of best practices developed with grant
funds provided under this part, including best practices
developed with grant funds in rural areas;
``(6) carrying out prize awards consistent with section 24
of the Stevenson-Wydler Technology Innovation Act of 1980 (15
U.S.C. 3719); and
``(7) entering into partnerships with other agencies,
nonprofits, and the private sector to carry out advanced
research and development activities, including research and
activities in rural areas.
``SEC. 4403. PROGRAM AUTHORIZED; LENGTH OF GRANTS; PRIORITIES.
``(a) Program Authorization.--The Secretary shall use funds made
available to carry out this part to award grants, on a competitive
basis, to local educational agencies, educational service agencies, and
nonprofit organizations that propose to provide support to 1 or more
public schools or local educational agencies, or both, consistent with
section 4405.
``(b) Duration of Grants.--The Secretary shall award grants under
this part for a period of not more than 3 years, and may extend such
grants for an additional 2-year period if the grantee demonstrates to
the Secretary that it is making significant progress on the program
performance measures identified in section 4406.
``(c) Rural Areas.--The Secretary shall ensure that not less than
25 percent of the funds awarded under subsection (a) for any fiscal
year are for projects that meet both of the following requirements:
``(1) The grantee is--
``(A) a local educational agency with an urban-
centric district locale code of 32, 33, 41, 42, or 43,
as determined by the Secretary;
``(B) a consortium of such local educational
agencies; or
``(C) an educational service agency or a nonprofit
organization with demonstrated expertise in serving
students from rural areas.
``(2) A majority of the schools to be served by the project
are designated with a school locale code of 41, 42, or 43, or a
combination of such codes, as determined by the Secretary.
``(d) Priorities.--In awarding grants under this part, the
Secretary may give priority to an eligible entity that includes, in its
application under section 4404, a plan to--
``(1) improve early learning outcomes;
``(2) support college access and success;
``(3) support family and community engagement;
``(4) address the unique learning needs of students with
disabilities or English language learners;
``(5) support the effective use of education technology to
improve teaching and learning;
``(6) improve the teaching and learning of science,
technology, engineering, or mathematics; or
``(7) serve schools in rural local educational agencies.
``(e) Standards of Evidence.--The Secretary shall set standards for
the quality of evidence that an applicant shall provide in order to
demonstrate that the activities the applicant proposes to carry out
with funds under this part are likely to succeed in improving student
outcomes or outcomes on other performance measures. These standards may
include any of the following:
``(1) Strong evidence that the activities proposed by the
applicant will have a statistically significant effect on
student academic achievement, student growth, or outcomes on
other performance measures.
``(2) Moderate evidence that the activities proposed by the
applicant will improve student academic achievement, student
growth, or outcomes on other performance measures.
``(3) A rationale based on research findings or a
reasonable hypothesis that the activities proposed by the
applicant will improve student academic achievement, student
growth, or outcomes on other performance measures.
``SEC. 4404. APPLICATIONS.
``(a) Applications.--
``(1) In general.--Each local educational agency,
educational service agency, or nonprofit organization that
desires to receive a grant under this part shall submit an
application to the Secretary at such time, in such manner, and
containing such information as the Secretary may reasonably
require.
``(2) Reasonable period of time.--The Secretary shall
ensure that prospective applicants are provided a reasonable
period of time in which to prepare and submit their
applications.
``(b) Contents.--At a minimum, each application shall--
``(1) describe the project for which the applicant is
seeking a grant and how the evidence supporting that project
meets the standards of evidence established by the Secretary
under section 4403(e);
``(2) describe how the applicant will address at least 1 of
the areas described in section 4405(a)(1);
``(3) provide an estimate of the number of students that
the applicant plans to serve under the proposed project,
including the percentage of those students who are from low-
income families, and the number of students to be served
through additional expansion after the grant ends;
``(4) demonstrate that the applicant has established 1 or
more partnerships with private organizations, nonprofit
organizations, or community-based organizations, and that the
partner or partners will provide matching funds, except that
the Secretary may waive the matching funds requirement, on a
case-by-case basis, upon a showing of exceptional circumstances, such
as the difficulty of raising matching funds for a project to
serve a rural area;
``(5) describe the applicant's plan for continuing the
proposed project after funding under this part ends;
``(6) if the applicant is a local educational agency--
``(A) document the local educational agency's
record during the previous 3 years in--
``(i) increasing student achievement,
including achievement for each subgroup
described in section 1111(b)(2)(C)(v); and
``(ii) decreasing achievement gaps; and
``(B) demonstrate how the local educational agency
has made significant improvements in other outcomes, as
applicable, on the performance measures described in
section 4406;
``(7) if the applicant is a nonprofit organization--
``(A) provide evidence that the nonprofit
organization has helped at least 1 school or local
educational agency, during the previous 3 years,
significantly--
``(i) increase student achievement,
including achievement for each subgroup
described in section 1111(b)(2)(C)(v); and
``(ii) reduce achievement gaps; and
``(B) describe how the nonprofit organization has
helped at least 1 school or local educational agency
make a significant improvement, as applicable, in other
outcomes on the performance measures described in
section 4406;
``(8) if the applicant is an educational service agency--
``(A) provide evidence that the agency has helped
at least 1 school or local educational agency, during
the previous 3 years, significantly--
``(i) increase student achievement,
including achievement for each subgroup
described in section 1111(b)(2)(C)(v); and
``(ii) reduce achievement gaps; and
``(B) describe how the agency has helped at least 1
school or local educational agency make a significant
improvement, as applicable, in other outcomes on the
performance measures described in section 4406;
``(9) provide a description of the applicant's plan for
independently evaluating the effectiveness of activities
carried out with funds under this part;
``(10) provide an assurance that the applicant will--
``(A) cooperate with cross-cutting evaluations;
``(B) make evaluation data available to third
parties for validation and further study; and
``(C) participate in communities of practice; and
``(11) if the applicant is a nonprofit organization that
intends to make subgrants, consistent with section 4405(b),
provide an assurance that the applicant will apply paragraphs
(1) through (10), as appropriate, in the applicant's selection
of subgrantees and in its oversight of those subgrants.
``(c) Criteria for Evaluating Applications.--The Secretary shall
award grants under this part on a competitive basis, based on the
quality of the applications submitted and, consistent with the
standards established under section 4403(e), each applicant's
likelihood of achieving success in improving student outcomes or
outcomes on other performance measures.
``SEC. 4405. USES OF FUNDS.
``(a) Uses of Funds.--Each local educational agency, educational
service agency, or nonprofit organization that receives a grant under
this part--
``(1) shall use the grant funds to address, at a minimum, 1
of the following areas of school innovations:
``(A) Improving the effectiveness of teachers and
school leaders and promoting equity in the distribution
of effective teachers and school leaders.
``(B) Strengthening the use of data to improve
teaching and learning.
``(C) Providing high-quality instruction based on
rigorous standards that build toward college and career
readiness and measuring students' mastery using high-
quality assessments aligned to those standards.
``(D) Turning around the lowest-performing schools.
``(E) Any other area of school innovation, as
determined by the Secretary;
``(2) shall use those funds to develop or expand strategies
to improve the performance of high-need students on the
performance measures described in section 4406; and
``(3) may use the grant funds for an independent
evaluation, as required by section 4404(b)(9), of the
innovative practices carried out with the grant.
``(b) Authority to Subgrant.--A nonprofit organization that
receives a grant under this part may use the grant funds to make
subgrants to other entities to provide support to 1 or more schools or
local educational agencies. Any such entity shall comply with the
requirements of this part relating to grantees, as appropriate.
``SEC. 4406. PERFORMANCE MEASURES.
``The Secretary shall establish performance measures for the
programs and activities carried out under this part. These measures, at
a minimum, shall track the grantee's progress in--
``(1) improving outcomes for each subgroup described in
section 1111(b)(2)(C)(v) that is served by the grantee on
measures, including, as applicable, by--
``(A) increasing student achievement and decreasing
achievement gaps;
``(B) increasing high school graduation rates;
``(C) increasing college enrollment rates and rates
of college persistence;
``(D) improving teacher and school leader
effectiveness;
``(E) improving school readiness; and
``(F) any other indicator as the Secretary or
grantee may determine; and
``(2) implementing its project in rural schools, as
applicable.
``SEC. 4407. REPORTING; ANNUAL REPORT.
``A local educational agency, educational service agency, or
nonprofit organization that receives a grant under this part shall
submit to the Secretary, at such time and in such manner as the
Secretary may require, an annual report that includes, among other
things, information on the applicant's progress on the performance
measures established under section 4406, and the data supporting that
progress.
``SEC. 4408. AUTHORIZATION OF APPROPRIATIONS.
``There are authorized to be appropriated to carry out this part
$500,000,000 for fiscal year 2012 and such sums as may be necessary for
each of the 5 succeeding fiscal years.''.
(b) Table of Contents.--The table of contents in section 2 of the
Elementary and Secondary Education Act of 1965 is amended by inserting
after the item relating to section 4304 the following:
``PART D--Investing in Innovation
``Sec. 4401. Purposes.
``Sec. 4402. National activities.
``Sec. 4403. Program authorized; length of grants; priorities.
``Sec. 4404. Applications.
``Sec. 4405. Uses of funds.
``Sec. 4406. Performance measures.
``Sec. 4407. Reporting; annual report.
``Sec. 4408. Authorization of appropriations.''. | Investing in Innovation for Education Act of 2011 - Amends the Elementary and Secondary Education Act of 1965 to direct the Secretary of Education to award competitive grants to local educational agencies (LEAs), educational service agencies, and nonprofit organizations to support the school innovation efforts of public schools and LEAs.
Requires at least 25% of the grant funds to be awarded for projects in rural areas.
Requires each grant applicant to demonstrate that it has partnered with at least one private, nonprofit, or community-based organization that will provide matching funds. Allows the Secretary to waive the matching funds requirement upon a showing of exceptional circumstances.
Requires each grant to be used to address at least one of the following areas of school innovation: (1) improving the effectiveness of teachers and school leaders and promoting their equitable distribution, (2) strengthening the use of data to improve education, (3) providing high-quality instruction that is based on rigorous standards and measuring students' proficiency using high-quality assessments that are aligned to those standards, (4) turning around the lowest-performing schools, and (5) any other area of school innovation the Secretary chooses.
Directs the Secretary to establish performance measures for tracking each grantee's progress in: (1) improving the academic performance of public elementary and secondary school students, and specified subgroups of those students; and (2) implementing its project in rural schools, as applicable. Requires grantees to use grant funds to develop or expand strategies to improve high-need students' showing on those performance measures. |
Researchers in many fields of science and technology now routinely use _ab initio_ molecular dynamics (AIMD) simulations for investigating various properties of complex systems @xcite. However, the computational cost of AIMD is still a serious obstacle, even on a supercomputer. If, however, the purpose of the simulation is to obtain low-energy conformations through simulated annealing, or to equilibrate the system prior to the production run, the accuracy of time integration is not of primary concern. In this case, the computational cost of AIMD is minimized by using the largest possible time step.
When the Verlet method is used to integrate the equations of motion, the maximum size of the time step is given by @xmath0, where @xmath1 is the period of the fastest oscillation in the system @xcite.
In practice, however, AIMD simulations often break down at @xmath2 because of the strong anharmonicity of the interatomic forces. In this work, we show that a slight modification of the Verlet method allows us to increase the stability limit of the time step significantly with only a small loss in accuracy.
The classical Hamiltonian for a system of @xmath3 atoms is given by @xmath4, where @xmath5 and @xmath6 are vectors of dimension @xmath7 representing atomic positions and momenta, @xmath8 is the mass matrix, and @xmath9 is the potential energy. Then @xmath10 and @xmath11 satisfy the equations of motion @xmath12. In general, these equations cannot be solved analytically, and thus must be solved numerically.
When these equations are discretized in time with a time step @xmath13, neglecting @xmath14 terms, the (velocity) Verlet method is obtained @xcite: @xmath15, where the force is defined by @xmath16 and the subscript denotes the time-step number. This integrator is symplectic, time-reversible, and requires only one force evaluation per step. Therefore, the Verlet method is still widely used for AIMD @xcite.
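The velocity Verlet update can be sketched as follows. This is a generic illustration, not the authors' AIMD code; the harmonic force in the usage line is a hypothetical stand-in for the ab initio forces.

```python
import numpy as np

def velocity_verlet(x, p, masses, force, dt, n_steps):
    """Integrate dx/dt = p/m, dp/dt = F(x) with the velocity Verlet method.

    Only one force evaluation per step: the force from the previous step is
    reused for the first half-kick.
    """
    f = force(x)
    for _ in range(n_steps):
        p = p + 0.5 * dt * f       # half-kick with the old force
        x = x + dt * p / masses    # drift
        f = force(x)               # the single new force evaluation
        p = p + 0.5 * dt * f       # half-kick with the new force
    return x, p

# Usage: a 1D unit-mass harmonic oscillator (omega = 1) as a stand-in
# potential; 628 steps of dt = 0.01 cover roughly one period (2*pi).
masses = np.array([1.0])
x, p = velocity_verlet(np.array([1.0]), np.array([0.0]), masses,
                       lambda x: -x, dt=0.01, n_steps=628)
```

Because the scheme is symplectic, the total energy stays close to its initial value over the whole trajectory rather than drifting.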
It is common practice to use @xmath17 for production runs, where @xmath18 is the theoretical limit defined in the introduction. In contrast, much larger time steps are acceptable for equilibration and simulated annealing, where only modest accuracy is required. At some point, however, the total energy diverges and time evolution breaks down. In our experience, the breakdown occurs at @xmath2 in the following manner:
(a) Two atoms approach each other very closely.
(b) Strong repulsive forces act between them; this effect is more pronounced in AIMD because of the stronger anharmonicity.
(c) These forces give rise to large atomic velocities.
(d) Go to (a) if necessary.
When the time step is large, this cycle often continues until two atoms nearly overlap, indicating the breakdown of the simulation. We also note that even a single atom can cause a breakdown if its kinetic energy is sufficiently large.
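The theoretical limit quoted in the introduction follows from linear stability analysis of the Verlet scheme: for a harmonic mode of angular frequency ω, the method is stable only for ωΔt < 2, i.e. Δt < T/π. The minimal model below (a single unit harmonic oscillator, our own illustration rather than anything from the paper) reproduces both regimes.

```python
import math

def verlet_energy(dt, n_steps):
    """Total energy of a unit-mass harmonic oscillator (omega = 1, so the
    period is T = 2*pi) after n_steps of velocity Verlet with time step dt."""
    x, p = 1.0, 0.0
    f = -x
    for _ in range(n_steps):
        p += 0.5 * dt * f
        x += dt * p
        f = -x
        p += 0.5 * dt * f
    return 0.5 * p * p + 0.5 * x * x

period = 2.0 * math.pi
stable = verlet_energy(0.25 * period, 200)    # dt < T/pi: energy stays bounded
unstable = verlet_energy(0.50 * period, 50)   # dt > T/pi: energy grows without bound
```

Below the limit the energy merely oscillates around its true value; above it, the amplitude grows exponentially from step to step, which is the single-mode analogue of the breakdown cycle described above.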
The basic idea of our approach is to avoid the breakdown by setting an upper limit on the kinetic energy of each atom. To this end, we propose to modify the Verlet method as follows: @xmath19, where the modification of @xmath6 at @xmath20, Eq. ([modpeq]), can be written as @xmath21 in pseudo-code format. Here @xmath22 is defined by @xmath23 with @xmath24, and @xmath25 is the target temperature. This procedure requires two dimensionless parameters: @xmath26 determines the cutoff energy, and @xmath27 corresponds to the kinetic energy after the scaling, i.e. @xmath28 holds for all atoms which satisfy Eq. ([ekincond]). In what follows, this procedure is called _stabilization_. It is also possible to apply the stabilization to thermostatted systems without serious difficulties. Moreover, the computational cost is negligible.
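Under our reading of this procedure, the momentum-rescaling step can be sketched as below: any atom whose kinetic energy exceeds a cutoff of α·(3/2)kT has its momentum rescaled so that its kinetic energy becomes β·(3/2)kT. The names `alpha` and `beta`, their default values, and the exact form of the cutoff are assumptions on our part, since the paper's symbols are elided in this copy.

```python
import numpy as np

def stabilize(p, masses, kT, alpha=10.0, beta=5.0):
    """Cap the per-atom kinetic energy by rescaling momenta.

    Atoms with kinetic energy above alpha * (3/2) kT are rescaled so that
    their kinetic energy equals beta * (3/2) kT (with beta < alpha).
    Assumed reading of the stabilization step; parameter names are ours.
    """
    p = p.copy()
    ekin = np.sum(p**2, axis=1) / (2.0 * masses)   # per-atom kinetic energy
    hot = ekin > alpha * 1.5 * kT                  # atoms above the cutoff
    scale = np.sqrt(beta * 1.5 * kT / ekin[hot])   # momentum scale factor
    p[hot] *= scale[:, None]
    return p

# Usage: one runaway atom among three is clipped back; the thermal atoms
# pass through unchanged.
rng = np.random.default_rng(0)
masses = np.ones(3)
p = rng.normal(size=(3, 3)) * 0.1
p[0] = [50.0, 0.0, 0.0]                # runaway atom
p_new = stabilize(p, masses, kT=1.0)
```

Because only the magnitude of each hot atom's momentum is changed, its direction of motion is preserved; the cost is a single pass over the atoms per step.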
On the other hand, the current implementation ignores the conservation of the total energy and momentum. When a thermostat is applied, this is not a serious problem as long as only a small fraction of the atoms satisfy Eq. ([ekincond]) at each time step. If, however, the drift of the total energy is significant, it may be necessary to include dissipative forces to compensate for the drift @xcite.
Here we study the effect of stabilization on the performance of AIMD simulations of a high-temperature molten salt. Molten lithium fluoride was modeled by 72 LiF pairs in a cubic supercell of length 12.06 Å. Atomic forces were calculated within density functional theory @xcite, and norm-conserving pseudopotentials were used @xcite. The electronic orbitals were expanded in finite-element basis functions @xcite with an average cutoff energy of 78 Ryd, while the resolution was enhanced by more than a factor of two near the atoms @xcite. Only the @xmath29-point was used to sample the Brillouin zone. The electronic states were quenched to the ground state at each time step with the limited-memory BFGS method @xcite. The equations of motion were integrated using the Verlet method with and without the stabilization. After equilibration, production runs of 240 ps were carried out using @xmath30 fs. The temperature was controlled by the Berendsen thermostat with a relaxation time of @xmath31. In Table [mdlif] we show the simulation details for all runs. We used the same initial conditions @xmath32 and experimental masses for all atoms in these runs. We note in passing that the period of the fastest oscillation in this system is not a well-defined quantity; however, @xmath33 0.5 fs @xcite, 1.5 fs @xcite, and 4 fs @xcite were used in previous studies of this system.
The Verlet method was found to be stable up to @xmath33 6 fs without stabilization, while a divergence of the total energy was observed at @xmath33 7 fs after running for 203 ps. When the stabilization was performed, the simulation was valid even for @xmath33 11 fs. We note, however, that the values of @xmath26, @xmath27, and @xmath31 had to be reduced for larger @xmath13 to stabilize the simulations. We show the effect of stabilization for @xmath33 8 fs in Fig. [stabfig]. Distributions of the kinetic energy before and after the stabilization are compared in Fig. [ekinfig]. The original distribution decays very slowly with energy and extends up to 4.4 Ryd; this _long tail_ is responsible for the breakdown of the simulations. After the stabilization, the distribution is truncated at @xmath34. In Fig. [rdffig] we compare the radial distribution functions (@xmath35Li-Li@xmath36, @xmath35Li-F@xmath36, and @xmath35F-F@xmath36) obtained from the simulations. The first peak of @xmath35Li-F@xmath36 shows some broadening for @xmath33 10 and 11 fs; however, all runs give similar results at larger distances. Moreover, the @xmath35Li-Li@xmath36 and @xmath35F-F@xmath36 RDFs remain essentially the same for all runs up to @xmath33 11 fs. The self-diffusion coefficients given in Table [mdlif] show some scatter, but no clear dependence on the simulation conditions @xcite. These results are in reasonable agreement with the experimental values (8.9@xmath3710@xmath38@xmath39/s for Li and 7.2@xmath3710@xmath38@xmath39/s for F) measured at 1123 K @xcite.
We have shown that the stability limit of the Verlet method can be increased by @xmath40 for molten LiF without significant loss in accuracy if the kinetic energy of each atom is carefully controlled. Preliminary AIMD simulations of liquid water are also showing promising results. The stabilization method presented in this paper would be particularly useful when only modest accuracy is required within the framework of AIMD, e.g., for equilibration and global optimization. This algorithm may also be used in conjunction with other methods to accelerate the simulations even further, such as Langevin dynamics @xcite, linear-scaling methods @xcite, and mass-scaling methods @xcite.
This work has been supported by the Strategic Programs for Innovative Research (SPIRE) and a KAKENHI Grant (22104001) from the Ministry of Education, Culture, Sports, Science & Technology (MEXT), and the Computational Materials Science Initiative (CMSI), Japan.
When the stabilization is performed according to Sec. [theosec], the conservation of total momentum is not strictly valid. Therefore, the motion of the center of mass was explicitly taken into account when calculating the self-diffusion coefficients.
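The centre-of-mass correction mentioned here can be sketched as a standard Einstein-relation estimate. The function below is our own illustration, not the authors' analysis code, and the choice of fitting the slope over the second half of the MSD curve is an assumption.

```python
import numpy as np

def diffusion_coefficient(traj, masses, dt):
    """Self-diffusion coefficient from the Einstein relation,
    D = lim_{t->inf} <|r(t) - r(0)|^2> / (6 t),
    after removing the centre-of-mass motion at every frame.

    traj: unwrapped positions, shape (n_frames, n_atoms, 3).
    """
    m = masses[None, :, None]
    com = np.sum(m * traj, axis=1, keepdims=True) / masses.sum()
    rel = traj - com                                  # COM-frame positions
    disp = rel - rel[0]                               # displacement from t = 0
    msd = np.mean(np.sum(disp**2, axis=2), axis=1)    # MSD per frame
    t = dt * np.arange(len(traj))
    half = len(t) // 2                                # fit the late-time slope
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0
```

Without the COM subtraction, any net drift imparted by the rescaling step would inflate the apparent mean-square displacement and hence the estimated diffusion coefficient.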
Details of AIMD simulations for molten LiF. Simulation lengths shorter than 240 ps indicate failed runs. Mod represents the probability that each atom satisfies Eq. ([ekincond]) at each time step.
| The Verlet method is still widely used to integrate the equations of motion in _ab initio_ molecular dynamics simulations.
We show that the stability limit of the Verlet method may be significantly increased by setting an upper limit on the kinetic energy of each atom, with only a small loss in accuracy. The validity of this approach is demonstrated for molten lithium fluoride.