By several measures, young adults tend to be less religious than their elders; the opposite is rarely true

In the United States, religious congregations have been graying for decades, and young adults are now much less religious than their elders. Recent surveys have found that younger adults are far less likely than older generations to identify with a religion, believe in God or engage in a variety of religious practices. But this is not solely an American phenomenon: Lower religious observance among younger adults is common around the world, according to a new analysis of Pew Research Center surveys conducted in more than 100 countries and territories over the last decade.

Although the age gap in religious commitment is larger in some nations than in others, it occurs in many different economic and social contexts – in developing countries as well as advanced industrial economies, in Muslim-majority nations as well as predominantly Christian states, and in societies that are, overall, highly religious as well as those that are comparatively secular. For example, adults younger than 40 are less likely than older adults to say religion is “very important” in their lives not only in wealthy and relatively secular countries such as Canada, Japan and Switzerland, but also in countries that are less affluent and more religious, such as Iran, Poland and Nigeria.

While this pattern is widespread, it is not universal. In many countries, there is no statistically significant difference in levels of religious observance between younger and older adults. In the places where there is a difference, however, it is almost always in the direction of younger adults being less religious than their elders.

Same pattern seen over multiple measures of religious commitment

Overall, adults ages 18 to 39 are less likely than those ages 40 and older to say religion is very important to them in 46 out of 106 countries surveyed by Pew Research Center over the last decade. In 58 countries, there are no significant differences between younger and older adults on this question. And just two countries – the former Soviet republic of Georgia and the West African country of Ghana – have younger adults who are, on average, more religious than their elders. (For theories about why younger adults often are less religious, see Chapter 1. For a discussion of some of these exceptions, see the sidebar in Chapter 2.)

Similar patterns also are found using three other standard measures of religious identification and commitment: affiliation with a religious group, daily prayer and weekly worship attendance. In 41 countries, adults under 40 are significantly less likely than their elders to have a religious affiliation, while in only two countries (Chad and Ghana) are younger adults more likely to identify with a religious group. In 63 countries, there is no statistically significant difference in affiliation rates. Younger adults are less likely to say they pray daily in 71 of 105 countries and territories for which Pew Research Center survey data are available, while they are more likely to pray daily in two countries (Chad and Liberia). And adults under 40 are less likely to attend religious services on a weekly basis in 53 of 102 countries; the opposite is true in just three countries (Armenia, Liberia and Rwanda).

While the number of countries with a significant age gap shows how widespread this pattern is, it does not give a sense of the magnitude of the differences between older and younger adults on these measures.
In many countries, the gaps are relatively small. Indeed, the average gap between younger adults and older adults across all the countries surveyed is 5 percentage points for affiliation, 6 points for importance of religion, 6 points for worship attendance and 9 points for prayer. But a substantial number of countries have much bigger differences. There are gulfs of at least 10 percentage points between the shares of older and younger adults who identify with a religious group in more than two dozen countries – mostly with predominantly Christian populations in Europe and the Americas. For example, the share of U.S. adults under age 40 who identify with a religious group is 17 percentage points lower than the share of older adults who are religiously affiliated. The gap is even larger in neighboring Canada (28 points). And there are double-digit age gaps in affiliation in countries as far flung as South Korea (24 points), Uruguay (18 points) and Finland (17 points).

A note on averages

To help make sense of an enormous pool of data, this report sometimes cites global averages of country-level data. In calculating the averages, each country is weighted equally, regardless of population size. Global averages, therefore, should be interpreted as the average finding among all countries surveyed, not as population-weighted averages representing all people around the world.

Differences among regions, religions

Age gaps are more common in some geographic regions than others. For instance, in 14 out of 19 countries and territories surveyed in Latin America and the Caribbean, adults under age 40 are significantly less likely than their elders to say religion is very important in their lives. This is also the case in about half of the European countries surveyed (18 out of 35), and in both countries in North America (the U.S. and Canada; Mexico is included in the figures for Latin America). On the other hand, in sub-Saharan Africa, where overall levels of religious commitment are among the highest in the world, there is no significant difference between older and younger adults in terms of the importance of religion in 17 out of 21 countries surveyed.

Age gaps are also more common within some religious groups than in others. For example, religion is less important to younger Christian adults in nearly half of all the countries around the world where sample sizes are large enough to allow age comparisons among Christians (37 out of 78). For Muslims, this is the case in about one-quarter of countries surveyed (10 out of 42). Among Buddhists, younger adults are significantly less religious in just one country (the United States) out of five countries for which data are available. There is no age gap by this measure among Jews in the U.S. or Israel, or among Hindus in the U.S. or India.

Do age gaps mean the world is becoming less religious?

The widespread pattern in which younger adults tend to be less religious than older adults may have multiple potential causes. Some scholars argue that people naturally become more religious as they age; to others, the age gap is a sign that parts of the world are secularizing (i.e., becoming less religious over time). (For a detailed discussion of theories about age gaps and secularization, see Chapter 1.) But even if parts of the world are secularizing, it is not necessarily the case that the world’s population, overall, is becoming less religious.
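As a concrete illustration of the country-weighted averaging described in the "note on averages" above, here is a minimal Python sketch; the country names and percentages are hypothetical placeholders, not Pew survey data.

```python
# Minimal sketch (not Pew's actual pipeline) of the country-weighted averaging
# described in the "note on averages": every country counts once, regardless of
# population. The figures below are hypothetical placeholders, not survey data.

country_data = {
    # country: (% under-40 saying religion is "very important",
    #           % 40-and-older saying the same)
    "Country A": (40.0, 57.0),
    "Country B": (72.0, 75.0),
    "Country C": (18.0, 30.0),
}

# Age gap per country: older share minus younger share (positive = young less religious).
gaps = {c: older - younger for c, (younger, older) in country_data.items()}

# Global average = unweighted mean across countries, NOT a population-weighted figure.
global_avg_gap = sum(gaps.values()) / len(gaps)

print(gaps)
print(f"Average age gap across countries: {global_avg_gap:.1f} percentage points")
```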
On the contrary, the most religious areas of the world are experiencing the fastest population growth because they have high fertility rates and relatively young populations. Previously published projections show that if current trends continue, countries with high levels of religious affiliation will grow fastest. The same is true for levels of religious commitment: The fastest population growth appears to be occurring in countries where many people say religion is very important in their lives. These are among the key findings of a new Pew Research Center analysis of surveys collected over the last decade in 106 countries. The data analyzed in this report come from 13 different Pew Research Center studies, including annual Global Attitudes Surveys as well as major studies on religion in sub-Saharan Africa; the Middle East and other countries with large Muslim populations; Latin America; the United States; Central and Eastern Europe; and Western Europe. The number of countries analyzed varies by measure and type of comparison. While data are available for as many as 106 countries depending on the measure, the number of countries with reliable data on a particular religious group depends on the size of that group in each country’s sample. For example, there are sufficient data to gauge the importance of religion among Christians in 84 countries, and the sample sizes are large enough to compare responses among older and younger Christians in 78 of those 84 countries. Another limitation is that the measures of religious observance contained in many surveys around the world and analyzed in this report may not be equally suitable for all religious groups. In particular, rates of prayer and attendance at worship services are generally seen as reliable indicators of religious observance within Abrahamic faiths (Christianity, Islam and Judaism), but they may not be as applicable for Buddhism, Hinduism and other Eastern religions. Because of these disparities, this report does not seek to compare levels of religious commitment between the world’s major religions (e.g., to compare Christians with Buddhists or Muslims). Rather, the primary focus is on age differences within religious groups and within countries or geographic regions (e.g., comparing younger Christians with older Christians, or younger Indonesians with older Indonesians). This study, produced with funding from The Pew Charitable Trusts and the John Templeton Foundation, is part of the Pew-Templeton Global Religious Futures project, a broader effort to understand religious change, including the demographic patterns shaping religion around the world. Previous reports have focused on gender and religion, religion and education and population growth projections for major world religions. The rest of this report looks in more detail at both age gaps in religious commitment (Chapter 2) and overall levels of religious commitment around the world (Chapter 3), by four standard measures: religious affiliation, importance of religion, attendance and prayer. Appendixes detail the methodology and sources used, and include tables that show each of the four measures for every country surveyed with data for overall levels of religious commitment, figures for adults over and under 40, age gaps for the total population and age gaps by religious group. But, first, Chapter 1 examines theories about why levels of religious observance vary so markedly across different age groups and different parts of the world. 
||||| Analysis of 106 nations finds Ghana and Georgia are only places where under-40s are more religious than older compatriots

Young more religious than old in only two countries in world

Young people are more religious than their elders in only two countries – Ghana, and the former Soviet republic of Georgia – according to a global analysis. In 46 out of 106 countries surveyed by the Washington-based Pew Research Center, people between the ages of 18 and 39 are less likely to say religion is very important to them than adults over the age of 40. Countries where the age gap is most marked are Poland, Greece, Chile, Romania and Portugal – all predominantly Christian countries, and all with a percentage point difference between the two age groups of 20 or higher. The US has a 17-point difference, and Ireland a nine-point gap. The UK is among 58 countries in which there is no significant difference between younger and older adults. In Lebanon, a majority Muslim country but with a large Christian population, there is a 20-point age gap. In Iran, ruled by an Islamic theocracy, there is a nine-point difference. There is an age gap in a majority of Latin American and Caribbean countries, about half of European countries, and in North America. It is more likely to be a feature of Christian-majority countries than Muslim-majority ones.

According to Pew: “Although the age gap in religious commitment is larger in some nations than in others, it occurs in many different economic and social contexts – in developing countries as well as advanced industrial countries, in Muslim-majority nations as well as predominantly Christian states, and in societies that are, overall, highly religious as well as those that are comparatively secular.” The report, The Age Gap in Religion Around the World, says that a common explanation is that “new generations become less religious in tandem with economic development – as collective worries about day-to-day survival become less pervasive and tragic events become less frequent. “According to this line of thinking, each generation in a steadily developing society would be less religious than the last, which would explain why young adults are less religious than their elders at any given time.” Better education, and a trend towards religious belief as one gets older and faces mortality, could also help explain the gap. The report notes that the most religious areas of the world are experiencing the fastest population growth, due to high fertility rates and relatively young populations.
– Is there a generation gap when it comes to religious belief? A new study by Pew Research Center surveyed religious beliefs in 106 countries over the last decade and found that in 46 of them, individuals between the ages of 18 and 39 were less religious than those 40 and over. The only two countries where people under 40 were more likely than their elders to say that religion is “very important” in their lives were Ghana and the former Soviet republic of Georgia. The age gap occurs “in developing countries as well as advanced industrial countries, in Muslim-majority nations as well as predominantly Christian states, and in societies that are, overall, highly religious as well as those that are comparatively secular,” says the report. The 10 countries with the biggest generation gap in religious belief, per the Guardian:
Following traumatic spleen rupture or surgery, heterotopic autotransplantation of splenic tissue can occur in a process called splenosis. It can occur anywhere in the abdominal cavity and can even reach the brain tissue. Because splenosis is a rare event, its pathogenesis is not fully understood. To explain when splenic implants occur, some assume that spillage of the damaged splenic pulp into the adjacent cavity leads to the seeding, or it may be due to hematogenous spread. Manipulation of splenosis can lead to many severe complications, such as obstruction and bleeding, which can occur as a result of splenotic tissue overgrowth 1, 2. In 1995, Higgins and Crain 3 reported the first successful laparoscopic procedure for pelvic splenosis, and ever since it has been the treatment of choice for splenosis 3, 4. Here we report an incidental finding of splenosis, discovered during bariatric surgery, in a morbidly obese patient who had undergone splenectomy due to a traumatic injury 10 years ago.

A 30-year-old female, known to have hypertension and diabetes, complained of excessive weight gain after her gastric band, placed one year earlier, lost its effectiveness. On further questioning, the patient shared that she had undergone a laparoscopic splenectomy 10 years ago after a trauma. Nonetheless, laparoscopic gastric band removal and conversion to sleeve gastrectomy was still planned. During the operation, implanted splenic bodies were incidentally found all over her abdomen, such as on the stomach, liver, gallbladder, diaphragm, and in the left upper quadrant, but not in the lower pelvis, as seen in the images below. Beginning the laparoscopic surgery according to plan, unaware of her splenotic condition, we inserted three trocars into her abdomen in preparation for the gastric sleeve. When the camera was first inserted, we directly observed multiple splenic implants covering the abdomen and concluded the diagnosis to most likely be splenosis, as the patient had undergone a splenectomy in the past. After observing splenosis all over the stomach, the gastric sleeve procedure became complicated and difficult to complete, as splenotic implants were also found on the greater curvature of the stomach (Fig. ). Moreover, implanted splenic tissue was found surrounding the gastric band, so it was imperative to remove the gastric band before any erosion occurred, injuring the splenic tissue and causing massive bleeding. In a meticulous manner, laparoscopic dissection of the fibrosis around the gastric band was performed, avoiding the surrounding splenic nodules (Fig. 2). The band was then cut and withdrawn from the stomach and out of the abdominal cavity. Some splenic nodules were then dissected and sent for pathological examination to confirm our diagnosis. Moreover, a very rare occurrence of splenosis was found silently surrounding and obstructing the patient's gallbladder (Fig. 4), which should be checked on a regular basis in case of cholecystitis or other gallbladder abnormalities. Another interesting finding was the seeding on the diaphragm, as seen in Figs 5, 6 and 7. Postoperatively, the patient was informed about the complication that arose during the procedure and led to aborting the gastric sleeve surgery. She was then consulted by a nutritionist in order to guide her through a healthy lifestyle of diet and exercise.
The patient remained in the hospital for 3 days in order to monitor her stability and assess the risk of any postoperative complications involving the splenic implants. Pathology: received in formalin were two fatty fragments enclosing around seven dark brownish nodular formations; multiple splenic nodules, consistent with the suggested diagnosis of splenosis, with no evidence of other pathologies.

Splenosis is defined as heterotopic autotransplantation of splenic tissue to an anatomical location different from that of the normal spleen, usually after trauma or surgical avulsion of the spleen, or even hematological diseases. Although these implants acquire the normal function of the spleen, splenosis is a benign acquired condition, unlike the congenital accessory spleen, which arises from the dorsal mesogastrium 5 and is found near the splenic hilum. In splenosis, however, the splenic tissues have poorly formed capsules, no hilum, and vary in shape and size 5, 6. They even differ in their blood supply: splenic implants use the surrounding peritoneal vessels rather than the splenic artery 7. Splenosis can occur anywhere in the body, but it is mainly found in the abdomen, usually at port sites of previous surgeries 6, 7. Statistically, it has been shown to occur in 16–17% of patients who undergo elective splenectomy for hematological disease, and in 44–76% of patients with a post-traumatic splenectomy, owing to spillage after the traumatic insult 6, 8. In our case, although the patient did not report any hematological diseases, she had undergone a splenectomy due to trauma, thus increasing the risk of splenic tissue spillage in the most common location, the abdomen.

Because splenosis is usually asymptomatic, it is most commonly discovered incidentally, during abdominal surgery or through an evaluation for another disease. It is usually suspected in patients with a history of splenectomy who are symptomatic. In this case, our patient was asymptomatic and did not experience any discomfort. Although the first part of the surgery, the gastric band removal, was completed, this was done solely out of concern that erosion or inflammation around the gastric band could later cause an easily avoidable complication. Thus, only the gastric band was removed, and the sleeve gastrectomy was aborted because the locations of the splenotic implants made the surgery anatomically difficult and dangerous, with a high risk of bleeding. Symptoms of splenosis, when present, can be nonspecific: for example, abdominal pain due to infarction, an enlarging mass, obstruction from adhesions, gastrointestinal bleeding, hydronephrosis, or pressure by mass effect 9. Although our patient was asymptomatic, she did have an enlarged abdomen, which was thought to be a result of her weight gain. Looking back, this may also have been an effect of the splenosis, rather than simply of the retired gastric band. The diagnosis of splenosis is normally well established by a radionuclide scintigraphic study of the liver and spleen with technetium-99m sulfur colloid, which can accurately detect splenic tissue as small as 1 cm in diameter 5. Scintigraphy with technetium-99m radiolabelled heat-damaged erythrocytes or with indium-111-labeled platelets, and ferumoxides-enhanced MRI, are other valuable tools for diagnosing splenosis 10, 11. Once splenosis is confirmed, no further workup is necessary unless the patient is symptomatic.
Unfortunately, our patient's case was found incidentally during laparoscopic surgery, which is rarely used for diagnosis; usually splenosis is confirmed by a radionuclide scintigraphic study, which was not applicable in our case. Although the degree of function of the spilled splenic tissue is not known, it may be demonstrated by a peripheral blood smear, noting the presence of damaged, nonfunctional RBCs, such as Howell-Jolly bodies, Heinz bodies, and pitted red cells, which increase in the asplenic state. Historically, the preferred treatment for identification and removal of ectopic splenotic implants was open surgery. Nowadays, since the retrogastric pouch can be visualized in laparoscopic surgery, which is not seen through the open approach, laparoscopic surgery is preferred, as it decreases the risk of spillage when an endobag is used. Although the seeding of spilled splenic cells under high-pressure pneumoperitoneum during a slightly longer procedure is a new and specific problem faced in laparoscopic splenectomy, minimally invasive surgery such as laparoscopy remains the ideal treatment for patients with symptomatic splenosis 12. Although there has not been much research concerning laparoscopic surgery in asymptomatic patients, in our case the sole purpose of the surgery was gastric band removal followed by a gastric sleeve, which was then complicated by the incidental finding of splenosis in the abdomen. In conclusion, great technical and meticulous care is required during laparoscopic procedures, specifically splenectomy, in order to avoid rupturing the splenic capsule, spillage, and the splenic implantation known as splenosis. Nevertheless, splenosis is not necessarily a life-threatening and dangerous condition, especially in asymptomatic patients, who do not require surgery unless they become symptomatic. Although splenosis is known to be a rare occurrence, with only a few reported cases after laparoscopic splenectomy 7, such cases are expected to increase in the future.
Key clinical message: In any patient, the occurrence of post-splenectomy splenosis can complicate the planning of further surgeries. In our case, the gastric sleeve procedure was aborted, as it would have put the patient's life in danger. Therefore, only the gastric band was removed, eliminating the risk of future erosion.
With the development of the economy and changes in lifestyle, the epidemic of obesity and the related metabolic syndrome is alarming and has become one of the most important public health problems worldwide. Obesity is a well-known factor inducing diseases such as hypertension, coronary heart disease, and diabetes. Dieting and exercise are generally considered to be the most effective treatments for obesity, yet long-term persistence is difficult. Most antiobesity drugs have been withdrawn due to their adverse effects, such as psychiatric disorders and nonfatal myocardial infarction. In 2009, five independent research groups identified and characterized the presence of brown adipose tissue (BAT) in adults using 18F-FDG PET/CT [4-8], drawing attention to the use of BAT to counter the spread of obesity because of its immense capacity to convert excess energy into heat. Increasing evidence indicates that BAT recruitment [9-13] and BAT transplantation [14, 15] may play an important role in decreasing body weight and improving whole-body energy metabolism.

Mammalian adipose tissue can be divided into white adipose tissue (WAT) and brown adipose tissue (BAT). WAT is characterized by a single, large lipid droplet and few mitochondria; it stores excess energy in the form of triglycerides and also secretes a variety of cytokines to regulate energy metabolism. It is well established that visceral adipose tissue plays an important part in the pathophysiology of insulin resistance. Visceral adipose tissue makes obese individuals more prone to metabolic and cardiovascular diseases than fat distributed subcutaneously. BAT, by contrast, contains numerous small lipid droplets and a much higher number of mitochondria. At the molecular level, uncoupling protein 1 (UCP1), which is uniquely expressed in BAT, is bound up with the heat production process. UCP1 is localized on the inner mitochondrial membrane and uncouples the activity of the respiratory chain from ATP synthesis, thereby releasing energy as heat. In humans, it has been estimated that as little as 50 g of BAT could utilize up to 20% of basal caloric needs if maximally stimulated. Previous studies suggest that β3-adrenergic agonists induce UCP1 expression in WAT. Despite this evidence, the causes of this induction have not been fully elucidated.

miRNAs are small noncoding RNAs of ~22 nucleotides in length that play a crucial part in posttranscriptional gene regulation. miRNAs are involved in the pathogenesis of cardiovascular diseases and have become an intriguing target for therapeutic intervention [22-25]. In addition, miRNAs might serve a valuable diagnostic function for cardiovascular pathologies because they leak into the circulating blood from injured cells. Adrenergic stimulation inhibits the expression of miR-133 (a muscle-enriched miRNA), abolishing the posttranscriptional silencing of PRDM16. miR-196a induces functional brown adipogenesis in WAT through the suppression of HOXC8. In vivo, brite/beige cells are described as UCP1-positive islets within white fat depots following cold or β-adrenergic stimulation. Meanwhile, thyroid hormone (T3) could also induce UCP1 expression in white adipocytes, as shown in our previous study. However, whether additional miRNAs are involved in the process of WAT browning, especially in epididymal adipose tissue (visceral adipose tissue), needs further study. In this study, we investigated the regulation and involvement of miRNAs in the browning of epididymal adipose tissue.
The effects of CL316243 on the expression of genes involved in WAT browning were examined, and the relevant regulatory miRNAs were analyzed using bioinformatics software. Here, we sought to observe whether members of the miRNA family are involved in the browning of visceral adipose tissue and whether this could serve as a novel treatment for obesity.

For CL316243 treatment, fourteen six-week-old male C57BL/6J mice (purchased from Vital River Laboratory Animal Technology Co. Ltd.) were randomly divided into two groups. One group was injected intraperitoneally (i.p.) once daily with 1 mg/kg CL316243 (Tocris Bioscience) in 0.9% NaCl for 7 days; 0.9% NaCl was used in the control group instead of CL316243. The mice were maintained at 22 ± 2°C on a 12 h/12 h light cycle (8.00 a.m. to 8.00 p.m.) in an Office of Laboratory Animal Welfare-certified animal facility, with free access to water and a standard laboratory chow diet. The interscapular brown adipose tissues, inguinal subcutaneous adipose tissues, and retroperitoneal epididymal adipose tissues were dissected out.

Total RNA was isolated from brown adipose tissue (BAT), epididymal adipose tissue, and subcutaneous adipose tissue using TRIzol reagent (Invitrogen). The cDNA was synthesized using random hexamers (Invitrogen) for subsequent real-time quantitative PCR analysis (ABI Prism ViiA7; Applied Biosystems Inc.). Primers for UCP1, PGC-1α, C/EBP, PRDM16, PPARγ2, CIDEA, DIO2, and CPT1B are listed in Table 1. Small RNAs were extracted from brown adipose tissue, epididymal adipose tissue, and subcutaneous adipose tissue using the BiooPure RNA isolation kit (Bioo Scientific, Austin, TX, USA) according to the manufacturer's recommendations. The cDNA was synthesized using the miRcute miRNA first-strand cDNA synthesis kit (Qiagen), and PCR products were detected using the miRcute miRNA qPCR detection kit (Qiagen) according to the manufacturer's recommendations for subsequent real-time quantitative PCR analysis (ABI Prism ViiA7; Applied Biosystems Inc.). Primers for miR-9, miR-338-3p, miR-let-7g, and U6 are listed in Table 1. The relative expression of the different genes and miRNAs was determined using the 2^(-ΔΔCt) method (a minimal worked example of this calculation is sketched below). Samples were cut into 5 μm sections, and hematoxylin-eosin staining was routinely performed. For all comparisons, Student's t-tests were performed using SPSS 21.0 software, and p values less than 0.05 were considered significant on a two-tailed test.

Effect of CL316243 on the expression of genes involved in WAT browning in the adipose tissues: As a first step, we evaluated the expression of the genes involved in WAT browning in BAT with CL316243 treatment. We did not detect a significant change in the expression of genes involved in WAT browning, such as UCP1, PGC-1α, C/EBP, PRDM16, and CIDEA, in BAT. As shown in Figure 1(a), compared with the control group, the BAT-specific genes UCP1 and CIDEA and the BAT differentiation genes PGC-1α, C/EBP, and PRDM16 showed no statistically significant differences in mRNA expression in BAT of the CL316243-treated group (p > 0.05). Next, we investigated the effect of CL316243 treatment on the expression of the browning genes in subcutaneous adipose tissue. As shown in Figure 1(b), there were no significant changes in mRNA expression of the BAT-specific genes UCP1 and CIDEA or the BAT differentiation genes PGC-1α, C/EBP, and PRDM16 (p > 0.05).
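The following is a minimal sketch of the 2^(-ΔΔCt) (Livak) relative-quantification calculation referred to in the methods above; the Ct values, the group labels, and the choice of reference are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the 2^(-ddCt) (Livak) relative-quantification method.
# All Ct values below are hypothetical placeholders, not data from this study.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Return relative expression of a target gene (treated vs. control),
    normalized to a reference gene, using 2^(-ddCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt in treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt in control sample
    dd_ct = d_ct_treated - d_ct_control                 # ddCt
    return 2 ** (-dd_ct)

# Example: a target miRNA normalized to a U6-style reference (numbers are invented).
print(fold_change(ct_target_treated=24.1, ct_ref_treated=18.0,
                  ct_target_control=22.0, ct_ref_control=18.2))
# ~0.20, i.e. roughly a five-fold decrease relative to control
```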
Interestingly, as shown in Figure 1(c), mRNA expression of the BAT-specific genes UCP1, CIDEA, and DIO2 and of the fatty acid oxidation-related gene CPT1B was significantly raised in epididymal adipose tissue with CL316243 treatment (p < 0.05), whereas the expression of the other BAT differentiation-related genes PGC-1α, C/EBP, and PRDM16 showed no statistically significant differences (p > 0.05). Histological examination revealed typical unilocular cells with rich cytoplasmic staining and multilocular lipid droplets in the epididymal adipose tissue of the CL316243 treatment group (Figure 2).

In silico prediction of UCP1-targeting miRNAs: What is noteworthy is the fact that UCP1 mRNA expression was significantly higher in the epididymal adipose tissue of the CL316243-treated group than in the control group. We used the miRanda prediction algorithm to identify putative miRNAs that could regulate the UCP1 gene. A panel of two miRNAs, namely miR-9 and miR-338-3p, was selected for its putative ability to target the 3'-UTR of UCP1 mRNA (Figure 3(a)).

Expression of miR-9 and miR-338-3p in the epididymal adipose tissue of mice with CL316243 treatment: Next, we investigated the expression of the UCP1-targeting miRNAs in epididymal adipose tissue with CL316243 treatment. As shown in Figure 3(b), compared with the control group, the expression of miR-9 and miR-338-3p was significantly reduced in epididymal adipose tissue with CL316243 treatment (p < 0.05), while miR-let-7g (not a UCP1-targeting miRNA, used as a negative control) showed no statistically significant difference in expression (p > 0.05).

This study showed that there were more beige cells in epididymal adipose tissue after β3-adrenergic agonist (CL316243) treatment, as marked by significantly increased expression of the brown-adipose-specific gene UCP1 and the appearance of numerous beige cells in H&E-stained sections. These results are consistent with previous research on the browning effects of CL316243 in the epididymal adipose tissue of animals [34-36]. The present study highlights the potential involvement of miRNAs in the regulation of UCP1 expression upon activation of the β3-adrenoceptor. The BAT-specific genes DIO2, CIDEA, and CPT1B were also induced at the mRNA level in response to the selective β3 agonist. Interestingly, UCP1 mRNA expression was reduced in brown adipose tissue in the CL316243 group, although the difference was not significant. We speculate that the browning of epididymal adipose tissue may increase thermogenesis, so the interscapular brown adipose tissue restrained its heat-generating capacity to keep the body temperature in balance. Earlier studies show that enhanced expression of UCP1 in the WAT of mice can reduce obesity [12, 37, 38]. Furthermore, the distribution of white adipose tissue (WAT) greatly affects metabolic risk: an increased volume of visceral adipose tissue is associated with a higher risk of metabolic disease. Interestingly, these beige cells play a more important role than the interscapular BAT in weight loss in mice following β-adrenergic or cold stimulation. Browning visceral adipose tissue (e.g., epididymal adipose tissue) may therefore be a promising strategy to combat obesity. The exact mechanism involved in the appearance of beige cells in epididymal adipose tissue after treatment with a β3-adrenergic agonist is still not fully elucidated. As previously shown, several miRNAs have been demonstrated to be involved in the activation of BAT or the process of browning of WAT. However, whether additional miRNAs are involved in the process of WAT browning is still unknown.
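The study used the miRanda algorithm for target prediction; as a rough illustration of the underlying idea only (not of miRanda itself, which also scores full-alignment complementarity and pairing thermodynamics), the sketch below checks whether the reverse complement of a miRNA "seed" (nucleotides 2-8) occurs in a 3'-UTR. The sequences shown are invented placeholders, not the real miR-9 or UCP1 sequences.

```python
# Naive seed-match check: does the reverse complement of the miRNA seed
# (nucleotides 2-8, 5'->3') appear in the target 3'-UTR?
# Toy illustration only of the idea behind tools such as miRanda; real
# predictors also score pairing energy and conservation.
# Sequences below are invented placeholders, not real miR-9/UCP1 sequences.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna: str, utr: str) -> list[int]:
    seed = mirna[1:8]                                   # positions 2-8 of the miRNA
    # Target site = reverse complement of the seed, written 5'->3' on the mRNA.
    site = "".join(COMPLEMENT[b] for b in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

mirna_seq = "UCUUUGGUUAUCUAGCUGUAUGA"                    # placeholder miRNA (5'->3')
utr_seq = "AAGCUAGAUAACCAAAGAGCCAUAGCUAGAUAACCAAAGC"     # placeholder 3'-UTR (5'->3')

print(seed_sites(mirna_seq, utr_seq))                   # indices of putative seed-match sites
```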
miRNAs bind to complementary target sites and lead to repression of translation or degradation of the target transcript. Here we showed that miR-9 and miR-338-3p, which possibly target UCP1, were dramatically decreased with CL316243 treatment. These results correlate closely with an increase in UCP1 mRNA, supporting the hypothesis that CL316243 reduces miR-9 and miR-338-3p expression, thereby reducing the degradation of UCP1 messenger RNA. We propose that β3-adrenergic stimulation may restrain the expression of miR-9 and miR-338-3p in the epididymal adipose tissue of mice. Our study also shows that prediction of miRNAs from miRNA databases may be an alternative way to identify novel miRNAs involved in the process of WAT browning. We focused on the UCP1-targeting miRNAs, which may delineate the process of WAT browning more clearly and lay a good foundation for the prevention and treatment of obesity. Clearly, we still need to verify whether miR-9 and miR-338-3p are genuine candidates that target UCP1; validation of these miRNAs by luciferase reporter assays and qRT-PCR is ongoing but beyond the scope of this paper. Taken together, our findings suggest that the potential UCP1-targeting miRNAs miR-9 and miR-338-3p may be involved in the browning of epididymal adipose tissue through posttranscriptional suppression of UCP1 gene expression.
Background: White adipose tissue browning may be a promising strategy to combat obesity. UCP1 is strongly induced in white adipose tissue by β3-adrenergic agonist treatment, but the causes of this increase have not been fully elucidated. This study aims to explore additional miRNAs involved in the browning of visceral adipose tissue. Methods: A total of fourteen mice were randomly divided into a control and a study group. The study group mice were injected intraperitoneally with CL316243 once daily for seven days, while the control group was treated with 0.9% NaCl. After the 7-day period, the expression of genes involved in WAT browning and of potential UCP1-targeting miRNAs in adipose tissues was analyzed by qPCR. Results: qPCR analysis revealed that UCP1, DIO2, CIDEA, and CPT1B were overexpressed in the epididymal adipose tissue of the CL316243 group. Furthermore, the potential UCP1-targeting miRNAs miR-9 and miR-338-3p were significantly decreased in the epididymal adipose tissue of the CL316243 group. Conclusion: This suggests that the potential UCP1-targeting miRNAs miR-9 and miR-338-3p may be involved in the browning of epididymal adipose tissue by regulating UCP1 gene expression. In this study, we demonstrated that this increase in UCP1 is due, at least in part, to the decreased expression of certain UCP1-targeting miRNAs in epididymal adipose tissue compared to controls.
The Aharon-Vaidman game @xcite is a conceptually simple example of how quantum mechanics can be both beneficial and counter-intuitive. In the classical analog, Alice puts a particle in one of three boxes such that Bob, who in the next turn is allowed to check only two of them, is most likely to find it. Alice, who can either accept or not accept a particular game trial, wins whenever she accepts a trial in which Bob has also discovered the particle. Hence, it is obvious that Alice will not use the box that Bob does not have access to, and therefore her chance to win is @xmath0. However, the result of the game can be totally different when the particle is described by the laws of quantum mechanics and Alice prepares it in an equal superposition of being in each of the boxes. Then her chance to win can reach @xmath1 if she performs a specific projective measurement after Bob's turn. In this article we present an experimental realization of the AV quantum game using a single photon as the incident particle and a system of three slits in lieu of the boxes. The original three-box paradox was proposed by Aharonov and Vaidman in Ref. @xcite. The quantum game @xcite was conceived much later and exhibits a clear quantum advantage if the game rules are followed with care. The setup, comprising a single-photon source (heralded parametric down-conversion source or attenuated laser), a triple slit @xcite and single-photon detectors, allowed us to perform optimized quantum tomography to characterize the qutrit states and then to play the game correctly. Some quantum communication protocols that can be considered as quantum games, such as coin tossing @xcite and the Byzantine agreement @xcite, have been demonstrated previously. However, the AV game is a specific example of a quantum game where one can demonstrate the quantum advantage playing the game with a single particle at a time, in contrast to entanglement-based games @xcite. We begin by introducing the concept of the qutrit encoded in the spatial degrees of freedom of a single photon @xcite. In the next step we discuss the AV quantum game @xcite and its experimental implementation. The presented implementation of the game is conceptually similar to the Young experiment, where multilevel quantum systems can be encoded in the paths of a photon passing through the slits. Recently this type of quantum state encoding has drawn much interest. In particular, Taguchi et al. used a parametric down-conversion source to prepare two-qubit @xcite and two-qutrit @xcite states. In turn, Lima et al. in Ref. @xcite demonstrated seven- and eight-dimensional state encoding. Alternative approaches resort to state encoding in various hybrid ways, such as energy-time @xcite or polarization-orbital angular momentum @xcite. Those implementations were used to perform a Bell test for energy-time entangled qutrits @xcite and to demonstrate the optimal cloning strategy @xcite, respectively. More recently, the noncontextuality of quantum mechanics was tested based on a similar scheme @xcite. The Young-type qutrit is realized using a triple slit and a single-photon source (SPS). The photon's initial spatial mode is Gaussian, with a characteristic diameter much larger than the size of the slits and with the peak intensity coincident with the slit area; see the inset in Fig. [fig:experiment].
Under these conditions one can consider the state of a photon of wavelength @xmath2 at the position @xmath3 to be a plane wave @xmath4 propagating in the direction given by the wave vector @xmath5 of length @xmath6. Moreover, the photon's initial propagation direction is assumed to be paraxial and the distance between the slits larger than their characteristic widths. This allows us to approximate the phase as constant at each of the slits. Hence, the spatial wave function of the photon passing through the @xmath7th slit can be written as @xmath8, where @xmath9, and @xmath10 stands for the transmission probability amplitude, which we assume is constant on the slit and @xmath11 elsewhere. This means that the total wave function of the photon passing through the three slits comprises three orthogonal contributions. Each of them can be written in the momentum representation as @xcite: @xmath12, where @xmath13, @xmath14 is the slit width and @xmath15 is the distance between the slits. These definitions allow us to write the state of the transmitted photon as @xmath16, which accounts for the basic definition of a Young-type qutrit. Here the amplitudes @xmath17, @xmath18 and @xmath19 depend on the transmission functions @xmath10.

The projective measurements are determined by the laws of propagation and the geometry of the setup. For simplicity, we chose to detect in the positions corresponding to the near and far field. This can be done using a lens and placing a detector in the focal plane (far field) and in the plane where the image of the slits is formed (near field). In the near field, if the active area of the detector is larger than the image of each slit, the probability to detect a photon prepared in the state @xmath20 as defined above, in the position corresponding to the @xmath7th slit image, is proportional to @xmath21. Hence it is easy to see that each of the three positions can be associated with the measurement operator defined as @xmath22, where @xmath23 is a normalization factor to be specified later and the subscript nf stands for near field. The interpretation of measurements in the far field needs more attention. A detection at the position @xmath24 in the focal plane corresponds to the projector onto @xmath25, which is related to the plane wave propagating in the direction given by the transverse wave vector @xmath26. Hence the probability to detect a photon @xmath27 can be seen as proportional to @xmath28, where we introduced @xmath29. Based on this observation we can define the measurement operator in the far field as @xmath30, where @xmath31 is a normalization factor, the phase parameter reads @xmath32, and the subscript ff stands for far field. The measurement operators @xmath33 and @xmath34 can be used to construct a rank-7 positive operator-valued measure (POVM) set allowing one to reconstruct an arbitrary pure state. For this reason we take three near-field measurements @xmath35, @xmath9, and six far-field operators @xmath36 corresponding to @xmath37. This specific choice requires renormalization, which can be done when @xmath38.

[Figure [fig:experiment] caption fragment: ... nm), laser power controller (LPC) and neutral filter (NF). The PDC-SPS (@xmath39 nm) is based on a PPKTP crystal pumped by a blue continuous-wave laser. The heralding photon is detected by detector D3. The single photons from both sources are coupled to single-mode fibers (SMF). A qutrit is prepared using the blocking mask and three slits. The measurement part of the setup comprises a 2-inch-diameter, @xmath40 mm lens (L3), a 2-inch-diameter pellicle beamsplitter (BS), color filters (F) and two detection systems for the far (D1) and near field (D2), each comprising a multimode fiber mounted on a precise motorized stage (Thorlabs ZST13) and a Perkin Elmer avalanche photodiode.]

The ability to encode and measure qutrit states can be utilized to demonstrate Aharon and Vaidman's quantum game @xcite. A classical strategy allows Alice at most a @xmath0 chance to win. On the other hand, when she uses quantum particles her chance rises above this limit and ideally reaches @xmath1, when she chooses her initial state to be @xmath41 in the first turn of the game. In the second turn, assuming Bob has access to slits (boxes) @xmath11 and @xmath42, if he decides to check whether the photon is passing through slit number @xmath11, he performs a projective measurement on the state @xmath43. If he finds the particle, then his state becomes @xmath44, otherwise @xmath45. The photon detection results in losing the photon from the system; otherwise the photon goes through the opened slits. This can be simulated by blocking slit number 0, which allows us to simulate all those cases in which Bob's detector did not click. On the other hand, the case of finding a photon in slit number 0 can be simulated by closing all the others. Next, in the third turn of the game, Alice makes a projective measurement on @xmath46. This can be done by placing the detector D1 in the far-field plane in the position corresponding to the POVM element @xmath47. If Alice detects a particle, she accepts the game trial, and if she does not, she cancels it. Now it is clear that Alice cannot lose: whenever Bob does not detect a photon, the state after the second turn is @xmath48 and Alice's detector never clicks, as @xmath49. If Bob found the particle in slit @xmath11 and tried to leave no trace of that, Alice has a @xmath50 chance to detect it afterwards. The same reasoning holds if Bob chooses slit number @xmath42.

The experimental setup is depicted in Fig. [fig:experiment]. We used two single-photon sources: a heralded parametric down-conversion (PDC) source based on a periodically poled potassium titanyl phosphate (PPKTP) crystal (PDC-SPS) and an attenuated HeNe laser (AL-SPS). In order to fulfill the assumption of plane-wave incidence at the slits, we used single-mode fibers (SMFs) and optics to set the characteristic spatial mode diameter to approximately @xmath51 mm. We control the state of the qutrit by using the blocking mask (B) to change the configuration of opened slits and by slightly tilting the mirror (M) to change the incidence angle @xmath52. Under these simplifying assumptions the experimentally accessible states are of the form @xmath53. The far- and near-field measurements were implemented by photon counting in the transverse planes at distances of @xmath54 and @xmath55 mm, respectively. A large @xmath42-inch pellicle beamsplitter (BS) was introduced to reduce the disturbance of the setup while changing the planes of detection. Each detector system (D1, D2) comprised a multimode fiber mounted on a precise motorized stage and a Perkin Elmer avalanche photodiode. Step motors were used to control the transverse position of the fiber with an accuracy of @xmath56 m. Counts were registered by a field programmable gate array (FPGA) logic system.
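Because the equations above survive only as @xmath placeholders, the following is a hedged LaTeX sketch of the generic Young-type three-slit qutrit state and the near- and far-field detection probabilities it implies; the symbols c_j, d, f and k_x are our own labels for this standard form and are not necessarily the paper's notation.

```latex
% Generic Young-type qutrit (notation is ours, not necessarily the paper's):
% |j> is the state of a photon passing through slit j, c_j its amplitude.
\[
  |\psi\rangle \;=\; c_0\,|0\rangle + c_1\,|1\rangle + c_2\,|2\rangle ,
  \qquad \sum_{j=0}^{2} |c_j|^2 = 1 .
\]
% Near field (image plane): a detector behind the image of slit j projects onto |j>,
\[
  P_{\mathrm{nf}}(j) \;\propto\; |\langle j|\psi\rangle|^2 \;=\; |c_j|^2 .
\]
% Far field (focal plane of a lens with focal length f): detection at transverse
% position x, i.e. transverse wave vector k_x = 2\pi x/(\lambda f), projects onto a
% phased superposition of the slits separated by distance d,
\[
  P_{\mathrm{ff}}(k_x) \;\propto\; \Big| \sum_{j=0}^{2} c_j\, e^{\,i j k_x d} \Big|^2 .
\]
```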
Before simulating the AV game and characterizing the prepared states, the setup was calibrated. This was done by opening all slits and setting the initial direction of photon propagation to @xmath57, which corresponded to preparing the state @xmath58. Next we measured the photon count rates as a function of the detector position in the far- and near-field planes. The results, together with the best fits, are presented as insets in Fig. [fig:experiment]. Blue (online) dots represent experimental data, while the continuous line is a theoretical fit. The positions corresponding to the far- (near-) field part of the POVM are marked with bigger red dots on the inset next to detector D1 (D2). Note that the smoothed shape of the slit image (near field) is attributed to the finite size of the detector.

[Table [tab:counts] caption: Photon count probabilities measured in the positions related to the projective measurements @xmath34 and @xmath33 for three typical states @xmath59, @xmath60, @xmath61.]

We characterized the prepared qutrit by resorting to the POVM set described earlier and quantum state tomography methods. For this reason the photon counts were measured by placing detectors in positions related to the measurement operators @xmath34 and @xmath33. Those positions are marked with red dots on the inset plots in Fig. [fig:experiment]. In order to justify the results of the quantum game we took three typical states: @xmath62, @xmath63, @xmath64. For the first state all slits were open, for the second one slit number 0 was closed, and for the last state the propagation direction of the photon was modified in order to introduce a phase @xmath65. These measurements were done using the AL-SPS; the outcomes are gathered in Table [tab:counts], and the results of tomographic reconstruction using the maximum likelihood method are depicted in Fig. [fig:density]. It is seen in Fig. [fig:density](a, b) that for the states @xmath66 and @xmath67 the real part dominates, as there is no phase present. On the other hand, by changing the initial photon direction @xmath52 it was possible to introduce the phase, as is apparent in Fig. [fig:density](c), where the imaginary bars are significant. Ideally, the imaginary part of the density matrix for states @xmath66 and @xmath67 is zero; here, the nonzero height is attributed to noise originating from dark counts, stray light and imperfect positioning.

[Figure [fig:density] caption fragment: The left (right) column depicts the real (imaginary) part of the reconstructed density matrix. The reconstructed phase related to state @xmath68 was @xmath69.]

For the quantum game the qutrit was prepared in @xmath70, which was characterized before; see Fig. [fig:density](a). We simulated all possible scenarios of Bob's measurement using the PDC-SPS and the AL-SPS. The measured photon counts are presented in the table inset of Fig. [fig:qg]. In the perfect case one expects no counts when two slits are open and Alice sets her detector in the far-field plane in the position related to @xmath47. This corresponds to the first local maximum marked with a red dot on the far-field plot inset in Fig. [fig:experiment]. Here, the measured counts are attributed to the finite size of the multimode fiber core, dark counts and stray light. We estimate that the former two contribute approximately 3 coincidence counts per 2 seconds.
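As context for the maximum likelihood reconstruction mentioned above, here is a hedged LaTeX sketch of the standard likelihood function for POVM-based state tomography; this is the generic textbook form with our own symbols, and the paper's exact estimator may differ.

```latex
% Standard maximum-likelihood state tomography (generic form, notation ours):
% n_k counts were recorded for POVM element \Pi_k; the estimate maximizes
\[
  \mathcal{L}(\rho) \;=\; \prod_k \big[\operatorname{Tr}(\rho\,\Pi_k)\big]^{\,n_k},
  \qquad
  \hat{\rho} \;=\; \arg\max_{\rho \succeq 0,\ \operatorname{Tr}\rho = 1} \; \mathcal{L}(\rho).
\]
```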
Despite the background noise from stray light and dark counts, and the slightly non-ideal properties of Alice's measurements, she won in 87% of the accepted trials using the PDC-SPS and in 82% of the accepted trials using the AL-SPS. Note that the overall efficiency of the game, which is limited by experimental deficiencies including the photon collection, the detection efficiencies and the number of Alice's detectors, will only limit the number of accepted trials, but not the percentage of winning trials.

[Figure [fig:qg] caption fragment: ...s (@xmath71 min, coincidence window @xmath71 ns).]

In conclusion, we experimentally presented a simple way to implement a qutrit system in a single photon's spatial degree of freedom, which allowed us to perform state tomography and simulate the AV quantum game. The encoding part resorted to the Young-type experiment, where a photon passes through three slits, which define its state. By controlling the initial propagation direction of the photon and the configuration of the slits, it was possible to encode a certain class of states. Our state reconstruction technique was based on a small number of measurements over a short period of time, which makes our method stable and time efficient in contrast to Ref.

The authors acknowledge insightful discussions with C. Fuchs, M. Graydon and G. Noel Tabia from the Perimeter Institute and funding from NSERC (CGS, QuantumWorks, Discovery, USRA), the Ontario Ministry of Research and Innovation (ERA program), CIFAR, Industry Canada and the CFI. PK acknowledges fruitful discussions with R. Demkowicz-Dobrzanski from Warsaw University, support by the Foundation for Polish Science TEAM project cofinanced by the EU European Regional Development Fund, and the Mobility Plus project financed by the Polish Ministry of Science and Higher Education. AC acknowledges support by MICINN project no. FIS2008-05596 and the Wenner-Gren Foundation.
The Aharon-Vaidman (AV) game exemplifies the advantage of using simple quantum systems to outperform classical strategies. We present an experimental test of this advantage by using a three-state quantum system (qutrit) encoded in the spatial mode of a single photon passing through three slits. The preparation of a particular state is controlled as the photon propagates through the slits by varying the number of open slits and their respective phases. The measurements are achieved by placing detectors in specific positions in the near and far fields after the slits. This set of tools allowed us to perform tomographic reconstructions of generalized qutrit states and to implement the quantum version of the AV game with compelling evidence of the quantum advantage.
Danish finance minister Kristian Jensen. Photo: Liselotte Sabroe/Ritzau Scanpix

Fox Business Network’s inaccuracy-laden comparison between Denmark and Venezuela reflects the overly politicised nature of media in the United States, Minister of Finance Kristian Jensen said Monday. Jensen spoke to news agency Ritzau after a clip from Fox Business Network, a cable news channel owned by the Fox News Group, was widely shared online on Monday. Fox Business is the most-watched business news station in the United States. In the clip, presenter Trish Regan says Danes do not want to work or finish their studies, cites incorrect employment and education figures, and simultaneously accuses Denmark and Venezuela of having “stripped people of their opportunities.” “(The Fox report) is completely exaggerated and lacking in credibility. It simply reflects how politicised news coverage in the United States has become,” Jensen said. “Completely detached from fact, they are using reports to make incorrect claims,” the minister added. Jensen also responded to Regan’s report via Twitter on Monday in a post which received widespread support from Danes. “So Danes don't wants [sic.] to work? 11 places better than US in OECD statistics! We are working much more than Americans and at the same time ranking as the world's best in Work-Life-Balance. You should come to Denmark if you dare be confronted with facts,” the finance minister tweeted to Regan. So Danes don’t wants to work? 11 places better than US in OECD statistics! We are working much more than Americans and at the same time ranking as the worlds best in Work-Life-Balance. You should come to Denmark if you dare be confronted with facts @trish_regan 😀🇩🇰 pic.twitter.com/hFf9ysy62x — Kristian Jensen (@Kristian_Jensen) August 13, 2018 “Excuse my direct language, but she must simply not be allowed to piss all over Denmark [Danish: hun skal simpelt hen ikke pisse på Danmark, ed.],” he told Ritzau on Monday evening. “I will not accept something so despicably wrong being reported,” he added. The minister also pointed out the absurdity of suggesting similarity between governments in Denmark and Venezuela. “Venezuela is a socialist dictatorship. Denmark is a free, regulated market economy. We are, by all statistics, one of the most open, free economies,” he said. “Yes, we have high taxes. People pay a lot of tax because we get a lot back from public finances,” he added. Jensen also said reporting with no basis in fact was a significant democratic problem. “The United States has lost sight of the facts in its political journalism.
That is very, very dangerous – if you no longer maintain journalism should be based on facts, then we are losing something on which our democracy is built,” he said. “Everyone is entitled to their own opinions, but no-one is entitled to their own facts,” the minister said. READ ALSO: More Danes applied for study programmes with best job prospects: ministry ||||| In spite of the fierce pushback from the government and the public, Ms. Regan’s statement did evoke some agreement in Denmark. While the mention of Venezuela — which is suffering widespread starvation and hyperinflation — won her few friends, the argument that high taxes take a toll on enterprise was another matter. Ms. Regan made an interesting case about the incentive structure of Danish society, said Anders Krab-Johansen, the publisher and chief executive of Berlingske Media, one of Denmark’s largest media groups. “The reward of making an extra effort for oneself or for society is very diminished compared to other countries,” said Mr. Krab-Johansen, who attended college in the United States. Following a Danish model in the United States, he added, would affect “what I think Americans like best about their country — the high level of individual freedom.” On Wednesday, Ms. Regan returned briefly to the subject of Denmark on the air, to offer a clarification. “I was never implying that conditions in Denmark were similar in any way to the current tragedy on the ground there in Venezuela,” she said, adding that she had merely cited evidence to show that “socialism is not the way.” That statement pleased the finance minister, who returned to Twitter to thank Danes who had objected to her earlier comments. ||||| Danish politician Dan Jørgensen explains why Denmark is not like Venezuela. Youtube / Birgitte Roel “As Shakespeare said, there’s something rotten in [the state of] Denmark,” said Fox News news anchor Trish Regan on a segment last week, after blaming the crisis in Venezuela on socialism. She went on to use ‘facts’ to paint a picture of how Denmark is actually another example of a socialist dystopia, seemingly implying that its economy was headed the same way as Venezuela. Ad “Denmark, like Venzuela, has stripped people of their opportunities. Is that the direction we want to go in?” she finished, adding the warning that Bernie Sanders and ‘this woman from Queens’ seem to like socialism. Confused as to whether Fox News is a comedy network, outraged but helpful Danes decided to aid Trish Regan in getting the facts straight. Ad Denmark’s Minister for Finance took to Twitter: So Danes don’t wants to work? 11 places better than US in OECD statistics! We are working much more than Americans and at the same time ranking as the worlds best in Work-Life-Balance. You should come to Denmark if you dare be confronted with facts @trish_regan 😀🇩🇰 pic.twitter.com/hFf9ysy62x — Kristian Jensen (@Kristian_Jensen) August 13, 2018 Denmark’s ambassador to the US deplored the lack of cupcake cafés in Denmark. Dear @trish_regan. We did some quick research on #Denmark’s global rankings (https://t.co/IYtJ4jU2th). Useful context to your story abt state of affairs in my country. Go see 🇩🇰 for yourself (we would love to assist) although lack of cupcake cafés probl will be disappointing 😊 pic.twitter.com/BwU1wukSsV — Lars Gert Lose 🇩🇰 (@DKambUSA) August 13, 2018 The former Minister for Food, Agriculture and Fisheries made a video published on Youtube with the title, “Something is rotten at Fox News”. 
The criticism and the sheer implausibility of the claims in the Fox Business segment seem to have swayed Trish Regan, if only slightly. She issued a clarification saying she had not intended to compare Denmark with Venezuela.
– Insult the Danes, and expect the Danes to return fire. Fox Business Network host Trish Regan learned this after she held up the nation as an example of the dangers of socialism with the most unflattering of comparisons, reports the New York Times. "Denmark, like Venezuela, has stripped people of their opportunities," said Regan. "Nobody is incentivized to do anything because they’re not going to be rewarded." That set off a social media storm in the nation, with Finance Minister Kristian Jensen leading the way with a tweet showing how Denmark ranks 11 places ahead of the US in employment stats. "We are working much more than Americans and at the same time ranking as the world's best in Work-Life-Balance," he adds. "You should come to Denmark if you dare be confronted with facts." As for the big comparison, "Venezuela is a socialist dictatorship. Denmark is a free, regulated market economy," he said, per the Local. Other government officials chimed in, and a former government minister issued a rebuttal in this video, reports Business Insider. In a followup video, Regan clarified: “I was never implying that conditions in Denmark were similar in any way to the current tragedy on the ground there in Venezuela," she said. "I was merely pointing out ... that socialism is not the way." Jensen, for his part, took that as a victory. "Thanks to all who helped put pressure on Trish and Fox to have them correct their claim," he wrote, with translation from the Times. In Regan's defense, a Danish media exec says taxes in Denmark are too high, and thus "the reward of making an extra effort for oneself or for society is very diminished compared to other countries."
tracheostomy has been used widely in prolonged respiratory support for pediatric patients with neuromuscular or respiratory dysfunction , or airway obstructions resulting from anatomical abnormalities and subglottic stenosis12 ) . however , tracheostomy is associated with some complications especially in children3 ) . tracheoinnominate artery fistula ( tif ) is an uncommon and potentially fatal complication of tracheostomy and is associated with high morbidity and mortality , therefore prompt diagnosis and treatment are essential4 ) . although tif is primarily treated surgically with the ligation of the innominate artery and repair of the tif , the survival rate after surgical treatment of tif has been reported to be 25%50%7 ) . endovascular techniques have significantly advanced , providing physicians with an alternative treatment to a traditional open - surgical treatment in selected cases . we report a case of tif in a 14-year - old boy that was successfully treated by the endovascular stent grafting of the innominate artery . the patient was delivered via cesarean section at gestational age 33 weeks with birth weight of 1,660 g. he had a history of severe periventricular leukomalacia , cystic encephalomalacia , and hydrocephalus with a ventriculoperitoneal shunt . he was diagnosed with aspiration pneumonia and admitted to the intensive care unit . in the last 6 months , he had developed aspiration pneumonia three times . he needed prolonged ventilator care because of the frequent bouts of aspiration pneumonia and severe inspiratory stridor . tracheostomy operation was performed on the 7th hospital day . on the 6th day after the tracheostomy , a small amount of fresh blood was found in the tracheostomy tube , and on the 8th day , the tracheostomy tube was removed accidentally by the patient 's coughing . his blood pressure dropped below 70/30 mmhg , heart rate 60 beats / min , and oxygen saturation 40%50% . endotracheal intubation was performed , and the hemorrhage was controlled temporarily by the hyperinflation of the cuff of the endotracheal tube . laboratory tests revealed markedly decreased hemoglobin levels but there was no feature suggestive of disseminated intravascular coagulation ( white blood cell , 16,540/mm ; hemoglobin , 5.5 g / dl ; hematocrit , 16.1% ; platelet count , 158,000/l ; prothrombin time , 1.19 international normalized ratio ; activated partial thromboplastin time , 40.4 seconds ; fibrinogen , 254 mg / dl ; d - dimer , 0.98 mg / l ) . he received transfusions of packed red blood cell and inotropics , such as dopamine and dobutamine . contrast - enhanced thoracic computed tomography ( ct ) demonstrated that the innominate artery was abutting on the adjacent tracheostomy tube ( fig . 1 ) . suspecting tif , the patient was considered to be at high risk for open surgery . emergency angiography was performed , and an 8-fr sheath was inserted into the right femoral artery . right innominate arteriogram showed that the endotracheal tube was close to the innominate artery , and the undulation of contrast medium around the innominate arterial wall revealed injuries to the arteries ( fig . the diameter and length of the innominate artery were 7.3 mm and 48 mm , respectively , on contrast enhanced ct . the 612 mm/58 mm jo - stent - graft ( jostent , abbott vascular ltd . , rangendingen , germany ) was selected to match the measured innominate artery diameter . the jo - stent - graft mounted on the 6/80 mm balloon catheter ( foxcross pta catheter , abbott vascular ltd . 
, baar , switzerland ) was placed through the innominate artery from the right common carotid artery . the balloon was inflated with the 8/40 mm balloon ( foxcross pta catheter , abbott vascular ltd . , baar , switzerland ) for attaching the innominate artery and jo - stent - graft . because the injured artery was adjacent to the bifurcation of the right carotid artery and the right subclavian artery , the stent graft was deployed through the innominate artery from the right common carotid artery ( fig . 3 ) . the ruptured vessel wall was sealed and reconstructed by inserting a jo - stent - graft . angiography performed after the stent placement demonstrated no extravasation of contrast medium or occlusion of the innominate artery . the right subclavian artery was covered and occluded by the stent graft , and it was refilled with significant retrograde flow from right vertebral artery , indicating subclavian steal ( fig . the patient recovered without any complications , such as sepsis , local infection , neurologic and right upper limb deficits of the subclavian steal syndrome . twenty - four months after the endovascular repair , no ischemic complications had developed , and the patient had no clinical symptoms such as hemoptysis . tif is a rare but usually life - threatening complication of tracheostomy , and can be fatal in the absence of prompt management such as a surgical operation . the reported incidence is 0.1%1% , and it usually develops 714 days after tracheostomy . however , tif can occur at any time after the procedure56 ) . the innominate artery commonly traverses the trachea at the level of the 9th tracheal ring , but the cross can occur anywhere between the 6th and 13th tracheal rings6 ) . a tracheostomy is performed at the level between the 2nd and 4th tracheal rings in adults and at the 3rd and 4th tracheal rings in children9 ) . a high - located innominate artery , especially in thin and young patients is a possible risk factor for fistula formation4 ) . the mechanism of injury is likely to be pressure necrosis of the anterior tracheal wall leading to erosion of the posterior aspect of the innominate artery caused by the tracheostomy cuff10 ) . this is due to the mechanical force generated by the cuff or tip of the tracheostomy tube . the necrosis of the trachea can occur by a low - lying tracheostomy tube below the 3rd to 4rd trachea rings , overinflated cuffs , high - riding innominate artery , local infections by the tracheal defects , prolonged tracheostomy , or neck and chest deformities41112 ) . in our case , the tracheostomy site was located below the 4th tracheal ring , and the innominate artery was located at midtrachea . contrast - enhanced thoracic ct showed that the tracheostomy tube was adjacent to the innominate artery ( fig . 1 ) . early diagnosis is the most important factor in the successful treatment of tif . when unexpected bleeding occurs around a tracheostomy site between 3 days and 6 weeks following the tracheostomy , the clinician should consider that the bleeding could be associated with tif . sentinel bleeding is reported in more than 50% of patients who develop massive hemorrhage in tif12 ) . thus , even a small amount of fresh bleeding should not be ignored . in our patient , a small amount of fresh blood was aspirated while suctioning through the tracheostomy site , but it was ignored . a few days later , acute massive bleeding occurred , and the patient developed potentially fatal hypovolemic shock . 
if this maneuver fails to stop the bleeding , direct digital compression can be applied to the anterior tracheal wall through the stoma6 ) . in pediatric patients , a practical pediatric strategy is the inflation of a tracheostomy or endotracheal tube cuff13 ) . the main treatment of tif is a surgical procedure with ligation of the innominate artery and bypass grafting of the distal innominate artery14 ) . the mortality rate for emergency surgery has been reported to be more than 50%7 ) . recently , endovascular procedures involving stent graft implantation or endovascular embolization have significantly improved , and can be safely performed as an alternative to the traditional open surgical method in selected circumstances15 ) . cavalcante et al.10 ) described in 2015 the case of tif successfully treated with a stent graft in a 4-year - old boy but first endovascular procedure ( coil embolization ) was not successful . although endovascular stenting does not guarantee the complete elimination of tif and it has some complications of graft infection , graft occlusion , and rebleeding due to tracheal erosion , it may be less invasive and more expeditious than surgical exploration7 ) . we decided to perform an endovascular treatment because open surgery was too dangerous due to several existing comorbidities . endovascular stent grafting was done instead of transarterial embolization of the innominate artery because the entire occlusion of the innominate artery carries the high risk of cerebrovascular insufficiency . there were no complications such as stent graft infection , occlusion , or rebleeding for 2 years . therefore , endovascular graft stenting of the innominate artery can be feasible , safe , and less invasive for the treatment of tif in pediatric cases .
Tracheoinnominate artery fistula is a rare, fatal complication of tracheostomy, and prompt diagnosis and management are imperative. We report the case of tracheoinnominate artery fistula after tracheostomy in a 14-year-old boy with a history of severe periventricular leukomalacia, hydrocephalus, cerebral palsy, and epilepsy. The tracheoinnominate artery fistula was successfully treated with a stent graft insertion via the right common femoral artery. Endovascular repair of the tracheoinnominate artery fistula via stent grafting is a safe, effective, and minimally invasive treatment for patients in poor clinical conditions and is an alternative to traditional open surgical treatment.
Oklahoma 12-year-old dies from B.B. gun shot to the head By David Ferguson Wednesday, August 20, 2014 8:24 EDT A 12-year-old boy in Bache, Oklahoma, was killed during a weekend sleepover when his best friend shot him in the head with a B.B. gun. Pittsburg County Sheriff Joel Kerns told Tulsa's Channel 6 News that the accidental killing took place on Friday night. "This young man was actually his best friend," said Kerns. "They wasn't arguing, they wasn't fighting. We think it's just horseplay." The boy who fired the shot told authorities that he thought the gun was unloaded. The victim's mother rushed the boy to McAlester Regional Medical Center. Kerns explained that she felt she could transport him to the hospital faster than an ambulance could reach the house, which he described as being in a "semi-remote area." On Saturday, the victim was transferred to a hospital in Tulsa, where he died of head trauma. Kerns said that such deaths are uncommon with B.B. and pellet guns, but occasionally they occur. "It's not unusual to lose an eye, or something of that effect with a B.B. gun, but you don't think of death when you think of pellet or B.B. guns," he said. "It just tears you up, and there's another person always affected by this, it ain't just the victim and his family, this juvenile has got to live with this the rest of his life," the sheriff said. The incident, Kerns told Channel 6, underscores the need for anyone who handles a gun, even a pellet or B.B. gun, to take a gun safety course.
– A night of fun between two best friends at a sleepover turned deadly when one allegedly accidentally shot the other in the head with a pellet gun, McAlester News-Capital reports. Justin Ingle, 12, died Saturday at a Tulsa, Okla., hospital of head trauma, reports the Raw Story. The incident happened Friday night in the town of Bache. Police say the boy was hit in the temple from only a few feet away by a pellet traveling at 1,250 feet per second, according to KOTV. "This juvenile has got to live with this the rest of his life," the sheriff says, referring to the boy—not being named due to his age—who allegedly fired the shot. He told authorities he didn't think the pellet gun was loaded. An investigation is ongoing, but so far, authorities believe the shooting was accidental—"horseplay," as the sheriff calls it. "You don’t think of death when you think of pellet or BB guns," the sheriff says, but "this is not your Red Rider standard gun." In the past, it was "not unusual to lose an eye, or something of that effect with a BB gun," but these days, new models are much more powerful and can kill, though such deaths are uncommon, he explains. (Earlier this year, a 7-year-old was accidentally shot and killed by a 5-year-old who was looking for his toy gun.)
SECTION 1. SHORT TITLE; TABLE OF CONTENTS. (a) Short Title.--This Act may be cited as the ``Omnibus Trade Act of 2010''. (b) Table of Contents.--The table of contents for this Act is as follows: Sec. 1. Short title; table of contents. TITLE I--EXTENSION OF TRADE ADJUSTMENT ASSISTANCE AND HEALTH COVERAGE IMPROVEMENT Subtitle A--Extension of Trade Adjustment Assistance Sec. 101. Extension of trade adjustment assistance. Sec. 102. Merit staffing for State administration of trade adjustment assistance. Subtitle B--Health Coverage Improvement Sec. 111. Improvement of the affordability of the credit. Sec. 112. Payment for the monthly premiums paid prior to commencement of the advance payments of credit. Sec. 113. TAA recipients not enrolled in training programs eligible for credit. Sec. 114. TAA pre-certification period rule for purposes of determining whether there is a 63-day lapse in creditable coverage. Sec. 115. Continued qualification of family members after certain events. Sec. 116. Extension of COBRA benefits for certain TAA-eligible individuals and PBGC recipients. Sec. 117. Addition of coverage through voluntary employees' beneficiary associations. Sec. 118. Notice requirements. TITLE II--ANDEAN TRADE PREFERENCES ACT Sec. 201. Extension of Andean Trade Preference Act. TITLE III--OFFSETS Sec. 301. Customs user fees. Sec. 302. Time for payment of corporate estimated taxes. TITLE IV--BUDGETARY EFFECTS Sec. 401. Compliance with PAYGO. TITLE I--EXTENSION OF TRADE ADJUSTMENT ASSISTANCE AND HEALTH COVERAGE IMPROVEMENT Subtitle A--Extension of Trade Adjustment Assistance SEC. 101. EXTENSION OF TRADE ADJUSTMENT ASSISTANCE. (a) In General.--Section 1893(a) of the Trade and Globalization Adjustment Assistance Act of 2009 (Public Law 111-5; 123 Stat. 422) is amended by striking ``January 1, 2011'' each place it appears and inserting ``February 13, 2011''. (b) Application of Prior Law.--Section 1893(b) of the Trade and Globalization Adjustment Assistance Act of 2009 (Public Law 111-5; 123 Stat. 422 (19 U.S.C. 2271 note prec.)) is amended to read as follows: ``(b) Application of Prior Law.--Chapters 2, 3, 4, 5, and 6 of title II of the Trade Act of 1974 (19 U.S.C. 2271 et seq.)
shall be applied and administered beginning February 13, 2011, as if the amendments made by this subtitle (other than part VI) had never been enacted, except that in applying and administering such chapters-- ``(1) section 245 of that Act shall be applied and administered by substituting `February 12, 2012' for `December 31, 2007'; ``(2) section 246(b)(1) of that Act shall be applied and administered by substituting `February 12, 2012' for `the date that is 5 years' and all that follows through `State'; ``(3) section 256(b) of that Act shall be applied and administered by substituting `the 1-year period beginning February 13, 2011, and ending February 12, 2012,' for `each of fiscal years 2003 through 2007, and $4,000,000 for the 3-month period beginning on October 1, 2007,'; ``(4) section 298(a) of that Act shall be applied and administered by substituting `the 1-year period beginning February 13, 2011, and ending February 12, 2012,' for `each of the fiscal years' and all that follows through `October 1, 2007'; and ``(5) subject to subsection (a)(2), section 285 of that Act shall be applied and administered-- ``(A) in subsection (a), by substituting `February 12, 2011' for `December 31, 2007' each place it appears; and ``(B) by applying and administering subsection (b) as if it read as follows: ```(b) Other Assistance.-- ```(1) Assistance for firms.-- ```(A) In general.--Except as provided in subparagraph (B), assistance may not be provided under chapter 3 after February 12, 2012. ```(B) Exception.--Notwithstanding subparagraph (A), any assistance approved under chapter 3 on or before February 12, 2012, may be provided-- ```(i) to the extent funds are available pursuant to such chapter for such purpose; and ```(ii) to the extent the recipient of the assistance is otherwise eligible to receive such assistance. ```(2) Farmers.-- ```(A) In general.--Except as provided in subparagraph (B), assistance may not be provided under chapter 6 after February 12, 2012. ```(B) Exception.--Notwithstanding subparagraph (A), any assistance approved under chapter 6 on or before February 12, 2012, may be provided-- ```(i) to the extent funds are available pursuant to such chapter for such purpose; and ```(ii) to the extent the recipient of the assistance is otherwise eligible to receive such assistance.'.''. (c) Conforming Amendments.-- (1) Section 236(a)(2)(A) of the Trade Act of 1974 (19 U.S.C. 2296(a)(2)(A)) is amended to read as follows: ``(2)(A) The total amount of payments that may be made under paragraph (1) shall not exceed-- ``(i) $575,000,000 for fiscal year 2010; and ``(ii) $66,500,000 for the 6-week period beginning January 1, 2011, and ending February 12, 2011.''. (2) Section 245(a) of the Trade Act of 1974 (19 U.S.C. 2317(a)) is amended by striking ``December 31, 2010'' and inserting ``February 12, 2011''. (3) Section 246(b)(1) of the Trade Act of 1974 (19 U.S.C. 2318(b)(1)) is amended by striking ``December 31, 2010'' and inserting ``February 12, 2011''. (4) Section 255(a) of the Trade Act of 1974 (19 U.S.C. 2345(a)) is amended-- (A) in the first sentence to read as follows: ``There are authorized to be appropriated to the Secretary to carry out the provisions of this chapter $50,000,000 for fiscal year 2010 and $5,800,000 for the 6-week period beginning January 1, 2011, and ending February 12, 2011.''; and (B) in paragraph (1), by striking ``December 31, 2010'' and inserting ``February 12, 2011''. (5) Section 275(f) of the Trade Act of 1974 (19 U.S.C. 
2371d(f)) is amended by striking ``2011'' and inserting ``and annually thereafter''. (6) Section 276(c)(2) of the Trade Act of 1974 (19 U.S.C. 2371e(c)(2)) is amended to read as follows: ``(2) Funds to be used.--Of the funds appropriated pursuant to section 277(c), the Secretary may make available, to provide grants to eligible communities under paragraph (1), not more than-- ``(A) $25,000,000 for fiscal year 2010; and ``(B) $2,900,000 for the 6-week period beginning January 1, 2011, and ending February 12, 2011.''. (7) Section 277(c) of the Trade Act of 1974 (19 U.S.C. 2371f(c)) is amended-- (A) by amending paragraph (1) to read as follows: ``(1) In general.--There are authorized to be appropriated to the Secretary to carry out this subchapter-- ``(A) $150,000,000 for fiscal year 2010; and ``(B) $17,3000 for the 6-week period beginning January 1, 2011 and ending February 12, 2011.''; and (B) in paragraph (2)(A), by striking ``December 31, 2010'' and inserting ``February 12, 2011''. (8) Section 278(e) of the Trade Act of 1974 (19 U.S.C. 2372(e)) is amended by striking ``2011'' and inserting ``and annually thereafter''. (9) Section 279A(h)(2) of the Trade Act of 1974 (19 U.S.C. 2373(h)(2)) is amended by striking ``2011'' and inserting ``and annually thereafter''. (10) Section 279B(a) of the Trade Act of 1974 (19 U.S.C. 2373a(a)) is amended to read as follows: ``(a) In General.-- ``(1) Authorization.--There are authorized to be appropriated to the Secretary of Labor to carry out the Sector Partnership Grant program under section 279A-- ``(A) $40,000,000 for fiscal year 2010; and ``(B) $4,600,000 for the 6-week period beginning January 1, 2011, and ending February 12, 2011. ``(2) Availability of appropriations.--Funds appropriated pursuant to this section shall remain available until expended.''. (11) Section 285 of the Trade Act of 1974 (19 U.S.C. 2271 note) is amended-- (A) by striking ``December 31, 2010'' each place it appears and inserting ``February 12, 2011''; and (B) in subsection (a)(2)(A), by inserting ``pursuant to petitions filed under section 221 before February 12, 2011'' after ``title''. (12) Section 298(a) of the Trade Act of 1974 (19 U.S.C. 2401g(a)) is amended by striking ``$90,000,000 for each of the fiscal years 2009 and 2010, and $22,500,000 for the period beginning October 1, 2010, and ending December 31, 2010'' and inserting ``$10,400,000 for the 6-week period beginning January 1, 2011, and ending February 12, 2011''. (13) The table of contents for the Trade Act of 1974 is amended by striking the item relating to section 235 and inserting the following: ``Sec. 235. Employment and case management services.''. (d) Effective Date.--The amendments made by this section shall take effect on January 1, 2011. SEC. 102. MERIT STAFFING FOR STATE ADMINISTRATION OF TRADE ADJUSTMENT ASSISTANCE. (a) In General.--Notwithstanding section 618.890(b) of title 20, Code of Federal Regulations, or any other provision of law, the single transition deadline for implementing the merit-based State personnel staffing requirements contained in section 618.890(a) of title 20, Code of Federal Regulations, shall not be earlier than February 12, 2011. (b) Effective Date.--This section shall take effect on December 14, 2010. Subtitle B--Health Coverage Improvement SEC. 111. IMPROVEMENT OF THE AFFORDABILITY OF THE CREDIT. (a) In General.--Section 35(a) of the Internal Revenue Code of 1986 is amended by striking ``January 1, 2011'' and inserting ``February 13, 2011''. 
(b) Conforming Amendment.--Section 7527(b) of such Code is amended by striking ``January 1, 2011'' and inserting ``February 13, 2011''. (c) Effective Date.--The amendments made by this section shall apply to coverage months beginning after December 31, 2010. SEC. 112. PAYMENT FOR THE MONTHLY PREMIUMS PAID PRIOR TO COMMENCEMENT OF THE ADVANCE PAYMENTS OF CREDIT. (a) In General.--Section 7527(e) of the Internal Revenue Code of 1986 is amended by striking ``January 1, 2011'' and inserting ``February 13, 2011''. (b) Effective Date.--The amendment made by this section shall apply to coverage months beginning after December 31, 2010. SEC. 113. TAA RECIPIENTS NOT ENROLLED IN TRAINING PROGRAMS ELIGIBLE FOR CREDIT. (a) In General.--Section 35(c)(2)(B) of the Internal Revenue Code of 1986 is amended by striking ``January 1, 2011'' and inserting ``February 13, 2011''. (b) Effective Date.--The amendment made by this section shall apply to coverage months beginning after December 31, 2010. SEC. 114. TAA PRE-CERTIFICATION PERIOD RULE FOR PURPOSES OF DETERMINING WHETHER THERE IS A 63-DAY LAPSE IN CREDITABLE COVERAGE. (a) IRC Amendment.--Section 9801(c)(2)(D) of the Internal Revenue Code of 1986 is amended by striking ``January 1, 2011'' and inserting ``February 13, 2011''. (b) ERISA Amendment.--Section 701(c)(2)(C) of the Employee Retirement Income Security Act of 1974 (29 U.S.C. 1181(c)(2)(C)) is amended by striking ``January 1, 2011'' and inserting ``February 13, 2011''. (c) PHSA Amendment.--Section 2701(c)(2)(C) of the Public Health Service Act (as in effect for plan years beginning before January 1, 2014) is amended by striking ``January 1, 2011'' and inserting ``February 13, 2011''. (d) Effective Date.--The amendments made by this section shall apply to plan years beginning after December 31, 2010. SEC. 115. CONTINUED QUALIFICATION OF FAMILY MEMBERS AFTER CERTAIN EVENTS. (a) In General.--Section 35(g)(9) of the Internal Revenue Code of 1986, as added by section 1899E(a) of the American Recovery and Reinvestment Tax Act of 2009 (relating to continued qualification of family members after certain events), is amended by striking ``January 1, 2011'' and inserting ``February 13, 2011''. (b) Conforming Amendment.--Section 173(f)(8) of the Workforce Investment Act of 1998 (29 U.S.C. 2918(f)(8)) is amended by striking ``January 1, 2011'' and inserting ``February 13, 2011''. (c) Effective Date.--The amendments made by this section shall apply to months beginning after December 31, 2010. SEC. 116. EXTENSION OF COBRA BENEFITS FOR CERTAIN TAA-ELIGIBLE INDIVIDUALS AND PBGC RECIPIENTS. (a) ERISA Amendments.-- (1) PBGC recipients.--Section 602(2)(A)(v) of the Employee Retirement Income Security Act of 1974 (29 U.S.C. 1162(2)(A)(v)) is amended by striking ``December 31, 2010'' and inserting ``February 12, 2011''. (2) TAA-eligible individuals.--Section 602(2)(A)(vi) of such Act (29 U.S.C. 1162(2)(A)(vi)) is amended by striking ``December 31, 2010'' and inserting ``February 12, 2011''. (b) IRC Amendments.-- (1) PBGC recipients.--Section 4980B(f)(2)(B)(i)(V) of the Internal Revenue Code of 1986 is amended by striking ``December 31, 2010'' and inserting ``February 12, 2011''. (2) TAA-eligible individuals.--Section 4980B(f)(2)(B)(i)(VI) of such Code is amended by striking ``December 31, 2010'' and inserting ``February 12, 2011''. (c) PHSA Amendments.--Section 2202(2)(A)(iv) of the Public Health Service Act (42 U.S.C. 300bb-2(2)(A)(iv)) is amended by striking ``December 31, 2010'' and inserting ``February 12, 2011''. 
(d) Effective Date.--The amendments made by this section shall apply to periods of coverage which would (without regard to the amendments made by this section) end on or after December 31, 2010. SEC. 117. ADDITION OF COVERAGE THROUGH VOLUNTARY EMPLOYEES' BENEFICIARY ASSOCIATIONS. (a) In General.--Section 35(e)(1)(K) of the Internal Revenue Code of 1986 is amended by striking ``January 1, 2011'' and inserting ``February 13, 2012''. (b) Effective Date.--The amendment made by this section shall apply to coverage months beginning after December 31, 2010. SEC. 118. NOTICE REQUIREMENTS. (a) In General.--Section 7527(d)(2) of the Internal Revenue Code of 1986 is amended by striking ``January 1, 2011'' and inserting ``February 13, 2011''. (b) Effective Date.--The amendment made by this section shall apply to certificates issued after December 31, 2010. TITLE II--ANDEAN TRADE PREFERENCES ACT SEC. 201. EXTENSION OF ANDEAN TRADE PREFERENCE ACT. (a) Extension.--Section 208(a)(1) of the Andean Trade Preference Act (19 U.S.C. 3206(a)(1)) is amended to read as follows: ``(1) remain in effect-- ``(A) with respect to Colombia after February 12, 2011; and ``(B) with respect to Peru after December 31, 2010;''. (b) Ecuador.--Section 208(a)(2) of the Andean Trade Preference Act (19 U.S.C. 3206(a)(2)) is amended by striking ``December 31, 2010'' and inserting ``February 12, 2011''. (c) Treatment of Certain Apparel Articles.--Section 204(b)(3)(E)(ii)(II) of the Andean Trade Preference Act (19 U.S.C. 3203(b)(3)) is amended (ii)(II), by striking ``December 31, 2010'' and inserting ``February 12, 2011''. (d) Annual Report.--Section 203(f)(1) of the Andean Trade Preference Act (19 U.S.C. 3202(f)(1)) is amended by striking ``every 2 years'' and inserting ``annually''. TITLE III--OFFSETS SEC. 301. CUSTOMS USER FEES. Section 13031(j)(3) of the Consolidated Omnibus Budget Reconciliation Act of 1985 (19 U.S.C. 58c(j)(3)) is amended-- (1) in subparagraph (A), by striking ``September 30, 2019'' and inserting ``January 7, 2020''; and (2) in subparagraph (B)(i), by striking ``September 30, 2019'' and inserting ``January 14, 2020''. SEC. 302. TIME FOR PAYMENT OF CORPORATE ESTIMATED TAXES. The percentage under paragraph (2) of section 561 of the Hiring Incentives to Restore Employment Act in effect on the date of the enactment of this Act is increased by 4.5 percentage points. TITLE IV--BUDGETARY EFFECTS SEC. 401. COMPLIANCE WITH PAYGO. The budgetary effects of this Act, for the purpose of complying with the Statutory Pay-As-You-Go Act of 2010, shall be determined by reference to the latest statement titled ``Budgetary Effects of PAYGO Legislation'' for this Act, submitted for printing in the Congressional Record by the Chairman of the Senate Budget Committee, provided that such statement has been submitted prior to the vote on passage. Speaker of the House of Representatives. Vice President of the United States and President of the Senate.
Omnibus Trade Act of 2010 - Title I: Extension of Trade Adjustment Assistance and Health Coverage Improvement - Subtitle A: Extension of Trade Adjustment Assistance - (Sec. 101) Amends the Trade and Globalization Adjustment Assistance Act of 2009 to extend trade adjustment assistance (TAA) programs through February 12, 2011. Requires funding levels for TAA programs under prior law to apply beginning February 13, 2011, as if the amendments made by this Act had never been enacted. Amends the Trade Act of 1974, however, to authorize appropriations through February 12, 2012 for: (1) TAA programs for workers, firms, and farmers; and (2) alternative TAA for older workers. Limits for FY2010, and the six-week period January 1, 2011-February 12, 2011, the total amount of payments that may be made by the Secretary of Labor for training assistance for workers. Authorizes appropriations for the reemployment TAA program through February 12, 2011. Authorizes the Secretary for FY2010, and the six-week period January 1, 2011- February 12, 2011, to make certain TAA funds available for grants to assist eligible communities to develop strategic plans for their economic adjustment to the impact of trade . Extends permanently certain reporting requirements under: (1) the Community College and Career Training Grant program, and (2) the Industry or Sector Partnership Grant program for communities impacted by trade. Authorizes appropriations to the Secretary for FY2010, and the six-week period January 1, 2011-February 12, 2011, to carry out the Sector Partnership Grant program. (Sec. 102) Extends the single transition deadline for implementing certain merit-based personnel staffing requirements for state administration of TAA to a date not earlier than February 12, 2011. Subtitle B: Health Coverage Improvement - (Sec. 111) Amends the Internal Revenue Code (IRC) to extend through February 12, 2011, the 80% tax credit for health insurance costs (including advance payments) for TAA (as well as Pension Benefit Guaranty Corporation [PBGC] pension) recipients. (Sec. 113) Makes TAA recipients who are in a break in training under a training program, or who are receiving unemployment compensation, eligible for such tax credit for the period through February 12, 2011. (Sec. 114) Amends the IRC, the Employee Retirement Income Security Act of 1974 (ERISA), and the Public Health Service Act (PHSA) to extend through February 12, 2011, the TAA pre-certification period rule disregarding any 63-day lapse in creditable health care coverage for TAA workers. (Sec. 115) Extends the continued eligibility for the credit for qualifying family members and certain qualified TAA-eligible individuals and PBGC pension recipients for COBRA premium assistance through February 12, 2011. (Sec. 117) Extends through February 12, 2011, coverage under an employee benefit plan funded by a voluntary employees' beneficiary association established pursuant to an order of a bankruptcy court, or by agreement with an authorized representative. Title II: Andean Trade Preferences Act - (Sec. 201) Amends the Andean Trade Preference Act (ATPA), as amended and expanded by Andean Trade Promotion and Drug Eradication Act (ATPDEA), to extend duty-free treatment or other preferential treatment of the products of Colombia and Ecuador through February 12, 2011. 
Extends through FY2012 preferential treatment for apparel articles assembled in one or more ATPDEA beneficiary countries from regional fabrics or regional components, and specified other type apparel (brassieres). Title III: Offsets - (Sec. 301) Amends the Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA) to extend certain customs users fees for the processing of merchandise entered into the United States through January 7, 2020, and other specified customs users fees through January 14, 2020. (Sec. 302) Amends the Hiring Incentives to Restore Employment Act to increase required estimated tax payments of corporations with at least $1 billion in assets in the third quarter of 2015 by 4.5% to 126.0% of such amount. Title IV: Budgetary Effects - (Sec. 401) Declares that the budgetary effects of this Act, for the purpose of complying with the Statutory Pay-As-You-Go Act of 2010, shall be determined by reference to the latest statement titled "Budgetary Effects of PAYGO Legislation" for this Act, provided that such statement has been submitted before the vote on passage.
(CNN) The Boy Scouts of America announced Monday that it's lifting the ban on gay adults as Scout leaders. "On Monday July 27 the national executive board ratified a resolution removing the national restriction on openly gay leaders and employees," Boy Scouts of America President Robert Gates said in a video statement on Monday. The move has been in the works for weeks. This month, the organization's executive committee adopted a resolution that would change the policy. "This resolution will allow chartered organizations to select adult leaders without regard to sexual orientation, continuing Scouting's longstanding policy of chartered organizations selecting their leaders," the Boy Scouts said in a statement July 13. [Photo gallery: Boy Scouts by the numbers (source: Boy Scouts of America): 105 years since the BSA was incorporated, with membership topping 20 million by 1952; 2.6 million youth members and more than 1 million adult volunteers as of May 2014; 437,160 youth members in units chartered by The Church of Jesus Christ of Latter-day Saints, the most of any faith-based organization, followed by the United Methodist Church and the Catholic Church as of 2013; 181 NASA astronauts participated in Scouting, including Eagle Scout Neil Armstrong; 191 lawmakers in the 113th Congress participated in Boy Scouts, and 18 governors were Scouts or Scout volunteers as of April 2013; 19 presidents (every president since the BSA was founded) have served as honorary president; and Boy Scout organizations existed in 161 countries as of 2010.] "This change allows Scouting's members and parents to select local units, chartered to organizations with similar beliefs, that best meet the needs of their families. This change would also respect the right of religious chartered organizations to continue to choose adult leaders whose beliefs are consistent with their own," it read. LGBT advocacy groups have said the change doesn't go far enough. "Today's vote by the Boy Scouts of America to allow gay, lesbian and bisexual adults to work and volunteer is a welcome step toward erasing a stain on this important organization," said Human Rights Campaign President Chad Griffin in a statement. "But including an exemption for troops sponsored by religious organizations undermines and diminishes the historic nature of today's decision. Discrimination should have no place in the Boy Scouts, period." Some religious groups, on the other hand, say the decision goes too far. The Church of Jesus Christ of Latter-day Saints said in a statement that the organization is "re-evaluating" its relationship with the Scouts.
"The Church has always welcomed all boys to its Scouting units regardless of sexual orientation," the statement reads in part. "However, the admission of openly gay leaders is inconsistent with the doctrines of the Church and what have traditionally been the values of the Boy Scouts of America." Gates called for the Scouts to end its ban on gay adults in remarks (PDF) at the organization's national business meeting, held May 21. "The status quo in our movement's membership standards cannot be sustained," Gates said. "Our oath calls upon us to do our duty to God and our country. The country is changing, and we are increasingly at odds with the legal landscape at both the state and federal levels." He said decisions on the Boy Scouts' policy could also be dictated by the courts, and it would be better "to seize control of our own future." However, former Boy Scout leadership team member Jon Langbert told CNN's Carol Costello he believes the new policy isn't a cure-all, since local troops will still be allowed to make the decision on whether to allow gay leaders. "What does that do to folks like me?" asked Langbert, who is openly gay and says he gave up his leadership role when other fathers complained . "If I want to participate with my son, do I now have to start ringing up on the phone and calling around to different troops and saying, 'Do you guys discriminate, or am I a first-class citizen in your troop and I can join?' " Many troops are sponsored by churches and religious organizations, which abide by the guidelines of their affiliation. "It creates a bit of a mess when you don't have one global policy for the Scouts," Langbert added, noting that the national organization allows gay adults as employees. "When you have one branch of an organization doing one thing and another doing another, it creates a lot of stress for folks like me, and I don't think it's sending the right message to the boys, either." Gay youths have been allowed in the Boy Scouts since 2013. "For far too long this issue has divided and distracted us," Gates said. "Now it's time to unite behind our shared belief in the extraordinary power of scouting to be a force for good in the community and in the lives of its youth members." ||||| NEW YORK (AP) — The Boy Scouts of America on Monday ended its blanket ban on gay adult leaders while allowing church-sponsored Scout units to maintain the exclusion for religious reasons. FILE - In this Sunday, June 8, 2014, file photo, a Boy Scout wears his kerchief embroidered with a rainbow knot during Salt Lake City’s annual gay pride parade. The Boy Scouts of America's top policy-making... (Associated Press) FILE - In this May 23, 2014, file photo, former Defense Secretary Robert Gates addresses the Boy Scouts of America's annual meeting in Nashville, Tenn., after being selected as the organization's new... (Associated Press) The new policy, aimed at easing a controversy that has embroiled the Boy Scouts for years, takes effect immediately. It was approved by the BSA's National Executive Board on a 45-12 vote during a closed-to-the-media teleconference. "For far too long this issue has divided and distracted us," said the BSA's president, former Defense Secretary Robert Gates. "Now it's time to unite behind our shared belief in the extraordinary power of Scouting to be a force for good." The stage had been set for Monday's action on May 21, when Gates told the Scouts' national meeting that the long-standing ban on participation by openly gay adults was no longer sustainable. 
He said the ban was likely to be the target of lawsuits that the Scouts likely would lose. Two weeks ago, the new policy was approved unanimously by the BSA's 17-member National Executive Committee. It would allow local Scout units to select adult leaders without regard to sexual orientation — a stance that several Scout councils have already adopted in defiance of the official national policy. In 2013, after heated internal debate, the BSA decided to allow openly gay youth as scouts, but not gay adults as leaders. Several denominations that collectively sponsor close to half of all Scout units — including the Roman Catholic church, the Mormon church and the Southern Baptist Convention — have been apprehensive about ending the ban on gay adults. The BSA's top leaders have pledged to defend the right of any church-sponsored units to continue excluding gays as adult volunteers. But that assurance has not satisfied some conservative church leaders,' "It's hard for me to believe, in the long term, that the Boy Scouts will allow religious groups to have the freedom to choose their own leaders," said the Rev. Russell Moore, president of the Southern Baptist Convention's Ethics & Religious Liberty Commission. "In recent years I have seen a definite cooling on the part of Baptist churches toward the Scouts," Moore said. "This will probably bring that cooling to a freeze." Under the BSA's new policy: —Prospective employees of the national organization could no longer be denied a staff position on the basis of sexual orientation. —Gay leaders who were previously removed from Scouting because of the ban would have the opportunity to reapply for volunteer positions. —If otherwise qualified, a gay adult would be eligible to serve as a Scoutmaster or unit leader. Gates, who became the BSA's president in May 2014, said at the time that he personally would have favored ending the ban on gay adults, but he opposed any further debate after the Scouts' policymaking body upheld the ban. In May, however, he said that recent events "have confronted us with urgent challenges I did not foresee and which we cannot ignore." He cited an announcement by the BSA's New York City chapter in early April that it had hired Pascal Tessier, the nation's first openly gay Eagle Scout, as a summer camp leader. Gates also cited broader gay-rights developments and warned that rigidly maintaining the ban "will be the end of us as a national movement." The BSA faced potential lawsuits in New York and other states if it continued to enforce its ban, which had been upheld by the U.S. Supreme Court in 2000. Since then, the exclusionary policy has prompted numerous major corporations to suspend charitable donations to the Scouts, and has strained relations with some municipalities that cover gays in their non-discrimination codes. Stuart Upton, a lawyer for the LGBT-rights group Lambda Legal, questioned whether the BSA's new policy to let church-sponsored units continue to exclude gay adults would be sustainable. "There will be a period of time where they'll have some legal protection," Upton said. "But that doesn't mean the lawsuits won't keep coming. ... They will become increasingly marginalized from the direction society is going." Like several other major youth organizations, the Boy Scouts have experienced a membership decline in recent decades. Current membership, according to the BSA, is about 2.4 million boys and about 1 million adults. 
After the 2013 decision to admit gay youth, some conservatives split from the BSA to form a new group, Trail Life USA, which has created its own ranks, badges and uniforms. The group claims a membership of more than 25,000 youths and adults.
– The Boy Scouts of America has lifted its long-standing ban against adult leaders who are openly gay—with one big exception, the New York Times reports. Today's vote by the Scouts' national executive board will also allow religious-sponsored units to choose leaders they prefer. Yet the Mormon Church, America's biggest Scout-unit sponsor, says Mormon affiliation with the Scouts may still be over. The Church "is deeply troubled by today's vote," according to a Church statement. "When the leadership of the church resumes its regular schedule of meetings in August, the century-long association with scouting will need to be examined." Similarly, a Baptist leader says the shift to allow gay youths in 2013, and now gay leaders, shows "that this is the final word only until the next evolution." Effective immediately, the new policy was also criticized by pro-LGBT groups. The religious exemption "undermines and diminishes the historic nature of today's decision," says Human Rights Campaign President Chad Griffin, per CNN. "Discrimination should have no place in the Boy Scouts, period." NBC News notes that 70% of local Scout units are sponsored by religious groups, but some have opposed banning gay leaders. Today's move, which Boy Scouts of America President Robert Gates called for last month, may also ward off rising anti-discrimination lawsuits and enable the Scouts to reconnect with corporate donors who opposed the anti-gay policy. Today's vote passed by 45-12 in a private teleconference, the AP reports.
in the hierarchical scenarios of the structure formation of the universe , cluster of galaxies are formed through subcluster mergers . both observations ( e.g. bliton et al . 1998 ; owen et al . 1999 ; markevitch et al . 2000 ) and simulations ( e.g. roettiger , loken & burns 1997 ; roettiger , stone & burns 1999 ; bekki 1999 ) show that the merging dramatically change the physical characteristics of the intracluster medium ( icm ) and galaxies , like the temperature , density and magnetic field strength of icm , star formation rate and radio emission of galaxies . however , many processes related to the mergers and their effects are still vague . hence systematic analyses of merging clusters are very helpful to deepen our understanding on this complex dynamical process . here we present recent observations and analysis on the well - known merging cluster a2256 . a2256 is a rich cluster with strong x - ray emission ( l@xmath8 10@xmath9 ergs s@xmath7 ) at the redshift 0.058 . observations ( briel et al . 1991 ) clearly revealed two peaks near the center , separated by about @xmath10 . one of them is roughly at the geometric center of the cluster , while the other one is considered to be a merging subcluster . the previous x - ray observations also showed that the subcluster has a lower temperature than the main cluster ( briel et al . 1991 ; miyaji et al . 1993 ; markevitch 1996 , m96 hereafter ) and temperature generally decreases with the radius ( m96 ; molendi , de grandi & fusco - femiano 2000 , mdf00 hereafter ) . briel & henry ( 1994 ) reported two hot spots ( @xmath11 12 kev ) in this cluster using data . however , they were not confirmed by and observations ( m96 ; mdf00 ) . the velocity dispersion of the galaxies is very large ( @xmath1 1300 km / s ; fabricant , kent & kurtz 1989 ) and the velocity distribution shows substructure ( roettiger , burns & pinkney 1995 ) . the radio properties of a2256 are anomalous ( bridle & fomalont 1976 ; bridle et al . 1979 ; r@xmath12ttgering et al . 1994 , r94 hereafter ) . a remarkable radio relic , with sharp edges and possible filamentary structure , was found northwest of the two x - ray peaks . at least four head - tail sources were found , including one with an exceptional narrow , straight tail extending to at least about 0.7 mpc . r94 suggested that the halo was composed of a few head - tail galaxies that were heavily distorted due to the infalling subcluster . recent observation ( fusco - femiano et al . 2000 ) also showed the existence of nonthermal x - ray emission from the cluster but only significant above 10 kev . in this paper , we present the analysis of the recent observations of a2256 . observations and data reduction are in @xmath132 . the analysis is in @xmath133 . @xmath134 is the discussion and @xmath135 is the summary . throughout this paper , we assume h@xmath14 = 70 km s@xmath7 mpc@xmath7 and q@xmath14 = 0.5 . these cosmological parameters correspond to the linear scale of 1.08 kpc / arcsec at the cluster redshift . a2256 was observed by three times with advanced ccd imaging spectrometer ( acis ) . the information of the observations are listed in table 1 . for each observation , we excluded the known bad columns , hot pixels , chip node boundaries and events with grades 1 , 5 and 7 , as well as bad aspect intervals . the proper gain map file for that period was used . particle background flare periods were excluded based on the x - ray light curves from the outer parts of the field . 
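The linear scale of about 1.08 kpc/arcsec quoted above follows directly from the assumed cosmology: q0 = 0.5 with no cosmological constant is the Einstein-de Sitter case, for which the comoving distance has a closed form. The short Python sketch below is only an illustration of that conversion, not part of the original analysis; the constants and function name are ours.

```python
import math

C_KM_S = 299792.458      # speed of light [km/s]
H0 = 70.0                # Hubble constant [km/s/Mpc], as assumed in the text
Z_CLUSTER = 0.058        # redshift of A2256

def kpc_per_arcsec_eds(z, h0=H0):
    """Proper linear scale [kpc/arcsec] for an Einstein-de Sitter (q0 = 0.5) universe."""
    d_c = 2.0 * (C_KM_S / h0) * (1.0 - 1.0 / math.sqrt(1.0 + z))   # comoving distance [Mpc]
    d_a = d_c / (1.0 + z)                                          # angular diameter distance [Mpc]
    arcsec_in_rad = math.pi / (180.0 * 3600.0)
    return d_a * arcsec_in_rad * 1.0e3                             # Mpc -> kpc

print("%.2f kpc/arcsec" % kpc_per_arcsec_eds(Z_CLUSTER))   # ~1.09, consistent with the ~1.08 quoted
```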
the back - illuminated ( bi ) chips and front - illuminated ( fi ) chips were filtered separately since they have different background flare levels ( see markevitch et al . 2000 for details ) . the chips used for data analysis are i0 - i3 chips of the acis - i observation and s2 - s4 chips of the acis - s observation . the streak events in s4 chip were removed by the `` destreak '' tool in ciao . the short observation 1521 was dominated by background flares ( over 80% of its exposure time ) and the statistics are quite poor so we do not include that data in our analysis . to subtract the correct background , the blank field background dataset relevant for the period of the observations was used ( markevitch 2001 ) . since the background may change slightly with time , we also checked the background rate using the data at high energy ( 10 - 12 kev in the front - illuminated - fi chips and 9 - 12 kev in the back - illuminated - bi chips ) . it was found that the background normalization should be increased by 8% in fi chips , consistent with the expected long - term background trend . note that although the correction should apply only to the particle component of the total background ( and not the cosmic x - ray background or cxb ) , that component is dominant at energy greater than 2 - 3 kev which is most important for the gas temperature fitting . small rescaling of the cxb component generally has only tiny effect on the spectral fitting . thus we apply this correction across the energy range for simplicity . the systematic uncertainty of the background normalization ( @xmath1510% ) that encompasses this correction was also included in all the confidence intervals reported below . shortly after the launch of , the acis fi chips suffered increasing `` charge transfer inefficiency '' ( cti ) problem . the effect is seen as a decreasing quantum efficiency ( qe ) with increasing distance from the readout node . here the results of the calibration observations of g21.5 - 0.9 were adopted to make cti corrections ( vikhlinin 2000 ) . we also used an additional position - independent correction factor of 0.93 for the acis - i quantum efficiency below 1.8 kev to account for the difference between acis - s3 and acis - i ( vikhlinin 2000 ) . for hot clusters , ignoring it results in spuriously high temperature and the dependence of fitting results on the adopted low energy cut ( markevitch & vikhlinin 2001 ) . ciao(1.1.5 ) , ftools(5.0 ) , xspec(10.0 ) and some our own software were used to do the data reduction . the exposure maps were generated using our own software ( equivalent to the similar one in ciao ) . since vignetting is dependent on the energy , narrow bands were used and weighting spectra were also applied in producing the exposure maps ( the weighting spectra differ from chip to chip based on the fitting result of the integrated spectrum of each chip ) . to produce response files of the spectra in the interested regions ( response matrices - rmf and auxiliary response files - arf ) , the tools calcrmf and calcarf by vikhlinin were used . the arf was calculated by weighting the mirror effective area in the region with the observed cluster brightness distribution in 0.5 - 2.0 kev band . the rmf was calculated by weighting the standard set of matrices within the region by the observed cluster brightness distribution . all the errors in this paper are 90% confidence interval . over 30 point sources were detected in two observations using the ciao wavelet detection tool . 
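The background renormalization described above, comparing source-free high-energy count rates of the observation and of the blank-sky fields (10-12 keV for the FI chips, 9-12 keV for the BI chips), reduces to a simple ratio. The following sketch is an illustration of that step only, not the authors' pipeline; the event-list representation and variable names are assumptions on our part.

```python
import numpy as np

def high_energy_rate(energies_kev, exposure_s, lo_kev=10.0, hi_kev=12.0):
    """Count rate in a particle-dominated band (10-12 keV for FI chips, 9-12 keV for BI)."""
    e = np.asarray(energies_kev)
    return np.count_nonzero((e >= lo_kev) & (e < hi_kev)) / exposure_s

def background_rescale(obs_energies, obs_exp, blank_energies, blank_exp, lo=10.0, hi=12.0):
    """Factor by which the blank-sky background is rescaled to match the observation's
    particle background level; the text finds about 1.08 for the FI chips."""
    return (high_energy_rate(obs_energies, obs_exp, lo, hi) /
            high_energy_rate(blank_energies, blank_exp, lo, hi))
```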
a detailed analysis on those point sources is beyond the scope of the paper . in this section , we will mainly discuss the point sources coincident with the member galaxies . three member galaxies ( all ellipticals ) were found to have corresponding x - ray sources in both observations . they are shown in fig . 1 . one is the elliptical galaxy e ( fig . 2 ) which is also a radio head - tail galaxy ( a in r94 ) . only about 50 counts from it in total were collected during the two observations . the statistics of the data do not allow us to constrain the temperature or the photon index well even if its spectrum is that simple . if we simply assumed a 1 kev thermal plasma with normal solar abundance , the derived luminosity is about 8@xmath210@xmath5 ergs s@xmath7 ( 0.5 - 10 kev ) . if a powerlaw with photon index 1.7 is assumed , the derived luminosity is about 3@xmath210@xmath6 ergs s@xmath7 ( 0.5 - 10 kev ) . the other two sources correspond to the nw core of the double galaxy ngc 6331 ( c ) and galaxy d respectively ( fig . only about 20 counts in total were collected from each source in the two observations . their luminosity are estimated to be around 3@xmath210@xmath5 ergs s@xmath7 for the 1 kev thermal plasma assumption , or 10@xmath6 ergs s@xmath7 for the powerlaw ( photon index 1.7 ) assumption ( 0.5 - 10 kev ) . the x - ray emission may come from the low luminosity active galactic nuclei ( llagn ) or thermal halos . we also tried to explore the nature of the brightest two point sources in the field ( fig . 1 ; # 1 : 7.5 c / ks - i3 ; # 2 : 5.4 c / ks - i0 and 8.4 c / ks - s3 ) . # 1 is also marginally seen in both pspc and hri images . # 1 has a very faint optical counterpart in dss ii while # 2 has no counterpart . we extracted and fitted their spectra with a simple powerlaw model . if the absorption is fixed at the galactic value , the photon index for # 1 and # 2 are 1.4@xmath16 and [email protected] respectively . thus , they might be background agns . based on these best - fit , their 0.5 - 10 kev unabsorbed flux are 10@xmath17 ergs s@xmath7 @xmath18 and 7@xmath210@xmath19 ergs s@xmath7 @xmath18 respectively . we also compared the observed source number with the predicted by the log n - log s relation derived in deep fields ( e.g. giacconi et al . 2001 ) and found no significant difference . the 0.5 - 7 kev combined acis - i / s image is shown in fig . 1 overlaid on a dss ii image . the background was subtracted and the image was divided by the exposure map . all the point sources were excluded . the two previously resolved peaks at the center ( p@xmath20 and p@xmath21 hereafter ) by ( briel et al . 1991 ) are prominent and both show internal structures ( fig . it is noticed that the central part of the subcluster ( the western peak ) is elongated along the east - west direction , while the central part of the main cluster ( the eastern peak ) is elongated along the north - south direction ( fig . 1 and 2 ) . these two peaks are separated by about @xmath22 and p@xmath21 is somewhat brighter . in dss ii image , there is no galaxy concentration around p@xmath21 . radio head - tail galaxies e , f and g ( fig . 2 ) and double galaxy ngc 6331 ( c ) all have offsets to p@xmath21 from @xmath23 to @xmath24 . the surface brightness peak of the main cluster is located 0.5@xmath0 north of a big elliptical galaxy ( b ) , while galaxy a , the galaxy with the most extended optical halo in a2256 , is about 50 kpc east of the peak . 
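The luminosities quoted above for the member galaxies and the two bright point sources follow from the fitted unabsorbed fluxes through L = 4*pi*d_L^2*F. A generic sketch of that conversion, using the same Einstein-de Sitter distance relations as before; this is not the authors' code, and the flux value in the usage line is purely illustrative.

```python
import math

C_KM_S = 299792.458
H0 = 70.0
CM_PER_MPC = 3.0857e24

def luminosity_distance_cm(z, h0=H0):
    """Luminosity distance [cm] in an Einstein-de Sitter (q0 = 0.5) universe."""
    d_c = 2.0 * (C_KM_S / h0) * (1.0 - 1.0 / math.sqrt(1.0 + z))   # comoving distance [Mpc]
    return d_c * (1.0 + z) * CM_PER_MPC

def luminosity_erg_s(unabsorbed_flux_cgs, z=0.058):
    """Luminosity [erg/s] from an unabsorbed flux [erg/s/cm^2] at redshift z."""
    d_l = luminosity_distance_cm(z)
    return 4.0 * math.pi * d_l ** 2 * unabsorbed_flux_cgs

# Illustrative only: a hypothetical 0.5-10 keV flux of 1e-13 erg/s/cm^2 at the cluster redshift
print("%.1e erg/s" % luminosity_erg_s(1.0e-13))
```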
besides p@xmath20 and p@xmath21 , a new structure was found significantly to the east of p@xmath20 in the observations ( fig . 1 and 2 ) . in this paper we call it p@xmath25 or the `` shoulder '' , based on its morphology . this structure appears as a small clump extending from p@xmath20 , but with a clear local maximum , and seems to be embedded in the main cluster . the earlier images were re - checked , and this structure was also found in both the hri and pspc images , though it is not very significant there . it was also detected in the wavelet analysis of the pspc data by slezak , durret & gerbal ( 1994 ) . this structure has interesting spectral characteristics and will be discussed later . we also zoomed in on the central part of a2256 to look for any substructures around the center ( fig . 2 ) . the wavelet decomposition tool ( vikhlinin , forman & jones 1997 ) was applied to the image , and the reconstructed image ( the right one in fig . 2 ) is quite similar to the simply smoothed one ( the left one in fig . 2 ) . both images show complex structures around the center . p@xmath20 and p@xmath21 also show internal structures , which may also support some kind of ongoing dynamical process in this cluster . however , the current data do not allow us to constrain the spectral differences on such small scales . besides the `` shoulder '' and the substructures within p@xmath20 and p@xmath21 , there are several other noteworthy facts . first , there are no x - ray enhancements around the biggest central ellipticals a , b and c ( the 3 @xmath26 upper limits are all about 2@xmath210@xmath6 ergs s@xmath7 at 0.5 - 10 kev , assuming a 1 kev thermal spectrum and a 0.5@xmath0 aperture ) . second , the surface brightness gradient at the south of p@xmath21 is sharper than in other directions ( more prominent in the unsmoothed image ; also pointed out by briel & henry 1991 ) . third , there is a protrusion which extends from the center of p@xmath21 to the southwest . as shown in fig . 6 , after removing the main cluster , the southern parts become even sharper . in @xmath133.6 , we will show that it may be something like a cold front . in view of the complex central structure mentioned in @xmath133.2 and the fact that a2256 is very likely a merging cluster , knowledge of the temperature distribution can reveal the nature of the structures and of the merging . the resolution of the temperature map one can achieve for a2256 with chandra is limited by statistics , and the psf effect can be ignored completely . we used the following method to obtain the temperature map ( a schematic of the region bookkeeping is sketched below ) . first , the field was divided into 30 regions ( as shown in fig . 3 ) with similar numbers of counts ; then for each region we fitted the spectrum to obtain the temperature ( for regions # 11 - 30 , we fitted the spectra from the two observations simultaneously ) . in view of the uncertainty of the acis low energy response , only 0.9 - 9 kev data were used and the absorption was fixed at the galactic value 4.1@xmath210@xmath27 @xmath18 . the best - fit temperatures are not sensitive to any small excess in absorption . the redshift was fixed at 0.058 and the abundance was fixed at 0.3 solar ( we used the solar abundance table of anders & grevesse 1989 ) , which is the average value from previous observations ( e.g. m96 ; mdf00 ) . the mekal code in xspec was used . second , we shifted the regions by half their size along the directions parallel and perpendicular to the orientation of the first set of regions to get two additional similar temperature maps .
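the region bookkeeping referred to above is simple once the ~30 roughly equal - count regions are defined : each region spectrum is fitted for a single temperature with the absorption , redshift and abundance frozen , and the exercise is repeated for the two half - shifted grids . a minimal python sketch follows ; the spectral fit itself is abstracted into a user - supplied callable , so this is an illustration of the bookkeeping only , not the code actually used .

```python
import numpy as np

def temperature_map(counts_image, region_labels, fit_func):
    """Assign one fitted temperature to every pixel of each region.

    region_labels : integer image mapping each pixel to one of the
                    ~30 roughly equal-count regions (cf. fig. 3).
    fit_func      : callable returning kT for the spectrum of one region;
                    in practice a mekal fit over 0.9-9 keV with the
                    absorption, redshift and abundance frozen.
    """
    kT_map = np.full(counts_image.shape, np.nan)
    for reg in np.unique(region_labels):
        mask = (region_labels == reg)
        kT_map[mask] = fit_func(counts_image, mask)
    return kT_map

# the three overlapping grids (original + two half-size shifts) would each be
# passed through temperature_map and the resulting maps averaged pixel-wise,
# e.g. final = np.nanmean(np.stack([kT1, kT2, kT3]), axis=0)
```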
the final one was obtained by averaging the above three . then the final temperature map was adaptively smoothed a little bit for a better presentation ( fig . 3 ) . two checks were performed . the first is the integrated temperature of the central @xmath28 square ( acis - s3 chip fov , see fig . the result is 6.7@xmath29 kev , consistent with the central temperature ( @xmath1 7 kev ) reported by previous observations ( e.g. miyaji et al . 1993 ; markevitch & vikhlinin 1997b ; white 2000 ; mdf00 ) . the second is that we did spectral fits at the same regions as those in paper ( mdf00 ) . our results generally agree with theirs within 0.5 kev , except in their region 2nw , where spectrum may suffer the contamination from the nearby high temperature regions due to the large point spread function ( psf ) of ( @xmath30 hpd for mecs at 1.5 kev ) . on the 5 arcmin scale , the acis temperature map is in agreement with that of ( m96 ) . it shows moderate temperature variations across the cluster but not as strong as expected in a major merger ( e.g. roettiger et al . 1997 and other simulations ) . combining this temperature map with the image , no shock was found in the field ( a feature that is likely a `` cold front '' will be discussed in @xmath133.6 ) . the subcluster has lower temperature ( @xmath1 4 kev at the coolest region ) than the main cluster and the shape of the low temperature regions strongly suggests that the subcluster entered the main cluster from somewhere west . this temperature map , if we omit the `` shoulder '' , resembles those at the early stages of merging in simulations ( e.g. roettiger et . al 1997 ; takizawa 1999 ; takizawa 2000 ) . it is also interesting that the apparent coolest part of the subcluster is about @xmath24 west of its surface brightness peak . the main cluster appears largely undisturbed , as concluded by markevitch & vikhlinin ( 1997a ) from the results . the central region of the main cluster is generally cooler than its outskirts covered by our fov . this is more likely due to the projection of the cooler subcluster and possibly the `` shoulder '' , rather than the genuine temperature gradient . there is a hot region ( @xmath1 9 kev ; around regions # 5 and # 16 in fig . 3 ) at the north of the main cluster . we notice that it is just in positional coincidence with the eastern lobe of the radio relic ( fig . 1 ) . this might imply some kind of physical relation between the merging events and the radio relics . it is also noticed that there is a hot region near the south edge of the subcluster . in @xmath133.6 , we will show it might be related to a cold front . the two hot spots reported by briel & henry ( 1994 ) are located in the fov of acis - i ( fig . 3 ) . our measurements ( 7.9@xmath31 kev for their ne spot and 5.8@xmath32 kev for the sw one ) do not confirm the existence of these hot spots , consistently with the earlier and conclusions ( markevitch & vikhlinin 1997a ; mdf00 ) . we also checked the existence of any nonthermal component in the spectra , especially at the regions of the radio relic . no clear evidence for such component in acis spectra was found . a simple cooling flow component was also added to the spectral model but that never changed @xmath33 significantly and the obtained mass accretion rates were always very small . the integrated abundance for the central @xmath28 square ( acis - s3 chip fov ) was obtained by simultaneously fitting the spectra of two observations . 
it is 0.34@xmath34 , which is a little higher than the previous results for the whole cluster , [email protected] by m96 ( we converted their value to the abundance table used here ) and [email protected] by mdf00 ( we converted their 68% confidence error into 90% ) . this is fully consistent with the recent finding that the abundance generally falls with radius ( mdf00 ) and with their results in the inner radial bins . here we are more interested in the distribution of iron . the statistics of the data do not permit a detailed analysis for the whole field . however , in checking spectra from the regions that we used for the temperature map , at least two areas with significantly high iron abundance were found . the results are shown in fig . 4 . the four solid line regions , from a to d , represent p@xmath20 , p@xmath21 , p@xmath25 and a southern part of the main cluster ( relatively far from the subcluster ) , respectively . the results from acis - i and acis - s agree well , so we performed simultaneous fits for each region . as shown in fig . 4 , it is clear that p@xmath21 and p@xmath25 have more iron than p@xmath20 and the southern main cluster region . the joint abundance - temperature 90% confidence regions for a - d are shown in fig . 5 , which again reinforces that a and d are different from b and c . markevitch & vikhlinin ( 1997b ) also obtained a higher iron abundance at the subcluster than in the main cluster using earlier data , but the difference was not significant , which may be due to the wider psf of that instrument . if we further consider the possible contamination of emission among the three structures due to projection , even higher abundance contrasts would be expected . the iron abundances in other regions ( e.g. the outer parts of the subcluster and the north part of the main cluster ) are all around the average ( @xmath1 0.3 ) but poorly constrained . the current data do not allow us to determine whether the high iron abundance is localized in the `` shoulder '' or also present in its immediate surroundings . no iron abundance enhancement was found around galaxies c and e. we also measured the redshifts in several regions from the iron k@xmath35 line blend ( see the sketch below ) . the regions we chose among those in fig . 4 are b ( the subcluster ) , c ( the `` shoulder '' ) , and the acis - s3 fov excluding b , c and their @xmath23 surroundings ( the main cluster ) . the results are 0.056@xmath36 , 0.064@xmath37 , and [email protected] , respectively . all the values are consistent with a single redshift , suggesting that the main cluster , the subcluster and the `` shoulder '' may be associated . using the log n - log s relation for clusters ( e.g. kitayama , sasaki & suto 1998 ; de grandi et al . 1999 ) , and considering the observed maximal difference of redshifts ( @xmath1 0.03 ) , the probability that they are a chance superposition is less than 10@xmath38 . therefore we conclude that they are very likely to be associated and interacting , as also suggested by the temperature map and the possible cold front ( @xmath133.6 ) . the current data and calibration status do not allow us to constrain the spatial distribution of other elements , like silicon . the integrated silicon abundance in the acis - s3 fov is [email protected] , while the result is [email protected] for the whole cluster ( fukazawa et al . 1998 ) . the observations reveal complex structures around the center of a2256 .
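the sketch referred to above : for gas at these temperatures the iron k blend is dominated by he - like iron with a rest energy near 6.7 kev , so a fitted centroid converts into a redshift as z = e_rest / e_obs - 1 . the 6.7 kev rest energy is an approximation for the blend , and the example centroid below is illustrative only .

```python
def fe_k_redshift(e_obs_kev, e_rest_kev=6.7):
    """Redshift from the observed centroid of the Fe K line blend.
    e_rest_kev ~ 6.7 keV is an approximation for He-like Fe at cluster
    temperatures; the exact blend composition shifts it slightly with kT."""
    return e_rest_kev / e_obs_kev - 1.0

# example: a centroid fitted at ~6.33 keV corresponds to z ~ 0.058
print(fe_k_redshift(6.33))
```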
we should realize that due to the projection the observed spectra , especially those around the center , have at least two components entangled the main cluster and the subcluster . it would be very helpful for our understanding if we can separate them . however , this is actually very hard since the merging clusters are not in hydrostatic equilibrium and we do not yet know their relative geometry . here , we just present a very simple way to isolate the subcluster and decompose the spectral components . briefly speaking , we tried to use a @xmath39-model to represent the main cluster ( not the whole cluster ) and examine the residuals . then we can use those information to separate the spectral components in observed spectra . the image ( background - subtracted and exposure - corrected ) shows that the surface brightness is rather spherical in the outer parts , especially after we exclude the subcluster region using a circle with some radius ( @xmath1 7@xmath0 ) centered at the core of the subcluster . thus we can simply measure the surface brightness beyond that subcluster region to make a @xmath39-model fit to the main cluster . the outer contours of the pspc images were used to find the geometric center of the main cluster , which is about @xmath23 south of the apparent peak of the main cluster . it is shown in fig . a @xmath40 radius circle centered at the peak of the subcluster was used to represent the subcluster and a @xmath24 radius circle at p@xmath25 was used to represent the emission from the `` shoulder '' . only radial surface - brightness measurement outside of these circles was made . all the point sources were excluded . acis - i / s and pspc data were used . the derived core radius is [email protected] arcmin ( or [email protected] mpc ) , and @xmath39 is [email protected] . the residual after removing the best - fit @xmath39-model of the main cluster is shown in fig . the `` shoulder '' is more prominent and the morphology of the subcluster is distorted , which is natural for a falling cluster . moreover , the southern surface brightness has become sharper and points sse , appearing more edge - like . then we can try to disentangle the spectral components in the observed spectra . from the temperature map and the observed surface density discontinuity ( @xmath133.6 ) , it is known that the subcluster enters the main cluster from the west . thus we do not expect much projection effect from the subcluster on p@xmath20 . here we only want to investigate p@xmath21 and p@xmath25 ( the two regions in fig . 2 ) . to do that , we first need to assume the temperature of the contaminating gas from the main cluster at p@xmath21 and p@xmath25 . three value , 7 , 8 and 9 kev , were assumed . the abundance of the main cluster gas was fixed at 0.3 . the normalization of the contaminating gas is obtained from the best - fit @xmath39-model . the results are shown in table 2 . as expected , we obtained larger contrasts on temperature and abundance between the main cluster and the others . the abundance of the `` shoulder '' is still high after the decomposition . from the best - fit emission measure , assuming a constant density sphere , its gas mass is around 2@xmath210@xmath3 m@xmath4 if its dimension along the line of sight is not very different from others . 6 shows a clear edge - like structure running along the sse direction . we measured the surface brightness profile in the linear regions parallel to the edge in the co - added acis image ( not the residual ) . 
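the @xmath39 - model fit to the main cluster described above can be reproduced with a standard least - squares fit of the surface - brightness form s(r) = s0 [ 1 + ( r / r_c )^2 ]^( -3@xmath39 + 1/2 ) to the azimuthally averaged profile measured outside the subcluster and `` shoulder '' exclusion regions . a minimal sketch follows , assuming the radial profile has already been extracted ; the function and argument names are ours , not from the paper .

```python
import numpy as np
from scipy.optimize import curve_fit

def beta_model(r, s0, rc, beta):
    """Standard surface-brightness beta model."""
    return s0 * (1.0 + (r / rc) ** 2) ** (-3.0 * beta + 0.5)

def fit_beta_model(r_arcmin, sb, sb_err):
    """Least-squares beta-model fit to an azimuthally averaged profile
    measured outside the subcluster / shoulder exclusion circles."""
    p0 = [sb.max(), 2.0, 0.7]                      # rough starting values
    popt, pcov = curve_fit(beta_model, r_arcmin, sb, p0=p0,
                           sigma=sb_err, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))            # (s0, rc, beta) and 1-sigma errors
```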
the temperature at each side of the edge are also obtained in the regions shown in fig . the results are shown in fig . 7 . at the edge , there is a break in the surface brightness profile that indicates a density discontinuity and a temperature jump from about 4.5 kev to about 8.5 kev . this feature is quite similar to the cold fronts found in a2142 by markevitch et al . ( 2000 ) and a3667 by vikhlinin et al . ( 2001a ) . in a2142 and a3667 , these features delineate the contact surfaces of the cold dense clouds and the surrounding hotter gas , through which they move . in the case of a2256 , the geometry is more complicated and there is projection of multiple structures . therefore , it is not possible to derive the real density and pressure contrast across the edge . the flat shape of the edge may imply that it is only part of the whole moving edge ( the other parts are smoothed out by projection if the moving front is tilted from the plane of sky ) , or the moving front viewed by an angle with the line of sight . the protrusion mentioned in @xmath133.2 might be related to the striping gas of the subcluster or its wake . this discovery , adds another candidate to the list of merging clusters which have cold fronts a2142 , a3667 and rxj1720.1 + 2638 ( mazzotta et al . 2001 ) . more cold fronts could be expected when more data are available . systematic studies of such phenomena will enrich our knowledge on merging of the clusters . observations reveal a new structure - the `` shoulder '' - near the center , a localized feature with size about @xmath22 and gas mass about 2@xmath210@xmath3 m@xmath4 ( assuming a constant density sphere and its dimension along the line of sight is similar as others ) . its spectrum implies that iron is enriched in this structure ( @xmath1 1 after the decomposition ) . two galaxies ( d and h in fig . 2 ) are apparently located in this structure though they are not necessarily associated with it . bridle & fomalont ( 1976 ) found a @xmath41 low - brightness radio relic roughly centered at galaxy d ( also the radio source d in the same paper ) besides the bright nw halo ( fig . the `` shoulder '' is within that diffuse radio relic . although the roughly similar temperature and abundance between this structure and the center of the subcluster may suggest that it is a remaining of the subcluster spread along the path of its infall , its location ( east of p@xmath20 ) is inconsistent with the infalling direction ( somewhere from the west ) of the subcluster suggested by the temperature map , the possible cold front and the general direction of the radio head - tail sources ( e , f and g in fig . since its x - ray redshift suggests it is a feature inside a2256 ( @xmath133.4 ) , only two choices are left : either a new merging component or an internal structure of the main cluster . another merging component seems to be a feasible explanation . in fact , if we ignore the subcluster , the image resembles simulated images for a merger between a massive cluster and a much less massive one at about 0.3 gyr after core crossing ( e.g. takizawa 1999 ; takizawa 2000 ) . the fact that there are two big elliptical galaxies rather than one dominant galaxy around the center also supports the idea that the main cluster has not yet relaxed well . hence , we suggest that a2256 may be a system with three clusters ( or two clusters and one group ) in merging . about 0.3 gyr before the ongoing merger , there was a merger between the massive cluster and a less massive system . 
the less massive system might be a galaxy group that did not disturb the potential well of the massive cluster much . the `` shoulder '' might be the relic of its core and galaxy b might be once the dominant galaxy of that less massive system since it is 2 times less luminous than a. gonzlez - casado , mamon & salvador - sol ( 1994 ) found such kind of the relic of the core can survive in at least one cluster crossing . the observed iron abundance difference between the `` shoulder '' and p@xmath20 might suggest it was an off - center merger . the less massive system may have fallen into the massive cluster somewhere from the west as implied by the simulations and the relatively sharper surface brightness at the east than other directions . under such scenario , the low - brightness radio relic found by bridle & fomalont ( 1976 ) would be related to this earlier merger event and the much brighter nw one may be related to the ongoing merger . the fading of the radio haloes after the merger was discussed by tribble ( 1993 ) . the typical aging time - scale is at the order of 10@xmath42 years , consistent with our picture here . another possibility is that the `` shoulder '' is an internal structure of the main cluster as the two local dips of the potential around the d galaxies in coma ( vikhlinin et al . the gas mass of the `` shoulder '' is about 2@xmath210@xmath3 m@xmath4 , while cd galaxies in clusters of galaxies often have gas content at the level of 10@xmath3 - 10@xmath43 m@xmath4 ( trimble 2000 ) . the galaxy a , which resembles the cd most in a2256 , is quite near the `` shoulder '' . before the ongoing merger , the galaxy a may be at the center of the `` shoulder '' , which could be the halo of hot gas . when the ongoing merger began , the gases ( the `` shoulder '' ) might lag behind the galaxy a similar to the case discussed in @xmath134.2 . the iron mass excess within it is estimated to be about 10@xmath42 m@xmath4 , while the stellar mass of a is estimated to be about 7@xmath210@xmath3 m@xmath4 assuming m / l = 7(m / l)@xmath4 and l@xmath44 = 10@xmath3 m@xmath4 . thus , an injection of iron about 0.03% of the stellar mass of a is needed to explain the iron excess , which is still possible in the galactic - wind models ( e.g. arimoto & yoshii 1987 ) . fabian & daines ( 1991 ) suggested that the subcluster had a cooling flow based on observations . in @xmath133.5 it is shown that the real temperature of the subcluster is about 4.5 kev around the center . the estimated gas density at the central 80 kpc of the subcluster is about 3.6@xmath210@xmath45cm@xmath45 from the spectral fitting . the cooling time scale is 85 gyr ( n/10@xmath45 @xmath47 ( t/8.6 kev)@xmath48 , where n and t are the density and temperature of the gas . our revised cooling time at the center of the subcluster is 14 - 20 gyr . though the estimated cooling time can not completely rule out the existence of the cooling flow , neither the image nor spectra require any present cooling flow around the center of the subcluster . @xmath133.5 shows that the central 150 kpc of the subcluster has high iron abundance ( @xmath1 0.6 ) , which decreases to about 0.3 at the outer parts . briel & henry ( 1991 ) divided the galaxies in a2256 into two groups using a simultaneous fit of two gaussians to the whole velocity data , though the two redshift distribution components are not spatially separated . based on their fit , they suggested that the subcluster was a poor cluster . 
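as an aside , a quick numerical check of the cooling - time estimate quoted earlier in this section , assuming the quoted scaling has the standard form t_cool = 85 gyr ( n / 10^-3 cm^-3 )^-1 ( t / 8.6 kev )^1/2 ( 8.6 kev corresponds to 10^8 k ) : with the fitted central density of about 3.6 x 10^-3 cm^-3 and t = 4.5 kev this gives roughly 17 gyr , within the revised 14 - 20 gyr range .

```python
def cooling_time_gyr(n_cm3, kT_keV):
    """Cooling-time scaling quoted in the text (assumed form):
    t_cool ~ 85 Gyr * (n / 1e-3 cm^-3)^-1 * (kT / 8.6 keV)^0.5."""
    return 85.0 * (n_cm3 / 1e-3) ** -1 * (kT_keV / 8.6) ** 0.5

# central ~80 kpc of the subcluster: n ~ 3.6e-3 cm^-3, kT ~ 4.5 keV
print(cooling_time_gyr(3.6e-3, 4.5))   # -> ~17 Gyr, within the 14-20 Gyr quoted
```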
indeed , the observed x - ray characteristics of this subcluster resemble those of some poor clusters ( virgo by matsumoto et al . 1996 ; centaurus by fukazawa et al . 1994 ; awm 7 by xu et al . 1997 and ezawa et al . all of them have comparably low ( or lower ) temperature with the subcluster in a2256 and similar abundance gradients . however , all these poor clusters have central cooling flows while as discussed above , currently there is no cooling flow in the subcluster . one possibility is that the cooling flow of the subcluster has been destroyed by the early interaction during the merger while the abundance gradient was kept . it is interesting that there is no galaxy concentration near the surface brightness peak of the subcluster . a natural explanation would involve the different falling velocity of the galaxies ( here they may be c , e , and f ) and the gas . the gas may lag behind the galaxies due to the drag by the main cluster gas , which is implied by the highly distorted shape of the subcluster gas and the apparent contact discontinuity . we investigate whether this picture is applicable to galaxy c , the best candidate of the cd of the subcluster according to its halo size . though both galaxies and gas may undergo spiral - in , deceleration or acceleration , the time from when they began to separate could be simply expressed as 0.1 mpc sin@xmath49 / ( v@xmath50 - v@xmath51 ) , where @xmath52 is the angle with the line of sight , v@xmath50 and v@xmath51 are the velocity of the galaxy c and gas respectively . for a time scale of 0.5 gyr , a velocity difference about 200 sin@xmath49 can explain the observed displacement . since large velocity dispersion ( @xmath1 1300 km s@xmath7 ) was found in this cluster , a velocity difference ( v@xmath50 - v@xmath51 ) around several hundred km s@xmath7 is quite possible . the temperature map and the observed surface density discontinuity imply that the subcluster entered the main cluster from somewhere west . however , current data do not allow us to make a definite conclusion to the moving direction of the subcluster . it has been suggested that mergers may amplify the magnetic field and be responsible for the strong radio haloes and relics ( tribble 1993 ; roettiger et al . the nw bright radio relic in a2256 appears to be located near the moving path of the subcluster , thus it is likely associated with the ongoing merger event . finally , the temperature map excludes any significant disturbance of the cluster , indicating that the merger with the subcluster is at very early stage . observations of the merging cluster a2256 confirm some of our previous understanding on it , but also reveal some new phenomena . our results are summarized below : \1 ) a new structure ( `` shoulder '' ) was found @xmath24 east of the peak of the main cluster . it is shown as an extension from the main cluster peak but with local maximum . several galaxies ( a , d and h in fig . 2 ) are near or apparently within it , but the physical association is hard to be established . the position is also roughly at the center of a low - brightness radio relic . this feature is also characterized by a high iron abundance ( @xmath1 1 after decomposition ) . its temperature is comparable to or somewhat lower than that of the surrounding . the gas mass within this structure is about 2@xmath210@xmath3 m@xmath4 . we suggest that it is either a relic of a prior merger or an internal structure of the main cluster . 
\2 ) the subcluster was found to have a higher abundance ( @xmath1 0.6 ) , but a lower temperature ( 4.5 - 5 kev ) than the main cluster ( 0.2 - 0.3 and @xmath1 7 kev respectively ) . its characteristics resemble some of the poor clusters . the morphology of the subcluster is distorted , which could be due to the infall . the central part of the subcluster , as well as the main cluster , show some internal structures . \3 ) the brightness profile across the southern edge of the subcluster indicates a density discontinuity and the temperature jump from about 4.5 kev inside the dense gas to about 8.5 kev outside . this probably indicates a contact the subcluster and the main cluster gases and their relative motion , somewhat similar to a2142 and a3667 ( markevitch et al . 2000 ; vikhlinin et al . 2001a ) , although the geometry is more complex . due to the projection , the real moving direction of the whole edge is not easy to be determined and the jumps of density and pressure are hard to be constrained . \4 ) the temperature map shows moderate temperature variation across the cluster , but not as strong as expected in a major merger . the temperature map implies that the subcluster entered the main cluster from somewhere west and the merger is still at the early stage . the main cluster appears not yet largely disturbed by the merger with the subcluster . the two hot spots reported by briel & henry ( 1994 ) are not confirmed . \5 ) no x - ray enhancement is detected around the two brightest galaxies ( a and b in fig . 2 ; the 3 @xmath26 upper limits are about 2@xmath210@xmath6 ergs s@xmath7 at 0.5 - 10 kev ) around the center of the main cluster . over 30 point sources were detected in the observations , but only 1/3 of them have optical counterparts ( usually faint ) in the dss ii image . no significant excess of point sources was found in the field . three member galaxies near the center , including a radio head - tail source and a dumbbell - like big elliptical , were found to have corresponding x - ray point - like emission . their x - ray luminosity were estimated to be several times 10@xmath5 - 10@xmath6 ergs s@xmath7 . the results presented here are made possible by the successful effort of the entire _ chandra _ team to build , launch , and operate the observatory . we are grateful to the referee for the valuable comments to improve the manuscript . we acknowledge helpful discussions with w. forman , c. jones , d. m. neumann , t. clarke and f. durret . this study was supported by nasa contract nas8 - 38248 . varies from 20@xmath53 at the peak to 30@xmath53 near the edges of the image . the contour levels are linear from 0.03 to 0.625 c / ks / pixel ( pixel size : @xmath54 ) . the outermost contour is affected by the edges of the ccd chips . p@xmath20 , p@xmath21 and p@xmath25 correspond to the peak region of the main cluster , the peak region of the subcluster and the `` shoulder '' respectively . the small red circles represent the bright point sources ( @xmath11 20 counts ) detected by acis - i / s , including the three corresponding to member galaxies . # 1 and # 2 are the two brightest point sources ( see @xmath133.1 ) . the green contours show the position of the strongest radio relic in this source ( from nvss 20 cm survey ) . there are more diffuse radio relics around p@xmath21 and p@xmath25 ( bridle & fomalont 1976 ; r94 ) . [ fig1 ] ] @xmath26 gaussian . we used a - h to label the big galaxies around the center . 
their redshifts are 0.0594 , 0.0564 , 0.0586 , 0.0643 , 0.0587 , 0.0586 , 0.0553 and 0.0508 from a to h respectively ( from nasa extragalactic database ned ) . the arrows for e , f and g ( all radio head - tail sources ) show the rough directions of the head - tails . notice the `` shoulder '' east of a , the southern `` edge '' of p@xmath21 ( see fig . 1 ) and the sw protrusion from p@xmath21 . the dash boxes represent the two substructures around the center and analysis was made to these regions in section @xmath133.6 . the right one : the reconstructed image ( linear - spaced contours ) by wavelet analysis in the same region as the left . this image also reveals the complicated structure at the center with very similar pattern as the left . the surface density discontinuity is enhanced in this wavelet reconstructed image . [ fig2 ] ] square acis - s3 fov 6.7 kev . right panel : the adaptively smoothed image of the final temperature map . the average smoothing scale ( @xmath26 ) is 0.5@xmath0 . this map is the average of the results of three overlapping sets of region described in the text , not simply the smoothing from the results shown on the left . the errors on this temperature map can be estimated from the lower left panel . two sectors shown in white are the `` hot spots '' reported by briel & henry ( 1994 ) . [ fig3 ] ] ; b - p@xmath21 ; c - p@xmath25 ; d - a southern region . region c is same as what we used in fig . 2 for p@xmath25 . regions a and b cover basically same regions as what we used in fig . 2 for p@xmath20 and p@xmath21 but are not crossing the chips . in each small box , the upper spectrum is that of acis - i and the lower one is that of acis - s rescaled by 0.1 . both spectra and fitting results suggest that p@xmath21 and p@xmath25 have more iron than p@xmath20 and the southern region d. the square dash region is the fov of acis - s3 chip in observation 965 . the contours are from acis - i observation 1386 and the gray image is dss ii . [ fig4 ] ] -model . the residual was smoothed by a variable - width gaussian whose @xmath26 varies from 16@xmath53 at the peak to 30@xmath53 near the edges of the image . the contour levels are linearly from 0.03 to 0.98 of the maximum with an interval 0.05 of the maximum . the `` shoulder '' is very prominent pointed by the arrow . the edge at the south of the subcluster peak is very clear . the four regions in line are those that we made temperature measurement in @xmath133.6 . the cross is the derived geometrical center of the main cluster . [ fig6 ] ] @xmath55 uncorrected + @xmath56 after the decomposition ( 7 kev assumption , see text ) + @xmath57 after the decomposition ( 8 kev assumption , see text ) + @xmath58 after the decomposition ( 9 kev assumption , see text ) +
we present chandra observations of the rich cluster of galaxies a2256 . in addition to the known cool subcluster , a new structure ( which we call the `` shoulder '' in this paper , based on its morphology ) was resolved 2@xmath0 east of the peak of the main cluster . it appears as a localized feature embedded in the main cluster . its position is roughly at the center of a low - brightness radio relic . spectral analysis shows that the `` shoulder '' has a high iron abundance of @xmath1 1 ( after the decomposition ) . the gas mass within it is around 2@xmath210@xmath3 m@xmath4 . we suggest that this structure is either another merging component or an internal structure of the main cluster . the previously known subcluster has a low temperature ( @xmath1 4.5 kev ) and a high iron abundance ( @xmath1 0.6 ) in the central 150 kpc . the main cluster has a temperature of 7 - 8 kev and an iron abundance of 0.2 - 0.3 around the center . the image shows a relatively sharp brightness gradient south of the subcluster peak , running south - south - east ( sse ) . a temperature jump was found across the edge , with the lower temperature inside the subcluster . this phenomenon is qualitatively similar to the `` cold fronts '' found in a2142 and a3667 . while a simple interpretation is not possible due to projection , the edge indicates relative motion and contact of the two gas clouds . the temperature map shows only moderate temperature variations across the cluster , not as strong as those expected in a major merger . if the `` shoulder '' is ignored , the temperature map resembles those of simulations at the early stage of merging in which the subcluster approaches the main cluster from somewhere in the west . the observed temperature map and the edge - like feature near the south of the subcluster imply that the ongoing merger is still at an early stage . the x - ray redshifts of several regions were measured . the results are consistent with a single value and all agree with the optical value . at least three member galaxies , including a radio head - tail galaxy , were found to have corresponding x - ray emission , with x - ray luminosities from several times 10@xmath5 to 10@xmath6 ergs s@xmath7 . it is found that the observed characteristics ( temperature and iron abundance gradient ) of the subcluster are similar to those of some poor clusters . the absence of galaxies around the peak of the subcluster is proposed to be the result of different infall velocities of the galaxies and the core gas .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Protection of Children From Computer Pornography Act of 1995''. SEC. 2. TRANSMISSION BY COMPUTER OF INDECENT MATERIAL TO MINORS. (a) Offenses.--Section 1464 of title 18, United States Code, is amended-- (1) in the heading by striking ``Broadcasting obscene language'' and inserting ``Utterance of indecent or profane language by radio communication; transmission to minor of indecent material from remote computer facility, electronic communications service, or electronic bulletin board service''; (2) by striking ``Whoever'' and inserting ``(a) Utterance of Indecent or Profane Language by Radio Communication.--A person who''; and (3) by adding at the end the following: ``(b) Transmission to Minor of Indecent Material From Remote Computer Facility, Electronic Communications Service, or Electronic Bulletin Board Service Provider.-- ``(1) Definitions.--As used in this subsection-- ``(A) the term `remote computer facility' means a facility that-- ``(i) provides to the public computer storage or processing services by means of an electronic communications system; and ``(ii) permits a computer user to transfer electronic or digital material from the facility to another computer; ``(B) the term `electronic communications service' means any wire, radio, electromagnetic, photo optical, or photoelectronic system for the transmission of electronic communications, and any computer facility or related electronic equipment for the electronic storage of such communications, that permits a computer user to transfer electronic or digital material from the service to another computer; and ``(C) the term `electronic bulletin board service' means a computer system, regardless of whether operated for commercial purposes, that exists primarily to provide remote or on-site users with digital images, or that exists primarily to permit remote or on-site users to participate in or create on-line discussion groups or conferences. ``(2) Transmission by remote computer facility operator, electronic communications service provider, or electronic bulletin board service provider.--A remote computer facility operator, electronic communications service provider, electronic bulletin board service provider who, with knowledge of the character of the material, knowingly-- ``(A) transmits or offers or attempts to transmit from the remote computer facility, electronic communications service, or electronic bulletin board service provider a communication that contains indecent material to a person under 18 years of age; or ``(B) causes or allows to be transmitted from the remote computer facility, electronic communications service, or electronic bulletin board a communication that contains indecent material to a person under 18 years of age or offers or attempts to do so, shall be fined in accordance with this title, imprisoned not more than 5 years, or both. 
``(3) Permitting access to transmit indecent material to a minor.--Any remote computer facility operator, electronic communications service provider, or electronic bulletin board service provider who willfully permits a person to use a remote computing service, electronic communications service, or electronic bulletin board service that is under the control of that remote computer facility operator, electronic communications service provider, or electronic bulletin board service provider, to knowingly or recklessly transmit indecent material from another remote computing service, electronic communications service, or electronic bulletin board service, to a person under 18 years of age, shall be fined not more than $10,000, imprisoned not more than 2 years, or both. ``(4) Three-judge court for civil action.--Any civil action challenging the constitutionality of any provision of this subsection shall be heard and determined by a district court of three judges in accordance with section 2284 of title 28, United States Code.''. (b) Clerical Amendment.--The item relating to section 1464 in the table of sections at the beginning of chapter 71 of title 18, United States Code, is amended to read as follows: ``1464. Utterance of indecent or profane language by radio communication; transmission to minor of indecent material from remote computer facility.''. (c) Report by Attorney General.-- (1) In general.--Not later than 2 years after the date of the enactment of this Act, the Attorney General shall report to the Congress on the state of the technology that would permit parents to block or otherwise filter the transmission of indecent material to minors. (2) Recommendations.--The report shall include recommendations regarding whether the use of blocking or filtering technology by a remote computer facility operator, electronic communications service provider, or electronic bulletin board service provider should be treated as an affirmative defense to prosecution under section 1464(b) of title 18, United States Code, as added by section 2(a)(3).
Protection of Children From Computer Pornography Act of 1995 - Amends the Federal criminal code to prohibit the transmission to minors of indecent material from remote computer facilities, electronic communication services, or electronic bulletin boards. Prohibits: (1) the knowing transmittal or attempted transmittal of indecent material to a person under age 18 (and provides penalties of up to five years' imprisonment, a fine, or both); and (2) a remote computer facility operator, electronic communications service provider, or electronic bulletin board service provider from willfully allowing another individual to transmit indecent material to a person under age 18 (and provides penalties of up to two years' imprisonment, a $10,000 fine, or both). Requires the Attorney General to report to the Congress within two years regarding the state of technology that would permit parents to block or filter the transmission of indecent material to minors.
nonaxisymmetric features are a pervasive and complex aspect of disk galaxies . in normal , relatively non - interacting galaxies , these features are in the forms of bars or spirals . it is well - known that the presence of nonaxisymmetric structures in galaxy disks can impact the evolution of morphology . for example , bars may drive spiral density waves ( kormendy and norman 1979 ) , generate resonance rings of gas ( schwarz 1981 ; buta & combes 1996 ) , impact abundance gradients ( martin & roy 1994 ) , or induce gas inflow that may lead to bar destruction and bulge growth ( norman , sellwood , & hasan 1996 ) . a spiral may trigger shocks , inducing star formation ( roberts , roberts , & shu 1975 ) , or may rearrange stochastically - induced star - forming regions into a more organized pattern ( mccall 1986 ) . it is clear that nonaxisymmetric features , with their associated pattern speeds and resonances , are extremely important to galactic evolution , and understanding how these features develop is one of the principal problems in galaxy formation and dynamics . the source of much of the evolution caused by bars and spirals is gravity torques due to tangential forces . combes & sanders ( 1981 ; see also sanders & tubbs 1980 ) suggested that these forces could provide a useful measure of the strengths of nonaxisymmetric features such as bars , if the potential could be determined . the idea is to derive the maximum value of the ratio of the tangential force to the mean background ( or axisymmetric ) radial force , which would give a single dimensionless number indicating the relative importance of nonaxisymmetry in the potential of a galaxy . this ratio , which is physically the same as the maximum gravitational torque per unit mass per unit square of the circular speed , will be referred to in this paper as @xmath0 , while the method for deriving @xmath0 will be referred to as the gravitational torque method ( or gtm ) . the advent of routine near - infrared imaging of galaxies has made application of the gtm more practical than ever . near - infrared images trace the stellar mass distribution of galaxies , due to their emphasis on the older , dominant stellar populations . potentials can be derived from such images using fast fourier transform techniques in conjunction with assumptions concerning the mass - to - light ratio and the vertical density distribution ( e.g. , quillen , frogel , and gonzlez 1994 , hereafter qfg ) . from this potential , the radial and tangential components of the forces in the plane of the galaxy can be derived , and the combes & sanders ratio can be estimated . recent studies by buta & block ( 2001 ) , block et al . ( 2001 ) , laurikainen , salo , & rautiainen ( 2002 ) , laurikainen & salo ( 2002 ) , and block et al . ( 2002 ) have provided the first attempts to derive the maximum force ratios for significant samples of galaxies . however , in these cases , the samples were either ill - defined statistically , based entirely on relatively short exposure two micron all - sky survey ( 2mass , skrutskie et al . 1997 ) near - infrared images , or used deprojected images that did not allow for the typically rounder shapes of bulges or the most reliable estimates of vertical scaleheights . there are good reasons for trying to derive the maximum force ratio for a large , statistically well - defined sample of galaxies using a refined version of the gtm . 
firstly , sellwood ( 2000 ) has argued that we could evaluate scenarios of bar formation in disk galaxies if we knew the observed distribution of bar strengths . various bar formation scenarios , such as the natural bar instability " ( miller , prendergast , & quirk 1970 ; hohl 1971 ; sellwood & wilkinson 1993 and other references therein ) or tidal bar formation ( e.g. , noguchi 1996 ; miwa & noguchi 1998 ) , may predict different distributions of maximum relative bar torques , and an observed distribution may distinguish which mechanism is most important . secondly , recurrent bar formation due to accretion of external gas would impact the distribution of maximum force ratios ( bournaud & combes 2002 ) . the idea is that bars can be the engines of their own destruction in the presence of gas ( see , for example , das et al . 2003 ) , but may reform or regenerate later if a galaxy accretes significant quantities of external gas during a hubble time that may cool the disk sufficiently ( see also sellwood & moore 1999 ) . thus , accretion can impact the duty cycle " of bars . this idea was evaluated by block et al . ( 2002 ) using an application of the gtm to the ohio state university bright galaxy survey ( osubgs , eskridge et al . block et al . concluded that the distribution of maximum relative torques favored the idea that galaxies accrete enough gas to double their mass in 10@xmath4 years . in this paper , we re - examine the distribution of maximum relative torques in spiral galaxies based on application of a much refined version of the gtm to basically the same osubgs sample as used by block et al . , supplemented by a few larger galaxies with images from the 2mass database . our goal is to derive a reliable distribution of maximum relative bar and spiral torques in disk galaxies that can be compared with model predictions . the refinements we use account for the shapes of bulges , improved estimates of the galaxy orientation parameters , vertical scaleheights inferred from type - dependent scalings of the radial scalelength , and a statistical evaluation of the impact of dark matter . the @xmath0 values we use are from laurikainen et al . ( 2003 ) . only a few of the technical details connected with these values will be provided here , and we refer the reader to laurikainen et al . ( 2003 ) for a full accounting of our application of the gtm . our approach allows us to derive the most reliable maximum relative torques , and therefore the most accurate distribution of these torques . our sample consists of 158 galaxies from the osubgs having inclinations less than 65@xmath5 and 22 2mass galaxies having a similar inclination limit but which were too large to be in the osubgs . the selection criteria for the osubgs are that the rc3 t index is in the range 0@xmath69 ( s0/a to sm ) , the total magnitude @[email protected] , the isophotal diameter @xmath96@xmath105 , and the declination is in the range @xmath11 ( eskridge et al . 2002 ) . table 1 summarizes several of the mean properties of the sample , based on data from rc3 ( de vaucouleurs et al . 1991 ) . of the 180 galaxies , 177 have family classifications given in rc3 . table 1 shows that in the sample , there are virtually equal numbers of galaxies classified as sa , sab , or sb . table 1 divides the averages according to this classification parameter . the table shows that mean parameters in the sample are similar within these families . the mean hubble type is sb - sbc . 
average colors , apparent angular size , radial velocities , and distances are similar among the families . there is an indication that , on average , the sa galaxies in the sample are slightly more inclined than the sab and sb galaxies . also , sa galaxies are slightly more luminous and larger than sab and sb galaxies . an inclination effect on the morphological recognition of bars is not unexpected and merely highlights the difficulty of seeing bars which are weak and viewed at high inclination . however , with bulge / disk decomposition and deprojection , as well as near - ir imaging , we can detect some of these lost bars . figures [ histo1 ] and [ histo2 ] show the more detailed distributions of sa , sab , and sb galaxies in the sample versus rc3 type , absolute blue magnitude @xmath12 , the logarithm of the isophotal axis ratio @xmath13 , and corrected color index @xmath14 . absolute magnitudes use @xmath15 from rc3 and distances either from or on the scale of tully ( 1988 ) . although the mean @xmath16 index is nearly the same for the separate families , sb galaxies are asymmetrically distributed towards early types while sa galaxies are asymmetrically distributed towards later types . the distributions by absolute magnitude show the higher luminosities of the sa galaxies compared to sab and sb galaxies . the distribution with @xmath13 definitely emphasizes lower inclinations for sb galaxies , while it is more uniform for sa galaxies to the cutoff . integrated colors are similarly distributed over the three families . for comparison , figures [ histo3 ] and [ histo4 ] show the same histograms for a distance - limited sample of 1264 spirals from the catalog of tully ( 1988 ) . table 1 lists the mean parameters for the same sample . our magnitude- and diameter - limited osu/2mass sample emphasizes earlier hubble types and brighter absolute magnitudes than the tully catalog , the differences being most extreme for sb galaxies . the distributions of color and axis ratio , except for our inclination cutoff , are similar to those for our sample galaxies . thus , our sample is mainly biased against late - type , low - luminosity barred spirals . there is less bias in the sa and sab subsamples because these tend to have fewer late - type , low luminosity examples . a critical issue is that it appears that our sample is not necessarily biased much against _ nonbarred _ spirals . the basic assumptions in the gtm are : ( 1 ) the near - infrared light distribution traces the mass , i.e. , the mass - to - light ratio is constant ; ( 2 ) the vertical density distribution can be simply represented as , for example , exponential with vertical scaleheight @xmath1 ; and ( 3 ) galaxies can be deprojected as thin disks , after allowing for the shape of the bulge . as noted by buta & block ( 2001 ) , the first assumption is probably valid for many galaxies in the bar region , where maximum disks tend to be found ( e.g. , freeman 1992 ) . however , this is still an open question as noted by kranz , slyz , & rix ( 2003 ) , who used the amplitudes of modeled noncircular motions in five spirals to deduce that maximum disks may be valid only if the maximum rotation velocity exceeds 200 km s@xmath17 . in our sample , this would be the case only for galaxies having @xmath18 @xmath19 @xmath2020.8 ( tully et al . 1998 ) . we address this issue further in section 8 using the universal rotation curve " approach of persic , salucci , & stel ( 1996 ) . 
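a small helper for the luminosity threshold implied above : with distances on the tully ( 1988 ) scale , the v_max > 200 km / s maximum - disk condition maps onto galaxies brighter than roughly m_b = -20.8 , and m_b follows from the corrected total magnitude and the distance in the usual way . this is only a sketch of the bookkeeping ; the argument names are ours .

```python
import numpy as np

def absolute_b_magnitude(bt_corrected, distance_mpc):
    """M_B from a corrected apparent total magnitude and a distance in Mpc."""
    return bt_corrected - 5.0 * np.log10(distance_mpc) - 25.0

def maximum_disk_candidate(bt_corrected, distance_mpc, mb_limit=-20.8):
    """True where the galaxy is bright enough (M_B <~ -20.8) for the
    v_max > 200 km/s maximum-disk criterion discussed in the text."""
    return absolute_b_magnitude(bt_corrected, distance_mpc) <= mb_limit
```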
laurikainen & salo ( 2002 ) showed that the gtm is fairly insensitive to the form of the assumed vertical density distribution . the first refinement we use over buta & block ( 2001 ) is a polar coordinate grid as opposed to a cartesian grid ( laurikainen & salo 2002 ) . buta & block used the qfg method of transforming near - ir images into gravitational potentials , which operates on a two - dimensional image . this approach provides an image of the potential , which can be used to derive a two - dimensional map of the ratio of the tangential to the mean radial force . in such a map , if a strong bar is present , four well - defined maxima or minima are seen in the form of a butterfly pattern . " buta & block defined the bar strength @xmath21 to be the average of the absolute values of the four maxima / minima . laurikainen , salo , & rautiainen ( 2002 ) and laurikainen & salo ( 2002 ) used a polar grid approach as an alternative to qfg to allow the application of the gtm to noisy and rather low resolution 2mass images . fourier components of the light distribution are computed as a function of radius @xmath22 and azimuthal angle @xmath23 , and these fourier light components are individually transformed into potential components . the potential is then reconstructed analytically , and the maximum force ratio , @xmath24 , as a function of radius is computed . in previous gtm studies such as those of buta & block ( 2001 ) and block et al . ( 2001 , 2002 ) , orientation parameters from rc3 were used to deproject most of the galaxy images . however , these orientation parameters are in many cases based on photographic images and can be manifestly improved with modern digital images . we have used the @xmath25-band images from the osubgs to fit ellipses to outer isophotes and derive mean axis ratios and position angles for the outer disks . in the future , these can also be improved upon using two - dimensional velocity fields . the results of the ellipse fits , as well as uncertainties , will be provided by laurikainen et al . ( 2003 ) . although the bulges of some barred galaxies might be as flat as the disk ( kormendy 1993 ) , in many galaxies the bulge is a rounder component than the disk . if this rounder shape is ignored when deprojecting a galaxy , the bulge isophotes will be stretched into a bar - like distortion ( called deprojection stretch " by buta & block 2001 ) , leading to false torques . to deal with this problem we have used two - dimensional photometric decomposition , based on srsic models ( srsic 1968 ) and allowing for seeing effects . the bulge and disk are described as in mollenhff & heidt ( 2001 ) , and in addition a bar component is added to the fit ( ferrer s bar with index n=2 ) , which in some cases is essential for avoiding artificially large bulge models . the technique we used , as well as the derived parameters , will be outlined in more detail by salo , laurikainen , & buta ( 2003 ) . the decompositions allowed us to remove the bulges , deproject the disks , and then add back the bulges as spherical components . thus , our analysis is not affected seriously by bulge deprojection stretch . " the computation of a potential from a near - infrared image requires a value for the vertical scaleheight , which can be directly measured only for edge - on galaxies . buta & block ( 2001 ) and block et al . ( 2001 ) simply assumed that all galaxies had the same vertical exponential scaleheight as our galaxy , @xmath1 = 325pc ( gilmore & reid 1983 ) . 
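setting the scaleheight question aside for a moment , the force - ratio bookkeeping behind the polar - grid method described above reduces to a simple operation once the tangential and radial force components are available on an ( r , phi ) grid : at each radius take the maximum of the absolute tangential force over azimuth , divide by the azimuthally averaged radial force , and take the peak of the resulting profile . a minimal numpy sketch follows , assuming precomputed force maps ; the potential evaluation itself is not shown , and a quadrant - by - quadrant variant would track each quadrant separately before combining .

```python
import numpy as np

def qt_profile(f_tan, f_rad):
    """Relative torque profile Q_T(r) from force maps on a polar grid.

    f_tan, f_rad : arrays of shape (n_r, n_phi) with the tangential and
                   radial force components derived from the potential.
    Returns Q_T(r) = max_phi |F_tan(r, phi)| / <|F_rad(r, phi)|>_phi.
    """
    return np.max(np.abs(f_tan), axis=1) / np.mean(np.abs(f_rad), axis=1)

def q_g(f_tan, f_rad, r=None, r_range=None):
    """Maximum relative gravitational torque, max_r Q_T(r), optionally
    restricted to a radial range (e.g. the bar region)."""
    qt = qt_profile(f_tan, f_rad)
    if r is not None and r_range is not None:
        qt = qt[(r >= r_range[0]) & (r <= r_range[1])]
    return np.max(qt)
```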
however , this approach required knowledge of the distance to each galaxy , which had to be based on radial velocities . here we follow laurikainen , salo , & rautiainen ( 2002 ) and derive @xmath1 ( = 0.5@xmath26 , where @xmath26 equals the isothermal scaleheight ) by scaling values from the radial exponential scalelength , @xmath2 . as shown by de grijs ( 1998 ) , the ratio @xmath2/@xmath1 depends on hubble type , being larger for later types compared to earlier types . values of @xmath2 were provided by our decompositions , and we used the following scalings by type : @xmath3 = 4 for s0/a - sa galaxies , 5 for sab - sbc galaxies , and 9 for sc galaxies and later . we define the maximum relative gravitational torque , @xmath0 , to be the maximum value of the ratio of the tangential force to the mean radial force derived from a plot of @xmath27 versus @xmath22 , based on a quadrant analysis . in some cases , @xmath0 is mostly measuring the maximum torque due to a bar , while in other cases @xmath0 is clearly measuring only spiral torques . in many cases , @xmath0 will be measuring a combination of bar and spiral torques , as shown by buta , block , & knapen ( 2003 ) , who developed a fourier - based bar / spiral separation technique . thus , our analysis can not provide a true distribution of maximum relative bar torques @xmath21 . for the evaluation of accretion models of spirals , block et al . ( 2002 ) noted that this is not a problem because the models often also have spiral torques that contribute to @xmath0 estimates . our main result is shown in figure [ freqs ] , and is compiled as counts @xmath28 and relative frequencies @xmath29 ( = @xmath28/180 ) in table 2 . the distribution of maximum relative gravitational torques is shown for the full sample of 180 galaxies in comparison to the subsamples of sa , sab , and sb galaxies in figure [ distrib ] . the latter plots show again that there is indeed a correlation between maximum torque and de vaucouleurs family classification , but the spread in @xmath0 is very wide for sab and sb galaxies . sa galaxies appear to genuinely select the narrowest range of @xmath0 , while sab and sb galaxies include objects having @xmath0 between 0.05 and 0.7 . thus , except for sa galaxies , the de vaucouleurs family classifications do not tell us much about real gravitational bar torques except in an average sense . in table 1 , the mean values of @xmath0 by family are listed . the mean increases linearly from sa to sb , with maximum relative gravitational torques being 11% for a typical sa galaxy , 22% for a typical sab galaxy , and 33% for a typical sb galaxy . figure [ freqs ] shows an asymmetric distribution of maximum relative gravitational torques , with a `` tail '' extending to @[email protected] . from the histograms in figure [ distrib ] , it is clear that the primary peak in this plot is due mainly to sa and sab galaxies , while the extended tail is due to sab and sb galaxies . the average value of @xmath0 for the full sample is 0.222 with a standard deviation of 0.147 . as we have noted , a similar study of the distribution of maximum relative gravitational torques in the osubgs sample was made by block et al . they selected 163 galaxies from the original sample of 198 having inclinations of 70@xmath5 or less and not members of obviously interacting systems . vertical exponential scaleheights were derived from roughly estimated radial scalelengths ( see below ) as @xmath31 . 
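for contrast with that single fixed ratio , the type - dependent scaling adopted in this paper can be written as a small lookup : h_r / h_z = 4 for s0/a - sa , 5 for sab - sbc , and 9 for sc and later , with h_r taken from the decompositions . a minimal sketch ; the numerical rc3 t - index boundaries are our reading of those type ranges .

```python
def hz_from_hr(h_r, rc3_type):
    """Vertical scaleheight from the radial scalelength using the
    type-dependent ratios adopted in the text (de Grijs 1998 scalings):
    h_r/h_z = 4 for S0/a-Sa (T <= 1), 5 for Sab-Sbc (2 <= T <= 4),
    9 for Sc and later (T >= 5).  rc3_type is the numerical RC3 T index."""
    if rc3_type <= 1:
        ratio = 4.0
    elif rc3_type <= 4:
        ratio = 5.0
    else:
        ratio = 9.0
    return h_r / ratio
```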
most importantly , in the block et al . analysis no bulge / disk decompositions were made to allow for the likely rounder shapes of bulges , and approximate orientation parameters from rc3 were used for the deprojections . like us , however , block et al . derive @xmath0 from graphs of @xmath27 versus @xmath22 ( they use the notation @xmath21 for their parameter , but it is not derived in the same manner as the @xmath21 defined by buta & block ( 2001 ) ; instead , it is the same as our definition of @xmath0 ) . thus , a comparison between our histogram of maximum relative torques and theirs is appropriate . figure [ compare ] compares the block et al . distribution of maximum gravitational torques with our distribution . the block et al . histogram is not exactly the same as the one published , but is based on a table kindly sent to us by f. combes . it includes 159 galaxies where the measured @xmath32 . in spite of the similar numbers of objects , the block et al . sample is missing 13 galaxies that are in our sample , and includes 18 galaxies missing from our sample . the differences are in part due to our different inclination cutoffs ( 65@xmath5 in our analysis versus 70@xmath5 used by block et al . ) as well as the different axis ratios used to estimate inclinations ( isophotal fits for our sample versus rc3 @xmath33 for block et al . ) . to make the comparison fair , we use only the 145 galaxies in common between our samples . although both histograms are similar in showing an asymmetric distribution , our distribution shows more galaxies having low maximum relative torques ( @xmath0 @xmath8 0.15 ) . the first two bins in the block et al . histogram are extremely deficient in galaxies , a point used by them to argue that galaxies double their mass by accretion in 10@xmath4 years . the reasons for the differences can be tied directly to a number of causes , highlighted by the histograms in figure [ errors ] . figure [ errors]a shows that without the correction for bulge shape , deprojection stretch can depopulate the first two bins . however , the effect seems less important than might have been expected , given that our inclination cutoffs were high in both cases . a more serious effect could be the assumed scaleheights , as shown in figure [ errors]b . in this plot , we allow for the scatter in @xmath3 from de grijs ( 1998 ) and compute @xmath0 for the minimum values of @xmath3 = 1 , 3 , and 5 ( the `` max @xmath1 '' case ) and the maximum values of 5 , 7 , and 12 ( the `` min @xmath1 '' case ) for types s0/a to sa , sab - sbc , and sc and later , respectively . the `` max @xmath1 '' case clearly shows more low @xmath0 values than the `` min @xmath1 '' case . since block et al . ( 2002 ) used @xmath3 = 12 for all galaxies irrespective of hubble type , their analysis favored lower vertical scaleheights and larger values of @xmath0 on average . our use of bulge / disk decompositions and a type dependence of @xmath2/@xmath1 means that , on average , our vertical scaleheights are higher than those used by block et al . ( 2002 ) , and hence our gravitational torques will be weaker . for a fairer comparison , we have recomputed @xmath0 for our deprojected images assuming @xmath31 . as expected , this depletes the first two bins but does not account for all the differences seen . the use of improved orientation parameters could also contribute a little to the differences .
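the inclinations used in the error analysis that follows are computed from the apparent axis ratio assuming oblate spheroids with an intrinsic flattening q0 = 0.2 , i.e. cos^2 i = ( ( b / a )^2 - q0^2 ) / ( 1 - q0^2 ) , as described below in connection with figure [ incanal ] . a minimal sketch of that conversion :

```python
import numpy as np

def inclination_deg(axis_ratio, q0=0.2):
    """Inclination from the apparent isophotal axis ratio b/a for an
    oblate spheroid of intrinsic flattening q0:
    cos^2 i = ((b/a)^2 - q0^2) / (1 - q0^2).
    Axis ratios below q0 are treated as edge-on (i = 90 deg)."""
    q = np.asarray(axis_ratio, dtype=float)
    cos2i = np.clip((q**2 - q0**2) / (1.0 - q0**2), 0.0, 1.0)
    return np.degrees(np.arccos(np.sqrt(cos2i)))
```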
figures [ errors]c and d show that uncertainties of @xmath34 5@xmath5 in inclination and @xmath34 4@xmath5 in major axis position angle do not impact the observed distribution of gravitational torques too seriously . the number of fourier terms to @xmath35=20 ( figure [ errors]f ) also has little impact . figure [ errors]e shows the histograms for those galaxies where @xmath0 is clearly measuring mostly a bar and those where @xmath0 is clearly measuring a spiral . the distinction was made by examining the phase of the @xmath35=2 component in the region of the maximum . if this phase was relatively constant , then the @xmath27 plot was concluded to be bar - dominated at the radius of the @xmath0 maximum . otherwise , it was concluded to be spiral - dominated . both distributions show a wide spread , although spirals are weaker on average than bars . table 3 summarizes the uncertainties in individual estimates of @xmath0 due to inclination , position angle , and vertical scaleheight . the table compiles the average deviation for @xmath36 5@xmath5 , @xmath37 4@xmath5 , and the minimum and maximum values of @xmath1 , for three bins of inclination . in table 4 and figure [ incanal ] , we look for any systematic effects due to inclination . figure [ incanal ] shows plots of @xmath0 versus inclination @xmath38 , where @xmath38 is computed using either our mean ellipse - fit axis ratios for the osubgs sample , or log@xmath39 for the 2mass sample . we compute @xmath38 assuming oblate spheroids and an intrinsic axis ratio @xmath40=0.2 . the figure shows no strong systematic effect with inclination . this is verified in table 4 , where we compile the mean @xmath0 values for each sample in figure [ incanal ] divided around the median : 45@xmath56 for the sa sample , 40@xmath57 for the sab sample , 42@xmath56 for the sb sample , and 42@xmath57 for the full sample . except for the sab sample , the high and low inclination samples have the same means within the mean errors . another issue related to uncertainties is the impact of the position angle of the bar relative to the line of nodes . buta & block ( 2001 ) showed that in a case like ngc 1300 , where the bar is oriented nearly along the line of nodes , the maximum torque is very sensitive to the assumed inclination . the same would be true if the bar is viewed end - on . we have investigated how important this might be in our current sample . figure [ barang ] shows a plot of @xmath0 versus relative bar position angle @xmath41 . in this plot , @xmath42 is determined from the phase of the @xmath35=2 component of the potential at the radial location where @xmath27 attains a maximum ; the direction in the disk plane is then projected to the sky plane . analysis of figure [ barang ] indicates that there is indeed a bias in the sense that the average bar strength is weaker for those systems where the bar becomes `` thicker '' in deprojection . the averages are @xmath43 and @xmath44 . the solid line in the plot shows the running mean of @xmath0 in 15@xmath5 wide bins . the difference is statistically significant , with the probability of having the same true mean values being only 0.0035 . the referee has questioned whether our use of a polar grid approach might cause lower values of @xmath0 to be measured . the idea is that smoothing with a polar grid might reduce the strength of the perturbation , increasing the number of low @xmath0 values .
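for reference , the deprojection geometry and the bar / spiral discrimination described above can be sketched as follows ( python ; the oblate - spheroid relation is the standard one with @xmath40=0.2 , while the phase window and tolerance are arbitrary choices , since the text does not quote numerical thresholds ) .

import numpy as np

def inclination_deg(ba, q0=0.2):
    # standard oblate-spheroid relation: cos^2 i = ((b/a)^2 - q0^2) / (1 - q0^2)
    cos2i = np.clip((ba**2 - q0**2) / (1.0 - q0**2), 0.0, 1.0)
    return np.degrees(np.arccos(np.sqrt(cos2i)))

def bar_or_spiral(phase_m2_deg, r, r_peak, window=0.25, tol_deg=10.0):
    # if the phase of the m=2 fourier component is nearly constant around the radius of the
    # q_t maximum, call the maximum bar-dominated; otherwise spiral-dominated
    sel = (r > (1.0 - window) * r_peak) & (r < (1.0 + window) * r_peak)
    # unwrap twice the phase to handle the 180-degree degeneracy of the m=2 pattern
    spread = np.degrees(np.ptp(np.unwrap(2.0 * np.radians(phase_m2_deg[sel])) / 2.0))
    return "bar" if spread < tol_deg else "spiral"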
we have checked this by recomputing our @xmath0 values using a cartesian approach with a 128@xmath45128 grid resolution ( covering the whole galaxies usually , but not necessarily the whole image ) . the radial profiles @xmath46 were constructed separately for four image quadrants , and the mean of these profiles was computed . the cartesian @xmath0 was then taken from the peak of the cartesian @xmath46 profile , limited to the radial range around the force maximum found by the polar method . this was done to insure that the cartesian @xmath0 corresponds to the bar region , and does not refer to some spurious force maximum in the outer parts of the images . figure [ whyte ] ( upper panels ) shows the results of the comparison . we find very good agreement between our @xmath0 estimates from the cartesian and polar grid approaches . however , comparison of the same numbers with the block et al . ( 2002 ) values is poorer , as shown by the upper middle and upper right panels of figure [ whyte ] . the upper left panel of figure [ whyte ] does show that some cartesian @xmath0 values are noticeably larger than the polar grid values . however , as discussed in laurikainen & salo ( 2002 ) , the cartesian method can lead to large spurious force values in the noisy outer parts of images , sometimes leading to an overestimate of @xmath0 if the results are automatically collected , without careful inspection of the force profiles . this might account for several very large values of @xmath21 estimated by block et al . ( 2002 ) , seen in the upper panels of figure [ whyte ] . mainly for this reason , we chose the polar grid force evaluation as our standard procedure . the cartesian method is useful as a check of the polar method results . as a further check on how our methods affect the histogram of maximum relative torques , we have analyzed more closely three highly - inclined galaxies in our sample , ngc 3166 , 3338 , and 3675 , trying to duplicate the methods used by block et al . : ( 1 ) use the rc3 position angle and inclination to deproject the galaxies ; ( 2 ) no correction for the shape of the bulge ; ( 3 ) radial scalelength derived from @xmath47 in rc3 assuming all the galaxies follow the freeman ( 1970 ) law , with @xmath1=@xmath2/12 ; and ( 4 ) using a cartesian transformation for the potential . the results are @xmath0 = 0.26 , 0.16 , and 0.15 , respectively , compared with the values of 0.27 , 0.14 , and 0.15 actually derived by block et al . thus , mimicking the block et al . treatment with our codes yields values that fully agree with those obtained by block et al . in contrast , our refined approach gives values of @xmath0= 0.11 , 0.08 , and 0.08 for the same galaxies . the reason for the low @xmath0 values we get compared to theirs is due to our refinements , and not a serious difference in our codes . the idea that galaxies might accrete significant quantities of external gas during a hubble time is certainly intriguing . our revised histogram ( with its extended tail of large @xmath0 values ) still supports this idea , but may favor an accretion rate between the two cases discussed by block et al . ( 2002 ) : the no accretion idea and a rate which doubles the mass in 10@xmath4 years . as shown in this work , the bulge correction , improvements in the orientation parameters , and the larger vertical scaleheights we use considerably increase the number of galaxies with low maximum relative torques . in spite of the differences with block et al . 
, we still find a deficiency of galaxies in the lowest torque bin , @xmath0 @xmath8 0.05 . truly axisymmetric galaxies appear to be rare in the osubgs and 2mass samples , although we note that because @xmath0 can not be negative , noise could also deplete the first bin to some extent . whyte et al . ( 2002 ) have used the osubgs to compute bar strength using an isophotal analysis . they derived a bar strength parameter , @xmath48 , based on the minimum @xmath49-band isophotal axis ratio , @xmath50 , in the bar region estimated from a moment analysis involving a series of cuts through an image in surface brightness ( abraham & merrifield 2000 ) . the parameter @xmath48 is convenient because it scales the bar strength to the range 0.0 to 1.0 , and also because it stretches the range corresponding to the important small @xmath50 values . block et al . ( 2002 ) used the whyte et al . results to support their findings of few nonbarred galaxies in the osu database , and thus their conclusions concerning the accretion rate in galaxies . the lower panels of figure [ whyte ] show comparisons between our @xmath0 values ( both polar and cartesian ) and @xmath48 and @xmath21(block et al . ) and @xmath48 . the most striking difference is how well @xmath48 correlates with our values of @xmath0 , showing that the shape of the bar does correspond well to the strength of the gravity field . this was also shown by laurikainen , salo , & rautiainen ( 2002 ) for their 2mass sample . in contrast , the comparison between @xmath48 and @xmath21(block et al . ) shows a noticeably larger scatter . in spite of the good agreement between @xmath48 and our @xmath0 values , @xmath48 is by no means a suitable replacement for @xmath0 . @xmath48 is probably determined by the self - consistent response of the bar to the gravitational field that maintains it , and thus it measures the force in an indirect fashion . @xmath0 , on the other hand , estimates this field directly from the luminosity distribution . ideally , the way to assess the impact of dark matter on a torque indicator such as @xmath0 would be to compare an observed rotation curve with a rotation curve predicted from an azimuthally - averaged light profile , preferentially a near - infrared profile corrected for color effects due to a radial stellar population change ( e.g. , bell & de jong 2001 ) . then the signature of the dark component would be how much the observed and predicted rotation curves disagree , especially in the outer parts of the galaxies . however , it is impractical for us to carry out such a comparison for our full sample in a homogeneous way . thus , we have used a more statistical approach . our estimates for halo corrections are based on the extensive analysis of rotation curves and light profiles by persic , salucci , & stel ( 1996 , hereafter pss ) . in this paper the dark halo rotation curves are described by the isothermal sphere law , with a smooth transition to constant core density @xmath51 where @xmath52 is the radius normalized to the optical radius , a fiducial reference radius enclosing 83% of the total blue luminosity./2 , which is specifically valid only for a freeman disk . the error committed for those galaxies that may not be freeman disks is not serious given the approximate nature of these estimates . ] the parameter @xmath53 is the halo core radius , also in units of @xmath54 . 
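before the specific pss scalings quoted below are applied , the mechanics of the halo correction can be sketched directly from the description above : the halo contributes an extra axisymmetric radial force , and the relative torque profile is recomputed with that force added to the visible - matter term . in the sketch the halo is specified by its circular speed at the optical radius and a core radius in units of @xmath54 , which are left as free inputs rather than tied to luminosity as in pss .

import numpy as np

def halo_corrected_qt(f_t_max, f_r_vis, r, v_halo_ropt, a_core, r_opt):
    # cored isothermal-sphere halo: v_h^2(x) proportional to x^2 / (x^2 + a^2), with x = r / r_opt,
    # normalised so that v_h(r_opt) = v_halo_ropt
    x = r / r_opt
    v_h2 = v_halo_ropt**2 * x**2 * (1.0 + a_core**2) / (x**2 + a_core**2)
    f_halo = v_h2 / r                      # radial force implied by the halo circular speed
    # halo-corrected q_t(r): tangential force over (visible + halo) radial force
    return np.abs(f_t_max) / (np.abs(f_r_vis) + f_halo)

# the halo-corrected maximum relative torque is then the peak of this profile,
# e.g. q_hc = halo_corrected_qt(f_t_max, f_r_vis, r, 150.0, 1.5, r_opt).max()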
pss ( see especially their erratum ) give , based on their sample of 1100 optical and radio rotation curves , @xmath55 and @xmath56 where @xmath57 in the @xmath25-band . near the optical radius we may estimate @xmath58 where @xmath59 includes the rotation velocity due to the disk plus bulge . eqs . ( 1)-(3 ) now define @xmath60 at all radii , as a function of @xmath61 , and the value of @xmath62 at some value near @xmath63 . once @xmath64 is known , the @xmath46 profiles computed under a constant @xmath65 assumption are modified to @xmath66 where @xmath67 and @xmath68 are the radial forces due to visible and dark masses , respectively , and the superscript `` hc '' means `` halo - corrected '' . if the measurements extend to @xmath63 , then @xmath69 has been used , while in the case @xmath70 , then @xmath71 ) was used for fitting @xmath60 . values of @xmath54 were taken from rc3 , and the @xmath25-band luminosities @xmath72 were calculated from @xmath25-magnitudes and galactic extinctions given in ned and distances from tully ( 1988 ) . figure [ allhalo]a shows the distribution of @xmath61 for our sample of 180 galaxies . the distribution peaks near @xmath61 @xmath301 . figure [ allhalo]b shows the distribution of @xmath73 as a function of @xmath61 , indicating how the correction gets more important for less luminous galaxies with more dominant halo components . the deviating point at @xmath74 is ngc 7213 , for which @xmath0 is practically zero and obtained near @xmath54 ( @xmath0 changes from 0.023 to 0.017 ) . finally , figure [ allhalo]c shows the distribution of @xmath0 with and without halo correction . the average value of @xmath0 with the correction is 0.209 compared with 0.222 without the correction , indicating only a marginal ( 6% ) reduction . altogether , the effect of dark halos appears to be weak for the sample , which as we have shown is dominated by fairly luminous systems for which pss models imply halos with rather large core radii and relatively small mass within @xmath54 . therefore , the contribution to @xmath46 is small in the inner parts of the galaxy where maximum @xmath0 s are typically obtained , at least for bars . for spiral forces alone the effect would be more prominent . a potential problem with the fits described above for low luminosity galaxies is that in many cases the measurements probably do not reach far enough , in terms of disk scalengths , to yield reliable outer rotation curves ( truncation of the disk overestimates the disk radial force and thus the rotation velocities ) . for @xmath0 measurements this is not a problem , as noted by laurikainen & salo ( 2002 ) . however , the above procedure uses outer @xmath59 s to estimate @xmath60 s , which therefore might in some cases be overestimated . indeed , strange , strongly rising rotation curves follow for some of the less luminous galaxies when the above procedure is applied ( although they are rising already before inclusion of the halo ) . nevertheless , since this error in all cases overestimates the reduction of @xmath0 due to the inclusion of a halo , it is not important for the present purpose . because the bulge is usually more significant in early - type galaxies , we might expect that maximum relative gravitational torques would be diluted somewhat compared to later - type galaxies . this is because the bulge can be a significant contributor to the mean axisymmetric radial force in the bar regions of early - type spirals . block et al . 
( 2001 ) searched for this effect in their combined sample of 75 galaxies but did not detect a measurable type dependence . they argued that the bulge dilution at early types could be partly offset by the shorter bars found at later types ( e.g. , elmegreen & elmegreen 1985 ) . laurikainen , salo , & rautiainen ( 2002 ) also searched for a type dependence in @xmath0 in a 2mass sample of 43 barred galaxies , half of which have agn . in their sample , 19 galaxies have types sa - sb and 21 galaxies have types sbc and later . these authors derived @xmath75 = [email protected] for the early types and [email protected] for the later types , suggesting a possible difference . our refined treatment of bulges and our larger sample compared to these previous studies allow us to re - evaluate this possible effect more reliably . as we have noted , we allowed for the more spherical shapes of bulges using two - dimensional photometric decompositions that took into account , where necessary , the contributions of bars . we also treated bulges as spherical in their potentials , such that the forces in the plane are properly estimated . in buta & block ( 2001 ) and block et al . ( 2001 ) , bulges were assumed to be as flat as disks , which overestimated their radial forces in the plane . figure [ bytype ] shows the correlation of @xmath75 with rc3 revised hubble type in our present sample . the filled circles show the averages with no dark halo correction , while the crosses show the averages with a halo correction . table 5 also summarizes the numerical values for no halo correction . this plot does appear to detect a type - dependence in our measured maximum relative gravitational torques . for early - type spirals ( @xmath16=0 - 3 , or s0/a - sb ) , @xmath75 = 0.177 @xmath34 0.014 , while for late - type spirals ( @xmath16=4 - 9 , or sbc - sm ) , @xmath75 = 0.258 @xmath34 0.015 . a halo correction reduces these means only slightly , to 0.169 for s0/a - sb and 0.247 for sbc - sm . the difference between early and late - type spirals appears to be significant . as shown in figure [ bytype ] and table 4 , the effect persists even when the sample is divided by de vaucouleurs family , and has the same trend in the sense that early - types have lower average @xmath0 . this suggests that early - type spirals do indeed have diluted maximum relative gravitational torques , an effect which must contribute to the observed scatter of @xmath0 among the three de vaucouleurs families . in interpreting this result , the first question one might ask is how reliable the bulge decompositions are . since we used a sophisticated two - dimensional decomposition allowing for a bulge , a disk , and a bar in the fit , we believe the decompositions are as good as we will be able to make them . the referee argues that bulge subtraction is delicate and not unique , and that if the bulge participates in the bar instability ( as in the box / peanut shape ) , then its impact may not be reliably treated . this is a valid concern . however , laurikainen & salo ( 2002 ) have tested a radius - dependent scaleheight that simulates a peanut - shaped distribution in the sense that the vertical scaleheight increases towards the outer parts of the bar by an amount similar to that observed in real galaxies . this was found to affect @xmath0 estimates by only about 5% . another important question is how our assumptions concerning the vertical scaleheight contribute to the observed type dependence .
our estimates of @xmath0 have utilized the findings of de grijs ( 1998 ) to infer @xmath1 from @xmath2 , assigning larger values of @xmath1 to early - types compared to late - types . if we assume instead that @xmath1 = @xmath2/12 for all types , we get the results shown in figure [ bytype12 ] . our assumption of a type dependence to @xmath3 does indeed enhance the measured type - dependence in @xmath0 . however , the assumption of a constant value of @xmath3 is inconsistent with studies of edge - on galaxies and favors our approach . figure [ bytype ] shows that @xmath75 is type - dependent , but it does not prove unequivocably that this means bars are relatively weaker in early - type spirals than in late type spirals . this is because @xmath0 is also affected by spiral arm torques . to try and approximately separate the two phenomena , we use the bar / spiral discriminations from figure [ errors]e and discussed in section 6 . if we compute @xmath75 as a function of type for these subsamples separately , we get the results in figure [ splitype ] . surprisingly , it appears that both bars and spirals are relatively weaker in early - types as compared to late - types . for bars especially , the type dependence is remarkably well - defined . a type - dependence in bar strength is also found in the whyte et al . ( 2002 ) analysis , although it is smaller than found for @xmath0 . figure [ whyte2 ] shows @xmath76 vs rc3 type index @xmath16 . just as for @xmath0 , early - type spirals have lower average @xmath48 than late - types . for 49 s0/a - sb galaxies in the whyte et al . sample , @xmath76 = 0.190 @xmath34 0.013 , while for 76 sbc and later galaxies , @xmath76 = 0.213 @xmath34 0.011 . the effect is marginal but is still in the same sense as found for @xmath0 . note that on the basis of theoretical models , one might expect early - type galaxy bars to have stronger maximum torques simply because the bars are longer than those in later types ( elmegreen & elmegreen 1985 ) . apparently , bulge dilution is a more dominant effect , so that late - type galaxy bars are stronger in a relative sense . note that this result refers mainly to sbc - sc galaxies as late - types , as our sample has few galaxies of types scd and later . this is a result of our sample biases . a distance - limited sample would provide more reliable results for the very late - type spirals . eskridge et al . ( 2002 ) used the @xmath49-band images in the osubgs to estimate near - ir classifications of galaxies within the revised hubble framework of de vaucouleurs ( 1959 ) and sandage and bedke ( 1994 ) . these classifications include the family ( sab or sb and plane s for nonbarred galaxies ) , and the stage from s0 to sm . we converted the @xmath49-band stages , estimated as if the images were blue light images , to the rc3 numerical @xmath16 index scale . eskridge et al . ( 2002 ) note that the apparently increased bulge - to - disk ratio and the greater degree of smoothness of structure biases near - ir classifications towards earlier types on average . for galaxies where these effects changed the type from a spiral classification to s0 or sb0 , we have used the index @xmath16=@xmath202 . table 6 summarizes the mean values by stage and family from the near - ir classifications . as noted by eskridge et al . ( 2000 ) , near - ir classifications from the osu sample show twice as many strongly - barred ( sb ) types as in the optical . however , table 6 shows that the eskridge et al . 
sab and sb classifications have slightly lower @xmath75 than the corresponding rc3 families . rc3 sb galaxies in our sample have @xmath75 = 0.331 @xmath34 0.019 ( m.e . ) , while eskridge et al . sb galaxies in our sample have @xmath75 = 0.290 @xmath34 0.015 . the likely reason for this difference is that near - ir images not only make weak bars more evident , but also make stronger bars more obvious . thus , near - ir imaging does not necessarily change the rankings of bars much . there is no new category for a @xmath25-band sb spiral to be placed into even though it looks stronger in the near - ir . however , a @xmath25-band sab spiral can be placed into the sb category if it looks stronger in the near - ir . since the real rankings are not changed much , the mean @xmath0 for the near - ir families is decreased because of inclusion of weaker bars . figure [ esbytype ] shows that when @xmath75 is plotted against the numerically coded near - ir stages , a strong trend with type is seen that extends into the near - ir s0 class . the trend is smoother than that found using rc3 types , but has about the same amplitude from s0/a to sm . the improved correlation is probably not unexpected since the appearance of the spiral arms helped to determine the near - ir type , and the strength of the arms can impact @xmath0 . for example , the spiral arms in some of the osu galaxies is virtually invisible in the near - ir , leading to a classification of s0 . however , the implication once again is that maximum relative torques are weaker in early - type disk galaxies than in late - type disk galaxies . we have derived an accurate distribution of maximum relative gravitational torques in a sample of 180 osubgs and 2mass galaxies . the sample is representative of bright galaxies , but is biased against late - type , low - luminosity barred spirals . it is not biased against nonbarred galaxies . the distribution is more accurate than previous studies because of the refinement of the gravitational torque method . we have used two - dimensional bulge / disk / bar decomposition to eliminate the impact of bulge deprojection stretch on the calculated torques , and to derive reliable radial scalelengths that can be scaled to vertical scaleheights using the type dependence of @xmath3 derived by de grijs ( 1998 ) . we have also used orientation parameters based on isophotal ellipse fits to the blue - light images in the osubgs , which will be an improvement over previously published values for many of the galaxies . with these refinements , we find a higher relative frequency of low maximum relative torque galaxies compared to block et al . ( 2002 ) . the implications for the amount of accreted matter advocated by block et al . ( 2002 ) remain to be evaluated , but we expect that the revised distribution will favor less accretion once the models account for the same refinements the observations have accounted for . this will be addressed in a future paper . we have discussed in detail the uncertainties and biases in our distribution of gravitational torques . because the sample emphasizes high - luminosity systems , corrections for dark matter appear to be small . in the future , further improvements could be made by obtaining two - dimensional velocity fields of the galaxies in question . this would facilitate the derivation of kinematic orientation parameters , and improved deprojections . we find a significant dependence of the mean maximum gravitational torque on revised hubble type . 
the effect persists even when the sample is divided into bar - dominated and spiral - dominated subsamples , and when near - infrared types from eskridge et al . ( 2002 ) are used in place of rc3 types . both bars and spirals tend to have weaker average relative torques in early - type spirals compared to late - type spirals . the likely cause of this is torque dilution due to the stronger bulges in early - type spirals . dark matter has only a marginal impact on this effect . we thank the referee , f. combes , for valuable comments on our paper and for sending a file with her estimates of @xmath0 for the osu sample . we also thank l. whyte for sending her table of @xmath48 values . rb acknowledges the support of nsf grant ast-0205143 to the university of alabama . el and hs acknowledge the support of the academy of finland , and el also from the magnus ehrnrooth foundation . funding for the osu bright galaxy survey was provided by grants from the national science foundation ( grants ast-9217716 and ast-9617006 ) , with additional funding from the ohio state university . this publication also utilized images from the two micron all - sky survey , which is a joint project of the university of massachusetts and the infrared processing and analysis center of the california institute of technology , funded by the national aeronautics and space administration and the national science foundation . this research has also made use of the nasa / ipac extragalactic database ( ned ) which is operated by the jet propulsion laboratory , california institute of technology , under contract with the national aeronautics and space administration .
table 1 ( sample properties ; the first three columns refer to the sa , sab , and sb subsamples , and the final row gives the mean @xmath0 by family ) :
@xmath28 & 58 & 57 & 62 & 291 & 364 & 609
@xmath77 & 0.15 & 0.10 & 0.17 & 0.19 & 0.17 & 0.21
@xmath78 & 0.17 & 0.15 & 0.15 & 0.29 & 0.21 & 0.30
@xmath79 & 3.67 & 3.83 & 3.61 & 4.45 & 5.18 & 6.46
@xmath80 & 1.64 & 1.66 & 1.62 & 1.58 & 1.51 & 1.46
@xmath81 & 0.62 ( 51 ) & 0.61 ( 49 ) & 0.60 ( 52 ) & 0.58 ( 209 ) & 0.56 ( 219 ) & 0.53 ( 286 )
@xmath82 & 0.06 ( 41 ) & 0.05 ( 39 ) & 0.04 ( 46 ) & 0.00 ( 169 ) & @xmath200.03 ( 175 ) & @xmath200.07 ( 256 )
@xmath83 ( km s@xmath17 ) & 1467 & 1322 & 1536 & 1564 & 1622 & 1543
@xmath84 ( mpc ) & 21.0 & 19.0 & 20.8 & 22.3 & 23.6 & 21.6
@xmath85 & @xmath2020.37 & @xmath2020.21 & @xmath2020.22 & @xmath2019.8 & @xmath2019.5 & @xmath2018.7
@xmath86 ( kpc ) & 26.7 & 25.5 & 25.4 & 24.3 & 22.9 & 18.5
@xmath87 & [email protected] & [email protected] & [email protected] & ..... & ..... & .....
table 2 ( distribution of @xmath0 : bin centre , count @xmath28 , relative frequency @xmath29 ) :
0.025 & 10 & 0.056
0.075 & 32 & 0.178
0.125 & 29 & 0.161
0.175 & 27 & 0.150
0.225 & 17 & 0.094
0.275 & 16 & 0.089
0.325 & 14 & 0.078
0.375 & 12 & 0.067
0.425 & 10 & 0.056
0.475 & 2 & 0.011
0.525 & 6 & 0.033
0.575 & 0 & 0.000
0.625 & 2 & 0.011
0.675 & 3 & 0.017
table 4 ( family , n , then mean @xmath0 and standard deviation for the two inclination halves split at the median ) :
sa & 58 & [email protected] & 0.080 & [email protected] & 0.046
sab & 57 & [email protected] & 0.141 & [email protected] & 0.094
sb & 62 & [email protected] & 0.148 & [email protected] & 0.148
full & 180 & [email protected] & 0.151 & [email protected] & 0.143
table 5 ( mean @xmath0 by rc3 type , no halo correction ; columns : type , @xmath16 , mean , standard deviation , mean error , n ) :
s0/a & 0 & 0.195 & 0.131 & 0.038 & 12
sa & 1 & 0.125 & 0.108 & 0.028 & 15
sab & 2 & 0.155 & 0.124 & 0.030 & 17
sb & 3 & 0.205 & 0.129 & 0.023 & 32
sbc & 4 & 0.242 & 0.140 & 0.022 & 39
sc & 5 & 0.246 & 0.155 & 0.025 & 38
scd & 6 & 0.321 & 0.180 & 0.050 & 13
sd & 7 & 0.224 & 0.137 & 0.056 & 6
sdm & 8 & 0.331 & 0.258 & 0.149 & 3
sm & 9 & 0.328 & 0.066 & 0.038 & 3
s0/a - sb & 0 - 3 & 0.177 & 0.126 & 0.014 & 76
sbc - sm & 4 - 9 & 0.258 & 0.153 & 0.015 & 102
sa0/a - sab & 0 - 3 & 0.068 & 0.038 & 0.008 & 24
sabc - sam & 4 - 9 & 0.139 & 0.064 & 0.011 & 34
sab0/a - sabb & 0 - 3 & 0.145 & 0.073 & 0.017 & 19
sabbc - sabm & 4 - 9 & 0.260 & 0.124 & 0.020 & 38
sb0/a - sbb & 0 - 3 & 0.274 & 0.118 & 0.021 & 33
sbbc - sbm & 4 - 9 & 0.395 & 0.152 & 0.028 & 29
table 6 ( mean @xmath0 by near - infrared stage and family ; columns : type , mean , standard deviation , mean error , n ) :
s0 & 0.103 & 0.070 & 0.022 & 10
s0/a & 0.147 & 0.095 & 0.024 & 15
sa & 0.191 & 0.124 & 0.025 & 24
sab & 0.238 & 0.121 & 0.029 & 18
sb & 0.220 & 0.143 & 0.027 & 28
sbc & 0.269 & 0.168 & 0.037 & 20
sc & 0.284 & 0.152 & 0.044 & 12
scd & 0.320 & 0.200 & 0.067 & 9
sd & 0.361 & 0.177 & 0.056 & 10
sdm & 0.318 & 0.111 & 0.045 & 6
sm & 0.297 & 0.063 & 0.032 & 4
s0 - sb & 0.159 & 0.110 & 0.016 & 49
sbc - sm & 0.265 & 0.158 & 0.016 & 97
s & 0.116 & 0.082 & 0.014 & 32
sab & 0.174 & 0.112 & 0.022 & 26
sb & 0.290 & 0.147 & 0.015 & 98
the maximum value of the ratio of the tangential force to the mean background radial force is a useful quantitative measure of the strength of nonaxisymmetric perturbations in disk galaxies . here we consider the distribution of this ratio , called @xmath0 , for a statistically well - defined sample of 180 spiral galaxies from the _ ohio state university bright galaxy survey _ and the _ two micron all - sky survey_. @xmath0 can be interpreted as the maximum gravitational torque per unit mass per unit square of the circular speed , and is derived from gravitational potentials inferred from near - infrared images under the assumptions of a constant mass - to - light ratio and an exponential vertical density law . in order to derive the most reliable maximum relative torques , orientation parameters based on blue - light isophotes are used to deproject the galaxies , and the more spherical shapes of bulges are taken into account using two - dimensional decompositions which allow for analytical fits to bulges , disks , and bars . also , vertical scaleheights @xmath1 are derived by scaling the radial scalelengths @xmath2 from the two - dimensional decompositions allowing for the type dependence of @xmath3 indicated by optical and near - infrared studies of edge - on spiral galaxies . the impact of dark matter is assessed using a universal rotation curve " parametrization , and is found to be relatively insignificant for our sample . in agreement with a previous study by block et al . ( 2002 ) , the distribution of maximum relative gravitational torques is asymmetric towards large values and shows a deficiency of low @xmath0 galaxies . however , due to the above refinements , our distribution shows more low @xmath0 galaxies than block et al . we also find a significant type - dependence in maximum relative gravitational torques , in the sense that @xmath0 is lower on average in early - type spirals compared to late - type spirals . the effect persists even when the sample is separated into bar - dominated and spiral - dominated subsamples , and also when near - infrared types are used as opposed to optical types .
House of Commons Speaker John Bercow would be "strongly opposed" to US President Donald Trump addressing the Houses of Parliament during his state visit to the UK, he has said. Mr Bercow told MPs that "opposition to racism and sexism" were "hugely important considerations". Labour and the SNP praised him but critics said he should stay neutral. President Trump was invited to make a state visit after meeting Theresa May in Washington last month. A petition to withdraw the invitation - and another one backing the visit - will be debated by MPs later this month. Responding to a point of order in the Commons, Mr Bercow set out his opposition to a Parliamentary address as part of the state visit. He told MPs that addressing the Lords and the Commons was "an earned honour", not an "automatic right". He said he was one of three "key-holders" for Westminster Hall, and referred to the US president's controversial travel ban. "Before the imposition of the migrant ban, I would myself have been strongly opposed to an address by President Trump in Westminster Hall," he said. "After the imposition of the migrant ban I am even more strongly opposed to an address by President Trump in Westminster Hall." The Speaker said he would also be involved in any invitation to address Parliament's Royal Gallery. 'An unprecedented rebuke' Eleanor Garnier, BBC political correspondent It was an unprecedented and extraordinary rebuke. A diplomatic snub that in effect means President Trump will not be invited to address MPs in Parliament. John Bercow's comments were applauded by MPs on the opposition benches - but critics have said he's abused his position and spoken out of turn. Mr Bercow's decision risks undermining the prime minister's very public effort to create a new special relationship with the Trump administration. He added: "I would not wish to issue an invitation to President Trump to speak in the Royal Gallery. "We value our relationship with the United States. If a state visit takes place, that is way beyond and above the pay grade of the Speaker. "However, as far as this place is concerned, I feel very strongly that our opposition to racism and sexism and our support for equality before the law and an independent judiciary are hugely important considerations in the House of Commons." Mr Bercow said the other "key holders" were the Speaker of the House of Lords, Lord Fowler, and the Lord Great Chamberlain, a hereditary peer in charge of certain parts of the Palace of Westminster. A House of Lords spokeswoman said: "The Lord Speaker was not consulted by Mr Bercow on his statement. "The Lord Speaker will make his own statement tomorrow to the Lords." As Speaker, Mr Bercow is the highest authority of the House of Commons and despite having been elected as a Conservative MP, must remain politically impartial. He is in charge of maintaining order in the Commons and calling MPs to speak.
Other leaders' speeches: International leaders are sometimes invited to address both Houses of Parliament when they visit the UK. Recent examples include Colombian President Juan Manuel Santos last year, Chinese President Xi Jinping in 2015 and German Chancellor Angela Merkel in 2014. Mr Trump's predecessor, Barack Obama, made a speech in Westminster Hall in 2011. The intervention was welcomed by Labour leader Jeremy Corbyn, who has called for the state visit to be postponed, while Lib Dem leader Tim Farron said Mr Trump was "not welcome". But former UKIP leader Nigel Farage said Mr Bercow had "abused his position" and that to have expressed his opinions in the way he did "devalues this great office". Prime Minister Theresa May, who has criticised the president's travel ban affecting people from seven mainly Muslim countries, has defended the decision to invite him to make a state visit. An address to Parliament has not been formally proposed, and no date has been set for the visit. Downing Street said: "We look forward to welcoming the president to the UK later this year. "The dates and arrangements for the state visit will be worked out in due course." ||||| LONDON (AP) — The Speaker of Britain's House of Commons said Monday that he strongly opposes letting U.S. President Donald Trump address Parliament during a state visit to the U.K. Speaker John Bercow's unusual public intervention makes it unlikely Trump will be given the honor during his trip later this year. Bercow told lawmakers he would have been against extending the invitation even before Trump's temporary ban on citizens of seven majority-Muslim nations entering the U.S. He said that after the migrant ban was issued, "I am even more strongly opposed." Courts in the U.S. have suspended the ban, prompting furious tweets from the president. Bercow's comments were unusual because speakers in the British Parliament are expected to remain above the partisan fray. He is one of the parliamentary officials who would have to agree on inviting a foreign dignitary to address lawmakers and peers. World leaders given the honor of making a speech to both houses of Parliament include Nelson Mandela and Trump's predecessor, President Barack Obama.
Bercow was cheered by opposition lawmakers when he said that, although Britain values its relationship with the U.S., "our opposition to racism and to sexism, and our support for equality before the law and an independent judiciary, are hugely important considerations." Trump is due to visit Britain later this year as the guest of Queen Elizabeth II. The invitation was announced by Prime Minister Theresa May when she visited Trump in Washington last month. But some Britons are critical of the government's apparent rush to get close to the divisive president. An online petition opposing Trump's state visit has more than 1.8 million signatures and will be debated by lawmakers on Feb. 20 — though they will not hold a binding vote on it. May's office said the dates and itinerary for the state visit have not yet been finalized.
– The Speaker of Britain's House of Commons said Monday that he strongly opposes letting US President Donald Trump address Parliament during a state visit to the UK, the AP reports. Speaker John Bercow's unusual public intervention makes it unlikely Trump will be given the honor during his trip later this year. Bercow told lawmakers he would have been against extending the invitation even before Trump's temporary ban on citizens of seven majority-Muslim nations entering the US. He said that after the migrant ban was issued, "I am even more strongly opposed." Bercow's comments were unusual—the BBC calls his rebuke "unprecedented and extraordinary"—because speakers in the British Parliament are expected to remain above the partisan fray. Bercow is one of the parliamentary officials who would have to agree on inviting a foreign dignitary to address lawmakers and peers. World leaders given the honor of making a speech to both houses of Parliament include Nelson Mandela and Trump's predecessor, President Barack Obama. Bercow was cheered by opposition lawmakers when he said that, although Britain values its relationship with the US, "our opposition to racism and to sexism, and our support for equality before the law and an independent judiciary, are hugely important considerations." Trump is due to visit Britain later this year as the guest of Queen Elizabeth II. The invitation was announced by Prime Minister Theresa May when she visited Trump in Washington last month. May's office said the dates and itinerary for the state visit have not yet been finalized.
more than two decades ago , when quantum optics was young , the quantum dynamics of collective spin systems interacting with a single bosonic degree of freedom was a major research problem . the model arose as an attempt to describe the interaction between a collection of two level atoms and a single mode of the radiation field . walls and co workers@xcite were among the first to realise that such models provided ideal examples of the role of quantum fluctuations in the nonlinear interaction between matter and light . quantum fluctuations were shown to drastically change the predictions of semiclassical theory in such systems . this phenomenon has appeared more recently in the discovery of quantum phase transitions in quantum spin glasses@xcite and other many body quantum systems . while the collective spin models did not directly apply to achievable experiments at the time , they did provide insight that subsequently proved important for many other quantum optical experiments including anti - bunching , squeezing@xcite , and cavity qed@xcite . in this paper we show that the models of a collective spin interacting with one or more bosonic modes can now be experimentally realised in modern ion trap systems of the kind proposed for quantum computation@xcite . an enormous effort has gone into making such systems work at the quantum level , with little interference form classical sources of noise , and a number of such experiments exist today . it would thus appear worthwhile to reconsider the collective spin models , and the associated quantum many - body effects exhibited by such systems , with a view to direct experimental realisation . in particular we consider the tavis - cummings ( tc ) model@xcite , which can be realised in a linear ion trap of @xmath0 ions with the bosonic degree of freedom appearing as the quantised collective centre - of - mass motion . if each ion is coupled to the vibrational motion using an identical external ( classical ) laser detuned to the first red - sideband transition , the symmetry is such that the electronic degree of freedom for the ions can be described as a collective spin ( @xmath0 ) and the reversible dynamics is well described by the tc model . the tc model is known to exhibit important nonlinear quantum effects including a quantum phase transition@xcite in which the ( zero temperature ) ground state undergoes a morphological change as a parameter is varied and averages of intensive quantities undergo a bifurcation . the interaction hamiltonian for n ions interacting with the centre of mass vibrational mode can be controlled by using different kinds of raman laser pulses . a considerable variety of interactions has already been achieved or proposed @xcite . in this paper we consider the first red - sideband transition . the ion is assumed to be in a three dimensional anisotropic harmonic potential . two dimensions are very tightly bound and are neglected . in the remaining dimension , an external laser couples the electronic state to the vibrational motion . if the vibrational frequency is large enough and the lamb - dicke limit@xcite applies the motional sidebands of the absorption of the electronic transition can be resolved and a laser detuned below the electronic resonance by one unit of the trap frequency can excite the electronic transition by absorbing one vibrational phonon , the additional energy required being made up by the laser . 
we will assume that the laser ( or lasers if a raman process is used ) is sufficeintly strong that it can be treated classically . under these assumptions the hamiltonian , in the interaction picture , is @xmath1 where the coupling constant is @xmath2 where @xmath3 is the lambe - dicke parameter with @xmath4 the recoil kinetic energy of the atom , @xmath5 is the trap vibrational frequency , and @xmath6 is the effective mass for the centre - of - mass mode . the lamb - dicke limit assumes @xmath7 , which is easily achieved in practice . the frequency , @xmath8 is the effective rabi frequency for the electronic transition involved . the raising and lowering operators for each ion are defined by @xmath9 and @xmath10 . this sideband transition can be used to efficiently cool the ions to the collective centre - of - mass ground state , thus preparing the system in the vibrational ground state@xcite . if the external laser field on each ion is identical ( in amplitude and phase ) the interaction hamiltonian is @xmath11 where we have introduced the bosonic annihilation operator @xmath12 for the centre - of - mass vibrational mode and where we have used the definition of the collective spin operators , @xmath13 where @xmath14 . identical laser fields could easily be obtained by splitting a single , stabilised laser into multiple beams . the interaction hamiltonian in eq ( [ tc ] ) specifies the tavis - cummings model@xcite . this model first appeared in quantum optics where the bosonic mode is the quantised field in a cavity . however this realisation is difficult to achieve experimentally . in contrast the vibrational mode realisation should be readily achieved . the dynamics resulting from this hamiltonian is quite rich . collective spin models of this kind were considered many decades ago in quantum optics@xcite . in much of that work however the collective spin underwent an irreversible decay . in the case of an ion trap model however we can neglect such decays due to the long lifetimes of the excited states . on the other hand heating of the vibrational centre - of - mass mode can induce irreversible dynamics in the system in a manner that has not been previously considered , and that is reminiscent of thermal effects in condensed matter physics . we are interested in the driven tavis - cummings model in which the vibrational mode is subject to a linear forcing term which can easily be achieved by a suitable combination of raman laser pulses , or by appropriate ac voltages applied to the trap electrodes@xcite . in this case the hamiltonian , in the interaction picture , is given by @xmath15 this may be written in terms of the hermitian canonical oscillator variables @xmath16 , @xmath17 , and the canonical angular momentum variables @xmath18 , @xmath19 , @xmath20/2 $ ] . it takes the form @xmath21 with @xmath22 and we have scaled the hamiltonian by @xmath23 . this indicates that time is measured in units of @xmath24 . alsing@xcite has shown that the ground state of this system , for weak driving , is a product state in which the bosonic mode is squeezed and the electronic states are rotated in the angular momentum space . we provide a direct proof of this statement below . however it is first useful to consider the dynamics of the equivalent semiclassical model as many of the results in the quantum case can be interpreted in terms of the features of the semiclassical model . 
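for readers who want to reproduce the qualitative behaviour numerically , the self - contained python / numpy sketch below builds a driven tavis - cummings hamiltonian for a few ions on a truncated fock space and diagonalises it . the scaled constants used in the text are hidden behind the placeholders above , so the operator normalisation , the coupling , the drive values and the truncation chosen here are illustrative assumptions rather than the parameters of the paper .

import numpy as np

def spin_ops(n_ions):
    # collective spin operators for j = n_ions / 2 in the |j, m> basis (m = j, ..., -j)
    j = n_ions / 2.0
    m = np.arange(j, -j - 1.0, -1.0)
    jz = np.diag(m)
    jp = np.zeros((len(m), len(m)))
    for k in range(1, len(m)):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))   # j+ raises m by one
    return jp, jp.T, jz

def driven_tc_hamiltonian(n_ions, n_fock, kappa, drive):
    # h = (kappa / 2) (a^dag j- + a j+) + drive (a + a^dag), with hbar = 1 (an assumed scaling)
    jp, jm, jz = spin_ops(n_ions)
    a = np.diag(np.sqrt(np.arange(1, n_fock)), k=1)               # truncated annihilation operator
    a_full = np.kron(a, np.eye(jp.shape[0]))                      # all matrices here are real
    jp_full, jm_full = np.kron(np.eye(n_fock), jp), np.kron(np.eye(n_fock), jm)
    h = 0.5 * kappa * (a_full.T @ jm_full + a_full @ jp_full) + drive * (a_full + a_full.T)
    return h, np.kron(np.eye(n_fock), jz), a_full

n_ions, n_fock, kappa = 4, 40, 1.0
for drive in (0.2, 0.6, 1.2):                                     # three illustrative drive strengths
    h, jz_full, a_full = driven_tc_hamiltonian(n_ions, n_fock, kappa, drive)
    vals, vecs = np.linalg.eigh(h)
    g = vecs[:, 0]                                                # lowest-energy eigenstate
    print(drive, g @ jz_full @ g / (n_ions / 2.0), abs(g @ a_full @ g))
# note: for strong driving the required fock truncation grows quickly, so n_fock must be increased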
the tavis - cummings model represents an interaction between a simple harmonic oscillator and a linear top for which there is a classical model which we now define . we choose the classical model so that the equations of motion are of the same form as the heisenberg equations of motion for the quantum model . the classical hamiltonian is defined as @xmath25 where @xmath26 are respectively the canonical oscillator position and momentum variables with the canonical poisson bracket @xmath27 , while @xmath28 are the three components of angular momentum for a classical top with the canonical poisson brackets @xmath29 . the equations of motion for a canonical coordinate @xmath30 is given as usual by poisson bracket with the hamiltonian @xmath31 . the equations of motion are , @xmath32 note that these equations have a conservation law @xmath33 . we now justify this choice of classical hamiltonian by noting that the heisenberg equations of motion for the hamiltonian eq([driventc ] ) have the same form as the semiclassical equations of motion with all variables replaced by the corresponding operators . we thus see that the semiclassical equations result form taking moments of the heisenberg equations and factorising all product moments . the factorisation assumptions ignores correlations which scale as @xmath34 for the scaled operators @xmath35 . the conservation law @xmath36 is a reflection of the operator relation @xmath37 which in the semiclassical limit indicates that @xmath38 . the classical equations have one nontrivial fixed point at @xmath39 and @xmath40 , @xmath41 . however as the conservation law requires that @xmath42 we see that we must have @xmath43 which corresponds to an energy of @xmath44 . we will refer to this as the _ below threshold _ case . as @xmath45 is increased from zero , the fixed point for the angular momentum system rotates about the @xmath46 direction eventually reaching the equatorial plane at @xmath47 at the threshold condition . the oscillator system always has zero amplitude below threshold . if we linearise around this fixed point we discover that it is an unstable hyperbolic point with time constant proportional to @xmath48 . note that this time constant goes to infinity as the fixed point is approached as is typical for a hyperbolic fixed point . we now consider the _ above threshold _ case @xmath49 clearly the value of @xmath50 can not increase above @xmath51 . indeed there is no fixed point above threshold . however there is a special solution curve that continuously joins to the below threshold case for phase curves with @xmath44 . to see this we consider making a canonical transformation by a rotation in both the @xmath52 plane and in the @xmath53 plane ( see figure [ fig_one ] ) . the canonical transformations are @xmath54 the hamiltonian then takes the form @xmath55 the phase curves with @xmath44 now correspond to either @xmath56 or @xmath57 these phase curves smoothly join the fixed point at threshold if @xmath47 which implies @xmath58 these solutions are illustrated in figure [ fig_one ] . note that as @xmath59 we have that @xmath60 eventually points in the direction of @xmath61 while phase curve in the oscillator phase space points along the @xmath62 axis , indicating that for large driving the system is essentially a particle in a linear potential which accelerates at constant rate . these results were first obtained by alsing and carmichael@xcite . first note that the ground state when there is no driving is @xmath63 with a zero eigenvalue . 
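returning to the classical model just defined : since the explicit scaled hamiltonian and equations of motion appear above only as placeholders , the sketch below adopts one conventional choice , taking @xmath25 of the form kappa ( x jx - p jy ) + f x with { x , p } = 1 and the usual angular - momentum brackets . the signs and factors are therefore assumptions , but the structure ( a linearly driven oscillator coupled to a classical top , with jx^2 + jy^2 + jz^2 conserved ) matches the description above , and the integration verifies the conservation law numerically .

import numpy as np
from scipy.integrate import solve_ivp

kappa, drive, j = 1.0, 0.3, 1.0    # illustrative values; the paper's scaled constants are not quoted

def rhs(t, y):
    # poisson-bracket equations of motion for h = kappa * (x*jx - p*jy) + drive * x
    x, p, jx, jy, jz = y
    return [-kappa * jy,
            -kappa * jx - drive,
            -kappa * p * jz,
            -kappa * x * jz,
            kappa * (x * jy + p * jx)]

y0 = [0.0, 0.0, 0.0, 0.0, -j]       # oscillator at rest, spin at one pole
sol = solve_ivp(rhs, (0.0, 40.0), y0, max_step=0.01)
jtot = np.sqrt(sol.y[2]**2 + sol.y[3]**2 + sol.y[4]**2)
print(jtot.min(), jtot.max())        # the spin length is conserved along the trajectory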
this ground state corresponds to the fixed point of the semiclassical model with zero oscillator amplitude and angular momentum pointing in the @xmath64 direction . we postulate that as the driving is increased form zero the ground state of the hamiltonian eq([driventc ] ) is given by @xmath65 where @xmath66 corresponds to all ions in the ground state and the vibrational mode in the ground state . the operator @xmath67 is a squeezing operator defined by @xmath68 with @xmath69 . the rotation operator @xmath70 is defined by @xmath71 and corresponds to a rotation of @xmath72 around the @xmath73 axis . consider now @xmath74 if we now transform the hamiltonian and require that @xmath75 we find the following conditions , @xmath76 which requires that @xmath77 and the ground state energy is taken to be @xmath78 . the ground state is thus a product of a squeezed state for the vibrational mode and a rotated angular momentum state , rotated about the @xmath73 axis . the above results are consistent with the semiclassical approximation . the mean amplitude of a squeezed vacuum state is zero , corresponding to the semiclassical fixed point at @xmath79 while the rotation around the @xmath73 axis corresponds to the semiclassical fixed point at @xmath80 . if we continue to increase @xmath45 above the threshold value the system adiabatically follows a zero energy state , although this is no longer a ground state . in fact the canonical transformation used in the semiclassical analysis can be applied to the quantum operator valued hamiltonian . the result is the same as the semiclassical case , eq ( [ sc_ham ] ) with all variables replaced with the corresponding operators . the zero energy state then corresponds to the zero energy eigenstate of @xmath81 with @xmath82 . this is of course just a rotated , infinitely squeezed state . the electronic state is likewise a angular momentum eigenstate rotated from @xmath83 in the equatorial plane ( orthogonal to @xmath84 ) . thus above threshold the zero energy eigenstate deforms continuously from the state at threshold . let us summarise these results . for no driving the ground state corresponds to the oscillator in the ground state and all ions in the ground state . as the driving is increased , but kept below threshold , this state deforms to a squeezed oscillator state while the collective spin system begins to rotate about the @xmath73 axis . note that the mean oscillator amplitude @xmath85 remains zero as does the mean of the @xmath86-component of the collective spin . as the driving increases through the threshold value , this state changes its character so that a non zero value of @xmath73 is acquired and the oscillator is infinitely squeezed in a direction at an angle @xmath82 to the below threshold squeezing . this morphological change of the state as the driving passes the semiclassical critical point is a quantum phase transition . the quantum phase transition can be seen in the mean value for @xmath73 and @xmath84 as shown in figure [ fig_two ] . below threshold the scaled mean values are given by @xmath87 and above threshold we have @xmath88 where @xmath89 . what are the experimental manifestations of this transition ? needless to say no one is ever going to observe an infinitely squeezed state in an experiment . so what does happens at @xmath90 when the electronic state is the @xmath91 eigenstate @xmath92 and the vibrational mode appears to be infinitely squeezed ? is such a state physically possible ? 
suppose for example we begin in the ground state of the hamiltonian with no driving ( @xmath93 ) which is simply @xmath63 , and adiabatically increase the driving strength . it would appear that the system would then adiabatically evolve into the squeezed vibrational state described above . if we were ever able to reach the case @xmath90 we would have reached an infinite energy state for the vibrational mode at a finite driving strength . clearly this is not possible and to understand why it is useful to reconsider the semiclassical dynamics for this model . the adiabatic approximation requires that we vary the driving strength on a time scale slower than all other time scales in the system . the key time scale for the ground state variation is just the time scale associated with the hyperbolic unstable fixed point , @xmath94 , which goes to infinity as we approach @xmath95 . thus the adiabatic increase of the driving must proceed infinitely slowly , that is it must be switched to the finite value @xmath96 in an infinite amount of time . this pumps an infinite amount of energy into the system and results in infinite squeezing in the centre of mass vibrational mode . obviously in practice this can not be achieved so the totally squeezed ground state is not possible . however it will still be possible to achieve some squeezing of the vibrational mode at smaller values of the driving . this would make an interesting observation for current ion trap experiments even with only a few ions . the squeezing of the vibrational mode can be observed using the dynamical method of reference @xcite . in current ion trap experiments , laser cooling techniques allow the centre of mass mode to be prepared in the ground state . unfortunately it does not stay there . heating due to a variety of sources , including fluctuating linear potentials , leads to an irreversible evolution away from the ground state . if such heating is present during the coupling of the electronic and vibrational motions , irreversible dynamics will be spread to the collective spin degrees of freedom as well . as an example we consider what happens if we use the tavis - cummings interaction ( excitation on the first red sideband ) in the presence of strong heating . heating of the centre - of - mass mode due to fluctuating linear potentials may be described in the interaction picture by the master equation @xmath97 + \frac{\gamma}{2}\left ( { \cal d}[a ] + { \cal d}[a^\dagger ] \right ) w , where @xmath98 is the density operator for the spin and vibrational degrees of freedom and the superoperator @xmath99 is defined by @xmath100\rho = 2a\rho a^\dagger - a^\dagger a\rho - \rho a^\dagger a . the irreversible term corresponds to two point processes in which phonons are removed from or added to the centre - of - mass mode at the rates @xmath101 and @xmath102 respectively . this does not change any first order moments , however it does lead to a diffusion in energy as @xmath103 . the effect of heating can be included in the semiclassical analysis by adding an appropriate stochastic term . in the ito calculus@xcite the effect is to add to the equations for @xmath26 terms of the form @xmath104 where @xmath105 are independent wiener processes . if the heating rate is small enough these terms can be neglected . however if they are large new steady states can occur in the semiclassical and quantum descriptions which will be described in a future publication .
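the two statements just made ( the heating term leaves the first moments unchanged but makes the mean phonon number grow ) are easy to check numerically . the sketch below integrates only the irreversible part of the displayed master equation with a crude euler step ; the fock - space truncation , the rate and the step size are arbitrary choices for illustration .

import numpy as np

n_fock, gamma = 30, 0.1
a = np.diag(np.sqrt(np.arange(1, n_fock)), k=1)    # truncated annihilation operator
num = a.conj().T @ a

def dissipator(op, rho):
    # d[op] rho = 2 op rho op^dag - op^dag op rho - rho op^dag op, as in the definition above
    return 2.0 * op @ rho @ op.conj().T - op.conj().T @ op @ rho - rho @ op.conj().T @ op

def heating_rhs(rho):
    # the (gamma / 2) (d[a] + d[a^dag]) term alone; the hamiltonian part is omitted here
    return 0.5 * gamma * (dissipator(a, rho) + dissipator(a.conj().T, rho))

rho = np.zeros((n_fock, n_fock), dtype=complex)
rho[0, 0] = 1.0                                     # start in the vibrational ground state
dt, steps = 0.001, 2000
for _ in range(steps):
    rho = rho + dt * heating_rhs(rho)
print(np.trace(num @ rho).real, gamma * dt * steps)   # mean phonon number grows at rate gamma
print(abs(np.trace(a @ rho)))                         # the first moment <a> stays (numerically) zero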
D. F. Walls, P. D. Drummond, S. S. Hassan and H. J. Carmichael, Progress of Theoretical Physics, Supplement No. 64.
H. Rieger and A. P. Young, in _Complex Behaviour of Glassy Systems_, edited by M. Rubi and C. Perez-Vicente, Proceedings of the XIV Sitges Conference, Sitges, Barcelona, Spain, 10-14 June, Lecture Notes in Physics (Springer-Verlag, Berlin, 1996); cond-mat/9607005.
D. F. Walls and G. J. Milburn, _Quantum Optics_ (Springer, Heidelberg, 1994).
Q. A. Turchette, C. J. Hood, W. Lange, H. Mabuchi, and H. J. Kimble, Phys. Rev. Lett. *75*, 4710 (1995).
D. J. Wineland, C. Monroe, W. M. Itano, D. Leibfried, B. E. King, and D. M. Meekhof, Journal of Research of the National Institute of Standards and Technology *103*, 259 (1998).
R. J. Hughes, D. F. V. James, J. J. Gomez, M. S. Gulley, M. H. Holzscheiter, P. G. Kwiat, S. K. Lamoreaux, C. G. Peterson, V. D. Sandberg, M. M. Schauer, C. M. Simmons, C. E. Thorburn, D. Tupa, P. Z. Wang, and A. G. White, Fortschritte der Physik *46*, 329 (1998).
M. Tavis and F. W. Cummings, Phys. Rev. *170*, 379 (1968).
M. S. Gulley, A. White, and D. F. V. James, "A Raman approach to quantum logic in calcium-like ions", submitted to J. Opt. B (1999).
P. D. Drummond, Phys. Rev. A *22*, 1179 (1980), and references therein.
R. H. Dicke, Phys. Rev. *93*, 99 (1954).
Private communication.
P. Alsing and H. J. Carmichael, Quantum Optics *3*, 13 (1991).
D. M. Meekhof, C. Monroe, B. E. King, W. M. Itano, and D. J. Wineland, Phys. Rev. Lett. *76*, 1796 (1996).
C. W. Gardiner, _Handbook of Stochastic Methods_ (Springer-Verlag, Berlin, 1983).
we show that the quantum phase transition of the tavis - cummings model can be realised in a linear ion trap of the kind proposed for quantum computation . the tavis - cummings model describes the interaction between a bosonic degree of freedom and a collective spin . in an ion trap , the collective spin system is a symmetrised state of the internal electronic states of n ions , while the bosonic system is the vibrational degree of freedom of the centre of mass mode for the ions .
a carcinoma ex pleomorphic adenoma ( cxpa ) is an epithelial malignancy arising in or from a benign pleomorphic salivary adenoma . minor salivary gland cxpas of the nasal cavity are exceedingly rare , with only 6 documented in the literature . we present a 7th case : an unusual pedunculated intranasal cxpa , which had a favorable outcome after a wide endoscopic excision and the longest follow - up period reported to date . the clinical features , immunohistochemical characteristics , treatment choices , and disease outcomes of the intranasal cxpas reported in previous studies are also reviewed . this case demonstrates the importance of considering the possibility of cxpa in the differential diagnosis of minor salivary gland malignancies in the nasal cavity . a carcinoma ex pleomorphic adenoma ( cxpa ) is an uncommon epithelial malignancy that develops in a preexisting pleomorphic adenoma ( pa ) . the carcinoma may arise from the epithelial or myoepithelial component ( or both ) of a pa . the most frequently affected site is the parotid gland , followed by the submandibular gland . the signs and symptoms most frequently encountered are a painless mass that exhibits little change over a long duration , and then exhibits sudden , rapid growth . the exact etiologic factors associated with the malignant transformation of a benign pa remain unclear . exposure to radiation and the development or accumulation of genetic instabilities within a long - standing tumor are considered to be potential factors in this transformation . the incidence of malignant transformation of persistent or recurrent pa increases from 1.5% after 5 years to 10% after 15 years . cxpas rarely affect the minor salivary glands , constituting approximately 2.6% of all minor salivary gland tumors and 18% of all minor salivary gland malignancies . minor salivary gland cxpas often develop in the oral cavity and the oropharynx ; the soft and hard palate are the most common sites . in addition to these locations , cxpas have been reported in the buccal mucosa , the upper lip , lacrimal gland , and nasal cavity . unusual sites of cxpas , such as the trachea and breast , have also been described . tumors arising from the seromucinous glands of the sinonasal region are histologically similar to tumors of the major salivary glands ; however , the majority of them are malignant . minor salivary gland tumors of the sinonasal tract generally have a favorable overall survival rate , despite a high rate of recurrence . nevertheless , minor salivary gland malignancies of the sinonasal tract have a poorer prognosis compared with their oral cavity counterparts because of their different biological behavior , a higher incidence of more aggressive adenoid cystic carcinoma , the complex anatomy of their location , and a relative delay in their diagnosis . similar lesions in the nasal cavity have been regarded as medical curiosities . only 6 cases , we add a unique case of cxpa that arose in the nasal cavity of a 46-year - old female patient , namely a pedunculated presentation of cxpa that had undifferentiated and squamous carcinomatous components as malignancies . this case entailed a 24-month follow - up after a wide endoscopic excision and had a favorable clinical outcome . a 46-year - old woman presented to an otorhinolaryngology department with intermittent left - sided epistaxis for 1 month . no enlarged cervical lymph nodes were noted , and the remaining physical examination was unremarkable . 
nasal endoscopy revealed a well - defined , solitary , irregular - surfaced , friable mass that exhibited contact bleeding and was located in the left nasal cavity ( fig . 1 ) . computed tomography revealed an expansile soft tissue mass 1.8 cm in diameter at the anterior part of the left nasal cavity without destruction of the surrounding bone structure ( fig . ) . the patient underwent an endoscopic excision . the surgical specimen ( 2.2 × 1.5 × 1.3 cm ) was a pedunculated , well - circumscribed , encapsulated , myxoid , rubbery mass that exhibited a fleshy cut surface upon sectioning ( fig . ) . microscopic examination produced biphasic epithelial images that showed an admixture of surface neoplastic squamous epithelial cells and underlying mixed components of spindle- to polygonal - shaped pleomorphic myoepithelial cells and myxoid stroma ( fig . ) . the surface squamous epithelium showed hyperchromatic and pleomorphic cells with immunoreactivity for p63 , which were identified as squamous cell carcinoma . the underlying mixed components showed that some foci of the epithelial cells abruptly transitioned to neoplastic cells that had an increased nuclear - cytoplasm ratio , a markedly elevated mitotic rate ( 45/10 high - power fields ) , and highly pleomorphic hyperchromatic nuclei arranged in either solid or loose hemorrhagic patterns ( fig . ) . the surface neoplastic squamous epithelium was immunoreactive for p63 and bcl-2 , but the underlying different patterns of neoplasms were immunoreactive for ck7 , vimentin , p63 , bcl-2 , gfap , and sma ( fig . ) . because of the mixed epithelium and myoepithelium and the presence of the myxoid stroma , cxpa was diagnosed . we performed a wide endoscopic excision of the remaining mucoperichondrium of the nasal septum and a subsequent left middle turbinectomy . a residual tumor was found in the nasal septum , measuring 0.2 × 0.2 × 0.15 cm , with a depth of 0.15 cm . the surgical margins were microscopically negative for tumor cells ; the carcinoma extended to within 0.2 cm of the closest margin . the patient received no adjuvant therapies and no signs of recurrence or distant metastasis were observed after 24 months of follow - up ( fig . ) .
figure captions : axial and coronal noncontrast computed tomographic image of the nose and paranasal sinuses showing an expansile soft tissue mass in the left nasal cavity ( white arrows ) . ( b ) microscopic examination revealed the heterogeneity of the tumor ( hematoxylin and eosin , ×40 ) . ( a ) the surface squamous epithelium showed hyperchromatic and pleomorphic neoplastic cells ( ×200 ) . ( b ) myxoid stroma with relatively bland - looking nuclei was noted in the adjacent part of the neoplasm ( ×200 ) . ( c ) the cellular solid area of the underlying carcinoma ex pleomorphic adenoma was composed of hyperchromatic cells ( ×100 ) . ( d ) some foci displayed a loose hemorrhagic pattern with occasional enlarged pleomorphic neoplastic cells ( ×200 ) . ( a ) ck7 , ( b ) vimentin , ( e ) gfap , and ( f ) sma were negative in the surface squamous carcinoma but positive in the underlying cxpa . however , ( c ) p63 and ( d ) bcl-2 were immunoreactive in both the surface squamous carcinoma and underlying cxpa .
tumors of this variety constitute approximately 3.6% to 4% of all salivary gland neoplasms and 12% of all salivary gland malignancies . although most cxpas occur in the major salivary glands , typically in the parotid gland , cases arising from the minor salivary glands of the oral cavity and oropharynx account for approximately 17.5% of the armed forces institute of pathology series . cxpas are subclassified into 3 main categories by the world health organization on the basis of the degree of invasion of the carcinoma beyond the pa capsule , namely invasive ( > 1.5 mm invasion from the tumor capsule into adjacent tissues ) , minimally invasive ( 1.5 mm penetration of the malignant component into extracapsular tissue ) , and noninvasive . di palma proposed an alternative classification of cxpas into 2 clinically relevant categories : early cxpas and widely invasive cxpas . widely invasive cxpas include any cxpa with an invasion of more than 6 mm . a cxpa is diagnosed by examination under a microscope , in addition to the consideration of patient history . the presence of an infiltrative and destructive growth pattern of carcinoma in juxtaposition with a pa is the diagnostic criterion for cxpa . in approximately 75% of cases , cxpas arise in a pa that is apparent in the surgical specimen . however , the proportion of the malignant component varies widely , and in certain instances , ascertaining the location of the original benign pa is difficult . diagnosis of a cxpa depends on a careful sampling of the tumor after resection to locate any coexisting benign adenomatous component . the malignant components most commonly observed in cxpas are adenocarcinomas ( not otherwise specified ) . almost all other malignant varieties of salivary gland tumors have been described ( e.g. , undifferentiated carcinoma , squamous cell carcinoma , mucoepidermoid carcinoma , salivary duct carcinoma , adenoid cystic carcinoma , small cell carcinoma , and myoepithelial carcinoma ) . until the past decade , the genuine malignancies of these early changes in pas were confirmed using immunohistochemical and molecular genetic analysis of the human epidermal growth factor receptor 2 ( her-2 ) and tp53 genes . her-2 and tp53 genes and proteins are involved in the early stages of the malignant transformation of pas . furthermore , the immunohistochemical overexpressions of her-2 , p53 protein , and the mib-1 proliferation marker could be used as targets to identify malignant areas in pas . cxpas of the nasal cavity are exceedingly rare , with only 6 prior cases documented in the literature ( table 1 ) . the most common presenting symptoms in the reported cases are intermittent nasal bleeding and unilateral nasal obstruction . the average age of the affected patients was 55.7 years , approximately a decade older than the average age of patients with pa of the nasal cavity . all reported cases have involved a surgical procedure as an initial treatment modality , such as a wide excision through lateral rhinotomy , partial or medial maxillectomy , or craniofacial resection . however , the unique presentation of a pedunculated , well - encapsulated nasal mass in our patient resulted in our utilizing a wide endoscopic excision . in a recent meta - analysis , rawal et al demonstrated that the overall 2- and 5-year survival rates associated with endoscopic removal of sinonasal malignancies were comparable to , and sometimes greater than , those published for the open resection of sinonasal malignancies . 
the survival rates of endoscopic endonasal resection seemed to correlate more strongly with cancer grading than with cancer staging . on the basis of a retrospective study , chen et al proposed that surgery followed by postoperative radiation therapy should be the standard of care for patients with cxpa of the parotid gland . however , this conclusion may not apply to intranasal cxpas because of the different biological behavior of the sinonasal tract and a high degree of heterogeneity in cancer subtypes . therefore , only 2 out of the 7 documented cases of intranasal cxpas have involved adjuvant radiotherapy . because of the short follow - up period of the reported cases , the role of postoperative radiotherapy in the prognosis of intranasal cxpas is inconclusive . however , although the number of cases is small , notable trends can be observed . cxpas mainly affect women , and 6 out of the 7 reported cases of intranasal cxpas , including that of our patient , presented in women . most cases ( 5 out of 7 ) of intranasal cxpa involved the nasal septum , and only 1 case originated in the lateral nasal wall ; the other case originated in the nasal floor . moreover , the malignant component of cxpas found in the nasal cavity apparently tends to consist of various combinations of differentiations . for instance , freeman et al reported a case of intranasal cxpa involving the presence of adenoid cystic and squamous carcinomatous differentiation . chimona et al presented another case of nasal cxpa that exhibited squamous and mucoepidermoid carcinoma . in the present case , we identified both undifferentiated and squamous carcinomatous components in the surgical specimen . cxpas with double differentiation of the carcinomatous component have also been reported in minor salivary gland cxpas arising from the palate and buccal mucosa . histopathological factors relating to the prognosis of cxpas were discussed by weiler et al . however , whether intranasal cxpas share the same prognosis significance is uncertain , because an insufficient number of cases with long - term follow - up have been reported to lead to any meaningful conclusions about their clinical behavior . because of close margins and concerns about local recurrence , postoperative radiation therapy was advised for our patient ; nevertheless , she chose close follow - ups instead of receiving adjuvant radiation therapy . no evidence of local recurrence or distant metastasis was found after 24 months of follow - ups . although cxpa was diagnosed in the present case , the mixed patterns and immunoreactive results were notably unusual . the main differential diagnosis was polymorphous low - grade adenocarcinoma ( plga ) because of the different patterns in the tumor . the morphological patterns typically include lobular solid nests admixed with cribriform , trabecular , and focal papillary cystic areas . the tumor stroma is composed of fibrous tissue that shows varying degrees of hyalinization and myxoid change . although many factors of cxpa and plga overlap , the presence of myxoid stroma , immunoreactivity for gfap , and an apparent mixture of surface carcinoma types contributed to the final diagnosis of cxpa . in summary , we present a rare case of pedunculated intranasal cxpa with a favorable outcome after a wide endoscopic excision and the longest follow - up period published to date . 
this case demonstrates the importance of considering the possibility of cxpa in the differential diagnosis of minor salivary gland malignancies in the nasal cavity . we identified a predominance of cases in women , a tendency to originate in the nasal septum , and the characteristic of double differentiation of the carcinomatous component of the reported intranasal cxpas . because of their extremely low incidence , information about the natural course , prognosis , and treatment of intranasal cxpas can currently be extrapolated from the published data regarding tumors of the same type that are located in the major salivary glands .
abstract . background : a carcinoma ex pleomorphic adenoma ( cxpa ) is an epithelial malignancy arising in or from a benign pleomorphic salivary adenoma . the parotid gland is the most common location of cxpas . minor salivary gland cxpas of the nasal cavity are exceedingly rare , with only 6 documented in the literature . methods and result : we present a 7th case : an unusual pedunculated intranasal cxpa , which had a favorable outcome after a wide endoscopic excision and the longest follow - up period reported to date . the clinical features , immunohistochemical characteristics , treatment choices , and disease outcomes of the intranasal cxpas reported in previous studies are also reviewed . conclusion : this case demonstrates the importance of considering the possibility of cxpa in the differential diagnosis of minor salivary gland malignancies in the nasal cavity .
teeth are complex anatomic structures that may present developmental anomalies in various aspects , such as defects in structure , shape , size and number . fusion is a rare developmental anomaly with a complex morphology that can give rise to reduced esthetics , misalignment , dental caries and periodontal problems . the purpose of this article is to report a rare case of bilateral fusion of mandibular second premolar with supernumerary tooth . a 25-year - old male patient reported to the chettinad dental college hospital with a complaint of pain in the left lower posterior tooth ( 35 ) which was decayed and had a bizarre morphology similar to its counterpart on the right side ( 45 ) . both teeth were larger in all aspects than the adjacent normal first premolar and had a pronounced buccal groove [ figures 1 and 2 ] . radiographic examination of 35 revealed pulpal involvement associated with a complex coronal and radicular pulpal anatomy . the orthopantomograph showed no evidence of a missing tooth in the mandibular arch but showed bilateral impaction of the third molars [ figure 3 ] .
figure captions : tooth 35 showing abnormal crown morphology and pronounced buccal groove ; partially erupted 45 showing abnormal crown morphology ; orthopantomograph showing abnormal tooth morphology in 35 and 45 along with impaction of 38 and 48 .
fusion is a developmental dental anomaly caused by the union of two tooth germs to form a single tooth . fused teeth are usually united by enamel and dentin , whereas the pulp chambers and pulp canals are either unified or separated . the etiology of fusion is still unknown ; however , the crowding of the tooth germs during their development can be an important factor . local metabolic disturbances or developmental aberrations of ectoderm and mesoderm during morphodifferentiation of the tooth bud can be considered as etiological factors . fused teeth do not show any sex predilection but are more frequent among the japanese population and american indians . fusion of teeth is more common in the deciduous dentition than the permanent dentition , with prevalence rates of 0.5 - 2.5% and 0.1% , respectively . the literature reveals the prevalence of bilateral fusion to be 0.01 - 0.04% in primary teeth and 0.05% in permanent teeth . irrespective of the type of dentition , the majority of them are associated with the incisors and canines rather than the posterior teeth . fusion of posterior teeth is infrequent in the secondary dentition , with a prevalence ranging from 0.08% to 0.5% . in view of this , the present case is a rarity , as it occurred in the mandibular arch , in the posterior region , with bilateral presentation , and was caused by the union of a second premolar and a supernumerary tooth , which has seldom been reported . the number of teeth present is usually reduced in fusion , but is normal if the anomaly occurs between a regular and a supernumerary tooth . in contrast , gemination results in an apparent increase in the number of teeth , as it is caused by the division of a single tooth germ to form two separate teeth . in these situations , even though there is no variation in the treatment plan , an attempt can be made to differentiate the two anomalies by performing a thorough clinical and radiographic examination . fusion between two teeth usually results in space gain or a diastema , but this may not be the case when the anomaly involves a supernumerary tooth , as seen in our case .
the portion of the fused tooth that is formed by the supernumerary tooth is conical in morphology and is smaller when compared with other fused component that is separated by a developmental groove , whereas in gemination both the halves are mirror images of each other . the present case also revealed two disproportionate components with dissimilar morphology separated by a groove . according to which if the anomalous tooth is counted as two teeth and teeth count is normal in the arch , then it is considered as fusion . in cases where the anomalous tooth is counted as two teeth and if an extra tooth is present in the region , then it is regarded as gemination or a fusion between a normal and a supernumerary tooth . he also suggested referring teeth joined together by dentin as fused teeth . brook and winter proposed the neutral term intraoral radiographs can be considered but may not be confirmatory in the differentiation of double teeth as geminated teeth have a single large root canal , whereas fused teeth may have separate or fused root canals . in the present case , the anomalous teeth revealed an increased mesiodistal width , a single pulp chamber ; single root canal and normal teeth count in the respective quadrant as evident radiographically . . the buccal and lingual grooves may be deep and extend subgingivally favoring plaque accumulation leading to dental caries and periodontal diseases . the complex tooth morphology and pulpal anatomy , tooth position , and difficulty in rubber dam placement may negate endodontic treatment and necessitate surgical removal of the affected tooth . fusion of primary teeth may lead to hypodontia , malformation , impaction , delayed , or altered path of eruption of permanent successors . in conclusion , fusions are rare developmental anomaly and need to be recorded during routine clinical examination . the abnormal morphology demands prophylactic and early interceptive treatment in order to avoid the complicated pulpal and periodontal treatment related to these teeth . surgical extraction of the affected primary teeth can be an option to avoid its delayed exfoliation and the subsequent delayed or ectopic eruption of the successor .
fusion is the union of two normally separated tooth germs resulting in the formation of a single large tooth . the prevalence of this anomaly is less than 1% and most common in the primary dentition , in the incisor - canine region . fusions are almost always unilateral , but few cases of bilateral fusions have been reported . the purpose of this article is to report a rare case of bilateral fusion of mandibular second premolar with supernumerary tooth .
On May 15, the WNBA suspended Brittney Griner and Glory Johnson—two married all-stars for the Phoenix Mercury and Tulsa Shock, respectively—for seven games apiece, calling the behavior of both parties equally “unacceptable” in a statement. However, contrary to what was originally reported, evidence provided by Johnson's attorney, Howard Snader, suggests that Johnson was the target rather than the perpetrator of the incident. On April 22, Griner and Johnson were arrested in Goodyear, Ariz., after police were called to a residence for a domestic dispute. Many of the details of the incident remain murky, but in a medical evaluation conducted two days after Johnson was arrested—according to records provided by Johnson’s lawyer—Phoenix-based orthopedic doctor Thomas C. Fiel noted that Johnson had been struck twice “on the back of her head by a hard carrying case.” A CT scan corroborated that Johnson had experienced head trauma and suffered a concussion. The CT scan also found evidence of spinal trauma. Griner, according to the police report, suffered only minor scratches. Attorney Jane Bambauer, who is a professor at Arizona’s James E. Rogers College of Law and teaches and writes about criminal procedure, wrote in an email to SI.com that the police report and Johnson’s medical reports clearly indicate to her that Griner “was the aggressor” even though each woman was referred to as "The Victim" in separate probable cause statements taken at the time of their arrests. “If I’m being fought,” Johnson said during an exclusive interview with SI.com last Thursday. “I’m not just gonna sit back … there’s probably a better way to handle it. But at the time … you’re just thinking of protecting yourself and doing what you need to do to stand up for yourself.” Despite having access to all of the legal and medical information, the WNBA still decided to punish both spouses equally. “[The WNBA] definitely knew about it,” Johnson said, referring to her injuries and how they occurred. “And that’s another reason it surprised me that they came up with the same conclusion. I’m not going to throw Brittney under the bus … and she’s not going to throw me under the bus … [but] what the [ ] did not say in the statements they released was that I pled not guilty … So for them to release a statement saying that we were both guilty in the situation, it’s not right. It’s not correct … Brittney pled guilty … Brittney understands why I pled not guilty, and I understand why she pled guilty … she was even willing to speak to whoever she needed to, to get the point across.” Johnson, who is 6'4", said police officers told her she was being arrested along with the 6'8" Griner (despite the fact that neither wanted to press charges) due to official policy. 
When domestic disputes occur between a man and a woman, the officers said, it's not automatic to arrest either party, but that “when it’s two women … they take both.” Lieutenant Scott Benson of the Goodyear police department says that the same-sex policies conveyed to Johnson were either "misunderstood or misrepresented. There's not anything with male or female in domestic violence laws," he says. "When there's a dual arrest made, [it's because] we can't determine who the aggressor is." According to Stacey Long Simmons, director of public policy and government affairs at the National LGBTQ Task Force, the dual arrest policy Benson describes is part of a larger problem in police protocol. "While we are unable to comment on the facts of [Johnson's] case, we find that local police departments still lack sufficient knowledge and cultural competency of LGBTQ couples and their families," Simmons wrote in an email to SI.com. "For example, in cases related to intimate partner violence involving same-sex couples, local officers still continue to arrest both parties." Bambauer agreed that dual arrests create problems for victims like Johnson. “After looking at these [documents],” Bambauer wrote, “I suspect the ‘primary aggressor’ issue is pretty relevant and important. If police (or the WNBA, for that matter) do not put in the work to figure out who the first, or most dominating, aggressor is, victims are doubly punished—first by their partner, and then by the state.” From a legal standpoint, Bambauer wrote, “the treatment of Johnson is a real source of liability for the Goodyear prosecutors. They, and the WNBA, deserve some criticism over their handling of this sad incident." The Tulsa Shock issued a statement agreeing with the WNBA's decision. A popular player among fans, Johnson was featured in photos on Tulsa’s Facebook and Twitter pages up until her arrest; images of Johnson are noticeably absent from these pages now. When asked whether his position on the suspensions had changed in light of the circumstances surrounding Johnson's concussion, Steve Swetoha, president of the Shock, said that he stood by the statement he made after the WNBA's decision was released. Meanwhile, a representative from the Mercury told SI.com, "We're a little bit confused about it because [this new information] wasn't in the findings of the police report. It wasn't in the findings of the WNBA investigation." Griner's agency did not return calls seeking a comment. Although Johnson admits to being confused and disappointed by the league’s decision, her feelings about Griner haven’t changed. Throughout last Thursday’s phone conversation, she laughed whenever Griner’s name came up—there was a lightness in her voice, even when she was describing something difficult between them. She remains committed to her spouse, and said that one of the stressors leading to their altercation—on top of the strain of moving, buying a house, planning a wedding, and dealing with health crises in both of their families—was that the two were also meeting with fertility doctors to begin planning their own family. “A lot of people know we were considering the process—a lot of friends, anyway. But with two women, you know, it’s not like you can get somebody pregnant overnight. It’s a very huge process.” "A lot of people were telling us that we rushed the wedding," she continued. 
"But if we had done it Brittney's way, we would have gone somewhere without telling anybody, and we would have done it way before anybody knew—which is something that I really like about her. She doesn't care what people think." In spite of the media storm surrounding her arrest, Johnson said she is enjoying being a newlywed and looking forward to playing against her spouse this season. "Our schedules are really hectic, and we might meet up maybe four times the entire season. It's tough at times, so [I need to] take advantage of each time [I] see her." ||||| WNBA's Brittney Griner OUR MARRIAGE IS OVER Files for Annulment (Update) Brittney Griner -- OUR MARRIAGE IS OVER ... Files for Annulment (Update) EXCLUSIVE 4:31 PM PT -- In court docs obtained by TMZ Sports, Griner says she has NO biological connection to the baby ... and she doesn't know any of the key details about Johnson's pregnancy. In short, it seems like Griner is suggesting they were not on the same page when it came to the pregnancy. Griner also says the marriage to Johnson is based on "fraud and duress" -- because Griner was "pressured into marriage under duress by [Johnson's] threatening statements." Griner does not specify the nature of the threats. Griner also says there was fraud -- but does not say why. It seems to be related to the pregnancy. As for the fetus, Griner does not say if she wants to have any type of parenting arrangement with the child. IT'S OVER -- WNBA superstar Brittney Griner has officially filed papers seeking to annul her 28 day marriage to her pregnant new wife Glory Johnson ... TMZ Sports has learned. Griner and Johnson tied the knot on May 8th in a small ceremony in Arizona ... just weeks after they were both arrested in a domestic violence incident at their home. The move is especially shocking considering Johnson just announced she's pregnant. Johnson and Griner have been together for a while -- and even appeared on an episode of "Say Yes to the Dress: Atlanta" together back in January. As we previously reported, Griner and Johnson were taken into custody for domestic violence on April 22nd. Both women say it was mutual combat after a heated argument got physical. Griner later pled guilty to disorderly conduct and was ordered to complete a 26 week domestic violence counseling program. Johnson's case is still pending. Both women were hit with 7 game suspensions from the WNBA ... though it's unclear how the league will handle the situation with Johnson, considering she's missing the entire 2015 season due to pregnancy. ||||| Glory Johnson Brittney Griner Is a Liar ... She Blindsided Me Glory Johnson -- Brittney Griner Is a Liar ... She Blindsided Me EXCLUSIVE Glory Johnson says she was "blindsided" by Brittney Griner's decision to file for an annulment today -- telling TMZ Sports she's "extremely hurt" by Brittney's actions. We spoke with Johnson's sports marketing agent, D.J. Fisher, who tells us "Glory was unaware of the filing and still loves and cares for Brittney." TMZ Sports broke the story ... Griner filed the court docs Friday -- saying Johnson essentially threatened her into getting married ... and claims the whole thing was a fraud. Griner issued a statement saying, "Last Wednesday, Glory and I agreed to either legally separate, get divorced, or annul our marriage. In the week prior to the wedding, I attempted to postpone the wedding several times until I completed counseling, but I still went through with it. I now realize that was a mistake." 
But Johnson's camp says Brittney's full of crap -- saying they NEVER agreed to annul the marriage. Johnson's rep adds, "Glory loves Brittney and made a huge sacrifice to carry a child, put her career on hold, to invest in their relationship and their future. As a result she won't be playing this season." "Glory wouldn't intentionally do anything to hurt Brittney and has tried her best to protect her and their marriage. Obviously this marriage was about them starting their life together. Glory is the sweetest thing in the world and she was dedicated to their partnership." "She knows how important marriage is and made a lifetime commitment and decision to spend the rest of her life with Brittney."
– This looks messy: WNBA players Brittney Griner and Glory Johnson seem headed for a breakup just 28 days into their budding marriage, People reports. The move comes right after Johnson announced her pregnancy, and six weeks after the 24-year-olds were arrested and got league suspensions for getting in a fight at home. "Last Wednesday, Glory and I agreed to either legally separate, get divorced, or annul our marriage," Griner says in a statement; she filed papers to annul their marriage on Friday, TMZ reports. Hours later, Johnson posted an Internet meme about "unperfect people refusing to give up on each other," but deleted it soon after and said Griner's move blindsided her. Johnson revealed her pregnancy Thursday in an Instagram photo of a bun going into a cake shaped like an oven, but Griner says the pair agreed to call it quits Wednesday. She also claims to know very little about the pregnancy. On Friday, Johnson posted on Instagram, "One day until I'm reunited with my wife @brittneygriner. . . This is about to be one CRAZY SUMMER!!!" All of this follows a Sports Illustrated interview with Johnson published Tuesday, in which she claims Griner targeted her in their Goodyear, Arizona, domestic dispute. Medical records say Johnson was hit twice "on the back of her head by a hard carrying case," giving Johnson spinal trauma and a concussion, while Griner escaped with minor injuries. Adding to the mix, Griner now says Johnson threatened her into getting married in the first place, but doesn't dish on details, notes TMZ.
in contrast to qcd at finite temperature , rather little is known about qcd at finite density . technical difficulties ( such as the complex fermionic determinant at finite chemical potential ) make lattice monte - carlo simulations very difficult . even though some techniques are being developed to overcome these problems ( for example , the glasgow method@xcite or the technique of imaginary chemical potential@xcite ) , they are not yet able to provide unambiguous results . however , models of qcd seem to indicate a rich phase structure in high - density quark matter . in particular , much attention has recently been devoted to so - called color superconductivity"@xcite : an arbitrarily weak attractive force makes the fermi sea of quarks unstable at high density with respect to diquark formation and induces cooper pairing ( diquark condensation ) . although this phenomenon had been studied earlier@xcite , the large magnitude of the superconducting gap found in the more recent studies@xcite ( more than 100 mev ) suggested that this was much more important than had been thought previously and has generated an extensive literature . in refs . @xcite an instanton model was used to calculate the gap at finite density and zero temperature . berges and rajagopal @xcite extended this work and calculated the phase diagram of strongly interacting matter as a function of temperature and baryon - number density in the same model . identical results were found in the nambu jona - lasinio model @xcite . the existence of such a color - superconducting gap could have important consequences for the physics of neutron stars or even for heavy ion collisions@xcite . ( for a review of the field , see:@xcite . ) previous studies of color superconductivity have focused on instabilities of the quark fermi sea with respect to diquarks only . of course , it is known that at lower densities ( of the order of nuclear matter density ) three - quark clusters nucleons are the dominant degrees of freedom . here we address the question of the possibility of a competition between diquark condensation and three - quark clustering at finite density . to answer such a question fully would require a three - particle generalization of the bcs treatment , a complicated task . as a first step towards this goal , we look for instabilities of the quark fermi sea with respect to three - quark clustering by studying the nucleon binding energy in quark matter with finite density . a bound nucleon at finite density would be a signal of instability with respect to clustering ( in the same way that a bound diquark at finite density is a signal of instability with respect to diquark condensation ) . a comparison of the magnitudes of the binding energies of the diquark and the nucleon can give some idea of the relevance of these degrees of freedom . very recently , beyer _ et al . _ have studied a similar clustering problem for three nucleons in nuclear matter@xcite . to perform this study we solve the bethe - salpeter equation for the diquarks and the faddeev equation for the nucleon ( and also , for completeness , for the @xmath0 ) within the framework of a nambu jona - lasinio ( njl ) model . the use of a separable interaction simplifies the treatment of the faddeev equation considerably . the form of the equation reduces to that of a bethe - salpeter equation describing the interaction between a quark and a diquark . 
at zero density , several groups have used this faddeev approach to study baryons in the njl model ( for example , refs . more recent papers@xcite have extended this treatment to incorporate a mechanism for confinement . in this work we will mainly follow the formalism developed by ishii , bentz and yazaki@xcite , generalizing it to finite density . in this type of approach , it is crucial to include the axial - vector diquark channel in addition to the scalar channel , as otherwise the nucleon is very weakly bound . in previous preliminary studies , keeping only the scalar channel , we have found that binding of a nucleon in matter is only possible at very low densities , less than 10% of nuclear matter density@xcite . the paper is organized as follows : in the next section , we briefly describe the njl model and its application to quark - quark interaction . the parameters used in our study are also discussed . in sec . 3 the bethe - salpeter equations in the scalar and axial - vector diquark channels are solved . the form of the faddeev equation for the nucleon is presented in sec . 4 . in sec . 5 , we describe the numerical techniques used and in sec . 6 we present our results for the nucleon and the @xmath0 at finite density . finally , we draw some conclusions in sec . the njl model provides a simple implementation of dynamically broken chiral symmetry , based on a two - body contact interaction@xcite . in spite of the fact that the model does not incorporate confinement , it has been successfully applied to the description of mesonic properties at low energy . ( for reviews , see refs . @xcite . ) the model lagrangian has the form @xmath1 where @xmath2 is the interaction lagrangian . in the present work we consider only the chiral limit , setting the current quark mass @xmath3 to zero . several versions of the njl interaction can be found in the literature . the original version@xcite is @xmath4.\ ] ] one can also work with a color - current interaction , @xmath5 where @xmath6 ( @xmath7 ) are the usual gell - mann matrices . whatever version of the model is chosen , a fierz transformation should be performed in order to antisymmetrize the interaction lagrangian . this allows @xmath2 to be brought into a form where the interaction strength in a particular channel can be read off directly from its coefficient in the lagrangian . for the @xmath8 channel , one just rewrites @xmath2 into the form @xmath9 where @xmath10 is the fierz transformed form of @xmath2 . we shall need here only the scalar and pseudoscalar terms of the @xmath8 interaction . these have the same form as eq . ( [ orig ] ) , but with a coupling constant @xmath11 which is related to the original coupling constant @xmath12 of the lagrangian by a coefficient given by the fierz transformation . for example , one has @xmath13 for the model defined by eq . ( [ orig ] ) and @xmath14 for eq . ( [ oge ] ) . to study the nucleon we also need to rewrite the interaction lagrangian in the form of a @xmath15 interaction . this is done by a fierz transformation to the @xmath15 channels , which allows the interaction to be expressed as a sum of terms of the form @xmath16 , where the matrices @xmath17 and @xmath18 are overall antisymmetric in dirac , isospin and color indices . ( we use the dirac representation for the @xmath19-matrices and follow the conventions of itzykson and zuber@xcite . ) as our three - quark state must be a color singlet , the diquark channels of interest are color anti - triplet . 
for a local interaction the relevant channels are the scalar ( @xmath20 ) and axial - vector ( @xmath21 ) ones . these are also the channels in which more realistic interactions ( including , for example , one - gluon exchange ) are expected to be most attractive . explicitly , we have @xmath22 for the scalar channel and @xmath23 for the axial - vector one . the matrices @xmath24 for @xmath25 project onto the color @xmath26 channel and @xmath27 is the charge conjugation matrix . the coupling strengths @xmath28 and @xmath29 are again related to the original @xmath12 by a coefficient given by the fierz transformation . in the following we do not choose a specific version of the njl lagrangian but instead treat the physical couplings @xmath30 as independent parameters . the gap equation for the constituent quark mass @xmath31 reads @xmath32,\ ] ] where the quark propagator is @xmath33 the integral ( [ gap ] ) diverges and so has to be regularized . there are various regularization schemes at our disposal : pauli - villars , proper - time , and 3- or 4-momentum cut - off . in this work we use a sharp cut - off @xmath34 on the 3-momentum to regularize the loop integrals , since this is conveniently applied to systems of finite density . although a 3-momentum cut - off is not lorentz invariant , this is less relevant at finite density where there is a much larger , physical , breaking of lorentz invariance due to the presence of a quark fermi sea . in any case , we shall show that physical observables depend only weakly on the choice of regulator . the two parameters @xmath35 and @xmath34 can be fitted to a given value of the constituent quark mass and to the pion decay constant @xmath36 mev . this last quantity is evaluated from @xmath37 in the following calculations we use two different values for the constituent mass @xmath38 mev and @xmath39 mev , both of which have been chosen to be large enough that the mass of the @xmath0 lies below the three - quark threshold . in table [ para ] we list the values of @xmath35 and @xmath34 for the corresponding constituent masses . as one can see , the values for the cut - off are relatively low and decrease with increasing constituent mass . one can not push the constituent mass to higher values as otherwise the cut - off would approach ( or , worse , become smaller than ) the quark mass . .[para ] values of the parameters @xmath35 and @xmath34 for the two values we use for the constituent quark mass @xmath31 . for each @xmath31 , the @xmath15 coupling ratios @xmath40 and @xmath41 are determined by fitting the @xmath42 and @xmath0 masses . in each case we also list , in parentheses , the minimal value of the coupling ratio required to produce bound diquarks . + [ cols="^,^,^,^,^ " , ] the effects of finite density on the constituent mass are taken into account by introducing the fermi momentum @xmath43 as a lower cut - off on the integral in the gap equation ( [ gap ] ) . in figure [ mass ] we show the evolution of the constituent mass as a function of the fermi momentum for the two sets of parameters corresponding to @xmath38 and 500 mev . for these parameters , the restoration of chiral symmetry occurs at @xmath44 mev and @xmath45 mev respectively . ( for comparison , nuclear matter density corresponds to a quark fermi momentum of @xmath46^{1/3}=270 $ ] mev ) . the diquark @xmath48-matrix is an essential building block for the faddeev equation . 
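The numerical content of this step can be illustrated with a short sketch (schematic only: the overall constant C below lumps together the coupling, colour and flavour factors, and both C and the cut-off Lambda are invented numbers chosen so that M is roughly 400 MeV in vacuum; the paper's exact normalization sits inside the @xmath placeholders). The Fermi momentum enters simply as the lower limit of the momentum integral, and fixed-point iteration then reproduces the qualitative behaviour described above: the constituent mass falls with p_F and vanishes at a Fermi momentum of a few hundred MeV.

import numpy as np

def gap_mass(p_F, Lambda=0.6, C=8.3, M_start=0.4, tol=1e-8):
    # Solve a schematic chiral-limit gap equation
    #   M = C * Integral_{p_F}^{Lambda} dp  p^2 M / sqrt(p^2 + M^2)
    # by fixed-point iteration; C lumps together coupling and degeneracy factors.
    p = np.linspace(p_F, Lambda, 2001)
    M = M_start
    for _ in range(1000):
        f = p**2 * M / np.sqrt(p**2 + M**2 + 1e-30)
        integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p))   # trapezoidal rule
        M_new = C * integral
        if abs(M_new - M) < tol:
            break
        M = M_new
    return M

for p_F in (0.0, 0.2, 0.3, 0.4):           # quark Fermi momentum in GeV
    print(p_F, gap_mass(p_F))              # M drops with p_F and eventually -> 0 (chiral restoration)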
it is obtained by solving the bethe - salpeter equation in the ladder approximation , @xmath49 in the scalar diquark channel , the interaction kernel @xmath50 is @xmath51 this interaction is momentum - independent and so eq . ( [ bs ] ) can be solved easily to get the diquark @xmath48-matrix in the scalar channel@xcite : @xmath52 with @xmath53 and @xmath54.\ ] ] if the @xmath48-matrix ( [ tmasca ] ) has a pole , this gives us the mass of the bound scalar diquark . note that , if one replaces the coupling @xmath28 by @xmath35 , the denominator of ( [ tmasca ] ) is the same as that in the pion @xmath55 channel@xcite . this means that for @xmath56 the scalar diquark and pion are degenerate , and have zero mass in the chiral limit . since diquarks do not condense in the vacuum , this puts an upper limit to the choice of the scalar coupling @xmath28 . in the axial - vector channel , the kernel is @xmath57 and the solution of ( [ bs ] ) can be shown to be @xmath58 with @xmath59.\ ] ] here , the axial polarization tensor , @xmath60,\ ] ] has been decomposed in the form @xmath61 again , a bound axial - vector diquark corresponds to a pole in the @xmath48-matrix ( [ tmaax ] ) . the minimum values of the coupling ratios @xmath40 and @xmath41 required to get bound diquarks at zero density can be found in table i. note that when the loop integrals are regulated with a simple cut - off on either the 3- or 4-momentum , the longitudinal polarizability in this channel , @xmath62 does not vanish , in contrast to the hybrid method involving dimensional regularization used by ishii _ et al._@xcite . however , since there is no conserved current coupled to these states , this does not violate any physical symmetries . in the presence of quark matter , manifest lorentz invariance is broken and the structure of the polarization tensor @xmath63 becomes more complicated . for a diquark momentum in the @xmath64-direction , @xmath65 , it can be written @xmath66 the use of a 3-momentum cut - off does lead to deviations from the lorentz covariant structure shown in ( [ axpol ] ) even in the vacuum case , but we have checked that these are small . the results for diquarks at zero density are qualitatively similar to those in ref . @xcite despite a different choice of regulator and the use of a nonzero current - quark mass in that work . for a constituent quark mass of @xmath67 mev , as used in that work , we find that the minimum value of @xmath40 for diquark binding is 0.4 compared with 0.33 in ref . @xcite . for @xmath68 and @xmath69 we get scalar diquark masses of 699 and 507 mev respectively , compared with 627 and 446 mev ( cf . table 2 of ref . @xcite ) . to bind the axial - vector diquark , we find a minimum coupling strength @xmath70 . the corresponding value given in ref . @xcite is 2.0 , but this should in fact be divided by a factor of 4 @xcite , and so we again have qualitative agreement with that work . because of the separability of the njl interaction , the ladder approximation to the faddeev equation for the three - body system can be reduced to an effective two - body bethe - salpeter equation . this can be thought of as describing the interaction between a quark and a diquark , although it is not necessary that the diquark be bound . in our derivation of this equation , we have followed the procedure described in @xcite . we reproduce here only the main steps of this derivation ; the details can be found in the original papers @xcite . 
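Locating a bound diquark as a pole of the T-matrix amounts numerically to finding the zero of the denominator below the two-quark threshold. The sketch below is illustrative only: J(q0) is a stand-in bubble integral with the correct threshold structure rather than the polarization function of the text, and the mass, cut-off, Fermi momentum and coupling c are invented numbers.

import numpy as np

M, Lambda, p_F, c = 0.4, 0.6, 0.2, 5.6     # GeV; illustrative numbers only

def J(q0):
    # Stand-in bubble integral: grows monotonically as q0 approaches the
    # two-quark threshold 2*sqrt(p_F^2 + M^2) from below.
    p = np.linspace(p_F, Lambda, 2001)
    E = np.sqrt(p**2 + M**2)
    f = p**2 / (E * (4.0 * E**2 - q0**2))
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p))

def denominator(q0):
    # schematic T-matrix denominator, 1 - c * J(q0); a zero below threshold is a bound state
    return 1.0 - c * J(q0)

threshold = 2.0 * np.sqrt(p_F**2 + M**2)
lo, hi = 0.0, threshold - 1e-6
if denominator(hi) < 0.0:                  # a sign change brackets the pole
    for _ in range(60):                    # bisection
        mid = 0.5 * (lo + hi)
        if denominator(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    print("bound diquark mass ~", 0.5 * (lo + hi), "GeV; threshold", threshold)
else:
    print("no bound state below threshold for this coupling")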
in the case of a purely scalar @xmath15 interaction , a thorough discussion of the faddeev equation can also be found in ref . @xcite . = 2.5 cm we denote the scattering amplitude of a quark on a diquark by @xmath71 . the indices @xmath72 label the diquark and , following the convention of ref . @xcite they take the values 5 ( for the scalar diquark ) and 0 , 3 , + 1 and @xmath73 ( for the components of the axial - vector diquark in a spherical basis ) . these diquark indices will be written as subscripts or superscripts to indicate that the components of the axial diquark are covariant or contravariant respectively . the dirac indices @xmath74 label the quark , taking the values 1 to 4 . the amplitude obeys the integral equation , @xmath75 corresponding to the diagram shown in fig . [ fadfig ] . the piece of the kernel containing the propagator of the exchanged quark is @xmath76 where @xmath77 and @xmath78 are the two - body vertex functions , already used in eqs . ( [ kersc ] ) and ( [ kerax ] ) . the propagator of the spectator quark is @xmath79 , and @xmath80 is the two - body amplitude for the two interacting quarks . the separable nature of the two - body interaction means that this amplitude can be thought of as a diquark propagator , but one should remember that it describes the propagation of all two - quark states , not just the bound states ( if they exist ) . it can be written as @xmath81 where @xmath82 and @xmath83 are the scalar and axial - vector diquark propagators " given by eqs . ( [ tmasca ] ) and ( [ tmaax ] ) respectively . to find bound states of the three - body system , we solve the homogeneous version of equation ( [ fad1 ] ) for the effective two - body vertex function of a quark and a diquark . this vertex function , @xmath84 , is related to the amplitude @xmath71 by @xmath85 where @xmath86 is the mass of the bound baryon . from ( [ fad1 ] ) one obtains the equation @xmath87 for the vertex function . this equation needs to be projected onto states of definite color , spin and isospin . projecting the kernel @xmath88 onto a color - singlet state gives @xmath89 for the isospin-@xmath90 channel , and @xmath91 for the isospin-@xmath92 channel . next , a projection onto good spin and parity must be carried out . like ishii _ et al._@xcite we use the helicity formalism of jacob and wick@xcite , constructing first a basis of states with definite helicity by acting with a rotation operator on helicity eigenstates whose momentum @xmath93 lies along the @xmath64-axis : @xmath94 lies in a general direction given by the euler angles @xmath95 . both the helicity @xmath96 and the intrinsic parity @xmath97 of the quark state are specified by the label @xmath98 ,4 , with @xmath99 , @xmath100 , @xmath101 and @xmath102 . the helicity @xmath103 of the diquark is specified by the label @xmath104 , with @xmath105 , @xmath106 and @xmath107 . the corresponding intrinsic parities are @xmath108 and @xmath109 . note that our use of these labels differs slightly from that of ishii _ et al._@xcite . the rotation matrices appearing eq . ( [ proj1b ] ) are @xmath110 and @xmath111 where the @xmath112 are wigner @xmath113-functions in edmonds convention@xcite . these basis states must now be projected onto good angular momentum : @xmath114 . the resulting faddeev equation for states of spin @xmath115 has the form @xmath116 the expression for the kernel is given by the formula ( d.9 ) of ref . 
we reproduce its form in our notation here , @xmath117 s(p/2 + p ) \bigr\}_{\alpha'\alpha}\nonumber\\ & & \qquad\qquad\times\tau_b^a(p/2 -p).\end{aligned}\ ] ] in eq . ( [ kernelb ] ) , the asterisk denotes complex conjugation with respect to the explicit factor of @xmath118 in the spherical - basis components of vectors only . the matrices @xmath119 and @xmath120 are the ones appearing between the phase factors in eqs . ( [ rotaq ] ) and ( [ rotadi ] ) respectively . in the spin-@xmath90 channel , the numerical coefficients @xmath121 are the ones appearing in eq . ( [ prker1 ] ) , with @xmath122 equal to + 1 if both indices refer to scalar diquarks , @xmath73 if both refer to axial diquarks , and @xmath123 if they are mixed . finally the faddeev equation has to be projected onto positive parity states . the resulting equation has the same form as ( [ fad3b ] ) , but with the positive - parity kernel , @xmath124 where the quark index @xmath125 is defined such that @xmath126 and @xmath127 , and the diquark index @xmath128 is defined similarly . the phase factor is @xmath129 where @xmath130 and @xmath131 are the intrinsic parities of the quark and diquark , and their intrinsic spins are @xmath132 , @xmath133 and @xmath134 . the parity projection cuts the set of 20 coupled equations down to 10 . of these , two describe only states with spin-@xmath92 and so decouple from the spin-@xmath90 channel to leave 8 coupled equations . a very similar set of equations can be derived for spin-@xmath92 states . the kernel has the same form as given in eq . ( [ kernelb ] ) ; only the numerical coefficients differ from spin-@xmath90 case . from eq . ( [ prker3 ] ) we see that @xmath135 is equal to @xmath136 if both indices refer to axial diquarks and zero otherwise . since the components involving scalar diquarks do not contribute , the number of coupled equations is again reduced to 8 . we solve the faddeev equation ( [ fad3b ] ) in the rest frame of the nucleon ( or @xmath0 ) , where @xmath137 . to avoid the singularities of the kernel , we perform a wick rotation on the energy variables @xmath138 and @xmath139 : @xmath140 with @xmath141 where @xmath142 is the lower - energy pole in the diquark @xmath48-matrices , ( [ tmasca ] ) and ( [ tmaax ] ) ( i.e. the mass of the lighter diquark ) . because of the wick rotation , we have to solve 8 complex coupled integral equations . we do so using the iterative method of malfliet and tjon @xcite . the set of equations may be written schematically as @xmath143 where the kernel @xmath144 depends nonlinearly on the energy eigenvalue @xmath145 . rather than solve this directly , we solve instead the linear eigenvalue problem @xmath146 for a fixed value of @xmath145 . this is done iteratively , by acting with @xmath144 on some initial guess for the vector @xmath147 of vertex functions to obtain a new vector . this is repeated until the vectors of functions in successive iterations are simply proportional to each other . the proportionality constant is then equal to @xmath148 . we search on @xmath145 until we find a value for which @xmath149 and so the solution @xmath147 satisfies the original equation ( [ fadschem ] ) . in the present problem , we use a simple initial guess consisting of a gaussian for the real part of each of the 8 components of @xmath147 , and the derivative of a gaussian for each of the imaginary parts . we find that 4 or 5 iterations are generally enough to determine @xmath148 . 
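A small self-contained sketch of this iteration scheme may be helpful (illustrative only: the kernel below is a toy separable matrix with an invented threshold, not the projected Faddeev kernel, and all numbers are arbitrary). For a fixed trial energy the dominant eigenvalue of the kernel is obtained by repeatedly applying the kernel to a starting vector, exactly as described above, and the energy is then adjusted by bisection until that eigenvalue equals one.

import numpy as np

def dominant_eigenvalue(K, n_iter=50):
    # Power iteration: apply K repeatedly until successive iterates are
    # proportional; the ratio of norms is then the largest eigenvalue.
    v = np.ones(K.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        w = K @ v
        lam = np.linalg.norm(w) / np.linalg.norm(v)
        v = w / np.linalg.norm(w)
    return lam

def kernel(E, n=64, E_th=1.0):
    # Stand-in for the discretized kernel K(E): a toy separable matrix whose
    # strength grows as the trial energy E approaches a fictitious threshold
    # E_th.  (In the real problem the kernel is complex after the Wick
    # rotation; the toy is kept real for simplicity.)
    q = np.linspace(0.01, 1.0, n)
    g = np.exp(-q**2)                      # toy form factor
    return 0.35 * np.outer(g, g) * (q[1] - q[0]) / (E_th - E)

# Search on E for lambda(E) = 1, the bound-state condition, by bisection.
lo, hi = 0.1, 0.99
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dominant_eigenvalue(kernel(mid)) > 1.0:
        hi = mid                           # eigenvalue too large: the solution lies at lower E
    else:
        lo = mid
print("bound-state energy ~", 0.5 * (lo + hi))   # ~0.79 for these toy numbers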
the parameters of the model that describe the interactions in the @xmath15 channels are fixed using the solutions to the faddeev equation at zero density . the values of the scalar and axial - vector couplings @xmath28 and @xmath29 that reproduce the masses of the nucleon and the @xmath0 are listed in table i for our two parameter sets . before describing our calculations at finite density , we should compare our zero - density results with those in ref . note that apart from the difference in cut - off scheme and the use of current quark masses , a further difference between our approach and theirs is that ishii _ et al . _ do not try to reproduce the nucleon and @xmath0 masses exactly , but investigate the range of parameters @xmath40 and @xmath41 that can give bound states with reasonable masses . when only the scalar coupling is included ( @xmath150 ) , we find that the minimum value of @xmath40 to get a bound nucleon is 0.8 compared with 0.5 in ref . @xcite , both for a quark mass of @xmath67 mev . although these values are rather different , one should note that the nucleon is very weakly bound in the absence of the axial @xmath15 coupling . for example , ishii _ et al . _ find a nucleon binding energy of only about 40 mev for @xmath151 . it is therefore not too surprising that the point at which the nucleon becomes bound is rather sensitive to details , such as the choice of regulator or the use of the chiral limit . with both scalar and axial couplings the nucleon is more strongly bound and one might hope that results are less sensitive , but in this case it is harder to make meaningful comparisons with the results of ref . @xcite because of the problem with the erroneous factor of 4 in the axial channel mentioned above . for a quark mass of @xmath152 mev we are able to fit the nucleon and @xmath0 masses with @xmath153 and @xmath154 . the value for the axial - vector coupling is smaller than the minimum value to bind the @xmath0 of 0.44 in ref . the value for the scalar coupling is somewhat larger than the range considered in that work . as in the case of the diquarks , there are indications that our approach tends to give less attraction in the scalar channel , compared with that of ishii _ et al . _ , but more attraction in the axial channel . overall , though , the results are qualitatively similar . the faddeev equation at finite density is solved using the same methods . in this case the constituent quark mass is density dependent , and the 3-momentum of the valence quarks must be restricted to be larger than the fermi momentum @xmath43 . this acts as a lower cut - off of the momentum integrals : @xmath155 for the three - momenta @xmath156 of all three quarks . a similar wick rotation is performed on the energy variables , but with @xmath157 where @xmath158 is the lower - energy pole in the diquark @xmath48-matrices at finite density . we have solved the faddeev equation at finite density for the energy @xmath159 of a nucleon at rest . in figs . [ bindnuc](a ) and [ bindnuc](b ) we show the binding energy of the nucleon @xmath160 as a function of the fermi momentum @xmath43 for our two sets of parameters . for these parameter sets the scalar diquark is more strongly bound than the axial , and so the nucleon binding energy is defined as @xmath161 with respect to the quark - diquark threshold . this is the relevant threshold since the njl model does not exhibit confinement .
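Before turning to the binding-energy comparison at finite density, the zero-density fitting step described at the start of this passage can be made concrete. The sketch below is only illustrative: the two *_mass functions are toy stand-ins for solving the Faddeev equation as a function of the coupling ratios, and the use of scipy's fsolve is our choice, not something specified in the text.

```python
# Sketch of the zero-density fit: adjust the scalar and axial coupling ratios
# (r_s, r_a) until the bound-state masses match the physical nucleon and Delta.
import numpy as np
from scipy.optimize import fsolve

M_N, M_DELTA = 940.0, 1232.0        # target masses in MeV

def nucleon_mass(r_s, r_a):
    # toy placeholder for the full Faddeev calculation, monotonic in both couplings
    return 1500.0 - 700.0 * r_s - 300.0 * r_a

def delta_mass(r_s, r_a):
    # toy placeholder, dominated by the axial coupling
    return 1700.0 - 200.0 * r_s - 900.0 * r_a

def residuals(x):
    r_s, r_a = x
    return [nucleon_mass(r_s, r_a) - M_N, delta_mass(r_s, r_a) - M_DELTA]

r_s_fit, r_a_fit = fsolve(residuals, x0=[0.6, 0.4])
print(r_s_fit, r_a_fit)
```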
we compare @xmath160 to the binding energy of the scalar diquark @xmath162 the binding with respect to the three - quark threshold is given by the sum of @xmath160 and @xmath163 . one can see that the behaviors of the nucleon and the scalar diquark at finite density are quite different . the binding energy of the diquark is very large and initially tends to increase with density , only decreasing when the fermi momentum approaches the cut - off . in contrast , the binding of the nucleon decreases rather quickly , so that it is only marginally bound at nuclear matter density ( @xmath164 mev ) . we have also calculated the binding energy of the @xmath0 as a function of @xmath43 and this is shown in fig . [ binddel ] . in this case the threshold is determined by the binding energy of the axial diquark , which is also shown in fig . [ binddel ] . the behavior is similar to that found in the case of the nucleon . since the axial coupling is smaller than the scalar coupling , the axial diquark is less strongly bound than the scalar one . nonetheless it remains bound over the whole range of densities considered , while the @xmath0 becomes unbound near nuclear matter density . chiral symmetry restoration occurs at @xmath44 mev and @xmath45 mev for the parameter sets with @xmath38 mev and @xmath39 mev respectively , which corresponds to the cusps in the diquark binding in figs . [ bindnuc ] and [ binddel ] . another perspective can be gained by plotting the total energy of the nucleon instead of its binding energy , as is done in fig . [ enernuc ] . for comparison , we have also plotted the quark - diquark threshold and the three - quark threshold . one can see that the nucleon energy increases only slightly with density , while the quark - diquark threshold decreases more and more quickly as one approaches the density of chiral restoration ( which is basically a consequence of the vanishing of the constituent quark mass at the transition ) . the density dependence of the @xmath0 energy shows the same qualitative behavior . to answer the question raised in the introduction about the possible competition between diquark and three - quark clustering , we have also plotted the quantity @xmath165 . if @xmath166 then a system of six quarks will prefer to form two nucleons , while if @xmath167 it will form three diquarks . [ enernuc ] shows that nucleons are more stable than diquarks only for fermi momenta smaller than about @xmath168 mev , which corresponds to @xmath169 of nuclear matter density . for our other parameter set , with @xmath39 mev , nucleons remain more stable than diquarks up to @xmath170 mev , or @xmath171 of nuclear matter density . these results clearly cast doubt on the validity of the model in this regime . we find that the njl model predicts that , except well below nuclear matter density , it is energetically much more favorable to form three diquarks than it is to form two nucleons , and that there are no nucleon instabilities for the high densities where color superconductivity is expected to occur . however we should not conclude on the basis of these results that there is no competition between nucleon formation and diquark condensation . since this model predicts that nuclear matter should consist of diquarks , a result clearly at odds with what is actually observed , some elements of reality are missing . the most obvious of those is the lack of explicit confinement in the njl model , something that is not easily remedied . 
it poses a rather deep question about the interpretation of the njl model at finite densities . in much of the recent literature it has been used to model an instanton - induced interaction between the quarks in high - density matter beyond the chiral / deconfining phase transition . we have tried to follow a more traditional approach , where the model is interpreted as a density model for hadron structure in the vacuum . in that case the model contains _ unphysical _ diquark degrees of freedom , which may be ignored at zero density as being `` irrelevant '' . both of these approaches have flaws . in the first ( high - density ) interpretation , we have no clue as to what remnants of confinement may play a role in the interaction . in the second ( low - density ) interpretation we can not even describe nuclear matter properly . standing back and looking at our results in the light of these problems with the model , we note from fig . [ enernuc ] that the nucleon energy is roughly independent of density . it is mainly the increase in the diquark binding that renders the nucleon unstable . one could naively add to the njl model a three - body force that provides attraction in the color - singlet channel , to incorporate an approximate description of confinement . alternatively one might modify our treatment to use confined rather than free quark and diquark propagators @xcite . both of these choices have some appeal , but neither really deals with the underlying mechanism of confinement . one should also remember that our results have all been obtained using the `` rainbow - ladder '' approximation to the diquark bethe - salpeter equation and the faddeev equation . in the context of models with nonlocal interactions between the quarks , it has been shown @xcite that this approximation over - predicts diquark binding . with a toy model for the gluon propagator which leads to quark confinement , diquark condensation has been shown to occur even though diquarks are no longer bound at zero density @xcite . hence another way to approach this problem may be to study the faddeev equation within a less restrictive calculational scheme . we are very grateful to noriyoshi ishii for providing us valuable details of the derivation of the bethe - salpeter and faddeev equations . this work was supported by the epsrc under grants gr / j95775 ( manchester ) and gr / l22331 ( umist ) . i. barbour , s. morrison , e. klepfish , j. kogut , m .- lombardo , nucl . 60a * , 220 ( 1998 ) . m. alford , a. kapustin and f. wilczek , phys . d * 59 * , 054502 ( 1999 ) . m. alford , k. rajagopal and f. wilczek , phys . b * 422 * , 247 ( 1998 ) . r. rapp , t. schfer , e. shuryak and m. velkovsky , phys . lett . * 81 * , 53 ( 1998 ) . d. bailin and a. love , phys . rep . * 107 * , 325 ( 1984 ) . m. iwasaki and t. iwado , phys . b * 350 * , 163 ( 1995 ) . j. berges and k. rajagopal , nucl . b538 * , 215 ( 1999 ) . d. i. diakonov , h. forkel and m. lutz , phys . b * 373 * , 147 ( 1996 ) . t. schwarz , s. klevansky and g. papp , phys . c * 60 * , 055205 ( 1999 ) . m. alford , k. rajagopal and f. wilczek , nucl . a638 * , 515c ( 1998 ) . k. rajagopal , hep - ph/9908360 . t. schfer , nucl - th/9911017 . m. beyer , w. schadow , c. kuhrts and g. rpke , phys . c * 60 * , 034004 ( 1999 ) . a. buck , r. alkofer and h. reinhardt , phys . b * 286 * , 29 ( 1992 ) . n. ishii , w. bentz and k. yazaki , phys . b * 318 * , 26 ( 1993 ) . h. meyer , phys . b * 337 * , 37 ( 1994 ) . s. huang and j. tjon , phys . c * 49 * , 1702 ( 1994 ) . n. 
ishii , w. bentz and k. yazaki , nucl . a587 * , 617 ( 1995 ) . c. hanhart and s. krewald , phys . b * 344 * , 55 ( 1995 ) . g. hellstern , r. alkofer , m. oettel and h. reinhardt , nucl . phys . * a627 * , 679 ( 1997 ) ; m. oettel , g. hellstern , r. alkofer and h. reinhardt , phys . c * 58 * , 2459 ( 1998 ) . m. c. birse , j. a. mcgovern , s. pepin and n. r. walet , nucl - th/9905032 . y. nambu and g. jona - lasinio , phys . rev . * 122 * , 345 ( 1961 ) ; * 124 * , 246 ( 1961 ) . s. klevansky , rev . phys . * 64 * , 649 ( 1992 ) . t. hatsuda and t. kunihiro , phys . * 247 * , 221 ( 1994 ) . c. itzykson and j .- b . zuber , _ quantum field theory _ , ( mcgraw - hill , new york , 1980 ) . n. ishii , private communication . r. edmonds , _ angular momentum in quantum mechanics _ , ( oxford university press , oxford , 1957 ) . m. jacob and g. c. wick , ann . phys . * 7 * , 404 ( 1959 ) . r. a. malfliet and j. a. tjon , nucl . phys . * a127 * , 161 ( 1969 ) . g. rupp and j. a. tjon , phys . c * 37 * , 1729 ( 1988 ) . a. bender , c. d. roberts and l. van smekal , phys . b * 380 * , 7 ( 1996 ) . g. hellstern , r. alkofer and h. reinhardt , nucl . phys . * a625 * , 697 ( 1997 ) . j. c. r. bloch , c. d. roberts and s. m. schmidt , nucl - th/9907086 .
we study the instabilities of quark matter in the framework of a generalized nambu jona - lasinio model , in order to explore possible competition between three - quark clustering to form nucleons and diquark formation leading to color superconductivity . nucleon and @xmath0 solutions are obtained for the relativistic faddeev equation at finite density and their binding energies are compared with those for the scalar and axial - vector diquarks found from the bethe - salpeter equation . in a model with interactions in both scalar and axial diquark channels , bound nucleons exist up to nuclear matter density . however , except at densities below about a quarter of that of nuclear matter , we find that scalar diquark formation is energetically favored . this raises the question of whether a realistic phase diagram of baryonic matter can be obtained from any model which does not incorporate color confinement .
the insulator - metal ( i m ) transition and the role of electron - electron interaction in this transition is a problem of permanent interest , both theoretical and experimental . it has been shown@xcite that in the systems with strong disorder the interaction is in favor of delocalization because electrons may help each other to overcome the random potential . in clean systems the role of the interaction is opposite . it may create the so - called correlated insulator in a system which would be metallic otherwise . the wigner crystal ( wc ) is a good example of such insulator . wc in continuum is not an insulator itself , since it can move as the whole and carry current . however , due to shear modulus it can be pinned by a small disorder . the ground - state energy of the continuum wc and its zero - temperature melting was widely studied in the recent years both with and without magnetic field.@xcite in contrast to the continuum case , the wc on a lattice can be an insulator without any disorder due to the umklapp processes in a host lattice . the wc on a lattice does not have any sound or soft plasma modes and its excitation spectrum has a gap . the great majority of the efforts made recently to study correlated particles on a lattice were restricted to the hubbard model or @xmath2 model ( see review ref . ) . the so - called extended hubbard model with short - range and long - range interactions has been mostly considered for bosons in connection to the insulator - superconductor transition.@xcite in these papers supersolid and superfluid phases have been found . the bosons with infinite on - site repulsion are called hard - core bosons . in the case of the nearest - neighbor interaction the hard - core boson problem maps into ising - heisenberg spin hamiltonian . spinless fermions are similar to the hard - core bosons . in both systems the number of particles on a site is either zero or one . in 1d - case these two systems are equivalent if the interaction between particles does not permit them to penetrate through one another.@xcite the 1d problem with the nearest - neighbor interaction at half filling is exactly soluble.@xcite this instructive solution shows that the transition is not of the first order and that the i m transition , as detected by the stiffness constant , appears at the same point as the structural transition.@xcite very few works exist on the extended hubbard model for 2d fermions . pikus and efros@xcite have performed a computer modeling for 2d spinless fermions with coulomb interaction on a square lattice at filling factors @xmath3 and @xmath4 . they argue that the lifting of the ground - state degeneracy with increasing @xmath5 is a very good diagnostic of the structural phase transition . they have also found a similarity between the systems of spinless fermions and hard - core bosons near the transition . in this paper we study structural and i m transitions for spinless fermions at @xmath1 and 1/6 . to detect these transitions we use the ground - state splitting and the flux sensitivity@xcite respectively . the purpose of the work is to take advantage of the exact diagonalization technique and to study the modification of the low - energy part of the spectrum in a wide interval of the hopping amplitude @xmath5 all the way from the classical wc to the free fermion limit . our results for long - range and short - range interactions suggest a simple picture of the transition . 
the transition is related mainly to the modification of the two lowest branches of energy spectrum . at small @xmath5 these two branches are the wc state and the energy band of the defect with the lowest energy . the paper is organized as follows . in sec . ii we describe our numerical technique and present some general results . iii contains the results of computations and their discussion . we suggest the mechanism of the transition , analyze the role of the size effect in finite - cluster computations , and discuss the possibility that the delocalized phase above the i m transition is superconducting . the better to illustrate the mechanism of the transition we present the dependence of the total energy on the quasimomentum , @xmath6 , for a 1d system with @xmath7 interaction . we consider spinless fermions at @xmath0 on the 2d square lattice described by the following model hamiltonian @xmath8 here @xmath9 , the summation is performed over the lattice sites * r * , * r*@xmath10 and over the vectors of translations * s * to the nearest - neighbor sites . we consider long - range ( lr ) coulomb potential @xmath11 and short - range ( sr ) strongly screened coulomb potential @xmath12 with @xmath13 in the units of lattice constant . we study rectangular clusters @xmath14 with the periodic boundary conditions . the dimensionless vector potential @xmath15 in the hamiltonian is equivalent to the twist of the boundary conditions by the flux @xmath16 , @xmath17 . the energy spectrum is periodic in @xmath18 and @xmath19 with the period @xmath20 . as a basis for computations we use many - electron wave functions at @xmath21 in the coordinate representation : @xmath22 . the total size of the hilbert space is @xmath23 , where @xmath24 is the area of a system , and @xmath25 is the number of particles . the basic functions @xmath26 can be visualized as pictures , which we call _ icons_. some lowest energy icons are shown in fig . the energy of each icon is calculated as a madelung sum , assuming that the icon is repeated periodically over the infinite plane with a compensating homogeneous background . the icon with the lowest energy is a fragment of the crystal . the icons with higher energies represent different types of defects in wc . the hamiltonian eq . ( [ ham ] ) is translationally invariant . for each icon @xmath27 there are @xmath28 different icons that can be obtained from it by various translations . these icons are combined to get the wave function with total quasimomentum * p * : @xmath29 the summation is performed over @xmath28 translations @xmath30 . this transformation reduces the effective hilbert space size by approximately @xmath31 times . for the icons with periodic structures the number @xmath28 of different functions @xmath32 is smaller than @xmath31 . for example , the icon @xmath33 of the wc with one electron per primitive cell generates @xmath34 different values of * p*. these values are determined by the conditions @xmath35 here @xmath36 are the primitive vectors of the wc , and @xmath37 are the numbers of fermionic permutations necessary for translations on these vectors . these conditions can be easily understood . if translation on a vector @xmath36 is applied to eq . ( [ func ] ) , the right - hand side acquires a factor @xmath38 , while for a function with given * p * this factor must be equal to @xmath39 . if @xmath37 are even for both @xmath36 , the allowed * p * form the reciprocal lattice of the wc . 
however , in the case when one or both of @xmath37 are odd , the lattice is shifted by @xmath40 in the corresponding directions . in such a case @xmath41 is forbidden . the complete set of @xmath28 nontrivial values of * p * can be obtained by restricting * p * to the first brillouin zone of the background lattice . one wc is represented by a number of icons obtained from each other by the point - group transformations of the background lattice . note that the total number of allowed values of * p * for the wc is a property of the wc and it remains finite at infinite cluster size . in contrast , an icon representing a point defect in a wc generates all vectors * p*. their total number is equal to the volume @xmath31 of the first brillouin zone of the background lattice . in the macroscopic system all the states generated by the wc icon form the ground state , degenerate at small @xmath5 . this degeneracy appears because the effective matrix elements which connect translated wcs are zero in the macroscopic limit . the total energy as a function of quasimomentum * p * has identical minima at all * p * generated by the wc icons . the spectra of excitations in the vicinity of these minima are also identical . the charge density for the state with given quasimomentum * p * ( see eq . [ func ] ) is always the same at all sites of the host lattice . however , at small @xmath5 the correlation function indicates long - range order . any small perturbation which violates translational invariance splits the degeneracy in such a way that the ground state describes a single wc with a strong modulation of the charge density . the lifting of the ground state degeneracy at some critical value @xmath42 indicates a structural phase transition and restoration of the host lattice symmetry . the flux sensitivity of a macroscopic system is zero at small @xmath5 . it becomes non - zero at some finite value of @xmath5 which might be different from @xmath42 . we associate this transition with the i m transition@xcite . for the finite system the following results can be obtained directly using perturbation theory with respect to @xmath5 : ( i ) the ground state and the lowest excited states have a large common negative shift which is proportional to @xmath43 and to the total number of particles @xmath25 . this shift is the same for all low - lying states and does not affect the excitation spectrum of the system ; ( ii ) at @xmath1 the splitting of the ground state appears in the @xmath25-th order and is proportional to @xmath44 . at other filling factors the degeneracy of the ground state at @xmath21 is larger than two . the splitting is determined by matrix elements which are proportional to @xmath45 . for each matrix element the value of @xmath46 is equal to the number of hops necessary to obtain one crystalline structure from another and is proportional to @xmath25 ; ( iii ) the flux dependence of the ground state for the flux in the @xmath47-direction appears in the @xmath48-th order and is proportional to @xmath49 in the 2d case . in 1d the flux dependence appears in the @xmath25-th order and is proportional to @xmath44 . thus , we conclude that both the lifting of the ground - state degeneracy and the appearance of the flux sensitivity occur very sharply and they can be used as convenient criteria for the structural and the i m transitions respectively . note that the correlation function is a less sensitive criterion for small clusters@xcite since it does not exhibit sharp behavior in the transition region .
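As a concrete (and much simplified) illustration of the machinery described in this section, the sketch below diagonalizes a small cluster of spinless fermions in the full real-space basis of "icons", with a nearest-neighbour repulsion standing in for the screened potential and a boundary twist entering through Peierls phases. It omits the momentum-sector reduction and the Madelung sums of the text, and the cluster size, t and V are illustrative choices only.

```python
# Minimal exact-diagonalization sketch: spinless fermions on an Lx x Ly torus at
# half filling, nearest-neighbour repulsion V, hopping t, twist phi in x.
import itertools
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import eigsh

Lx, Ly, Npart = 4, 4, 8          # 16 sites, half filling
t, V = 0.05, 1.0

sites = [(x, y) for x in range(Lx) for y in range(Ly)]
index = {s: i for i, s in enumerate(sites)}
basis = list(itertools.combinations(range(Lx * Ly), Npart))   # "icons"
pos = {c: n for n, c in enumerate(basis)}

def hop_sign(occ, i, j):
    """Fermionic sign of c_j^dag c_i acting on the ordered configuration occ."""
    lo, hi = sorted((i, j))
    return -1 if sum(lo < k < hi for k in occ) % 2 else 1

def hamiltonian(phi=0.0):
    rows, cols, vals = [], [], []
    for n, occ in enumerate(basis):
        occ_set, diag = set(occ), 0.0
        for i in occ:
            x, y = sites[i]
            for dx, dy in ((1, 0), (0, 1)):                    # each bond counted once
                if index[((x + dx) % Lx, (y + dy) % Ly)] in occ_set:
                    diag += V
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # hopping terms
                j = index[((x + dx) % Lx, (y + dy) % Ly)]
                if j in occ_set:
                    continue
                new = tuple(sorted((occ_set - {i}) | {j}))
                rows.append(pos[new]); cols.append(n)
                vals.append(-t * np.exp(1j * dx * phi / Lx) * hop_sign(occ, i, j))
        rows.append(n); cols.append(n); vals.append(diag)
    return coo_matrix((vals, (rows, cols)), shape=(len(basis),) * 2).tocsr()

def lowest_levels(phi=0.0, k=4):
    return np.sort(eigsh(hamiltonian(phi), k=k, which="SA", return_eigenvectors=False))

print("lowest levels at phi = 0:", lowest_levels(0.0))
print("flux shift E(pi) - E(0):", lowest_levels(np.pi, k=1)[0] - lowest_levels(0.0, k=1)[0])
```

At small t the two lowest levels are the nearly degenerate checkerboard "crystal" states and the flux shift is tiny, in line with the perturbative estimates above.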
[ fig2]a , b shows the results of diagonalization for the cluster @xmath50 with 12 electrons for the lr ( a ) and sr ( b ) interactions . the total energy @xmath51 is shown as a function of @xmath5 . the ground state energy is taken as a reference point for @xmath51 . here and below , the unit of energy is the lr interaction energy between nearest neighbors . at @xmath21 the values of @xmath51 coincide with the energies of the icons shown in fig . we define @xmath52 as the gap between the ground and first excited states at @xmath21 . note that @xmath52 in the lr case is almost exactly 10 times larger than in the sr case ( see fig . [ fig1]a , b ) . at large @xmath5 the energy @xmath51 is linear in @xmath5 . thus , we can conclude that with increasing @xmath5 in this interval we go all the way from classical icons to free fermions . the ground state is almost degenerate at small @xmath5 and it splits into two states with increasing @xmath5 . as we have discussed above , this is a manifestation of the structural transition . the quasimomenta of these two states , @xmath53 and @xmath54 , are those generated by the wc icon . in fig . [ fig2]a , b they are denoted as ( 0,3 ) and ( 2,0 ) , where @xmath55 stands for quasimomentum with projections @xmath56 , @xmath57 . the other branches are the bands of defects . [ fig3]a , b shows the flux sensitivity @xmath58 , computed for the ground state for two directions of the vector potential . here @xmath59 stands for the total energy as a function of @xmath18 or @xmath19 . in accordance with perturbation theory ( see sec . ii b ) , the flux sensitivity at small @xmath5 obeys the laws @xmath60 and @xmath61 for the direction of the vector potential along the short and long sides of the cluster respectively . the energy splitting between the lowest states with @xmath62 and @xmath54 is also shown . at small @xmath5 the splitting is proportional to @xmath63 ( 12 is the number of particles ) , as follows from perturbation theory . at large @xmath5 the flux sensitivity is linear in @xmath5 and coincides with the free - fermion value . note that for free fermions at @xmath1 the flux sensitivity @xmath64 is size independent for large clusters.@xcite the intervals @xmath65 where computational curves for @xmath64 make a crossover from one asymptotic regime to another are pretty narrow . in what follows we assume that these are the critical intervals for the i m transition , smeared in a finite cluster . these critical intervals can be fairly well defined for each cluster and should shrink into a transition point with increasing cluster size . the vertical bars in fig . [ fig3]a , b show the estimated critical interval @xmath65 for the i m transition which is approximately 0.15 - 0.25 for the lr interaction and 0.015 - 0.025 for the sr interaction in a cluster @xmath50 . the behavior of the ground state splitting ( gss ) at large @xmath5 is more complicated . in the free - fermion approximation the gss is zero . considering the interaction as a perturbation one can show that in a @xmath66-cluster the gss @xmath67 as @xmath68 . in a @xmath69-cluster the gss @xmath70 for the sr interaction and the gss @xmath71 for the lr interaction as @xmath68 . these analytical calculations are in good agreement with the computational results at large @xmath5 given in fig . [ fig3]a , b . in the case of sr interaction the gss curve has a maximum . it is reasonable to assume that the crossover from wc to free fermions occurs in the vicinity of this maximum .
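The free-fermion limit quoted above can be checked independently of the many-body code: fill the lowest single-particle levels of the tight-binding band on a torus with and without a boundary twist and compare the total energies. The precise normalization of the flux sensitivity used in the figures may differ from the simple difference taken here.

```python
# Free-fermion check: E(phi) - E(0) for the lowest Npart levels of the
# tight-binding band on an Lx x Ly torus with a twist phi in the x direction.
import numpy as np

def free_energy(Lx, Ly, Npart, t=1.0, phi=0.0):
    kx = 2.0 * np.pi * np.arange(Lx) / Lx + phi / Lx
    ky = 2.0 * np.pi * np.arange(Ly) / Ly
    eps = -2.0 * t * (np.cos(kx)[:, None] + np.cos(ky)[None, :])
    return np.sort(eps.ravel())[:Npart].sum()

for L in (4, 6, 8, 10):
    n = L * L // 2                         # half filling
    print(L, free_energy(L, L, n, phi=np.pi) - free_energy(L, L, n))
```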
in the case of the lr interaction the gss curve has no maximum , and the crossover region can be estimated using the sharp maximum of the second derivative . we conclude that within the accuracy of our computations , limited by the finite cluster sizes , the i m transition detected by the flux sensitivity and the structural transition detected by the gss occur simultaneously . comparison of fig . [ fig3]a and [ fig3]b shows that the dependencies @xmath72 for the lr and sr potentials are almost indistinguishable if all the energy scales for one of them are rescaled by a factor of 10 . this factor is just the ratio of zero-@xmath5 gaps @xmath52 for these two cases . thus , we come to the conclusion that @xmath42 depends on the type of interaction potential mostly through the value of @xmath52 . the same applies to the general structure of the low - energy spectrum of the system in the transition region as can be seen from a comparison of fig . [ fig2]a and [ fig2]b . [ fig4]a , b shows the data for @xmath73 and the lr interaction . [ fig4]a looks more complicated than fig . [ fig2]a , b . the wc for @xmath73 is shown in the first icon in fig . there are four such wcs which can be obtained from each other by point - symmetry operations . each wc generates six different values of * p*. thus , at small @xmath5 the ground state of the system is 24-fold degenerate . the degeneracy is high ; however , it remains the same in the infinitely large cluster . the primitive vectors of the wc at @xmath73 cannot be obtained from each other by any symmetry operation on the host lattice . this means that the wc phase belongs to a _ reducible _ representation of the symmetry group of the host lattice . following landau and lifshitz@xcite , the symmetry reduction in the second - order phase transition should be such that the low - symmetry phase belongs to an irreducible representation of the symmetry group of the high - symmetry phase . we conclude that a single second - order phase transition is forbidden in this case . however , it can occur as a series of transitions , each reducing the symmetry one step further . in fact , fig . [ fig4]a resembles a picture of multiple transitions . we think that each splitting of the energy levels generated by the wc icon manifests a structural transition . the cluster @xmath74 is too small to distinguish the critical intervals for each of these transitions . we can only conclude from fig . [ fig4]a , b that the critical interval @xmath65 for all of the structural transitions and the i m transition is 0.01 - 0.03 . this interval is shown by vertical bars in fig . our data suggest the following mechanism of the transition . the width of the band of the lowest defect in the wc increases with @xmath5 such that its lowest edge comes close to the energy of the ground state@xcite ( see figs . [ fig2]a , b , [ fig4]a , and [ fig5 ] ) . strong mixing between the crystalline and defect states with the same quasimomentum occurs at this point . an avoided crossing appears between the ground state and the states in the defect band . one can interpret the avoided crossing in terms of the ground state which acquires a large admixture of defect states . this interpretation recalls the idea of zero - point defectons proposed by andreev and lifshitz.@xcite in principle , one can imagine that the state with a quasimomentum * p * different from those generated by the wc icon becomes the ground state via a branch crossing .
however , in all cases we have considered , we observe the avoided crossing between the crystalline state and the state in the defect band with the same * p*. assuming that this is the case for larger clusters , we conclude that the phase transition is not of the first order . the proposed mechanism of the transition can be illustrated by the dependence @xmath75 at a given @xmath5 . unfortunately , in the 2d case the number of discrete values of * p * along any line in the first brillouin zone is small even for the largest 2d system we study . to clarify our understanding of the transition it is instructive to analyze the data for 1d systems . we have considered 1d systems with the nearest and next - nearest neighbor interaction@xcite and the system with lr interaction . in the latter case we study hamiltonian eq . ( [ ham ] ) at filling factor @xmath1 and @xmath76 . in 1d we switch from the homogeneous background to the chain with @xmath77 charges for the empty and occupied sites respectively . fig . [ fig6 ] shows the results for the flux sensitivity vs. @xmath5 for different system sizes @xmath78 . the sharp exponential behavior indicates that the system becomes an insulator at small @xmath5 . this result clearly contradicts the statement by poilblanc et al . @xcite that the 1d coulomb system is metallic at all @xmath5 . an extrapolation to @xmath79 shown in the inset gives a rather wide interval for @xmath42 of the i m transition between 0.17 and 0.3 . [ fig7 ] shows a few of the lowest eigenvalues for each quantized value of @xmath80 for a cluster of 28 sites with 14 particles . note that the spectrum has nontrivial symmetry around the points @xmath81 . this symmetry appears for even @xmath25 at @xmath1 as a result of the particle - hole symmetry . for even @xmath25 the wc icon generates two states with quasimomenta @xmath81 , which are degenerate at all @xmath5 . as one can see from fig . [ fig7 ] , at @xmath82 these states are separated by a gap from the continuum of states , generated by the icon of the point defect . at @xmath83 the defect band broadens and , as a result , the gap decreases . however , the lowest eigenvalue at @xmath81 is still separated from the defect band , whereas the second eigenvalue belongs to it . at this point an avoided crossing starts to develop and the width of the gap remains almost unchanged from @xmath83 to @xmath84 . in the latter case , the lowest eigenvalue is no longer a separated point , but rather can be ascribed to the band . at @xmath85 it becomes quite clear that the lowest eigenvalue belongs to the continuum spectrum . finally , the picture at @xmath86 is almost a picture for free fermions with the fermi momentum @xmath87 and with the lowest branch @xmath88 close to @xmath89 . the proposed mechanism of the transition implies that the critical value of @xmath5 is determined by the energy @xmath52 of the lowest defect at @xmath21 . our 2d results are summarized in table 1 . it shows for comparison the middle point @xmath90 of the critical interval @xmath65 and the zero-@xmath5 gap @xmath52 for all cases we have studied . one can see that both @xmath90 and @xmath52 change by a factor of 10 depending on the filling factor and the type of the interaction potential . however , their ratio @xmath91 is almost constant and is close to 0.5 in all cases .
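The avoided-crossing mechanism invoked above can be caricatured by a two-level model: a crystalline state whose energy is flat in the hopping and a defect-band edge that descends with it, coupled by a matrix element that also grows with the hopping. All numbers below are invented for illustration; the real spectra involve full bands rather than two levels.

```python
# Two-level caricature of the crystal/defect avoided crossing.
import numpy as np

def two_level(t, e_crystal=0.0, e_defect0=1.0, slope=4.0, coupling=0.5):
    h = np.array([[e_crystal, coupling * t],
                  [coupling * t, e_defect0 - slope * t]])
    evals, evecs = np.linalg.eigh(h)
    defect_weight = evecs[1, 0] ** 2      # defect admixture in the ground state
    return evals[1] - evals[0], defect_weight

for t in (0.05, 0.15, 0.25, 0.35):
    gap, w = two_level(t)
    print(f"t={t:.2f}  gap={gap:.3f}  defect weight in ground state={w:.2f}")
```

As t grows, the gap passes through a minimum set by the coupling and the ground state smoothly acquires a large defect admixture, rather than undergoing a level crossing.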
since we assume that @xmath92 with increasing cluster size , the nearly constant ratio found above implies an empirical rule for @xmath42 : @xmath93 where @xmath94 is some number which is close to 0.5 , and @xmath95 is the smallest energy necessary to create a point defect in an infinitely large system . as can be expected from table 1 , the energy @xmath52 of the lowest excited state at @xmath21 may have a strong influence on @xmath90 . we show here that @xmath52 may have a strong size dependence in small clusters . this kind of size effect can be called `` classical . '' the sr and lr potentials are very different in this respect . in the case of sr interaction the defect with the lowest energy is the point defect ( see fig . [ fig1]b ) . the weak dependence of @xmath52 on the cluster size is only due to the interaction of the defect with its images , which appear as a result of the periodic boundary conditions . in the case of lr interaction the energy @xmath52 depends strongly on the size of the cluster for relatively small clusters . this dependence becomes stronger for smaller filling factors . one can see in fig . [ fig1]a that in the cluster @xmath50 at @xmath1 the point defect appears only as the fifth icon . at @xmath73 the five lowest energy icons shown in fig . [ fig1]c do not contain a point defect at all . we have studied thoroughly the low - energy spectrum for lr interaction at @xmath21 . the square clusters with different sizes @xmath78 and filling factors 1/2 , 1/3 , 1/4 , and 1/6 were analyzed using a classical monte - carlo technique . the results are presented in fig . [ fig8 ] . at @xmath3 and 1/6 new low - energy types of dislocations appear with increasing cluster size . these dislocations are restricted by the periodic boundary conditions in smaller clusters . as a result , @xmath52 decreases with size for small clusters . however , for large enough clusters new dislocations cease to appear , so that @xmath52 does not decrease . since the energy of a dislocation is proportional to the size of the cluster , the point defect should win the competition in large enough clusters . for @xmath1 and 1/3 the point defect becomes the lowest excited state starting with the sizes @xmath74 and @xmath96 respectively . for @xmath97 and @xmath4 we are unable to find this critical size . however , the increase of @xmath52 with @xmath78 ensures that the point defect should eventually become the lowest excited state . our conclusion is that in the case of lr interaction one should expect a significant size effect in @xmath90 due to the classical size effect in @xmath52 . since the classical size effect is negligible for the sr interaction we can analyze the `` quantum '' contribution to the size effect in @xmath90 by comparing the results for different clusters . the size dependence of @xmath90 for the sr potential can be estimated from fig . [ fig3]b . we study only the clusters with dimensions commensurate with the primitive vectors of the wc . otherwise the periodic continuation destroys the crystalline order . for @xmath1 this requires that both @xmath48 and @xmath98 are even . since we can study clusters up to 16 particles this condition restricts our options to clusters @xmath99 , @xmath50 , and @xmath100 . the low - energy spectrum for the cluster @xmath99 is shown in fig . [ fig5 ] . in this case the wc icon generates quasimomenta @xmath101 and @xmath102 . the flux sensitivity and the ground state splitting are shown in fig . [ fig3]b for all three clusters studied .
one can see that the data do not show any pronounced systematic size dependence of @xmath90 , suggesting that for the sr potential @xmath42 is within the interval 0.015 - 0.025 . thus , we have found that the size effect at a given value of @xmath52 is small . assuming this result to be independent of the type of potential , one can suggest that the `` classical '' contribution is the major one for the lr interaction . then one can use eq . ( [ cond ] ) to estimate @xmath42 for the lr potential using the classical energy @xmath103 . for example , for @xmath1 we get @xmath104 ( see fig . [ fig8 ] ) resulting in @xmath105 . to get a reliable estimate for this case from the quantum computations one should consider at least the @xmath74 cluster since the point defect becomes the lowest excited state starting with this cluster size . now we analyze the gap between the split ground state and the excited states which belong to the defect band . this gap is clearly seen in figs . [ fig2]a , b , [ fig4]a , and [ fig5 ] . at large @xmath5 the branches have the form of beams with different slopes . these slopes definitely come from the confinement quantization of free fermions . the large number of states in each beam reflects the high degeneracy of the free - fermion ground state at @xmath1 . for example , in fig . [ fig5 ] all lines which are horizontal at large @xmath5 are the states that are degenerate for the free fermions . the splitting of these states is a result of the interaction . the gap between the split ground state and the bunch of the states in the same beam can be easily calculated in the mesoscopic region of large @xmath5 , where @xmath106 . the picture of beams is valid in the same region and it does not imply the existence of a gap at large @xmath5 in a macroscopic system . on the other hand , the gap @xmath52 at @xmath21 is the energy of a defect and it has a non - zero limit in a macroscopic system . thus , an important question arises , whether or not the gap has a non - zero limit right after the i m transition . a non - zero gap would mean that the state after the transition is superconducting . we have made a considerable computational effort to answer this question but the results are still inconclusive . our best achievement is shown in fig . [ fig5 ] where we compare the results for @xmath99 and @xmath100 clusters . the confinement quantization would prescribe that the gap decreases by half . we have found that the gap for the @xmath100 cluster is less than for the @xmath99 cluster but the ratio is significantly larger than 0.5 . we have performed a numerical study of the structural and i m phase transitions in 2d fermionic systems with hamiltonian eq . ( [ ham ] ) . the structural transition has been detected by studying the splitting of the ground state , degenerate in the crystalline phase . simultaneously we studied the i m transition by computing the sensitivity of the ground - state energy to the boundary conditions . in the 2d case we have studied systems with lr and sr interactions at different filling factors . within the accuracy determined by the size effect the i m transition occurs simultaneously with the structural transition . we argue that the structural transition on a lattice is not of the first order in all cases considered . we think that the origin of the transition is an avoided crossing of the ground state and the defect states in the wigner crystal with the same total quasimomentum .
this simple picture implies that the critical value of @xmath5 is determined by the defect with the lowest energy @xmath52 at @xmath21 . to illustrate our point the data for a 1d system with coulomb interaction are also presented . the possibility that the delocalized phase above the transition is superconducting is discussed . we have found that the size effect for @xmath42 is not very strong in the case of the sr interaction . for the lr interaction it is strong because of the size dependence of the defect energy @xmath52 . we argue that a reliable estimate for @xmath42 from finite - cluster computations in this case can be obtained with the use of the empirical rule eq . ( [ cond ] ) .
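A worked use of the empirical rule eq. ([cond]) mentioned above, purely for orientation: with kappa taken as 0.5 and an invented value for the classical defect energy, the estimate falls inside the critical interval quoted earlier for the lr case at half filling.

```python
# Empirical rule t_c ~ kappa * Delta, with kappa close to 0.5 and Delta the
# energy of the lowest defect in the classical (t = 0) limit.  The value of
# Delta below is an invented placeholder, not a number taken from the paper.
def t_c_estimate(delta, kappa=0.5):
    return kappa * delta

print(t_c_estimate(delta=0.4))   # -> 0.2, inside the 0.15 - 0.25 interval quoted for the lr case
```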
we consider a 2d gas of spinless fermions with coulomb and short - range interactions on a square lattice at @xmath0 . using an exact diagonalization technique we study finite clusters of up to 16 particles at filling factors @xmath1 and 1/6 . by increasing the hopping amplitude we obtain the low - energy spectrum of the system in a wide range from the classical wigner crystal to an almost free gas of fermions . most of our effort is devoted to studying the mechanism of the structural and insulator - metal transitions . we show that both transitions are determined by the energy band of the defect with the lowest energy in the wigner crystal .
Barely four years after Wall Street's wrong-way bets plunged the world into a financial crisis, JPMorgan Chase & Co. admitted it lost $2 billion from a trading portfolio that was supposed to have helped the bank manage credit risk. "These were egregious mistakes," said Chief Executive Jamie Dimon, who is considered one of the world's savviest bankers. "We have egg on our face, and we deserve any criticism we get." The announcement stunned the financial industry, in part because it came from such a highly regarded bank. Dimon had navigated JPMorgan through the crisis in good shape by clamping down on some of the excessive risks that torpedoed rivals. Dimon told analysts that the bank racked up $2 billion in trading losses during the last six weeks, and that could "easily get worse." He said JPMorgan could suffer an additional $1-billion loss from the portfolio during the second quarter. "My jaw is on the table," said Nancy Bush of SNL Financial. "I never expected this right now — not in a million years." The losses stemmed from derivative bets that backfired in the company's Chief Investment Office. This part of the bank was in charge of trading to balance the company's assets and liabilities, although it had been criticized by some analysts for operating more like a hedge fund. There had been media reports that a single JPMorgan trader in Europe, known in the bond market as "the London whale," was making massive bets that were influencing prices in the $10-trillion market. Investors bailed out of JPMorgan stock in after-hours trading, a sign it will open sharply lower in New York on Friday. The stock fell 6%, and rivals such as Citigroup Inc., Wells Fargo & Co. and Bank of America Corp. also posted modest declines. The blowup at the nation's largest bank came amid a heated debate in Congress over how much regulation is needed to rein in the risk-taking that caused the near-meltdown of the financial system in 2008. The crux of the argument had been whether the so-called Volcker rule, which limits how much federally insured banks can risk in trading for their own accounts, had gone too far. Indeed, Dimon acknowledged that the trading losses might lead to more calls for stronger banking regulations. "It's very unfortunate, plays right into all the hands of a bunch of pundits out there, but that's life and I'll have to deal with that," he said. Critics of Wall Street lost no time in calling for regulators to proceed with cracking down on big banks such as JPMorgan, which began the year with $863 billion in federally insured domestic deposits. "The enormous loss JPMorgan announced is just the latest evidence that what banks call 'hedges' are often risky bets that so-called 'too big to fail' banks have no business making," U.S. Sen. Carl Levin (D-Mich.) said in a statement. This "is a stark reminder of the need for regulators to establish tough, effective standards to protect taxpayers from having to cover such high-risk bets," he said. Bank lobbyists had argued that the biggest U.S. banks need more flexibility if they are to compete against global financial giants. But the latest debacle provided new fodder for critics. At a minimum, JPMorgan's admission shows that large and unforeseen losses can erupt at any time despite the banks' efforts to limit risk-taking. "It demonstrates that even at an institution like JPMorgan, which has done a remarkable job at staying out of trouble compared to other banks, a bolt out of the blue can come at any time," said Anthony Sabino, a law professor at St. 
John's University in New York. The problems at JPMorgan stem from the trading of synthetic credit products, which are derivatives whose values are tied to a portfolio of underlying bonds. The bank lost money when it was trying to unwind these exotic instruments, which were originally intended to hedge JPMorgan's credit exposure. ||||| The question for markets Friday: What kind of waves will the London Whale trading losses make for other banks, if any? J.P. Morgan unveiled it has taken $2 billion in trading losses in the past six weeks, and could take an additional $1 billion in second-quarter losses due to big bets on derivatives gone wrong. The losses represent a monumental misstep for J.P. Morgan, which emerged from the financial crisis in better shape than rivals. The news comes as large banks are fighting efforts by regulators to rein in risky trading. J.P. Morgan Chief Executive Jamie Dimon on Thursday said “egregious and self-inflicted mistakes” were made with trades that were “poorly executed and poorly monitored.” The revelations will likely provide more ammunition for proponents of the Volcker rule, to limit bank proprietary trading. Fairly or not, every big bank will be faced with questions regarding their trading practices. Mr. Dimon maintained on the call the specific trading at issue wouldn’t be covered by the Volcker rule. J.P. Morgan’s announcement is “just the latest evidence that what banks call ‘hedges’ are often risky bets that so-called ‘too big to fail’ banks have no business making,” Senator Carl Levin (D., Mich.) said in a statement. “Today’s announcement is a stark reminder of the need for regulators to establish tough, effective standards… to protect taxpayers from having to cover such high-risk bets.” When asked if any other banks will have similar trouble, Mr. Dimon replied: “Just because we were stupid, doesn’t mean anyone else was.” But that’s not how the market is reacting. J.P. Morgan shares tumbled more than 6.2% to $38.20 in after-hours trading. Shares of Citigroup, Goldman, BofA and Morgan Stanley all fell more than 2% in late trading. Indeed, Goldman Sachs revved up its exposure to Italy’s sovereign debt during the first quarter. In a filing, Goldman said its exposure to short-term Italian government debt more than doubled to $8.22 billion by the end of March 31, from $3.05 billion at the end of last year. The developments come as Wall Street is already grappling with a slew of issues. Earnings season is over, the economy is muddling along, Europe is teetering again and the stock market has struggled to build momentum over the last few months. Add J.P. Morgan’s plank to the problem pile.
– JPMorgan Chase has stunned the financial world by disclosing trading losses of $2 billion since the beginning of April, caused by what CEO Jamie Dimon calls "errors," "sloppiness" and "bad judgment." The losses came from bad trades made by a unit that was supposed to help America's biggest bank hedge against risk. A single trader, nicknamed the "London whale," made huge bets in the derivatives market that backfired, the Los Angeles Times reports. The bank says it could lose another $1 billion from the portfolio in the next quarter. "These were egregious mistakes," says Dimon, who admits that the huge loss will probably lead to calls for greater banking regulation. "We have egg on our face, and we deserve any criticism we get." Asked if other banks would have similar problems, Dimon said: "Just because we were stupid, doesn’t mean anyone else was," but the market is reacting differently, notes the Wall Street Journal. JPMorgan shares sank more than 6% in after-hours trading, and Citigroup, Goldman, BofA, and Morgan Stanley all fell more than 2%.
It's a good thing you didn't drop $40,000 on a seat at Sarah Jessica Parker's fund-raiser for President Obama because the event was disappointingly refined. For that kind of money, the 50 guests should have been treated to scenes of co-host Anna Wintour dressing down the cleaning lady for her improper Swiffering technique, but the biggest news to come out of the party is that the president inflated the attendees' already-bursting egos a bit more. "You're the tie-breaker," he said. "You're the ultimate arbiter of which direction this country goes." (Also, Matthew Broderick skipped the presidential fund-raiser that took place in his own house because he's starring in the show Nice Work If You Can Get It. Presumably, we'll learn all about the ensuing spousal showdown in this week's In Touch.) Outside, it was a different story. ||||| President Barack Obama paid a visit to "Sex and the City" star Sarah Jessica Parker's house for a star-studded, high-dollar fundraiser on Thursday. Co-hosting with Vogue editor Anna Wintour, Parker said the guests gathered "hopefully, with enormous enthusiasm." Parker also praised first lady Michelle Obama, calling her "our radiant and extraordinary first lady” and said she had been doing “amazingly important things these last 4 years.” Parker's husband, actor Matthew Broderick, could not attend the fundraiser. “Matthew had a show,” Obama explained. Other notable celebrity attendees included Bravo's Andy Cohen, actress Meryl Streep and fashion designer Michael Kors. During Obama's stump speech, Parker's young son interrupted Obama with applause. “He wanted to fire up the crowd,” Obama joked. “He knows an applause line,” Cohen said, roaring. “Right on cue!” Approximately 50 guests attended the fundraiser, paying $40,000 for the evening. The only exception was Robin Hunt, a project administrator at Johns Hopkins Hospital in Baltimore, who won the monthly Obama campaign win-a-dinner contest. ||||| Meryl Streep, Project Runway host Michael Kors, Vogue's Anna Wintour and Bravo's Andy Cohen were among the famous faces who attended a fundraiser for President Obama held Thursday at Sarah Jessica Parker's Manhattan home. Parker introduced Obama at the event, giving him a hug and kiss and saying those in attendance were gathered "hopefully, with enormous enthusiasm." She also called first lady Michelle Obama "radiant and extraordinary" and said she had been doing "amazingly important things these last four years." In his remarks, Obama said that Parker's husband, Matthew Broderick, wasn't in attendance because of his starring role in Broadway's Nice Work If You Can Get It. He noted the Obamas are "great friends" with Parker and Broderick. (The actress sat by Michelle Obama at the event and recently appeared in a campaign ad for the president.) Obama also joked that he is the fifth or sixth in the "hierarchy in the White House," behind the first lady, their two daughters, their dog Bo and his mother-in-law. He also quipped that the fundraiser marked the couple's "date night." But it wasn't all light-hearted at the event. In his remarks, Obama criticized the Republican presidential campaign ads, which he said argue that Americans are frustrated and discouraged and that "it's the fault of the guy in the White House. It's an elegant message; it happens to be wrong. But it's crisp. You can fit it on a bumper sticker." 
He added: “We’ve got as fundamental a choice this time out than we have had in 30, 40 or 50 years. What we are going to have to do is present very clearly to the American people that choice." Obama also listed his accomplishments and health care and said he remains committed to investing in science and technology efforts. “That’s why I’m running for a second term, because our work is not yet done," he said. Obama added that the economy is recovering because of the determination of Americans. “Because of their resilience, we’ve begun to come back,” he said. After Obama noted that "GM is now back on top," Parker's young son began clapping. "He wanted to fire up the crowd," Obama quipped. Added Cohen: “He knows an applause line. Right on cue!” Guests at the event sat at two long tables in two long rooms, with the dividing doors open. Flowers and votives adorned the tables in the home, which was decorated with ample art, floor-to-ceiling bookshelves and two large marble fireplaces. Following the event at Parker's home, Obama is attending a fundraiser hosted by Mariah Carey at New York's Plaza Hotel.
– Aretha Franklin paid the $40,000 ticket price to attend Sarah Jessica Parker's Obama fundraiser last night, but left after just 20 minutes. That means she spent $2,000 per minute to be there, TMZ helpfully calculates. Apparently it was worth it, because she described the food ("chicken with a mustard sauce, diced tomatoes, and lots of relishes on the side of the plate") as "very tasty." In an entertaining response on Gawker, Caity Weaver wonders, "Do you think Sarah Jessica Parker fretted and fretted over the menu before convincing herself (incorrectly) that, since Obama probably eats fancy food all the time, what he'd really enjoy is her famous 'Chicken a la Mustard with a lot of relishes on the side'?" The fundraiser, co-hosted by Anna Wintour, was attended by all sorts of other boldface names, Politico notes, including Meryl Streep and designer Michael Kors but not including SJP's husband Matthew Broderick, who was performing in a Broadway show, according to the Hollywood Reporter. Their son was there, however, and once interrupted the president with applause, leading Obama to joke, "He wanted to fire up the crowd." Another amusing story, per the New York Post: Wintour was apparently not happy with SJP's "shabby chic" furniture, and thus oversaw a total overhaul of her house before the fundraiser began. Daily Intel rounds up tweets describing the mayhem outside the fundraiser.
surface active agents ( saas , surfactants ) are a group of compounds with a specific chemical composition of their molecules ( one part soluble in a polar medium : hydrophilic , and the second in a nonpolar medium : hydrophobic ) . the main classification of surfactants is based on the charge of the hydrophilic part of their molecules : cationic , anionic , and nonionic compounds . the occurrence of polar and nonpolar parts in saa molecules gives them special properties towards different media . another property of saas is the ability to associate in solution and form micelles . during micelle formation , surface active agents are adsorbed at phase boundaries , removing the hydrophobic parts from water and reducing the energy of the system . due to the specific chemical structure of surfactant molecules , they are applied in different areas of human activity . during the formulation of household or industrial products , compounds from the group of surfactants are used because their presence improves the efficiency of the following processes : wetting / waterproofing , de- or foaming , de- or emulsification , dispersion or flocculation of solid particles in liquid phases , solubilization of non-/sparingly soluble reagents in solvents , increase or decrease of viscosity of solution phases . in table 1 the expected growth of production of that class of compounds is forecast as 2.8% annually to 2012 . approximately 65% of total production corresponds to compounds classified as anionic surfactants ; the second and third places in global production correspond to nonionic and cationic compounds , respectively . after use , compounds from the group of surfactants are emitted to various elements of the environment ( gas , liquid , and solid phases ) , where they can undergo numerous physical and chemical processes . therefore , the specific properties of these chemical compounds increase their mobility and lead to unrestrained circulation in the environment . those processes might significantly contribute to disrupting the water cycle within various ecosystems ; hence , it is essential to determine at what concentration levels surfactants are present in the environment . investigation of the environmental fate of surface active agents can help increase the level of knowledge about pollutant migration pathways and better protect living organisms and different ecosystems . in recent years interest in saas has increased and scientists have started to estimate the possible effects of those compounds on the environmental balance [ 5 , 6 ] . in this paper , basic information about surface active agents ( classification , their properties , and areas of use ) and their fate after discharge to wastewater treatment plants are presented . a brief review of sorption and degradation processes of surfactants in water systems is presented . moreover , the analytical protocols used for determining the total concentration of surfactants or individual analytes from a particular group of saas in environmental samples are described . this work contains an overview of the contamination of different ecosystems caused by surfactants ( including research data on levels of saas in atmospheric deposition samples collected in urban and nonurban areas ) . 
The possible impact of SAA compounds on biotic and abiotic elements of the environment (especially as a result of their occurrence in atmospheric waters) is presented. Surface active agents are among the most commonly applied compounds in industrial, agricultural, and household activities, and after use a large amount of surfactants (and/or their degradation products) is discharged to wastewater treatment plants (WWTPs). In nonurban areas without a WWTP, wastewaters containing various classes of surface active agents are discharged directly to surface waters and may be dispersed into different elements of the environment. In wastewater treatment plants SAAs are completely or partially removed by a combination of different processes (mainly sorption and aerobic biodegradation), and their degradation pathways have been investigated [7, 8]. Compounds from this group are degraded during secondary treatment, and under optimized conditions about 90–95% of the initial SAA concentration in influent streams can be eliminated (depending on the efficiency of the WWTP). It should be noted that a considerable part of the pollutant load is removed with the sewage sludge (from 15% to more than 90%) [10, 11]. Moreover, some surface active agents can be transformed into more toxic degradation products (e.g., the degradation products of alkylphenol ethoxylates (APEOs)). After treatment, WWTP effluents and sewage sludge, in which different types of surfactants or their degradation products can occur (at levels of several μg/L or μg/kg), are discharged into surface waters or used as fertilizer in agricultural areas, respectively. Such practices lead to emission of surfactants and their metabolites into different parts of the environment (soils, ground waters, surface waters, and living organisms) [9, 12]. In recent years the amount of literature data on the occurrence and concentrations of surfactants in environmental samples has increased markedly. If SAA compounds are present in water ecosystems, they can also undergo sorption and aerobic/anaerobic degradation processes. Sorption also inhibits the degradation of these compounds because it reduces their bioavailability. The research collected in Table 2 shows a relationship between higher salinity of water samples and higher sorption percentages of LAS compounds (linear alkylbenzene sulfonates) onto suspended solids (as calcium and magnesium salts). Generally, higher concentrations of the less polar surfactants (e.g., C13-LAS, NP, and NPE1–2O) were observed in sediment or suspended-solid samples, whereas higher concentrations of the more polar compounds (e.g., C10-LAS, short-chain SPC, and NPEC) were observed in the dissolved form [13–15]. The sorption process is related to the hydrophobic nature of the compounds: the more polar anionic SAAs were found mainly in the dissolved phase, while the less polar cationic and nonionic SAAs were found mainly in the particulate phase (so their transport is associated with suspended solids). Other environmental factors such as pH, salinity, and the carbon or clay content of the particulate phase can also influence sorption [8, 16].
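The removal figures quoted above (roughly 90–95% overall elimination, with 15% to more than 90% of the removed load ending up in sludge) can be combined in a simple steady-state mass balance. The short Python sketch below is only an illustrative calculation with assumed example values; the influent concentration, removal efficiency, and sludge fraction are hypothetical inputs, not data from the cited studies.

def wwtp_mass_balance(c_influent_ug_L, removal_efficiency, fraction_to_sludge):
    """Simple steady-state mass balance for surfactant removal in a WWTP.

    c_influent_ug_L    -- influent concentration (ug/L), hypothetical value
    removal_efficiency -- fraction of the influent load eliminated from the water line (0-1)
    fraction_to_sludge -- fraction of the *removed* load that is sorbed to sludge (0-1, assumed)
    """
    removed = c_influent_ug_L * removal_efficiency   # eliminated from the water line
    effluent = c_influent_ug_L - removed             # remains in the discharged effluent
    to_sludge = removed * fraction_to_sludge         # partitioned onto sewage sludge
    degraded = removed - to_sludge                   # actually biodegraded
    return effluent, to_sludge, degraded

# Example with assumed numbers: 1000 ug/L influent, 92% removal, 40% of the removed load to sludge
effluent, sludge, degraded = wwtp_mass_balance(1000.0, 0.92, 0.40)
print(f"effluent: {effluent:.0f} ug/L, to sludge: {sludge:.0f} ug/L-equiv, degraded: {degraded:.0f} ug/L-equiv")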
Degradation of surface active agents by microorganisms (biodegradation) is the primary transformation taking place in different ecosystems that reduces their impact on living organisms. Biodegradation is an important process not only in wastewater treatment plants but also in the environment. During this process microorganisms utilize surfactants as substrates to produce energy and nutrients, or cometabolize them through microbial metabolic reactions. Many factors (e.g., the chemical structure of the analytes and physicochemical parameters of the ecosystem such as temperature, light, and salinity) affect the efficiency of surfactant biodegradation in the environment. Most compounds of this group can be rapidly degraded by microorganisms in the presence of oxygen (as required by current legislation), while some of them (e.g., LAS, DTDMAC) may be persistent under anaerobic conditions [8, 15, 17]. As shown in Table 3, the degradation pathway for alkyl trimethyl or dimethyl ammonium compounds (TMAC and DMAC) is initiated by N-dealkylation followed by N-demethylation (trimethylamine, dimethylamine, and methylamine were identified as intermediates of alkyl trimethyl ammonium salts in activated sludge). The length of the alkyl chain plays an important role in the fate and biological effects of these compounds in the environment. The aerobic biodegradability of quaternary ammonium compounds (QACs) decreases with the number of non-methyl alkyl groups (e.g., R4N < R3MeN < R2Me2N < RMe3N < Me4N; Me = methyl group) and with their substitution by a benzyl group [20, 21]. No biodegradation was observed for ditallow dimethyl ammonium chloride (DTDMAC) in an anaerobic screening assay, and this compound has been replaced by diethyl ester dimethyl ammonium chloride (DEEDMAC). Analytes from the LAS group containing longer alkyl chains and a benzene ring in the external position are more susceptible to biological degradation. Mono- and dicarboxylic sulfophenyl acids (SPCs), LAS biodegradation intermediates with alkyl chain lengths of 4 to 13, are formed in the following stages: ω-oxidation of the terminal carbon of the alkyl chain, successive β-oxidations, and further desulphonation [7, 23–26]. In surface waters, for example, primary degradation of LAS compounds (loss of their chemical structure and properties) is complete after about 4 days; the average half-life of some analytes is 10–24 h, and 56–90% mineralization can be reached within 7 to 30 days. Compounds from the group of fatty alcohol sulphates (AS) undergo rapid primary and ultimate biodegradation under both aerobic and anaerobic conditions [7, 8, 21]. The degradation process involves enzymatic cleavage of the sulphate ester bond to give a mixture of inorganic and organic products (sulphate and fatty alcohol). In further stages the alcohol is oxidized to an aldehyde and then to a fatty acid (β-oxidation pathway). Biodegradation of SDS has been reported in Antarctic coastal waters with half-lives of 160 to 460 h. The removal efficiency of SAS homologues in WWTPs was estimated at 64 to more than 99% (due to a combination of degradation in the activated sludge unit (84%) and sorption onto sludge (16%)).
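If first-order decay is assumed, the half-lives quoted above (for example, 10–24 h for some LAS analytes and 160–460 h for SDS in Antarctic waters) translate directly into remaining fractions over time. The short Python sketch below only illustrates that standard assumption and is not a model taken from the cited studies; the rate constant is obtained from the half-life as k = ln 2 / t1/2.

import math

def remaining_fraction(half_life_h, elapsed_h):
    """Fraction of a surfactant remaining after elapsed_h hours,
    assuming simple first-order (exponential) decay."""
    k = math.log(2) / half_life_h   # first-order rate constant, 1/h
    return math.exp(-k * elapsed_h)

# Example with assumed values: an analyte with a 24 h half-life after 4 days (96 h)
print(f"{remaining_fraction(24, 96):.1%} remaining")  # about 6% left, consistent with
                                                      # near-complete primary degradation in days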
The degradation process depends on the length of the alkyl chain in SAS homologues, because solubility in a polar medium decreases for longer chains (shorter alkyl chain = higher degradation percentage). In addition, long-chain SAS compounds show an approximately three times higher tendency to sorb onto sludge surfaces than short-chain compounds [29, 30]. High removal efficiencies from influent water have also been reported for other anionic surfactants such as AES. The degradation of nonionic surfactants has been investigated mainly using compounds from the group of nonylphenol ethoxylates (NPEOs). Their primary biodegradation is relatively fast (from 4 to 24 days) and their mineralization is typically 50 to 80% [32–34]. NPEO compounds of lower molecular weight are degraded more easily than those of higher molecular weight. It has been confirmed that during biodegradation the ethoxylate chain becomes progressively shorter as a result of hydrolysis and subsequent oxidation reactions, leading to short-chain NPEs with one or two ethoxylate groups [32, 36]. Moreover, oxidation of the ethoxylate chain can occur more often than hydrolysis (NPECs are the most frequently found metabolites: 69–98%), and NPEC compounds can be further degraded to NPE2C [33–35]. Alkylphenol diethoxycarboxylates can also be formed as a consequence of ω-oxidation and subsequent β-oxidation of the alkyl chain. Metabolites of NPEO compounds (NP, NPE1–2O, and NPEC) have been detected in different types of environmental samples [8, 37, 38]. The degradation of alcohol polyethoxylates (AEOs) has also been investigated. Biodegradation is more efficient for analytes with shorter alkyl and/or ethoxylate chains. In this process fatty acids and polyethylene glycols (PEG, which biodegrade more slowly, with production of carboxylic acids) can be formed by central cleavage of the molecule, followed by ω-oxidation and then β-oxidation of the alkyl chain [39, 40]. Branched AEO compounds are degraded more slowly, again via ω-oxidation and successive β-oxidation of the alkyl chain. Thus, surfactants occurring in water systems can be degraded easily (half-lives from hours to a few days), depending on their properties and on environmental parameters. SAA compounds can also undergo processes such as attachment to suspended solids and accumulation in sediments. Under oxygen-free conditions (starting at a depth of a few cm), surfactants can be degraded only via anaerobic pathways. In general, anaerobic processes are slower or are not observed at all (e.g., for DTDMAC), so the pollutants persist longer in sediments [8, 13]. However, in laboratory experiments an acceptable degradation percentage of LAS has been observed with anoxic marine sediments (up to 79% in 165 days). The following stages of the anaerobic degradation pathway of LAS have been reported: formation of initial reaction metabolites (generated via addition of fumarate), their biotransformation into sulfophenyl carboxylic acids, and progressive degradation by β-oxidation reactions [41–43]. For various reasons it is important to detect, identify, and monitor the levels of surfactants in aquatic environments. However, the analysis of SAA pollutants in environmental samples poses new challenges.
The search for new tools and new sources of information about the degree of pollution of different environmental compartments is dictated by toxicological considerations, the desire to describe the environment more accurately, and the need to study and protect the balance of aquatic ecosystems. Appropriate analytical tools (standard analytical methodologies or modifications thereof) are required to monitor the presence of surface active agents in various environmental samples. The determination of SAAs in such samples causes many problems, mainly because of [1, 6]: the complex composition of environmental samples (interfering components that increase or reduce the determined levels of analytes); the low concentrations of individual surfactants in such samples; the diverse chemical structures of surfactants (moreover, commercially available surfactants are mixtures containing twenty or more individual compounds); the amphiphilic nature of surfactants (a consequence of their chemical structure); and the limited availability of commercial standard solutions of surfactants (including isotope-labelled analytes). The complex and frequently variable matrix composition of environmental samples and the trace levels of SAAs mean that suitable isolation and/or preconcentration techniques have to be applied at the sample preparation stage. As a consequence of the amphiphilic nature of surfactant molecules, an internal standard has to be added to the sample before solvent extraction (to estimate analyte losses), which is problematic because of the lack of commercially available standards. Moreover, analytical methodologies for determining a wide range of SAAs present at different levels in environmental samples should be validated against certified reference materials. At present only liquid reference materials are available on the market; they can be used to validate methodologies for determining the total content of ionic (cationic or anionic) and nonionic surfactants, but there are no reference materials suitable for validating entire analytical procedures. These problems affect quality control and quality assurance of measurement results and may make it difficult to obtain reliable analytical information [44, 45]. Currently, despite an increase in the amount of information about the concentrations of SAA compounds in environmental samples, knowledge about the degree of pollution caused by surfactants is still too limited, and it remains impossible to determine how they affect the diversity of ecosystems. The total concentration of ionic and nonionic surfactants in different types of liquid/solid samples can be determined with standard analytical methodologies (including liquid–liquid extraction, solid–liquid extraction, or Soxhlet extraction, respectively).
Cationic surfactants can be determined as the sum of substances that form ion pairs with disulfine blue (DISB) and are isolated with an appropriate extraction technique. The total concentration of anionic surfactants can be evaluated as the substances that react with methylene blue (MB). At the final determination stage, a spectrophotometric technique can be employed to measure the total concentration of anionic surfactants in the solvent extract. The total concentration of nonionic surfactants can be determined with the same analytical technique but with different reagents. To determine individual surfactants belonging to the different classes of compounds, modifications of the available analytical procedures should be applied, using isolation and/or preconcentration techniques at the sample preparation stage (e.g., liquid–liquid extraction, solid phase extraction, solid phase microextraction) and chromatographic techniques coupled with different detection systems (e.g., GC-MS, HPLC-CD, HPLC-MS) at the final determination stage. Intensive research in this direction is carried out in only a few research centers, located in Spain, Germany, the USA, China, and elsewhere. Gas chromatography (GC) is limited to volatile analytes, a requirement met only by low-molecular-mass nonionic surfactants (containing a low number of ethoxylate groups); the technique becomes suitable for determining the concentrations of other nonionic and anionic surfactants only after derivatization with specific agents. According to the literature, high-performance liquid chromatography (HPLC) is at present the analytical technique most often used for the analysis of surface active agents of all classes (their homologues, oligomers, and isomers) in environmental samples. In most cases derivatization of the analytes is not necessary, because LC is suitable for determining low-volatility analytes with large molecules. This makes it possible to exclude that operation from analytical procedures and to simplify them, in line with the green analytical chemistry (GAC) concept. An undoubted advantage of the HPLC technique is that surfactant levels can be measured in a very short time. For determining the concentrations of individual surfactant analytes in appropriately prepared solvent extracts, the following types of detectors can be used: fluorescence (FLD), ultraviolet (UV), conductometric (CD), mass spectrometric (MS), or their combinations [1, 6]. The mass spectrometer is a universal detector that can be used for qualitative and quantitative determination of a wide range of trace analytes in a single analysis, but this detection technique has several disadvantages (high cost of equipment and operation, the need for highly qualified staff, and high-purity reagents). Because of these drawbacks, an ion chromatograph coupled to a conductometric detector can be applied, for example, to determine individual ionic (cationic and anionic) surfactants in solvent extracts. To investigate other groups of SAA compounds (those with chromophore groups in their molecules) in appropriate extracts, an ion chromatograph coupled to a UV detector can be used (data for compounds from the LAS group are under preparation by our research group). Another analytical tool used to determine surface active agents in combination with IC equipment is the evaporative light scattering detector (ELS).
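In the spectrophotometric procedures described above (e.g., the methylene blue method for the total concentration of anionic surfactants), the measured absorbance of the solvent extract is normally converted to a concentration through a linear calibration against standard solutions. The following Python sketch illustrates only that generic calibration step; the standard concentrations and absorbances are assumed example values, not data from any of the cited protocols.

import numpy as np

# Hypothetical calibration standards for a methylene-blue-type assay:
# concentrations in mg/L and the corresponding measured absorbances.
conc_std = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # mg/L
abs_std  = np.array([0.00, 0.11, 0.22, 0.45, 0.89])  # absorbance units

# Least-squares fit of A = slope * c + intercept (linear Beer-Lambert behaviour assumed).
slope, intercept = np.polyfit(conc_std, abs_std, 1)

def concentration_from_absorbance(a_sample):
    """Convert a sample absorbance to concentration using the calibration line."""
    return (a_sample - intercept) / slope

print(f"sample at A = 0.30 -> {concentration_from_absorbance(0.30):.2f} mg/L")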
The ELS detector can detect almost all relatively nonvolatile analytes and is insensitive to gradient elution conditions [50–52]. Analysis of the literature confirms that surface active agents are present in various elements of the environment, but researchers have focused mainly on the determination of anionic and nonionic analytes. Literature data on the determination of cationic SAAs are thus very limited, even though these compounds are characterized by higher toxicity toward living organisms and undergo sorption onto solid surfaces. Additionally, commercially available surfactants are mixtures of various homologues and/or isomers, which makes their determination in environmental samples more problematic. These aspects indicate the need to develop new methodologies using more selective and specific analytical tools. The ranges of concentrations, or mean values, of compounds from different surfactant groups determined in solid and liquid environmental samples are presented in Table 4. It is readily apparent that there is a lack of research addressing the determination of surface active agents (either total concentrations or the contents of individual analytes) in aerosol samples, atmospheric precipitation, and atmospheric deposit samples (e.g., dew, hoarfrost, and fog). Similarly, the transformation of SAAs in the snowpack deposited on the ground, as well as the transfer of the deposited SAAs to surface waters, has not yet been satisfactorily characterized, with hardly any publications on this topic. The degree of environmental contamination caused by surfactants should also be studied in these types of samples, because atmospheric deposition is considered a major source of various pollutants. Accordingly, the total concentrations of various groups of SAAs have been determined for the first time in atmospheric deposit samples collected during different seasons on Polish territory (Table 4, section "atmospheric waters"). To pursue this research aim, the next step in investigating the fate of surfactants in the environment should be the determination of individual analyte levels in the collected samples using appropriate analytical protocols. It should be noted that surface active agents can also derive from natural sources, particularly from phytoplankton activity. These organisms are present near the surface of the euphotic zone, where chemical exchange with the atmosphere is possible; such processes may have a significant influence on the boundary between the hydrosphere and the atmosphere (this aspect is discussed later in the paper) [53, 54]. A fraction of the surfactant load in influent streams can be emitted to the environment via discharge of wastewater treatment plant effluent into surface waters. In aquatic ecosystems the amount of SAA compounds can be reduced by different processes (dilution, bio- and photodegradation, and sorption to suspended solids and sediments). Application of sewage sludge (containing surfactants and other pollutants) on agricultural land as fertilizer has an impact on terrestrial organisms and plants. Cationic SAAs, because of the positive charge of their molecules, are strongly sorbed to the negatively charged solid surfaces of sludge, soil, sediments, metals, plastics, and cell membranes (susceptibility to accumulation and bioaccumulation).
Cationic surface active agents exhibit specific properties that may prevent (or retard) the growth of microorganisms or cause their mortality. These properties allow them to be applied as disinfectants and antiseptic agents in various products. On the other hand, the occurrence of cationic surfactants in aquatic ecosystems is very dangerous for aquatic organisms and, in the case of humans, may cause irritation or burns to the skin, eyes, and respiratory system [8, 56]. Anionic compounds can accumulate in aquatic organisms and interact with their cell membranes, proteins, and enzymes, causing disturbance of biological functions, cell lysis, or even death. Additionally, the surface-tension-lowering properties of anionic surfactants facilitate the migration of other toxic pollutants into living organisms [57–59]. The most widely applied group of nonionic surfactants (nonylphenol ethoxylates and octylphenol ethoxylates) undergoes rapid degradation in wastewater treatment plants into short-chain alkylphenol ethoxylates and carboxylated derivatives. Moreover, it should be noted that some APEO degradation products are also dangerous because of their estrogenic properties. Cationic and nonionic surfactants can influence the sorption and transport of some pharmaceutical compounds in living organisms, potentially reducing the rate of their migration through the subsurface. Some surfactants have low biodegradability, or their degradation products are more toxic than the parent compounds or even become endocrine disrupting compounds (EDCs). The toxicity of surfactants can be used to estimate their environmental risks; literature data on the toxicity of different classes of surfactants toward various test organisms are collected in Table 5. Chemical compounds with endocrine disrupting properties that occur in the environment can interfere with the normal functioning of the hormonal system in living organisms. Data recognizing the estrogenic properties of p-n-alkylphenol were first published in 1938. Further research has shown that compounds such as OP, NP, NPnEO (n = 2, 9, 40), and NP1EC have estrogenic activity toward different living organisms [63–67]. In investigations of the endocrine disrupting effects of mixtures of cationic, anionic, and nonionic surfactants and some of their degradation products, the parent compounds showed no estrogenic properties toward the tested organisms, but positive results were observed for their degradation products (OP, NP, NP1EC, NP2EC, and NP2EO). The previous section of this article presented information on the sorption and degradation of surfactants in wastewater treatment plants and in the environment; here, other important aspects of the impact of surfactants on the abiotic parts of the environment are reviewed. So far, the atmospheric input of surfactant compounds to the aquatic environment has received little consideration. However, evaporation of semivolatile compounds (anionic and nonionic surfactants or their degradation products) from surface waters, soils, and vegetation has been recognized as a significant source of such contaminants in the atmosphere. SAA compounds are able to act on processes occurring at different interfaces; they can also influence processes at the air–water interface
(e.g., suppression of evaporation, modification of the surface temperature field during free-surface flows, influence on gas transport, reduction of momentum transport from air to water, and damping of surface waves). Two mechanisms are believed to reduce the amount of surfactant on a water surface. First, a raindrop can split into many small drops covered with surface active agents; these drops can be transported away from the water surface by moving air masses. Secondly, bubbles can be formed by raindrop impacts; they move to the water surface, burst, and transport pollutants to the air [70–74]. Moreover, the occurrence of surfactants in the abiotic environment may disturb the equilibrium between different compartments. These compounds can also increase the solubility of organic compounds in the aqueous phase (increasing the mobility of toxic agents in different ecosystems). In the environment a specific system of this kind can be observed as the thin boundary between a water basin (e.g., the ocean) and the atmosphere, known as the sea-surface microlayer (SML). The formation of the SML is complicated, and it is not known which physical and chemical processes influence the migration of chemical compounds at this boundary. The occurrence of surface active agents in the SML has been demonstrated several times. Research in this area has shown that SAAs (consisting of low-molecular-weight carbonyl compounds) can also be generated in the microlayer by microorganisms. In surface water these compounds are produced photochemically by the degradation of refractory dissolved organic matter (e.g., humic substances) and are taken up quickly by microorganisms [75–77]. Accumulation of surfactants at the water surface can be toxic for marine and freshwater organisms, and some surfactants can cause chronic toxicity or estrogenic responses in aquatic species [54, 78–82]. At the sea surface, surfactants play a role in the recycling and long-range transport of pollutants via marine aerosols. Heavy metals (pollutants from crustal and urban sources) enriched at the sea surface have been found to interact with surface-active organic matter and become transferred into marine spray [83–85]. Early studies of surfactants in rain water and atmospheric aerosols found concentrations too low to have any effect on cloud physical processes at high dilution; however, more concentrated surface active substances, or those in different media, may influence the state of the gas–liquid interfaces of atmospheric particles and droplets. Models of cloud formation based on laboratory research suggest that organic compounds significantly decrease surface tension (a factor determining the droplet population). Surface tension, which is one of the factors controlling the vapor pressure of small droplets, is a consequence of intermolecular attractive forces tending to minimize the surface area of the liquid. Some surfactants found in the troposphere may therefore participate in the generation of more cloud water by reducing the surface tension within a droplet, behaving like cloud condensation nuclei (CCN) in the atmosphere. An increased number of CCN increases the albedo effect and influences climate change in certain areas (owing to the associated cooling effect in the atmosphere) [85, 88–90].
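The link between surface tension and the vapor pressure of small droplets invoked above is commonly expressed through the Kelvin relation; this is a standard result of cloud microphysics rather than an equation taken from the studies reviewed here, and it is quoted only to make the mechanism explicit:

\[ \frac{p(r)}{p_{\infty}} = \exp\!\left( \frac{2\,\sigma\,V_m}{r\,R\,T} \right) \]

where p(r) is the equilibrium vapor pressure over a droplet of radius r, p_infinity the vapor pressure over a flat surface, σ the surface tension, V_m the molar volume of the liquid, R the gas constant, and T the temperature. Because the exponent is proportional to σ, a surfactant film that lowers the surface tension lowers the supersaturation a droplet needs in order to grow, which is consistent with the cloud condensation nuclei behaviour described above.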
As a result of the intensification of certain types of human activity, and owing to their widespread use and free migration between phases, surfactants and their degradation products have been detected at various concentrations in different parts of the abiotic and biotic environment. The occurrence of surface active agents has been confirmed in atmospheric precipitation and deposits, surface waters, sediments, soils, and living organisms. Surfactants have been detected in samples (abiotic: air, snow, lake water, and sediment; biotic: marine and terrestrial organisms) from areas of human residence and economic exploitation as well as from remote regions such as the Antarctic. The global source refers to long-range atmospheric transport of pollutants (e.g., anionic surfactants) from lower-latitude areas, but their transport pathways are not well understood. It is thus widely accepted that SAAs represent an important anthropogenic pollutant emission with versatile environmental consequences. On the other hand, the literature review also shows that we are far from understanding the environmental fate of surfactants; there is a lack of data on the composition, properties, and behavior of organic material (especially surfactants) in the atmosphere and other environmental compartments [1, 92]. The analysis of atmospheric precipitation and deposits is one of the important aspects of assessing the degree of environmental pollution caused by surfactants. In both atmospheric precipitation and deposits, compounds released into the atmosphere are present in the gaseous phase, in the aerosol phase, or adsorbed on the surface of suspended particles. Two major deposition mechanisms of these pollutants are distinguished: wet removal (with rain and snow) and dry removal. Deposition is controlled by the distribution of chemical compounds between the different phases and by their physicochemical properties. A strong dependence has also been observed between SAA concentration and deposition on the one hand and the existing emission background and meteorological conditions on the other, particularly wind direction, sunlight (photodegradation), and rain or snowfall. It is therefore essential to monitor the content of these compounds in specific environmental samples (e.g., atmospheric precipitation and deposits) and to identify the parameters that may affect surfactant content in different ecosystems. Surfactants are widely applied in different areas of human activity, and after use they are discharged to wastewater treatment plants. After the appropriate processes (sorption, degradation), they can be emitted to surface waters with effluent streams. In aquatic ecosystems surfactants and their degradation products can also affect living organisms and the abiotic parts of the environment; moreover, they can interact with different interfaces (water–air, soil/sediment–water) and change the natural processes in those systems. In recent decades different types of surfactants have been determined in environmental samples using analytical protocols chosen according to the information required (total surfactant concentration or individual analytes from a given group of SAA compounds).
It is nevertheless still imperative to develop new analytical procedures for investigating SAA analytes occurring in ecosystems, to make the analyses easier, less costly, and safer for the biotic and abiotic elements of the environment. Research data confirm that these compounds are able to spread between waters, soils, the atmosphere, and living organisms in different geographic regions, but we are far from understanding the migration pathways and behavior of these pollutants and their impact on different ecosystems and living organisms. There is a need to investigate these processes in order to protect both the abiotic and the biotic parts of the environment.
Owing to the specific structure of their molecules, surfactants are applied in many areas of human activity (industry, households). After use and discharge from wastewater treatment plants in effluent streams, surface active agents (SAAs) are emitted to various elements of the environment (atmosphere, waters, and solid phases), where they can undergo numerous physicochemical processes (e.g., sorption, degradation) and migrate freely. Additionally, SAAs present in the environment can accumulate in living organisms (bioaccumulation), which can have a negative effect on the biotic elements of ecosystems (e.g., toxicity, disturbance of endocrine equilibrium). They also increase the solubility of organic pollutants in the aqueous phase and promote their migration and accumulation in different environmental compartments. Moreover, surfactants found in aerosols can affect the formation and development of clouds, which is associated with a cooling effect in the atmosphere and with climate change. The environmental fate of SAAs is still largely unknown, and recognition of this problem will contribute to the protection of living organisms as well as to the preservation of the quality and balance of various ecosystems. This work contains basic information about surfactants and an overview of the pollution of different ecosystems caused by them (their classification and properties, areas of use, and their presence and behavior in the environment).
Update: Reuters reports that the Syrian defector will be allowed to stay in Jordan on "humanitarian grounds," according to a Jordanian security official. In what appears to be the first defection of its kind, a Syrian fighter pilot landed at a military base in neighboring Jordan Thursday and requested asylum, according to Jordanian officials. "The pilot, identified as Col. Hassan Hammadeh, removed his air force tag and kneeled on the tarmac in prayer after landing his plane at King Hussein Air Base in Mafraq, Jordan," an anonymous Jordanian official tells the Associated Press. Defecting from the army is one thing (thousands of soldiers have deserted in the last 15 months), but flying off in a blaze of glory makes for quite the symbolic kiss-off. And it appears the pilot had good reason. According to Reuters' Suleiman Al-Khalidi and Khaled Yacoub Oweis, the pilot's hometown of Kfar Takharim has been hammered by Syrian shelling in the last several months, including artillery barrages and helicopter attacks in the last week. "Opposition sources said Hamada is a 44-year-old Sunni Muslim from Idlib province and he had smuggled his family to Turkey before his dramatic defection." It looks like it might be difficult for Syria to shrug this one off, as earlier today state TV reported that a plane flown by the air force colonel went missing during training. As for the rebels, they're wasting no time claiming a small victory and taking credit: the Associated Press reports that Free Syrian Army spokesman Ahmad Kassem says the rebels "had encouraged the pilot to defect and monitored his activity until the jet landed safely in Jordan." As scholars have noted, the country's strong military has been "entrenched with the state since the Ba'ath takeover in the '70s," making high-level defections a rarity. Is this an outlier or a sign of more defections to come? ||||| AMMAN (Reuters) - A Syrian air force pilot flew his MiG-21 fighter plane over the border to Jordan and was granted political asylum on Thursday, the first defection with a military aircraft since the start of the uprising against President Bashar al-Assad. Colonel Hassan Hamada landed at the King Hussein military air base 80 km (50 miles) northeast of Amman and immediately asked for sanctuary, Jordanian officials told Reuters. "The cabinet has decided to grant the Syrian pilot political asylum upon his request," Jordanian Minister of State for Information Samih al-Maaytah told Reuters. Syria's Defense Ministry called the pilot a "traitor to his country and his military honor," saying it would punish Hamada under military law and was in contact with Jordan to retrieve the aircraft. In Washington, the Pentagon was delighted. "We very much welcome the pilot's decision to do the right thing," said spokesman George Little. "We have long called for members of the Syrian armed forces and members of the Syrian regime to defect and to abandon their positions rather than be complicit in the regime's atrocities." The defection will boost the morale of the rebels as Assad's forces intensify efforts to crush the uprising and international peace efforts are stalled. Thousands of soldiers have deserted in the 15 months since the revolt broke out, and they now form the backbone of the rebel army.
But unlike last year's uprisings in Libya and Yemen, no members of Assad's inner circle have broken with him. The army maintained its bombardment of downtown areas of Homs on Thursday despite a temporary truce that had been agreed to allow the evacuation of civilians and the wounded. Aid workers from the International Committee of the Red Cross and Syrian Arab Red Crescent were forced to turn back because of shooting. "We could not identify the source of the shooting," said ICRC spokesman Hicham Hassan. "We will still attempt to enter the affected areas of Homs city but we cannot confirm the timing for that. Our dialogue with the parties continues," Hassan said. The aid workers returned to Damascus. Syrian state television blamed "armed terrorist groups" for thwarting the Red Cross mission, while opposition activists in the city said the heavy army shelling of Sunni Muslim neighborhoods of Homs prevented the evacuation of civilians. "The army has no intention of relieving the humanitarian situation. They want Homs destroyed," a Homs-based activist, Abu Salah, told Reuters. The pro-opposition Syrian Observatory for Human Rights said 125 people were killed around the country during the day, with at least 18 of them in Homs. SHELLING IN DAMASCUS SUBURB In Douma, a conservative Sunni suburb of Damascus, army shelling killed at least 20 people as rebels fought tank-backed forces to prevent them from advancing into the district, home to 300,000 people, opposition activists said. Assad, a member of Syria's Alawite minority, an offshoot of Shi'ite Islam, has sent tanks across the country to put down the mostly Sunni-led uprising, which started with peaceful demonstrations and was later coupled with an armed insurgency against his rule, which began in 2000 when he inherited power from his late father. Opposition sources said pilot Hamada is a 44-year-old Sunni Muslim from Idlib province and he had smuggled his family to Turkey before his dramatic defection. His hometown Kfar Takharim has been repeatedly shelled in the past several months and suffered intense artillery and helicopter bombardments in the last few days, opposition campaigners who spoke to his family said. Many air force personnel as well as army soldiers are from Syria's Sunni majority, although intelligence and senior officers are largely Alawite. The International Institute for Strategic Studies says the air force has 365 combat-capable aircraft, including 50 MiG-23 Flogger and MiG-29 Fulcrum fighters, and 40,000 personnel - a reflection of the overwhelming military advantage Assad has over his poorly equipped foes. The most prominent defection so far in the conflict was that of Colonel Riad al-Asaad last July, who helped set up the rebel Free Syria Army after taking refuge in Turkey. Last week Brigadier General Ahmad Berro, head of a tank unit in Aleppo province, fled with his family, also to Turkey. The defection could complicate the international scenarios of a conflict that many governments fear could spread beyond Syria and throughout the already volatile Middle East. Ties between Jordan and Syria were already strained - Jordan has criticized Assad over his crackdown on the uprising but has been restrained in its rhetoric. Amman is nervous about a possible Syrian military reaction after months of border tension as thousands of Syrians flee the violence to Jordan. A Jordanian official, who asked not to be named, said the incident with the pilot was "difficult to handle".
RUSSIAN HELICOPTERS The United Nations says more than 10,000 people have been killed by Assad's forces during the conflict. The government says at least 2,600 members of the military and security forces have been killed by what it characterizes as a plot by foreign-backed "Islamist terrorists" to bring it down. With a joint U.N.-Arab League ceasefire plan in tatters and the international community divided, world leaders and diplomats have been unable to stop the bloodshed. Moscow confirmed on Thursday that it was trying to send repaired combat helicopters to Syria but said they could "be used only for repelling foreign aggression and not against peaceful demonstrators". Russia, one of Assad's main suppliers of military equipment, has shielded its long-standing ally Syria from tougher U.N. sanctions. It says the solution must come through political dialogue, an approach most of the Syrian opposition rejects. The Arab League's deputy secretary general, Ahmed Ben Helli, criticized Russia on Thursday for selling arms to Syria and said U.N. sanctions could be needed to force Assad and the rebels to implement international envoy Kofi Annan's peace plan. "Any assistance in aiding violence should be stopped. When you deliver military equipment you are helping to kill people. That should be stopped," he told Russia's Interfax news agency. (Additional reporting by Oliver Holmes, Erika Solomon and Dominic Evans in Beirut, Thomas Grove in Moscow, Stephanie Nebehay in Geneva, David Cutler in London; Editing by Jon Hemming) ||||| A Syrian fighter pilot on a training mission flew his MiG-21 warplane to Jordan on Thursday and asked for political asylum, the first defection of an air force pilot with his plane during the 15-month uprising against President Bashar Assad. [Photo captions omitted: amateur video images released by Ugarit News and the Shaam News Network, accessed Thursday, June 21, 2012, purporting to show shelling, smoke, and wounded civilians in Homs, Syria (Associated Press).] The air force is considered fiercely loyal to Assad's regime, and the defection suggests some of Syria's most ironclad allegiances are fraying. It was a triumph for the rebels fighting to overthrow Assad. A spokesman for the rebel Free Syrian Army, Ahmad Kassem, said the group had encouraged the pilot to defect and monitored his activity until the jet landed safely in Jordan. The pilot, identified as Col.
Hassan Hammadeh, removed his air force tag and kneeled on the tarmac in prayer after landing his plane at King Hussein Air Base in Mafraq, Jordan, 45 miles (70 kilometers) north of Amman, a Jordanian security official said. He said Jordanian officials were questioning the defector, but he will be allowed to stay in the country on "humanitarian grounds." "He was given asylum because if he returned home, his safety will not be guaranteed. He may be tortured or killed," the official said. He declined to say what Jordan will do with the jet. The official insisted on anonymity, citing the sensitivity of the matter. Syria's state-run TV reported earlier in the day that authorities had lost contact with a MiG-21 that was on a training mission in the country. The report gave no further details. Jordanian Information Minister Sameeh Maaytah confirmed that the pilot had defected. The defection is a sensitive issue for Jordan, which wants to avoid getting dragged into the Syrian conflict. Jordan already has taken in 125,000 Syrian refugees, including hundreds of army and police defectors, and Syria is seeking their return. Syria is one of Jordan's largest Arab trade partners, with bilateral trade estimated at $470 million last year. The Syrian regime has been hit with defections before, although none as dramatic as the fighter pilot's. Most have been low-level conscripts in the army. In March, however, Turkish officials said that two Syrian generals, a colonel and two sergeants had defected from the army and crossed into Turkey. Also in March, Syria's deputy oil minister became the highest-ranking civilian official to join the opposition and urged his countrymen to "abandon this sinking ship" as the nation spiraled toward civil war. Brig. Gen. Mostafa Ahmad al-Sheik, who fled to Turkey in January, was the highest-ranking officer to bolt. In late August, Adnan Bakkour, the attorney general of the central city of Hama, appeared in a video announcing he had defected. In January, Imad Ghalioun, a member of Syria's parliament, left the country to join the opposition, saying the Syrian people are suffering sweeping human rights violations. Mroue reported from Beirut.
– A Syrian fighter pilot deserted today—and he took his jet with him. A pilot identified as Colonel Hassan Hamada, 44, went AWOL during a training mission near the Jordanian border today, then landed in Jordan and requested political asylum, Reuters reports. His motives aren't terribly mysterious: His hometown of Kfar Takharim has been under artillery and helicopter fire in recent days. Hamada reportedly smuggled his family to Turkey before his fateful flight. Jordanian officials are currently debriefing Hamada. They tell the AP he'll be allowed to stay in the country on "humanitarian grounds," since if he returned to Syria he might be tortured or killed. Syria is one of Jordan's top Arab trading partners. The defection is noteworthy because the air force has long been considered one of Assad's most loyal units. Plus, as the Atlantic puts it, "Defecting from the army is one thing, but flying off in a blaze of glory makes for quite the symbolic kiss-off."
In their doubly deprotonated form, bis(arylcarboxamido)pyridines 1 have been used as ligands to support nickel and copper complexes that exhibit novel properties. A unique anionic copper(II)–superoxide complex supported by 1 (R = iPr) acts as a nucleophile, in contrast to other such species supported by neutral N-donor ligands. Monoanionic nickel(II)– and copper(II)–hydroxide complexes supported by 1 (R = iPr or Me) undergo CO2 fixation reactions at exceptionally high rates and react with CH3CN in an unprecedented manner to yield cyanomethide complexes, [(1)M(CH2CN)] (R = Me; M = Ni or Cu). In addition, one-electron oxidation of the copper(II)–hydroxide complexes yields thermally unstable Cu(III) species that rapidly oxidize dihydroanthracene via hydrogen atom abstraction (HAT). Among the various factors that underlie these unique observations, the dianionic nature and strong electron-donating properties of the supporting ligand 1 would appear to be key. As part of ongoing studies of these various influences, we asked: what would be the consequences of decreasing the negative charge of the supporting ligand while keeping its steric properties approximately constant? As a first step toward addressing this question experimentally, we targeted ligands 2a–2c for synthesis and for study of their coordination chemistry. These ligands may be viewed as a hybrid of the aforementioned 1 and bis(arylimino)pyridines such as 3, which have been widely studied, including with Cu(II). Ligand 2b has been reported, but only as a product of an oxidation of a reduced Ni(II) complex of 3; a direct large-scale synthesis was not described, and 2a and 2c are new. Alkyl-substituted analogues 4, which in deprotonated form would be expected to be more basic than the monoanionic versions 2a–2c, have been used to prepare Ni(II), Pd(II), and Fe(II) catalysts (e.g., for olefin polymerizations). Ligands 5 and 6 are noteworthy relatives of 2a–2c insofar as they contain similar tridentate, mer, monoanionic N-donor sets. Herein, we report reproducible, large-scale synthetic routes to 2a–2c and the results of explorations of their ability to complex divalent metal ions, with an emphasis on Cu(II). We found that metalations in the absence of base result in complexes that exhibit carboxamide O,N,N-coordination and that subsequent treatment of these compounds with base induces isomerization to carboxamido N,N,N-coordination. The structural and spectroscopic characterization of the complexes provides a foundation for future studies of biomimetic and/or catalytic reactivity. The report of L(H) (2b) sparked our interest in arylcarboxamido(arylimino)pyridine ligands and motivated the development of a large-scale synthesis that could be modified to give access to a series of related ligands with variable aryl substitution. We found that treatment of 6-acetylpicolinic acid with oxalyl chloride, followed by the desired aniline in the presence of NEt3, yielded the ketocarboxamide precursors 7 (Scheme 1). Addition of 7a or 7b to a preformed mixture of TiCl4 and the second aniline provided L(H) (2a–2c) in total yields of up to 47%. The indicated formulations of 7a, 7b and 2a–2c were supported by 1H and 13C NMR spectroscopy and, in the case of L(H) (2c), by X-ray crystallography.
In the X-ray crystal structure of 2c, the amide, pyridine, and imine moieties are coplanar, but with the imine donor facing away from the putative metal-ion binding pocket (Figure 1a and Table 1). Figure 1 shows representations of the X-ray crystal structures of (a) L(H) (2c), (b) LCuCl (8b), (c) LCuCl (8a), and (d) LCuOAc (9b), with all non-hydrogen atoms drawn as 50% thermal ellipsoids; see Table 1 for selected interatomic distances and angles, and the CIFs (Supporting Information) for full lists of atomic coordinates and bond distances. Treatment of L(H) (2a–2c) with sodium methoxide in the presence of CuCl2 yielded the complexes LCuCl (8a–8c) (Scheme 2). The related complexes LCuOAc (9b, 9c) were synthesized by refluxing L(H) (2b) or L(H) (2c), respectively, with Cu(OAc)2·H2O in MeCN. The formulations of all of these compounds are supported by UV–vis and EPR spectroscopy, ESI mass spectrometry, and X-ray crystallographic data (8a, 8b, and 9b in Figure 1; 8c and 9c in Figure S2, Supporting Information). Similar N,N,N-coordination of the arylcarboxamido(arylimino)pyridine ligand is apparent in all of the X-ray structures, each of which shows a tetragonal geometry for the Cu(II) ion, with average Cu–N distances of 1.927, 1.980, and 2.100 Å for the three nitrogen donors. The observation of the shortest Cu–N bond for the pyridyl group is consistent with previously reported structures of complexes of the bis(arylcarboxamido)pyridine or diiminopyridine ligands 1 and 3. Apparently as a result of the decreased steric bulk of its methyl-substituted aryl groups, the X-ray structure of LCuCl (8a) is composed of polymeric repeating units resulting from axial coordination of the carboxamide carbonyl of one monomer to the copper center of a neighboring unit (8a: Cu1–O11 = 2.345(3) Å). Similar axial coordination, albeit intramolecular and involving an acetate ligand O atom, is observed in iPr2LCuOAc (9b: Cu1–O2 = 2.369(2) Å; Figure 1d) and LCuOAc (9c: Cu1–O3 = 2.456(3) Å; Figure S6b, Supporting Information). X-band EPR spectra of solutions of LCuCl (8a–8c) and LCuOAc (9b, 9c) in CH2Cl2/toluene (1:1 v/v) at 2–30 K exhibit rhombically distorted axial signals with resolved N-superhyperfine coupling (8a, 8b in Figure 2; 8c, 9b, 9c in Figure S3, Supporting Information). These parameters compare favorably with those obtained for Cu(II) complexes of the bis(arylcarboxamido)pyridine ligand 1, as illustrated by entries 6 and 7. From the combined data, it appears that a gz value of about 2.2, a large A(Cu) of about 195 × 10⁻⁴ cm⁻¹, and well-resolved N-superhyperfine features are signatures of N,N,N-coordination of the supporting ligand. The only exception to this generalization is the smaller A(Cu) value and less well resolved N-superhyperfine coupling for 8a. With the data in hand, we can only speculate that the outlier properties of 8a result from the reduced steric bulk of the aryl groups in this complex, perhaps enabling axial ligand interactions with the copper center (as seen in its X-ray structure) that perturb the EPR spectrum. Figure 2 shows the EPR spectra (black) and simulations (gray) of (a) LCuCl (8a) and (b) LCuCl (8b), measured in frozen solution at 2–30 K; units of A are 10⁻⁴ cm⁻¹.
Cyclic voltammetry was performed on the complexes LCuCl (8b) and LCuOAc (9b) to investigate the effect of the asymmetric ligand environment on the oxidation potential of neutral LCuX (X = Cl, OAc) complexes in comparison with the previously studied anionic [(1)CuX]⁻ (R = iPr, X = Cl) compounds. A reversible oxidative wave was observed for LCuCl (8b) upon scanning anodically, with E1/2 = 0.760 V vs Fc/Fc⁺ and ΔEp = 62 mV (50 mV s⁻¹, 0.1 M Bu4NPF6 in acetone; Figure 3, red trace). In comparison with the analogous [(1)CuCl]⁻ complex (R = iPr; E1/2 = 0.296 V vs Fc/Fc⁺), the oxidation potential of LCuCl (8b) is larger by almost 0.5 V (Figure 3). Data for LCuOAc (9b) under identical conditions (0.1 M Bu4NPF6 in acetone) showed a slightly lower oxidation potential of E1/2 = 0.708 V vs Fc/Fc⁺ at scan rates greater than 1000 mV s⁻¹; scan rates below 500 mV s⁻¹ resulted in an irreversible oxidative wave (Figure S5b, Supporting Information). The approximately 0.5 V larger oxidation potentials of LCuCl (8b) and LCuOAc (9b) relative to the analogues supported by 1 support the hypothesis that installing the neutral imine donor into the ligand framework significantly raises the oxidation potential of N,N,N-copper(II) complexes. Figure 3 shows the cyclic voltammograms of [(1)CuCl]⁻ (black trace) and LCuCl (8b) (red trace), all recorded in acetone (0.1 M Bu4NPF6). Figure 4 shows the EPR spectra (black) and simulations (gray) of (a) LCuOAc (9c) and (b) [L(H)Cu(MeCN)2](SbF6)2 (11); parameters derived from the simulations are listed in Table 2. In the absence of coordinating halides, a variety of solvent-labile cationic copper(II) complexes with bound solvent ligands were prepared by treatment of L(H) (2b) or L(H) (2c) with [Cu(MeCN)5](SbF6)2 (Scheme 3). X-ray crystal structures of the complexes [L(H)Cu(MeCN)](SbF6)2 (10, Figure 5b) and [L(H)Cu(OH2)(THF)](SbF6)2 (12, Figure 5c) revealed tetragonal copper-ion geometries with O,N,N-ligation at typical Cu–O,N distances (Table 1). The metal–ligand bond distances (Table 1) are generally longer than those in the N,N,N-coordinated complexes, as expected from the difference in the protonation state of the ligands (neutral for O,N,N- vs anionic for N,N,N-coordination). Longer axial interactions with counterions (10: Cu–F = 2.662(2) and 2.712(2) Å; 12: Cu–O(THF) = 2.235(2) Å) are also present. In addition, in 12 two THF solvate molecules form hydrogen bonds to the bound water molecule, with H(water)···O(THF) distances of 1.788(9) and 1.802(11) Å, respectively. Figure 5 shows representations of the X-ray crystal structures of (a) L(H)ZnCl2 (15), (b) [L(H)Cu(MeCN)](SbF6)2 (10), and (c) [L(H)Cu(OH2)(THF)](SbF6)2 (12) (omitting one SbF6⁻ and showing two additional THF solvate molecules), with all non-hydrogen atoms shown as 50% thermal ellipsoids and the hydrogen atoms attached to the amide N atoms and the H2O molecule as spheres; see Table 1 for selected interatomic distances and angles. Consideration of the EPR spectra of complexes 10–12 reveals notable differences from the spectra of 8 and 9, which allow N,N,N- and O,N,N-coordination to be distinguished (Table 2 and Figure S3, Supporting Information). Notably, the complexes with O,N,N-coordination display larger gz values (about 2.3 vs 2.2), decreased rhombicity (gx ≈ gy), and smaller A(Cu) values (about 160 vs 190 × 10⁻⁴ cm⁻¹).
in addition , n - superhyperfine coupling is not observed for any of the o , n , n-copper(ii ) complexes . these differences are illustrated in figure 4 , in which data and simulations for lcuoac ( 9c ) and [ l(h)cu(mecn)2][(sbf6)2 ] ( 11 ) are directly compared . n , n-coordination included l(h)mcl2 ( m = cu , co , zn ) , which were generated through the combination of divalent metal ions with l(h ) ( 2c ) in the absence of added base ( scheme 3 ) . for example , treatment of l(h ) ( 2c ) with mcl2 ( m = cu , co , zn ) yielded the neutral complexes 1315 . visible spectroscopy , esi - ms , elemental analysis , and , in the cases of 14 ( m = co ) and 15 ( m = zn ) , by x - ray crystallography . the x - ray structures of 14 and 15 are essentially isostructural , with five - coordinate geometries illustrating o , n , n-binding of the protonated forms of the arylcarboxamido(arylimino)pyridine ligand ( 15 in figure 5a ; 14 in figure s6c , supporting information ) . coordination geometries intermediate between square - pyramidal and trigonal - bipyramidal are indicated by values of 0.566 ( 14 ) and 0.491 ( 15 ) . consistent with the solvent - labile cationic copper(ii ) metal ligand bond distances , those in 14 and 15 are elongated relative to those in the n , n,n-coordinated complexes ( table 1 ) . in both structures , solvent molecules in the crystal lattice propagate hydrogen - bonding networks through intermolecular interactions with the amide proton of the bound ligand l(h ) ( 2c ) . in the absence of suitable crystals for structure determination by x - ray diffraction , the formulation of 13 ( m = cu ) is supported by chn analysis results and the presence of a peak envelope for [ l(h)cucl ] in the esi mass spectrum , which is consistent with the [ l(h)mcl ] peaks observed for 14 and 15 . as described above , o , n , n-bound complexes of l(h ) or n , n,n-bound complexes of l may be accessed by performing the syntheses in the absence or presence of base . in addition , we have been able to demonstrate that addition of base can induce conversion of the former to the latter type . such a linkage isomerization reaction was identified by monitoring reactions of l(h)cucl2 ( 13 ) with net3 by epr and uv vis spectroscopy ( figures s7 and s8 , supporting information ) . preparation and analysis of a uniform series of independent frozen solution ( 1:1 , mecn / toluene ) samples of l(h)cucl2 ( 13 ) after reaction with increasing amounts of net3 ( ranging from 0 to 2 equiv of net3 ) by epr spectroscopy allowed the reaction to be monitored incrementally . interestingly , the epr spectra of l(h)cucl2 ( 13 ) exhibit an isotropic signal , which does not vary upon preparation in various solvents and analysis under a range of temperatures ( 230 k ) . while this signal deviates from the previously observed spectral features for the o , n , n- and n , n,n-coordinated copper(ii ) series of compounds , related isotropic epr signals have been reported for similar neutral n , n , n - coordinated cux2 ( x = cl , clo4 , scn , no3 ) complexes . upon reaction of l(h)cucl2 ( 13 ) with net3 , the isotropic epr signal diminishes in intensity as features consistent with the axial signal of lcucl ( 8c ) appear . this axial signal displays g and a(cu ) values in agreement with the epr spectra of independently synthesized lcucl ( 8c ) . 
consistent with this result , the progressive addition of increasing amounts of net3 to a solution of l(h)cucl2 ( 13 ) results in a color change from orange to dark green , which is characteristic of lcucl ( 8c ) . the absorption features for the latter reached maximum intensity upon addition of 1 equiv of net3 . also , single crystals isolated from thf solutions of l(h)cucl2 ( 13 ) after reaction with net3 were determined to be isostructural to those obtained from independently synthesized lcucl ( 8c ) by x - ray diffraction analysis . in conclusion , we have developed a modular synthesis for the preparation of arylcarboxamido(arylimino)pyridine ligands and demonstrated their abilities to coordinate a variety of metal(ii ) ions ( cu , co , and zn ) . synthetic procedures for preparation of complexes featuring anionic n , n,n-carboxamido or neutral o , n , n-carboxamide ligation , as well as demonstration of linkage isomerization from o , n , n- to n , n,n-coordination , have been established within these novel ligand frameworks . extensive spectroscopic and structural characterization of a variety of metal(ii ) complexes in various coordination environments has provided an insight into how the asymmetric carboxamido(arylimino)pyridine framework influences the properties of these novel complexes . ongoing investigations are focused on further establishing how these ligands support metal complexes in higher oxidation states and their potential reactivity . all solvents and reagents were obtained from commercial sources and used as received unless otherwise stated . the solvents tetrahydrofuran ( thf ) , diethyl ether ( et2o ) , toluene , pentane , and dichloromethane were passed through solvent purification columns ( glass contour , laguna , ca ) . dichloromethane and acetonitrile were dried over cah2 and then distilled under vacuum prior to use . acetone was dried over activated 3 molecular sieves and distilled under vacuum prior to use . purified solvents were stored in a nitrogen - filled glovebox over either activated 3 molecular sieves or cah2 and filtered through a 0.45 m ptfe syringe filter immediately before use . all complexes were prepared under dry nitrogen using standard schlenk techniques or in a vacuum atmospheres inert atmosphere glovebox , unless otherwise stated . 2,6-dibromopyridine was recrystallized from benzene / n - heptane and dried prior to use . the synthesis of 6-acetylpyridine-2-carboxylic acid was performed according to the literature , with slight modifications ( see the supporting information for details ) . vis spectra were recorded with an hp8453 ( 1901100 nm ) diode array spectrophotometer . ( parsippany , nj ) and robertson microlit laboratory ( ledgewood , nj ) . epr spectra were recorded with a bruker continuous wave elexsys e500 spectrometer at either 2 or 30 k. epr simulations were performed by using bruker simfonia software ( version 1.25 ) . nmr spectra were recorded on either varian vi-300 or vxr 300 spectrometers at room temperature . chemical shifts ( ) for h and c nmr spectra were referenced to residual protium in the deuterated solvent ( h ) or the characteristic solvent resonances of the solvent nuclei ( c ) . esi - ms were recorded with a bruker biotof ii instrument in positive ion mode . cyclic voltammetry was performed in a three - electrode cell with a ag / ag reference electrode , a platinum auxiliary electrode , and a glassy carbon working electrode and analyzed with basi epsilon software . 
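The half-wave potentials and peak separations quoted earlier (for example E1/2 = 0.760 V vs Fc/Fc+ and a peak separation of 62 mV for LCuCl (8b)) follow from the anodic and cathodic peak potentials of a reversible wave recorded with the three-electrode setup just described: E1/2 = (Epa + Epc)/2 and the separation is Epa - Epc. The sketch below only illustrates that arithmetic; the individual peak potentials are not reported in the text, so the values used here are back-calculated and hypothetical.

```python
# Minimal sketch: E1/2 and peak-to-peak separation from the anodic/cathodic peak
# potentials of a reversible cyclic voltammetry wave. The peak potentials below are
# back-calculated from the reported E1/2 and 62 mV separation for 8b (illustration only).

def e_half(E_pa: float, E_pc: float) -> float:
    """Half-wave potential (V): mean of the anodic and cathodic peak potentials."""
    return (E_pa + E_pc) / 2.0

def peak_separation_mV(E_pa: float, E_pc: float) -> float:
    """Peak-to-peak separation in mV; ~59 mV is the ideal one-electron value at 298 K."""
    return (E_pa - E_pc) * 1000.0

E_pa, E_pc = 0.791, 0.729   # V vs Fc/Fc+, illustrative values
print(f"E1/2 = {e_half(E_pa, E_pc):.3f} V vs Fc/Fc+")                 # 0.760 V
print(f"peak separation = {peak_separation_mV(E_pa, E_pc):.0f} mV")   # 62 mV
```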
x - ray crystallography data collections and structure solutions were conducted by using either siemens smart or bruker apex ii ccd instruments and the current shelxtl suite of programs . 6-acetyl-2-pyridinecarboxylic acid ( 1.69 g , 10.3 mmol ) was dissolved in toluene ( 100 ml ) , treated with oxalyl chloride ( 1.39 ml , 16.5 mmol ) , and refluxed 16 h under n2 . the resulting brown solid and 2,6-diisopropyl aniline hydrochloride salt ( 1.1 equiv , 2.4 g , 11.3 mmol ) were dissolved in thf ( 75 ml ) and cooled to 0 c under n2 . triethylamine ( 2.5 equiv , 3.6 ml , 25.7 mmol ) was then added via syringe , resulting in the immediate formation of a white precipitate . after stirring for 15 min at 0 c , the reaction mixture was warmed to room temperature and subsequently brought to reflux for 2 h. after cooling to room temperature , the reaction mixture was filtered and the resulting brown filtrate was concentrated by rotary evaporation . the resulting residue was then washed with hexanes to yield a brown solid and isolated via filtration . the brown solid was then dissolved in a 10:90% etoac : pentane solution and passed through charcoal . evaporation of the resulting filtrate yielded a white solid ( 2.46 g , 74% ) . h nmr ( 300 mhz , cd2cl2 ) : h 9.35 ( br s , 1h , nh ) , 8.43 ( d , 1h , j = 8.4 hz , py h ) , 8.23 ( d , 1h , j = 7.5 hz , py h ) , 8.10 ( t , 1h , j = 7.8 hz , py h ) , 7.37 ( t , 1h , j = 7.6 hz , ar h ) , 7.26 ( d , 2h , j = 7.2 hz , ar h ) , 3.14 ( m , 2h , ar ch(ch3)2 ) , 2.77 ( s , 3h , c(o)ch3 ) , 1.22 ( d , 12 h , j = 6.9 hz , ar ch(ch3)2 ) . c nmr ( 300 mhz , cd2cl2 ) : c 23.9 , 26.1 , 29.5 , 124.1 , 124.6 , 126.3 , 128.9 , 131.9 , 139.5 , 146.9 , 149.7 , 152.6 , 163.4 , 199.0 . calcd for c20h24n2o2 : c 74.04 , h 7.46 , n 8.64 . found : c 73.96 , h 7.29 , n 8.55 . 7b was synthesized following the identical procedure as was used for 7a , except with the substitution of 2,6-dimethylaniline for 2,6-diisopropyl aniline h nmr ( 300 mhz , cd2cl2 ) : h 9.43 ( br s , 1h , nh ) ; 8.44 ( d , 1h , j = 7.5 hz , py h ) , 8.22 ( d , 1h , j = 7.8 hz , py h ) , 8.10 ( t , 1h , j = 7.8 hz , py h ) , 7.17 ( br s , 3h , ar h ) , 2.78 ( s , 3h , c(o)ch3 ) , 2.31 ( s , 6h , ar ch(ch3)2 ) . c nmr ( 300 mhz , cd2cl2 ) : c 18.8 , 26.1 , 124.5 , 126.2 , 127.8 , 128.7 , 134.5 , 136.0 , 139.4 , 149.8 , 152.6 , 162.1 , 199.0 . calcd for c16h16n2o2 : c 71.62 , h 6.01 , n 10.44 . found : c 71.71 , h 6.01 , n 10.40 . 2a was synthesized following the identical procedure as was used for 2b , except starting from 7b instead of 7a ( 1.43 g , 48% ) . h nmr ( 300 mhz , cd2cl2 ) : h 9.50 ( br s , 1h , nh ) , 8.62 ( d , 1h , j = 7.8 hz , py h ) , 8.35 ( d , 1h , j = 6.6 hz , py h ) , 8.07 ( t , 1h , j = 7.8 hz , py h ) , 7.166.92 ( m , 6h , ar h ) , 2.31 ( s , 6h , ar ch(ch3)2 , n - arylcarboxamide ) , 2.23 ( s , 3h , n = cch3 ) , 2.04 ( s , 6h , ar ch(ch3)2 , n - arylimine ) . c nmr ( 300 mhz , cd2cl2 ) : c 16.8 , 18.2 , 18.9 , 123.7 , 124.0 , 124.4 , 125.8 , 127.7 , 128.4 , 128.6 , 136.0 , 138.7 , 155.5 , 176.7 . found : c 77.49 , h 6.69 , n 11.40 a solution of 2,6-diisopropylaniline ( 3.7 ml , 19.8 mmol ) was dissolved in 100 ml of toluene and cooled to 0 c under n2 . ticl4 ( 0.36 ml , 3.3 mmol ) was added via syringe , and the resulting cloudy brown solution was stirred for 2 h. after warming the solution to room temperature , a solution of 7a ( 2.14 g , 6.6 mmol ) in 40 ml of toluene was added to the reaction . the reaction mixture was then refluxed for 16 h. 
after cooling to room temperature , et2o ( 100 ml ) was added and the reaction mixture was stirred for 15 min . the reaction mixture was then filtered through celite , and the brown - yellow filtrate was concentrated via rotary evaporation . the resulting brown - yellow solid was purified by column chromatography on silica gel ( etoac / pentane ( 1:10 ) ; rf = 0.36 ) to yield a yellow solid ( 2.02 g , 63% ) . the h nmr and high - resolution esi - ms of 2b are previously reported and correlate well with the current data . h nmr ( 300 mhz , cd2cl2 ) : h 9.45 ( br s , 1h , nh ) , 8.61 ( d , 1h , j = 7.8 hz , py h ) , 8.36 ( d , 1h , j = 7.5 hz , py h ) , 8.08 ( t , 1h , j = 7.8 hz , py h ) , 7.397.08 ( m , 6h , ar h ) , 3.17 ( m , 2h , ar ch(ch3)2 , n - arylcarboxamide ) , 2.76 ( m , 2h , ar ch(ch3)2 , n - arylimine ) , 2.26 ( s , 3h , n = cch3 ) , 1.241.14 ( m , 24 h , ar ch(ch3)2 ) . c nmr ( 300 mhz , cd2cl2 ) : c 17.5 , 23.1 , 23.5 , 23.9 , 28.9 , 29.5 , 30.3 , 123.6 , 124.1 , 124.1 , 124.4 , 124.5 , 128.8 , 132.1 , 136.2 , 138.8 , 146.7 , 146.9 , 149.3 , 155.5 , 163.9 , 166.4 . calcd for c32h41n3o : c 79.46 , h 8.54 , n 8.69 . found : c 79.42 , h 8.66 , n 8.51 . 2c was synthesized following the identical procedure as was used for 2b , except using 2,6-dimethylaniline instead of 2,6-diisopropylaniline ( 1.81 g , 64% ) . single crystals suitable for x - ray diffraction were obtained from slow evaporation of a concentrated ch2cl2 solution at room temperature . cd2cl2 ) : h 9.45 ( br s , 1h , nh ) , 8.63 ( d , 1h , j = 7.8 hz , py h ) , 8.36 ( d , 1h , j = 7.5 , py h ) , 8.08 ( t , 1h , j = 7.8 hz , py h ) , 7.396.93 ( m , 6h , ar h ) , 3.17 ( m , 2h , ar ch(ch3)2 ) , 2.23 ( s , 3h , n = cch3 ) , 2.05 ( s , 6h , ar ch(ch3)2 , n - arylimine ) , 1.24 ( d , 12h , j = 6.9 hz , ar ch(ch3)2 , n - arylcarboxamide).c nmr ( 300 mhz , cd2cl2 ) : c 16.8 , 18.2 , 23.9 , 29.5 , 123.7 , 124.1 , 124.1 , 124.5 , 125.8 , 128.4 , 128.8 , 132.1 , 138.7 , 146.9 , 149.1 , 149.3 , 155.5 , 163.9 , 166.5 . calcd for c28h33n3o : c 78.65 , h 7.78 , n 9.83 . found : c 78.59 , h 7.80 , n 9.79 . 8a was synthesized analogously to 8b and 8c , except using 2a instead of 2b and the reaction time was shortened to 30 min ( longer times resulted in lower yields ) ( 0.111 g , 76% ) . single crystals suitable for x - ray diffraction were obtained from diffusion of et2o into a concentrated mecn solution at 20 c . ms ( esi+ , ch3oh ) : m / z = 490.64 [ 8a + na ] . vis ( ch2cl2 ) max ( , m cm ) : 435 ( 1964 ) ; 655 ( 348 ) nm . epr [ 9.64 ghz , thf / toluene ( 1:1 ) , 2 k ] : gx = 2.08 , gy = 2.05 , gz = 2.23 ; a(cu ) : 165 10 cm ; a(n ) : 12.5 10 cm ; a(cl ) : 12.5 10 cm . anhydrous cucl2 ( 0.0353 g , 0.263 mmol ) and 2b ( 0.1156 g , 0.239 mmol ) were placed in a 100 ml round - bottom flask and dissolved in 20 ml of thf , forming a golden brown solution . sodium methoxide ( 0.5 m in meoh , 0.57 ml , 0.287 mmol ) was added , causing the solution to turn dark green with a light - colored precipitate . after stirring for 16 h , the resulting green residue was dissolved in ch2cl2 ( 10 ml ) and filtered to remove any insoluble material . pentane ( 50 ml ) was then added , and the mixture was placed in a 20 c freezer for several hours . the resulting green solid was isolated by vacuum filtration ( 0.101 g , 73% ) . single crystals suitable for x - ray diffraction were obtained from diffusion of pentane into a concentrated ch2cl2 solution at 20 c . ms ( esi+ , ch3oh ) : m / z = 581.16 [ 8b + na ] . 
uv vis ( ch2cl2 ) max ( , m cm ) : 440 ( 1785 ) ; 675 ( 260 ) nm . epr [ 9.64 ghz , ch2cl2/toluene ( 1:1 ) , 2 k ] : gx = 2.065 , gy = 2.090 , gz = 2.200 ; a(cu ) : 196 10 cm ; a(n ) : 15 10 cm ; a(cl ) : 15 10 cm . calcd for c32h40clcun3o : c 66.07 , h 6.93 , n 7.22 . found : c 65.98 , h 6.89 , n 7.13 . 8c was synthesized following an identical procedure as was used for 8b , except using 2c instead of 2b ( 0.0989 g , 70% ) . single crystals suitable for x - ray diffraction were obtained from diffusion of pentane into a concentrated ch2cl2 solution at 20 c . ms ( esi+ , ch3oh ) : m / z = 548.24 [ 8c + na ] . vis ( ch2cl2 ) max ( , m cm ) : 435 ( 1976 ) ; 660 ( 346 ) nm . epr [ 9.64 ghz , ch2cl2/toluene ( 1:1 ) , 2 k ] : gx = 2.060 , gy = 2.045 , gz = 2.185 ; a(cu ) : 197 10 cm ; a(n ) : 15 10 cm , a(cl ) : 15 10 cm . found : c 63.85 , h 6.04 , n 7.94 . a suspension of 2b ( 100 mg , 0.21 mmol ) and cu(oac)2h2o ( 45 mg , 0.23 mmol ) in 50 ml of mecn was heated to reflux for 2 h , resulting in a dark green solution . upon cooling to room temperature , the reaction was stirred with mgso4 for 30 min . the reaction mixture was then filtered , and the solvent was removed via rotary evaporation to yield a dark green solid ( 0.0964 g , 77% ) . single crystals suitable for x - ray diffraction were obtained from diffusion of pentane into a concentrated ch2cl2 solution at 20 c . ms ( esi+ , ch3oh ) : m / z = 545.21 [ 9b oac ] . vis ( ch2cl2 ) max ( , m cm ) : 385 ( 1972 ) ; 655 ( 275 ) nm . epr [ 9.64 ghz , dcm / toluene ( 1:1 ) , 30 k ] : gx = 2.0375 , gy = 2.0725 , gz = 2.2100 ; a(cu ) : 190 10 cm , a(n ) : 15 10 cm . calcd for c34h43cun3o3 : c 67.47 , h 7.16 , n 6.94 . found : c 67.43 , h 7.17 , n 6.85 . single crystals suitable for x - ray diffraction were obtained from diffusion of pentane into a concentrated ch2cl2 solution at 20 c ( 0.103 g , 80% ) . ms ( esi+ , ch3oh ) : m / z = 489.13 [ 9c oac ] . vis ( acetone ) max ( , m cm ) : 375 ( 1860 ) ; 645 ( 343 ) nm . epr [ 9.64 ghz , ch2cl2/toluene ( 1:1 ) , 2 k ] : gx = 2.070 , gy = 2.055 , gz = 2.200 ; a(cu ) : 194 10 cm , a(n ) : 15 10 cm calcd for c30h35cun3o3 : c 65.61 , h 6.42 , n 7.65 . found : c 65.49 , h 6.41 , n 7.54 . cu(mecn)5(sbf6)2 ( 81 mg , 0.10 mmol ) and 2b ( 50 mg , 0.11 mmol ) were combined in 4 ml of thf . the resulting green powder was washed with pentane ( 3 10 ml ) and dried under vacuum for 1 h ( 0.767 g , 70% ) . single crystals suitable for x - ray diffraction were obtained from diffusion of pentane into a concentrated ch2cl2 solution at 30 c . ms ( esi+ , ch3oh ) : m / z = 545.23 [ lcu ] . vis ( ch2cl2 ) max ( , m cm ) : 428 ( 1864 ) ; 665 ( 463 ) nm . epr [ 9.64 ghz , ch2cl2/toluene ( 1:1 ) , 30 k ] : gx = 2.06 , gy = 2.07 , gz = 2.27 ; a(cu ) : 165 10 cm . calcd for c32h41n3ocusb2f12 ( l(h)cu ; the mecn ligand was lost upon drying of the crystals under vacuum prior to analysis ) : c 37.73 , h 4.06 , n 4.12 . found : c 37.43 , h 4.26 , n 4.76 . 11 was synthesized following the procedure as was used for 10 except 2c was used in place of 2b ( 0.0890 g , 73% ) . ms ( esi+ , ch3oh ) : m / z = 489.18 [ lcu ] . vis ( ch2cl2 ) max ( , m cm ) : 415 ( 1060 ) ; 690 ( 215 ) nm . epr [ 9.64 ghz , ch2cl2/toluene ( 1:1 ) , 30 k ] : gx = 2.06 , gy = 2.07 , gz = 2.27 ; a(cu ) : 165 10 cm . calcd for c32h39n5ocusb2f12 : c 36.79 , h 3.76 , n 6.70 . found : c 36.73 , h 3.81 , n 6.44 . cu(mecn)5(sbf6)2 ( 93 mg , 0.12 mmol ) and 2c ( 57 mg , 0.12 mmol ) were combined in 4 ml of thf in a glovebox . 
after stirring for 30 min , the reaction was removed from the glovebox and 10 ml of wet solvent ( thf ) was added to the reaction mixture . the reaction was allowed to continue stirring for 1 h , after which the solvent was removed . the resulting green residue was taken up in 5 ml of thf , and pentane ( 100 ml ) was added to the flask . the solid was isolated via vacuum filtration and washed with pentane ( 3 10 ml ) . single crystals suitable for x - ray diffraction were obtained from diffusion of pentane into a concentrated ch2cl2 solution at 20 c ( 0.0884 g , 63% ) . ms ( esi+ , ch3oh ) : m / z = 489.21 [ lcu ] . vis ( ch2cl2 ) max ( , m cm ) : 410 ( 2395 ) ; 695 ( 375 ) nm . epr [ 9.64 ghz , thf / toluene ( 1:1 ) , 30 k ] : gx = 2.03 , gy = 2.11 , gz = 2.27 ; a(cu ) : 155 10 cm . anal . calcd for c32h43cuf12n3o3sb2 : c 36.51 , h 4.12 , n 3.99 . found : c 36.60 , h 4.29 , n 3.76 . anhydrous cucl2 ( 16 mg , 0.12 mmol ) and 2c ( 50 mg , 0.12 mmol ) were combined in 4 ml of mecn . the solution was stirred at room temperature for 30 min , resulting in an orange - brown solution . et2o ( 12 ml ) was added to the solution , which was then cooled to 30 c . the resulting orange - brown solid was collected by vacuum filtration , washed with pentane ( 3 10 ml ) , and dried under vacuum for 1 h ( 0.0624 g , 95% ) . ms ( esi+ , ch3oh ) : m / z = 525.27 [ 13 cl ] . vis ( mecn ) max ( , m cm ) : 400(sh ) ( 726 ) ; 450 ( 700 ) ; 890 ( 94 ) nm . epr [ 9.64 ghz , mecn / toluene ( 1:1 ) , 30 k ] : gx , y , z = 2.14 . calcd for c28h33cl2n3ocu : c 59.84 , h 5.92 , n 7.48 . found : c 59.71 , h 5.82 , n 7.46 . cocl2 ( 16 mg , 0.12 mmol ) and 2c ( 53 mg , 0.12 mmol ) were stirred in 10 ml of a 1:1 acetone / mecn mixture to yield a bright green solution . after stirring for 2 h , the reaction mixture was filtered and the filtrate was concentrated to approximately 2 ml total volume . et2o ( 10 ml ) was added to the solution , which was then cooled to 30 c . the resulting green powder was collected by vacuum filtration , washed with pentane ( 3 10 ml ) , and dried under vacuum for 1 h ( 0.0463 g , 71% ) . single crystals suitable for x - ray diffraction were obtained from diffusion of et2o into a concentrated mecn solution at 30 c . ms ( esi+ , ch3cn ) : m / z = 521.06 [ 14 cl ] . vis ( mecn ) max ( , m cm ) : 590 ( 230 ) ; 685 ( 303 ) nm . calcd for c28h33cl2n3oco : c 60.33 , h 5.97 , n 7.54 . found : c 60.18 , h 5.87 , n 7.45 . zncl2 ( 16 mg , 0.12 mmol ) and 2c ( 50 mg , 0.12 mmol ) were dissolved in 4 ml of thf . after stirring for 15 min , a light colored precipitate formed in the solution . the solid was collected by vacuum filtration , washed with pentane ( 3 10 ml ) , and dried under vacuum for 1 h ( 0.0488 g , 74% ) . single crystals suitable for x - ray diffraction were obtained from diffusion of et2o into a concentrated mecn solution at 30 c . h nmr ( 300 mhz , ( cd3)2so ) : h 10.18 ( br s , 1h , nh ) ; 8.55 ( d , 1h , j = 7.5 hz , py h ) , 8.248.18 ( m , 2h , py h ) , 7.366.90 ( m , 6h , ar h ) , 3.11 ( m , 2h , ar ch(ch3)2 ) , 2.94 ( s , 3h , n = cch3 ) , 1.99 ( s , 6h , ar ch(ch3)2 , n - arylimine ) , 1.15 ( d , 12h , j = 6.6 hz , ar ch(ch3)2 , n - arylcarboxamide ) . ms ( esi+ , ch3cn ) : m / z = 526.17 [ 15 cl ] . anal . calcd for c28h33cl2n3ozn : c 59.64 , h 5.90 , n 7.45 . found : c 59.58 , h 5.78 , n 7.35 .
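The "anal. calcd" percentages and the isolated yields quoted throughout this experimental section are simple molar-mass arithmetic and can be checked as in the sketch below. The atomic masses and helper functions are illustrative scaffolding rather than anything taken from the paper; the example reproduces the calculated C/H/N values for 7a (C20H24N2O2) to within rounding of the atomic masses, and the roughly 73% yield of LCuCl (8b) from 0.239 mmol of 2b and 0.101 g of isolated product.

```python
# Minimal sketch (not from the paper): check an "anal. calcd" entry and an isolated
# yield with molar-mass arithmetic. Atomic masses are standard values.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999,
               "Cl": 35.45, "Cu": 63.546}

def molar_mass(formula: dict) -> float:
    """Molar mass (g/mol) of a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

def chn_percent(formula: dict) -> dict:
    """Theoretical C, H, N mass percentages for an elemental analysis."""
    M = molar_mass(formula)
    return {el: 100.0 * ATOMIC_MASS[el] * formula.get(el, 0) / M for el in ("C", "H", "N")}

# 7a = C20H24N2O2: reported calcd C 74.04, H 7.46, N 8.64
print({el: round(p, 2) for el, p in chn_percent({"C": 20, "H": 24, "N": 2, "O": 2}).items()})

# Yield of 8b (LCuCl, C32H40ClCuN3O) from 0.239 mmol of 2b and 0.101 g of product
M_8b = molar_mass({"C": 32, "H": 40, "Cl": 1, "Cu": 1, "N": 3, "O": 1})
theoretical_g = 0.239e-3 * M_8b
print(f"M(8b) = {M_8b:.1f} g/mol, yield = {100 * 0.101 / theoretical_g:.0f}%")   # ~73%
```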
The synthesis of a series of asymmetric mixed 2,6-disubstituted (arylcarboxamido)(arylimino)pyridine ligands and their coordination chemistry toward a series of divalent first-row transition metals (Cu, Co, and Zn) have been explored. Complexes featuring both anionic N,N,N-carboxamido and neutral O,N,N-carboxamide coordination have been prepared and characterized by X-ray crystallography, cyclic voltammetry, and UV-visible and EPR spectroscopy. Specifically, RLM(X) (M = Cu; X = Cl, OAc) and RL(H)MX2 (M = Cu, Co, Zn; X = Cl, SbF6) complexes that feature N,N,N- or O,N,N-coordination are presented. Base-induced linkage isomerization from O,N,N-carboxamide to N,N,N-carboxamido coordination is also confirmed by multiple forms of spectroscopy.
consider the piecewise linear periodic function @xmath12 \text { for some } n\in{\mathbb z } , \\ n - t & \text{if } t\in[n-\frac{1}{2},n ] \text { for some } n\in{\mathbb z } , \end{cases}\ ] ] whose graph looks as follows ( 300,80)(-30,-10 ) ( -20,0)(1,0)300 ( 275,-10)@xmath13 ( 0,-10)(0,1)70 ( -20,20)(1,-1)20 ( 0,0)(1,1)40 ( 40,40)(1,-1)40 ( 80,0)(1,1)40 ( 120,40)(1,-1)40 ( 160,0)(1,1)40 ( 200,40)(1,-1)40 ( 240,0)(1,1)20 for every @xmath14 consider the function @xmath15 which is a homothetic copy of the function @xmath16 . spaces called _ shark teeth _ are constructed in @xcite and are parametrized by an infinite non - decreasing sequence @xmath17 . let @xmath18\times\{0\}$ ] be the _ bone _ of shark teeth , and for every @xmath19 let @xmath20\big\}$ ] be the @xmath21th _ row _ of teeth . the space shark teeth is given by the following formula @xmath22 in @xcite is shown that the shark teeth constructed in the plane @xmath23 with the non - decreasing sequence @xmath24 where @xmath25 is the integer part of @xmath26 , is not homeomorphic to an ifs - attractor ( see figure [ shark ] ) . in other words it is not an ifs - attractor in any metric . ] we show that the space @xmath27 from @xcite is a topological ifs - attractor . for @xmath19 and the sets @xmath28 , @xmath29 and @xmath27 , by the same names we denote the functions : @xmath30\ni t\to \big(t,\frac{1}k\varphi_{n_k}(t)\big)\in m_k,\ ] ] @xmath31\ni t\to ( t,0)\in i \text { and}\ ] ] @xmath32\ni t\to i(t)\cup\bigcup_{k=1}^\infty m_k(t).\ ] ] note that for every @xmath33 there exists unique @xmath34 $ ] , such that @xmath35 or @xmath36 for some @xmath21 . therefore we can represent every point of the space @xmath27 as an element from the unit interval and perhaps with positive parameter @xmath21 . note that for @xmath37 and for every @xmath38 we have @xmath39 , because then @xmath26 belongs to @xmath29 . in three steps we will present the construction of topological ifs and prove that @xmath27 is its attractor . * let @xmath40 be the collection of continuous functions on @xmath27 to itself such that for every @xmath33 @xmath41 @xmath42 @xmath43 @xmath44 thus the union of images of @xmath27 under every function @xmath45 fills up the first row of the teeth @xmath46 . analogously we construct functions @xmath47 which fill up the second row @xmath48 . now we are going to construct functions @xmath49 and @xmath50 which cover left and right side of the rest of rows . define @xmath51 , so it only shifts values of function @xmath49 . for every @xmath52 let us define @xmath53 as @xmath54-th _ generation _ of shark teeth . we can also treat it like a function @xmath55\ni t\to \bigcup\{m_k(t ) : n_k = i\}\in g_i$ ] . note that every row in one generation contains the same number of teeth ( @xmath56 ) . by @xmath57 we denote the number of first row of teeth in @xmath58 , and by @xmath59 we denote the number of rows in @xmath58 . function @xmath49 has to transform every generation into the left part of next generation , so let @xmath60 be the number of rows from @xmath61 filled by one row from @xmath58 . in our case @xmath62 and @xmath63 for every @xmath52 . we want the function @xmath49 to transform whole row from @xmath58 into @xmath64 rows from @xmath65)$ ] . therefore , points @xmath66 for @xmath67 and some positive @xmath21 , must have distinct values @xmath68 in the same order on @xmath29 . 
to obtain this , every tooth from @xmath58 must be divided into @xmath69 pieces , which each of them covers one tooth from @xmath61 and the last one fills small part of bone @xmath29 . in other words for @xmath70 a tooth from @xmath71\big)$ ] is transformed by @xmath49 into @xmath64 teeth from @xmath72\big)$ ] and bone @xmath73\big)$ ] ( see figure [ tooth ] ) . is transform to @xmath64 teeth from @xmath61 and small part of bone @xmath29 . ] note that for @xmath74 and for similarity @xmath75 , we can write @xmath76=p_{i , j}([0,1])$ ] . moreover , define @xmath77 $ ] for @xmath78 . now we can present the formula for the function @xmath49 : @xmath79 and for @xmath52 , @xmath80 and @xmath70 we have @xmath81 we can write that @xmath82 . indeed @xmath83 and easy calculations can show that for every @xmath52 we have @xmath84)\cup i([0,\frac{1}2])$ ] and @xmath85)\cup i([\frac{1}2,1])$ ] , so @xmath86 * step 2 . * according to the definition of functions @xmath45 and @xmath47 we have the following property for @xmath87 @xmath88 so for every positive @xmath8 and connected set @xmath89 we have @xmath90 where @xmath91 and analogously for functions @xmath47 . we know also the similar thing about functions @xmath92 . for any positive @xmath8 @xmath93 where @xmath94 . this arose due to the fact that for every @xmath52 and @xmath70 @xmath95\big)\big ) = g_{i+1}\big(\big[\frac{j}{2^{i+1 } } , \frac{j+1}{2^{i+1}}\big]\big)\cup i\big(\big[\frac{j}{2^{i+1 } } , \frac{j+1}{2^{i+1}}\big]\big).\ ] ] * step 3 . * let @xmath7 be an open cover of @xmath27 . in the last step we are going to find a positive number @xmath96 , such hat the diameter of @xmath97 is less than the lebesgue number @xmath98 of @xmath7 , where @xmath99 . let us consider every possible compositions of functions from @xmath100 . we will study the diameter of image of the space @xmath27 under this composition . from step 2 we know that composition of functions only from @xmath101 , from @xmath102 or from @xmath103 makes half the size of the space @xmath27 ( see equations ( [ zlozenia_g ] ) and ( [ zlozenia_f ] ) ) . note also that for every connected set @xmath89 its images @xmath104 , @xmath105 and @xmath106 are contained in @xmath107 , @xmath108 and @xmath109 respectively , so @xmath110 @xmath111 because they are all singletons . this means that if the functions @xmath45 , @xmath47 and @xmath92 appear in composition in the above order , the diameter of the image will be 0 . it only remains for us to consider the compositions of the form @xmath112 and analogously @xmath113 , where @xmath114 and @xmath115 . let @xmath116 be the lipschitz constant of function @xmath49 and @xmath50 restricted to @xmath21-th generation . it is finite because of the definition of @xmath49 . note that the set @xmath112 is contained in generation @xmath117 , so we obtain @xmath118 on the other hand @xmath119 now fix @xmath120 such that @xmath121 and fix @xmath122 such that @xmath123 then we claim the thesis holds for @xmath124 . indeed , all images of @xmath27 under compositions only from @xmath101 , from @xmath102 or from @xmath103 have diameters less than @xmath98 , because of the definition of @xmath125 . moreover @xmath126 for @xmath114 and @xmath115 because 1 . if @xmath127 then @xmath128 2 . if @xmath129 then @xmath130 analogously we show that @xmath131 . the others compositions transform whole space @xmath27 into the point so the diameter of the image of @xmath27 is @xmath132 . this ends the proof . 
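The quantitative core of step 3 is that a composition drawn from a single family at least halves the diameter of the image with every additional map, so the required composition length is set by how many halvings it takes to drop below the Lebesgue number of the cover. The toy calculation below evaluates that depth under the simplifying assumption that each map exactly halves the diameter; it ignores the mixed compositions, whose treatment via the Lipschitz constants is described in the proof above.

```python
# Toy illustration of step 3: if each additional map in a composition at least halves
# the diameter of the image, then a composition of length m shrinks the space below
# the Lebesgue number lam of the cover once diam_X / 2**m < lam. Mixed-family
# compositions, handled separately in the proof, are not modeled here.

import math

def required_depth(diam_X: float, lebesgue_number: float) -> int:
    """Smallest m with diam_X / 2**m < lebesgue_number."""
    if diam_X < lebesgue_number:
        return 0
    return math.floor(math.log2(diam_X / lebesgue_number)) + 1

for lam in (0.5, 0.1, 0.01):
    m = required_depth(diam_X=1.5, lebesgue_number=lam)
    print(f"Lebesgue number {lam}: m = {m}, diameter after m maps <= {1.5 / 2**m:.4f}")
```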
In fact, the construction above can be extended to all shark teeth. If we try to construct a topological IFS for shark teeth with an arbitrary sequence @xmath17, we may encounter the following problems: (1) some @xmath58 are empty; then we have to renumber the sequence @xmath58 so that the empty sets are omitted. (2) @xmath133; then define @xmath134, where @xmath135 is the minimal integer greater than or equal to @xmath26. Consequently, the formula for the function @xmath49 changes slightly: the last row of teeth from every @xmath54-th generation has to be transformed into fewer than @xmath64 rows from @xmath61, which can be done by covering some rows from @xmath61 once again. (3) @xmath64 is odd; then we do not have to cover a small part of the bone under every tooth, so we divide every tooth from @xmath58 into @xmath64 pieces, as in figure [tooth2].
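Because several of the defining formulas earlier in this section survive only as @xmath placeholders, the sketch below records the construction as it can be reconstructed from the surrounding prose: the periodic profile is the distance to the nearest integer (the tent-shaped graph shown above), its homothetic copy is taken here as phi_n(t) = phi(n*t)/n, the k-th row of teeth is M_k(t) = (t, phi_{n_k}(t)/k) as in the displayed map, and the space is the bone [0,1] x {0} together with all rows. The form of the homothetic copy and the sample sequence n_k used below are assumptions for illustration only.

```python
# Illustrative reconstruction of the shark-teeth set (assumptions flagged above):
# phi is the distance to the nearest integer, phi_n(t) = phi(n*t)/n is the assumed
# homothetic copy, and the k-th row of teeth is M_k(t) = (t, phi_{n_k}(t)/k).

import numpy as np

def phi(t):
    """Piecewise linear 1-periodic tooth profile: distance to the nearest integer."""
    t = np.asarray(t, dtype=float)
    return np.abs(t - np.round(t))

def phi_n(t, n):
    """Homothetic copy of phi with n teeth per unit interval (assumed form)."""
    return phi(n * t) / n

def shark_teeth_points(n_seq, samples=2001):
    """Sample the bone [0,1]x{0} and the rows M_k for a non-decreasing sequence n_seq."""
    t = np.linspace(0.0, 1.0, samples)
    rows = [np.column_stack([t, np.zeros_like(t)])]        # the bone
    for k, n_k in enumerate(n_seq, start=1):               # the rows of teeth M_k
        rows.append(np.column_stack([t, phi_n(t, n_k) / k]))
    return rows

rows = shark_teeth_points(n_seq=[1, 1, 2, 2, 3, 3, 3, 3])  # hypothetical sequence
print(len(rows), rows[1][:, 1].max())   # 9 point sets (bone + 8 rows); tallest tooth of M_1 is 0.5
```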
We show that the space called shark teeth is a topological IFS-attractor, that is, for every open cover of $X=\bigcup_{i=1}^{n}f_i(X)$, its image under every suitably large composition from the family of continuous functions $f_1,\dots,f_n$ lies in some set from the cover. In particular, there exists a space which is not homeomorphic to any IFS-attractor but is a topological IFS-attractor. Iterated function systems (IFS) are among the most popular and simple methods of constructing fractal structures, with wide applications in data compression, computer graphics, medicine, economics, earthquake and weather prediction, and many others. A compact metric space $X$ is called an _IFS-attractor_ if $X=f_1(X)\cup\dots\cup f_n(X)$ for some contractions $f_1,\dots,f_n:X\to X$. In this case the family $\{f_1,\dots,f_n\}$ is called an _iterated function system_. We recall that a map $f:X\to X$ is a _contraction_ if its Lipschitz constant $\mathrm{Lip}(f)$ is less than 1. The notion of an iterated function system was introduced by John Hutchinson in 1981 @xcite and popularized by Michael Barnsley @xcite. Topological properties of IFS-attractors were studied in @xcite, @xcite and @xcite. In particular, the definition of a topological IFS-attractor was proposed in the last paper: a compact topological space $X$ is a _topological IFS-attractor_ if $X=f_1(X)\cup\dots\cup f_n(X)$ for some continuous maps $f_1,\dots,f_n:X\to X$ with the property that for any open cover $\mathcal{U}$ of $X$ there is $m\in\mathbb{N}$ such that for any functions $f_{i_1},\dots,f_{i_m}\in\{f_1,\dots,f_n\}$ the set $f_{i_1}\circ\dots\circ f_{i_m}(X)$ lies in some set $U\in\mathcal{U}$. Note that every compact metric space $X$ is a topological IFS-attractor if, for any open cover $\mathcal{U}$ of $X$, the diameter of the set $f_{i_1}\circ\dots\circ f_{i_m}(X)$ is less than the Lebesgue number of $\mathcal{U}$, for some $m$ and every $f_{i_1},\dots,f_{i_m}\in\{f_1,\dots,f_n\}$. It is easy to see that each IFS-attractor is a topological IFS-attractor but not the other way around. Moreover, we show that a space called shark teeth, constructed in @xcite, which is not homeomorphic to the attractor of any iterated function system, is a topological IFS-attractor.
null
Efficient dynamic nuclear polarization (DNP) in solids, which enables very high sensitivity NMR experiments, is currently limited to temperatures of around 100 K and below. Here we show how, by choosing an adequate solvent, 1H cross-effect DNP enhancements of over 80 can be obtained at 240 K. To achieve this we use the biradical TEKPol dissolved in a glassy phase of ortho-terphenyl (OTP). We study the solvent DNP enhancement of both TEKPol and BDPA in OTP in the range from 100 to 300 K at 9.4 and 18.8 T. Surprisingly, we find that the DNP enhancement decreases only relatively slowly for temperatures below the glass transition of OTP (Tg = 243 K), and 1H enhancements of around 15-20 at ambient temperature can be observed. We use this to monitor molecular dynamic transitions in the pharmaceutically relevant solids ambroxol and ibuprofen.
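To put the quoted enhancement factors in perspective: in the usual signal-averaging picture, a DNP enhancement multiplies the NMR signal, so the time needed to reach a given signal-to-noise ratio drops by roughly the square of the enhancement, all else (linewidths, relaxation, filling factor) being equal. The sketch below simply evaluates that generic rule of thumb for the values quoted in the abstract; it is not a calculation from the paper.

```python
# Generic rule of thumb (not from the paper): an enhancement epsilon multiplies the
# signal, so reaching the same signal-to-noise ratio takes ~epsilon**2 less averaging
# time, assuming everything else about the experiment is unchanged.

def time_saving_factor(enhancement: float) -> float:
    """Approximate reduction in averaging time for equal S/N (scales as epsilon^2)."""
    return enhancement ** 2

for eps in (80, 20, 15):
    print(f"enhancement {eps:>3}: ~{time_saving_factor(eps):,.0f}x shorter averaging")
```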
Image: Jason Koebler Facebook really didn’t want this to happen. On Wednesday, a British politician who has been highly critical of the social media giant publicly dumped a huge cache of sensitive internal Facebook documentsfor anyone to download and read. The documents include details on the distribution of Facebook’s various apps; how the company worked very closely with some app developers to grant them access to user data, and how the company specifically incentivizes sharing on the platform in order to feed that data back to advertisers. They also include information about how the company tried to hide and downplay the amount of data that it collected from the Android version of the Facebook app. The documents also include emails between top company executives, including COO Sheryl Sandberg and CEO Mark Zuckerberg. “Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user would be controversial,” a summary of the documents written by Damian Collins, Conversative MP and Chairman of the Digital Culture, Media and Science Committee who published the documents, reads. “To mitigate any bad PR, Facebook planned to make it as hard of possible for users to know that this was one of the underlying features of the upgrade of their app.” The news signals an escalation in the fallout around Facebook’s Cambridge Analytica and data sharing scandals, which have irked European politicians in particular. Collins tweeted a link to the documents, which are hosted on Parliament’s official website. “I believe there is considerable public interest in releasing these documents. They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” Collins tweeted. Collins obtained the documents in a rather unusual way. As Buzzfeed News recently reported, Ted Kramer, managing director of a company called Six4Three which has used Facebook data in the past, is suing Facebook in California, and Kramer was given the sealed documents as part of discovery in that case. Kramer then traveled to the UK in possession of the documents, and was met with an obscure UK legal power, demanding he hand them over to Collins’ Committee. A Facebook spokesperson told Motherboard in a statement "As we've said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context." Update: This piece has been updated to include comment from Facebook. ||||| Last week, the House of Commons Digital, Culture, Media, and Sport (DCMS) Committee, which is investigating Facebook for user data mishaps and the social network’s role in the Brexit referendum, took the unusual move of seizing Kramer’s papers by serving him several orders at his London hotel room. Believing them to be pertinent to his body’s ongoing investigation of the company, DCMS chair Damian Collins has also told Facebook that it may publish the documents unredacted, flouting the US protective seal. Ted Kramer, the managing director of Six4Three, an app developer that made software to locate Facebook photos of people wearing bikinis, said in a filing in California superior court on Monday that a parliamentary committee forced him to hand over documents while he was in London on business last week. 
Those documents, which his legal team had obtained in discovery in an ongoing lawsuit against Facebook, included the email correspondence of company executives discussing its relationship with developers and user data, and were subject to a court’s protective order. In accordance with that order, Kramer was not supposed to have access to the papers at all. In a strange twist to an already bizarre case, a software developer who is suing Facebook, and whose sealed documents related to the lawsuit were seized during a trip to the United Kingdom, has now named a prominent journalist, Carole Cadwalladr, in response to a question from a California court as to how authorities knew he was in the country. Although Kramer concedes he does not know how the DCMS committee knew where he was staying in London, he suggests in a 19-page court filing made on Monday that Carole Cadwalladr, a freelance reporter at British outlet the Observer, had tipped off the committee to his hotel address so that it could obtain the documents. Kramer and his lawyers did not respond to his request for comment. A spokesperson for Damian Collins declined to comment. Following the publication of this story, Cadwalladr declined to comment to BuzzFeed News on Tuesday. A spokesperson for the Guardian and the Observer also declined to comment when asked about Cadwalladr's alleged involvement with the seizure of the Facebook documents. Kramer’s case has received attention of late because of its potential to reveal Facebook’s internal discussions around user data sharing and privacy. After the Cambridge Analytica debacle, which Cadwalladr broke in the Observer while working with the New York Times and the Guardian, government bodies around the world, including the UK’s DCMS committee, have questioned whether the world’s largest social networking company mishandled or compromised users’ information. The filing from Six4Three's legal team details Kramer's contact with Cadwalladr, suggesting the two first began corresponding in May about Six4Three's case. Around that time, Six4Three’s camp had shopped its story around to several outlets including BuzzFeed News, and suggested that outlets could file amicus briefs in an attempt to remove some of the documents from the California court’s protective order. CNN and the Guardian, the Observer's sister publication, filed a joint brief. BuzzFeed News did not. In August in California, Kramer allegedly met with Cadwalladr who, according to the filing, "informed him that she would like to raise Six4Three’s case with Damian Collins." The court document alleges that Collins and Kramer corresponded in October and November and that Collins noted "he had received from Ms. Cadwalladr information regarding certain categories of documents filed in the case" and asked Kramer to provide the confidential information. Kramer, according to the filing, denied numerous requests from Collins’ office for weeks. The filing alleges that on Nov. 17, 2018, during a phone conversation with Cadwalladr, Kramer told the reporter he would be on an unrelated business trip to London. According to the document, "she suggested they meet for her to receive another update on the case. Mr. Kramer agreed to meet with her at his hotel and sent her a calendar invitation with the address of the hotel." Two days later, upon Kramer's arrival in England, he received an email from Collins' office ordering him to produce documents related to his case by 5 p.m. the next day. 
The following morning, Kramer received a hard copy of Collins' office's order at his hotel room. According to the filing, Kramer did not know “how the DCMS learned where he was staying in London." Yet in response to a question from the California court as to how DCMS was "made aware that Mr. Kramer and the documents are both in the UK at the present," the Six4Three director stated that he "communicated the name of his hotel only to Ms. Cadwalladr” and also indicated to her in a meeting that the papers were located in a cloud-based file storage system. Kramer and lawyers admit, however, that they did not know how DCMS actually learned of his location. Despite numerous replies from Kramer’s legal team that he was unable to comply with Collins’ order because of the California court’s restrictions, the filing alleges that Collins and the DCMS continued to press, eventually sending the House of Commons’ sarjeant at arms to serve him an order to produce documents, in a highly unusual move. The second order, according to the language of the court filing, suggested that Kramer “could be considered to be acting in contempt and face investigation and sanction by the House.” After continuing to refuse to comply, Kramer was issued a third order by the DCMS, which stated that “the process of investigation will commence.” Kramer, the filing says, was "shaken" by the parliamentary inquiry and the third order and was concerned that he might be barred from leaving the country, so without consulting his attorneys, he went to meet Collins. During that meeting, which the filing suggests lasted more than two hours, Kramer "panicked" after hearing the penalties associated with noncompliance. The court document says Kramer "opened his computer, took out a USB drive, and went onto the local dropbox folder containing Six4Three’s documents" pertaining to his lawsuit against Facebook. Kramer left the country shortly following the meeting, the filing says. Cadwalladr broke the news about the parliamentary seizure of the documents. “Parliament has used its legal powers to seize internal Facebook documents in an extraordinary attempt to hold the US social media giant to account after chief executive Mark Zuckerberg repeatedly refused to answer MPs’ questions,” she wrote on Saturday in the Observer. In that story, Collins noted that he took the extreme measures, which carried the possibility of imprisonment for Kramer, because they were “in uncharted territory.” “This is an unprecedented move but it’s an unprecedented situation,” he told Cadwalladr for her story. “We’ve failed to get answers from Facebook and we believe the documents contain information of very high public interest.” Zuckerberg, who has been invited on multiple occasions to testify in front of parliament, has so far declined, infuriating Collins and other members of the DCMS committee who believe that he is not taking their concerns seriously. On Tuesday, Facebook vice president of policy solutions Richard Allan is expected to testify in front of members of nine international parliaments in London. Facebook did not immediately respond to a request for comment. Collins has also said publicly that his committee reserves the right to publish any of the documents it obtained from Kramer “if we choose as part of our inquiry,” which is meant to examine the role of fake news. 
In a letter to Facebook on Sunday, Collins cited parliamentary privilege as defense for any possible publication of the documents, and suggested the seal on the documents is a matter for the California court and not his committee. Cadwalladr has been celebrated in the journalistic community for her role in breaking the Cambridge Analytica scandal for the Observer — the Guardian’s Sunday newspaper. Earlier this year, the 48-year-old won Britain’s Orwell prize for journalism, while also winning a series of other prestigious journalism awards for the reporting around the scandal. In June, Cadwalladr appeared at a hearing of MEPs alongside her source for the story, whistleblower Christopher Wylie, and the UK’s Information Commissioner to talk about “personal data protection.” But last month, BuzzFeed News revealed Cadwalladr had taken the extraordinary step of threatening to injunct Channel 4 news over disagreements relating to the TV broadcaster’s own Cambridge Analytica investigation. The lawyers acting on behalf of Cadwalladr unsuccessfully demanded that Channel 4 hand over sources related to the investigation before the broadcaster’s undercover documentary got to air. The legal threats were not pursued and Cadwalladr, Channel 4 News, and the New York Times all coordinated the publication of the Cambridge Analytica story for release in late March. Cadwalladr claimed there were there were “source protection concerns” motivating her legal action against the media organization. “I pay tribute to the journalistic skill of Channel 4 News and the New York Times and am grateful for the contributions they made,” she said in a statement. “It’s certainly true that collaborations are not easy and there were difficulties and frustrations on both sides. For my part, I chose to put these aside in order not to distract from the far more important issues at stake.” ||||| LONDON — Facebook used the mountains of data it collected on users to favor certain partners and punish rivals, giving companies such as Airbnb and Netflix special access to its platform while cutting off others that it perceived as threats. The tactics came to light on Wednesday from internal Facebook emails and other company documents released by a British parliamentary committee that is investigating online misinformation. The documents spotlight Facebook’s behavior from roughly 2012 to 2015, a period of explosive growth as the company navigated how to manage the information it was gathering on users and debated how best to profit from what it was building. The documents show how Facebook executives treated data as the company’s most valuable resource and often wielded it to gain a strategic advantage. Mark Zuckerberg, Facebook’s chief executive, and Sheryl Sandberg, the chief operating officer, were intimately involved in decisions aimed at benefiting the social network above all else and keeping users as engaged as possible on the site, according to emails that were part of the document trove. In one exchange from 2012 when Mr. Zuckerberg discussed charging developers for access to user data and persuading them to share their data with the social network, he wrote: “It’s not good for us unless people also share back to Facebook and that content increases the value of our network. So ultimately, I think the purpose of platform — even the read side — is to increase sharing back into Facebook.”
– Even after Facebook agreed to restrict access to user data, the social media giant gave certain companies special access to that data, according to a trove of documents released by a British parliamentary committee Wednesday. The emails and other internal Facebook documents from 2012 to 2015 show that Facebook entered into agreements with companies including Airbnb, Lyft, and Netflix allowing those companies special access, the New York Times reports. Motherboard calls the nearly 250 pages of documents "devastating" for Facebook, but the company says in a statement that the documents were gathered as part of a "baseless case" and "are only part of the story and are presented in a way that is very misleading without additional context." The statement adds, "The facts are clear: we’ve never sold people’s data." BuzzFeed last week published an extensive explainer on the documents: They were gathered by Ted Kramer, the managing director of an app developer that is suing Facebook in California, as part of discovery in that lawsuit; he traveled to the UK on business while they were in his possession, and the House of Commons Digital, Culture, Media, and Sport Committee, which is investigating Facebook over user data issues, seized the documents from him. It was Damian Collins, the chair of that committee, who then published the documents; he used parliament's sergeant-at-arms to obtain the documents as well as the authority to publish them. The documents also show Facebook debating whether to shut down access to user data for competitors and whether to offer more access to app developers that advertised with Facebook.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Captive Primate Safety Act''. SEC. 2. ADDITION OF NONHUMAN PRIMATES TO DEFINITION OF PROHIBITED WILDLIFE SPECIES. Section 2(g) of the Lacey Act Amendments of 1981 (16 U.S.C. 3371(g)) is amended by inserting before the period at the end ``or any nonhuman primate''. SEC. 3. CAPTIVE WILDLIFE AMENDMENTS. (a) Prohibited Acts.--Section 3 of the Lacey Act Amendments of 1981 (16 U.S.C. 3372) is amended-- (1) in subsection (a)-- (A) in paragraph (2)-- (i) in subparagraph (A), by inserting ``or'' after the semicolon; (ii) in subparagraph (B)(iii), by striking ``; or'' and inserting a semicolon; and (iii) by striking subparagraph (C); and (B) in paragraph (4), by inserting ``or subsection (e)'' before the period; and (2) in subsection (e)-- (A) by striking ``(e)'' and all that follows through paragraph (1) and inserting the following: ``(e) Captive Wildlife Offense.-- ``(1) In general.--It is unlawful for any person to import, export, transport, sell, receive, acquire, or purchase in interstate or foreign commerce, or in a manner substantially affecting interstate or foreign commerce, any live animal of any prohibited wildlife species.''; and (B) in paragraph (2)-- (i) by striking so much as precedes subparagraph (A) and inserting the following: ``(2) Limitation on application.--Paragraph (1) does not apply to any person who--''. (ii) in subparagraph (A), by inserting before the semicolon at the end ``and does not allow direct contact between the public and prohibited wildlife species''; (iii) in subparagraph (B), by striking ``State-licensed wildlife rehabilitator,''; (iv) in subparagraph (C)-- (I) in clauses (ii) and (iii), by striking ``animals listed in section 2(g)'' each place it appears and inserting ``prohibited wildlife species''; (II) in clause (iv), by striking ``animals'' and inserting ``prohibited wildlife species''; and (III) by striking ``or'' after the semicolon at the end; (v) in subparagraph (D)-- (I) by striking ``animal'' each place it appears and inserting ``prohibited wildlife species''; and (II) by striking the period at the end and inserting ``; or''; and (vi) by adding at the end the following: ``(E) is transporting a nonhuman primate solely for the purpose of assisting an individual who is permanently disabled with a severe mobility impairment, if-- ``(i) the nonhuman primate is a single animal of the genus Cebus; ``(ii) the nonhuman primate was obtained from, and trained at, a licensed nonprofit organization that before July 18, 2008 was exempt from taxation under section 501(a) of the Internal Revenue Code of 1986 and described in sections 501(c)(3) and 170(b)(1)(A)(vi) of such Code on the basis that the mission of the organization is to improve the quality of life of severely mobility-impaired individuals; ``(iii) the person transporting the nonhuman primate is a specially trained employee or agent of a nonprofit organization described in clause (ii) that is transporting the nonhuman primate to or from a designated individual who is permanently disabled with a severe mobility impairment; ``(iv) the person transporting the nonhuman primate carries documentation from the applicable nonprofit organization that includes the name of the designated individual referred to in clause (iii); ``(v) the nonhuman primate is transported in a secure enclosure that is appropriate for that species; ``(vi) the nonhuman primate has no contact with any animal or member of the public, other than the designated individual referred to in 
clause (iii); and ``(vii) the transportation of the nonhuman primate is in compliance with-- ``(I) all applicable State and local restrictions regarding the transport; and ``(II) all applicable State and local requirements regarding permits or health certificates.''. (b) Civil Penalties.--Section 4(a) of the Lacey Act Amendments of 1981 (16 U.S.C. 3373(a)) is amended-- (1) in paragraph (1), by inserting ``(e),'' after ``subsections (b), (d),''; and (2) in paragraph (1), by inserting ``, (e),'' after ``subsection (d)''. (c) Criminal Penalties.--Section 4(d) of the Lacey Act Amendments of 1981 (16 U.S.C. 3373(d)) is amended-- (1) in subparagraphs (A) and (B) of paragraph (1) and in the first sentence of paragraph (2), by inserting ``(e),'' after ``subsections (b), (d),'' each place it appears; and (2) in paragraph (3), by inserting ``, (e),'' after ``subsection (d)''. (d) Effective Date; Regulations.-- (1) Effective date.--Subsections (a) through (c), and the amendments made by those subsections, shall take effect on the earlier of-- (A) the date of promulgation of regulations under paragraph (2); and (B) the expiration of the period referred to in paragraph (2). (2) Regulations.--Not later than 180 days after the date of enactment of this Act, the Secretary of the Interior shall promulgate regulations implementing the amendments made by this section. SEC. 4. APPLICABILITY PROVISION AMENDMENT. Section 3 of the Captive Wildlife Safety Act (117 Stat. 2871; Public Law 108-191) is amended-- (1) in subsection (a), by striking ``(a) In General.-- Section 3'' and inserting ``Section 3''; and (2) by striking subsection (b). SEC. 5. REGULATIONS. Section 7(a) of the Lacey Act Amendments of 1981 (16 U.S.C. 3376(a)) is amended by adding at the end the following: ``(3) The Secretary shall, in consultation with other relevant Federal and State agencies, promulgate regulations to implement section 3(e).''.
Captive Primate Safety Act - (Sec. 2) Amends the Lacey Act Amendments of 1981 to: (1) make nonhuman primates a prohibited wildlife species; and (2) make it unlawful to import, export, transport, sell, receive, acquire, or purchase them in interstate or foreign commerce. (Sec. 3) Modifies exceptions to restrictions on such transactions in prohibited wildlife species, making them inapplicable to a person who: (1) is a licensed and inspected person only if the person does not allow direct contact between the public and prohibited wildlife species, or (2) is transporting under certain conditions a single primate of the genus Cebus that was obtained from and trained by a charitable organization to assist a permanently disabled individual with a severe mobility impairment. Removes state-licensed wildlife rehabilitators from the list of entities exempted from the restrictions. Sets forth civil and criminal penalties for violations of the requirements of this Act.
A top-secret weapon being developed by the US military was destroyed four seconds after its launch from a test range in Alaska early on Monday after controllers detected a problem with the system, the Pentagon said. The Advanced Hypersonic Weapon is part of a program to create a missile that will destroy targets anywhere on Earth within hours - traveling at speeds in excess of 3,500 miles-an-hour or Mach 5. The mission was aborted to ensure public safety, and no one was injured in the incident, which occurred shortly after 4 am EDT at the Kodiak Launch Complex in Alaska, said Maureen Schumann, a spokeswoman for the U.S. Defense Department. 'We had to terminate,' Schumann said. 'The weapon exploded during takeoff and fell back down in the range complex,' she added. The incident caused an undetermined amount of damage to the launch facility 25 miles from the city of Kodiak, Schumann said. Scroll down for video Detonation: The moment the weapon exploded is captured by Scott Wight and shows the horizon from Cape Greville in Chiniak, Alaska Officials said that the weapon system was not carrying a warhead when it was aborted. The rocket carrying the Advanced Hypersonic Weapon was terminated near a pad of the Kodiak Launch Complex on Kodiak Island shortly after liftoff, spokeswoman Maureen Schumann said. After an anomaly was detected, testers made the decision to destroy the rocket to ensure public safety, Schumann said. "It came back down on the range complex," she said. "Fortunately, no people on the ground were injured. There was damage, but I'm not sure of the extent of it at this time." The launch complex is about 25 miles from the city of Kodiak. Witnesses watched the rocket lift off at 12:25 am, quickly head nose-down and explode, KMXT radio reported. STRIKE ANYWHERE ON EARTH WITHIN HOURS: RACE TO CREATE WORLD'S MOST LETHAL WEAPON According to the Washington Free Beacon, the Advanced Hypersonic Weapon is being developed as a joint project between the Army Space and Missile Defense Command and the Army Forces Strategic Command to form the Pentagon's Prompt Global Strike initiative. The Defense Department wants a weapon that can strike targets anywhere in the world within hours using a conventionally armed missile traveling at Mach 5 or 3,500 miles an hour. The missile would be used to hit terrorist targets identified on satellites thousands of miles away or weapons of mass destruction being moved in open ground that only have a small window within which to strike. The disastrous abort of the Advanced Hypersonic Weapon in Alasak follows a failed test by the Chinese military of a similar system. The Wu-14 missile is being developed by China to launch nuclear warheads or to strike ships and is being designed to travel at speeds of up to Mach 10 or 8,000 miles-an-hour. Hong Kong’s South China Morning Post, said that the Chinese test of the Wu-14 three weeks ago failed in similar circumstances to the American test. According to the Washington Free Beacon, Russia too is attempting to develop its own hypersonic weapon. Source: Washington Free Beacon Kodiak photographer Scott Wight watched the launch from Cape Greville in Chiniak, about a dozen miles from the launch site. He described the explosion as quite loud and scary. A fire afterward burned brightly. The rocket was the booster for the Advanced Hypersonic Weapon, a glide vehicle designed to quickly reach a target. 
The design is one of several being tested by the Army under the umbrella of the Conventional Prompt Global Strike program, Schumann said. "It's a concept that will allow the Department of Defense to engage any target anywhere in the world in less than an hour," she said. The first flight test of the Advanced Hypersonic Weapon on November 17, 2011, flew the weapon from Hawaii to Kwajalein Atoll in the South Pacific. The test Monday was designed to enhance previous ground testing, modeling and simulation, Schumann said. Traveling at hypersonic speed, the glider also was aimed at Kwajalein and was supposed to cover the 3,500 miles in less than an hour, Schumann said. Experimental: This US Defense Advanced Research Projects Agency artist's rendering shows the Falcon Hypersonic Technology Vehicle 2 (HTV-2). The US military had to detonate a hypersonic weapon seconds after lift-off on August 25, 2014 due to a technical problem, cutting short a flight test for the experimental project, officials said on Monday. Strike capability: The Falcon HTV-2 will be launched on a rocket into space and then glide back down to Earth. The 2011 test flight lasted only nine minutes before being deliberately crashed as a safety measure due to technical difficulties. It was a setback for the US program, which some analysts see as countering the growing development of ballistic missiles by Iran and North Korea but others say is part of an arms race with China, which tested a hypersonic system in January. Riki Ellison, founder of the nonprofit Missile Defense Advocacy Alliance, said he did not think Monday's failure would lead to the program's termination. 'This is such an important mission and there is promise in this technology,' he said. He said officials aborted the mission after detecting a fault in the computers. Anthony Cordesman, a defense analyst at the Center for Strategic and International Studies think tank, said the technology was best suited for use against smaller, less-developed countries with missiles. 'The United States has never assumed that these ... are going to be systems that you can use against a power like China by themselves,' he said. 'For a country like Iran or North Korea, they could be a very significant deterrent.' The rocket carrying the Advanced Hypersonic Weapon was terminated near a pad of the Kodiak Launch Complex (pictured) on Kodiak Island shortly after liftoff. James Acton, a defense analyst at the Carnegie Endowment for International Peace, said the Pentagon had never been clear about the mission for the weapon, with some viewing it as an effective tool against terrorists and others seeing it as a counter to China or Iran and North Korea. While hypersonic weapons are unlikely to be fielded for a decade, Acton said the fact that Washington and Beijing were both testing the weapons indicated there was a real potential for an arms race. 'I believe the US program is significantly more sophisticated than the Chinese program,' he said. The weapon, known as the Advanced Hypersonic Weapon, was developed by Sandia National Laboratory and the US Army. Schumann said it included a glide body mounted on a three-stage, solid-propellant booster system known as STARS, for Strategic Target System. In a previous test in November 2011, the craft had successfully flown from Hawaii to the Kwajalein Atoll in the Marshall Islands, she said.
On Monday, it was supposed to fly from Alaska to the Kwajalein Atoll. ||||| WASHINGTON A hypersonic weapon being developed by the U.S. military was destroyed four seconds after its launch from a test range in Alaska early on Monday after controllers detected a problem with the system, the Pentagon said. The weapon is part of a program to create a missile that will destroy targets anywhere on Earth within an hour of getting data and permission to launch. The mission was aborted to ensure public safety, and no one was injured in the incident, which occurred shortly after 4 a.m. EDT at the Kodiak Launch Complex in Alaska, said Maureen Schumann, a spokeswoman for the U.S. Defense Department. "We had to terminate," Schumann said. "The weapon exploded during takeoff and fell back down in the range complex," she added. The incident caused an undetermined amount of damage to the launch facility, Schumann said. It was a setback for the U.S. program, which some analysts see as countering the growing development of ballistic missiles by Iran and North Korea but others say is part of an arms race with China, which tested a hypersonic system in January. Riki Ellison, founder of the nonprofit Missile Defense Advocacy Alliance, said he did not think Monday's failure would lead to the program's termination. "This is such an important mission and there is promise in this technology," he said. He said officials aborted the mission after detecting a fault in the computers. Anthony Cordesman, a defense analyst at the Center for Strategic and International Studies think tank, said the technology was best suited for use against smaller, less-developed countries with missiles. "The United States has never assumed that these ... are going to be systems that you can use against a power like China by themselves," he said. "For a country like Iran or North Korea, they could be a very significant deterrent." James Acton, a defense analyst at the Carnegie Endowment for International Peace, said the Pentagon had never been clear about the mission for the weapon, with some viewing it as an effective tool against terrorists and others seeing it as a counter to China or Iran and North Korea. While hypersonic weapons are unlikely to be fielded for a decade, Acton said the fact that Washington and Beijing were both testing the weapons indicated there was a real potential for an arms race. "I believe the U.S. program is significantly more sophisticated than the Chinese program," he said. The weapon, known as the Advanced Hypersonic Weapon, was developed by Sandia National Laboratory and the U.S. Army. Schumann said it included a glide body mounted on a three-stage, solid-propellant booster system known as STARS, for Strategic Target System. In a previous test in November 2011, the craft had successfully flown from Hawaii to the Kwajalein Atoll in the Marshall Islands, she said. On Monday, it was supposed to fly from Alaska to the Kwajalein Atoll. Acton said no conclusions could be drawn about the weapon based on Monday's accident because the launcher detonated before the glide vehicle could be deployed. (Reporting by Andrea Shalal and David Alexander; Editing by David Storey and Leslie Adler)
– The US military tested a hypersonic weapon in Alaska yesterday, and things didn't go according to plan. Within four seconds of its launch, the weapon was destroyed by authorities due to a problem, Reuters reports; an expert says it was a computer issue. For public safety reasons, "we had to terminate," says a Pentagon rep. "The weapon exploded during takeoff and fell back down in the range complex," where it resulted in some damage to the launch area, but no injuries. A missile defense advocate doubts the failed test will end the program. "This is such an important mission and there is promise in this technology," he says. "It's a concept that will allow the Department of Defense to engage any target anywhere in the world in less than an hour," the Defense rep says. The resulting missile would travel faster than 3,500mph, the Daily Mail reports. Some experts think it's being developed with Iran and North Korea's ballistic missile development in mind; others point to a US-China arms race, Reuters notes. China ran a similar test this year. (Click to read about new Navy weapons that sound like something out of Star Wars.)
giant cell tumor ( gct ) of bone usually occurs in the epiphyses of long bones like the distal femur , proximal tibia , distal radius and proximal humerus1 . craniofacial bone involvement is rare but has been reported to occur in the mandible , temporal bone , maxilla , occipital and sphenoid23 . the incidence of gct varies between regions and is highest amongst the asian population , especially the chinese and japanese where they account for nearly 15% of all primary bone tumors4 . although generally thought to be benign tumors , gcts are known to be locally aggressive at times , with local recurrence occurring in about 25%-35% of patients3 . this is a case of a 22-year - old female who was previously well till she experienced swelling and pain over her right temporal region for 18 months . she had episodes of jaw locking and could hear clicking sounds when chewing or talking . she sought medical help but was treated as temporomandibular disorder ( tmd ) and was given analgesics , physiotherapy and an occlusal splint . the pain reduced after the prescribed treatment and the swelling was inconspicuous initially but increased in size and hence further investigations were carried out . examination revealed a 2 x 2 cm swelling over the right temporal region which was firm and tender on palpation . upon opening her mouth , her jaw deviated to the right . computed tomography ( ct ) scan showed an aggressive erosive tumor of the squamous temporal bone extending to the right temporomandibular joint ( tmj ) . a magnetic resonance imaging ( mri ) of the brain and tmj was performed which showed an extra - axial mass at the right middle cranial fossa involving the right tmj , measuring 4.2 x 1.6 x 2.6 cm . during surgery , we used a modified frontotemporal flap for access to the temporal bone and tmj . an intraoral right sulcular incision along the ascending ramus of the mandible was used for access to the coronoid process . intraoperatively , the tumor was seen to invade the temporal bone , mandibular condyle , tmj and overlying temporalis muscle but did not invade the temporal dura . the patient underwent right partial temporal craniectomy , removal of part of the mandibular condyle and zygomatic arch , excision of the coronoid process , and excision of the tmj . histopathological examination of the tumor revealed fibrous connective tissue with a few foci of numerous multinucleated giant cells , histiocytes , neutrophils , lymphocytes and occasional foam cells . gcts are usually benign but have been known to be locally aggressive and occasionally metastasize , especially to the lung789 . very rarely , gcts may turn into sarcoma4 . the usual sites of occurrence are the epiphysis of long bones and less than 2% occur in the head and neck region where the usual sites are the sphenoid and temporal bones10 . portions of the temporal bone form by endochondral ossification , which is the same way epiphyses of long bones are formed and thus it is possible that the temporal bone is more prone to develop gcts because of this11 . patients usually present with progressive pain and swelling over the site . in the temporal region , hearing impairment and facial nerve paralysis can occur due to compression or local invasion from the tumor12 . involvement of the tmj causes jaw locking , deviation of mandibular movement and clicking sounds . we would like to highlight the danger of treating patients as tmd before a precise diagnosis is made .
a thorough history and examination should be done and a list of differential diagnoses should be considered , including tumors14 . tmd may present with pain and swelling at the temporal area as in this case , but swelling in tmd is different and not common . some patients may still have temporal swelling but the swelling should be softer on palpation and not persistent in size as compared to tumors . the swelling seen in tmd may be present during and after chewing and should decrease gradually between meals . gcts appear lytic , subarticular , eccentrically located and usually lack a sclerotic rim on radiographs . ct will rarely provide information that helps physicians arrive at a diagnosis but may be useful in delineating tumor extent , evaluation of cortical integrity and determination of tumor recurrence15 . on mri , gcts have low signal intensity on t1-weighted images , heterogeneous high signal intensity on t2-weighted images and heterogeneous enhancement with gadolinium . mri is the preferred imaging modality for gcts , as the diagnostic accuracy of mri is high and it can detect soft tissue and intra - articular extension16 . macroscopically , most gcts are soft and fleshy and appear grey to light red or dark reddish - brown . there may be areas of cyst , hemorrhage or fibrous septa formation as well . the margins are usually ill defined , which explains the high percentage of recurrence if only curettage is done417 . gct is a neoplasm of stromal - like neoplastic cells that are able to recruit macrophage and multinucleate osteoclast - like giant cells . histologically , gcts are characterized by the finding of large osteoclast - like multinucleated giant cells scattered among a background of plump or spindle shaped mononuclear stromal cells . the stromal cells may be mitotically active but should not have abnormal or atypical mitotic cells . these giant cells have approximately 10 - 20 nuclei per cell , but may have 100 or more nuclei . there may be reactive bone formation usually at the periphery and reactive changes such as reactive fibrosis , necrosis , hemorrhage and xanthogranulomatous inflammation . this is likely the reason the ultrasound - guided fine needle aspiration biopsy result showed a xanthogranuloma . gcts often have abundance of neovascularization , which explains the hemorrhages that are frequently seen within such tumors418 . it is important to consider lesions such as giant cell reparative granuloma , hyperparathyroidism , non - ossifying fibroma , chondroblastoma , solid areas of aneurysmal bone cyst , malignant fibrous histiocytoma and osteogenic sarcoma4 . surgery with the aim of wide excision is the mainstay of treatment for gcts , preferably with a wide margin of normal tissue81619 . radiotherapy is reserved for cases where wide excision can not be achieved or for patients who are not fit for surgery . irradiation - induced sarcomatous transformation is a known risk with orthovoltage radiation , but interestingly there is less risk with current use of megavoltage radiation19 . denosumab , a receptor activator of nuclear factor kappa - b ligand ( rankl ) inhibitor has been approved for use in recurrent and unresectable gcts20 . high dose dexamethasone therapy had been used effectively to rapidly reduce the size of these tumors but unfortunately , discontinuation of steroids is associated with re - growth in nearly every case4 .
giant cell tumor ( gct ) of the craniofacial bones has been reported but they are not common . this tumor occurs more often in women than in men and predominantly affects patients around the third to fifth decade of life . gcts are generally benign but can be locally aggressive as well . we report a case of gct involving the temporomandibular joint ( tmj ) , which was initially thought to be temporomandibular disorder ( tmd ) . a 22-year - old female presented with swelling and pain over the right temporal region for 18 months associated with jaw locking and clicking sounds . on examination , her jaw deviated to the right during opening and there was a 2 x 2 cm swelling over the right temporal region . despite routine treatment for tmd , the swelling increased in size . computed tomography and magnetic resonance imaging of the brain and tmj revealed an erosive tumor of the temporal bone involving the tmj which was displacing the temporal lobe . surgical excision was done and the tumor removed completely . histopathological examination was consistent with a gct . no clinical or radiological recurrence was detected 10 months post - surgery .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Defense Production Act Reauthorization of 2003''. SEC. 2. REAUTHORIZATION OF DEFENSE PRODUCTION ACT OF 1950. (a) In General.--The 1st sentence of section 717(a) of the Defense Production Act of 1950 (50 U.S.C. App. 2166(a)) is amended-- (1) by striking ``sections 708'' and inserting ``sections 707, 708,''; and (2) by striking ``September 30, 2003'' and inserting ``September 30, 2004''. (b) Authorization of Appropriations.--Section 711(b) of the Defense Production Act of 1950 (50 U.S.C. App. 2161(b)) is amended by striking ``through 2003'' and inserting ``through 2004''. SEC. 3. RESOURCE SHORTFALL FOR RADIATION-HARDENED ELECTRONICS. (a) In General.--Notwithstanding the limitation contained in section 303(a)(6)(C) of the Defense Production Act of 1950 (50 U.S.C. App. 2093(a)(6)(C)), the President may take actions under section 303 of the Defense Production Act of 1950 to correct the industrial resource shortfall for radiation-hardened electronics, to the extent that such Presidential actions do not cause the aggregate outstanding amount of all such actions to exceed $200,000,000. (b) Report by the Secretary.--Before the end of the 6-month period beginning on the date of the enactment of this Act, the Secretary of Defense shall submit a report to the Committee on Banking, Housing, and Urban Affairs of the Senate and the Committee on Financial Services of the House of Representatives describing-- (1) the current state of the domestic industrial base for radiation-hardened electronics; (2) the projected requirements of the Department of Defense for radiation-hardened electronics; (3) the intentions of the Department of Defense for the industrial base for radiation-hardened electronics; and (4) the plans of the Department of Defense for use of providers of radiation-hardened electronics beyond the providers with which the Department had entered into contractual arrangements under the authority of the Defense Production Act of 1950, as of the date of the enactment of this Act. SEC. 4. CLARIFICATION OF PRESIDENTIAL AUTHORITY. Subsection (a) of section 705 of the Defense Production Act of 1950 (50 U.S.C. App. 2155(a)) is amended by inserting after the end of the 1st sentence the following new sentence: ``The authority of the President under this section includes the authority to obtain information in order to perform industry studies assessing the capabilities of the United States industrial base to support the national defense.''. SEC. 5. CRITICAL INFRASTRUCTURE PROTECTION AND RESTORATION. Section 702 of the Defense Production Act of 1950 (50 U.S.C. App. 2152) is amended-- (1) by redesignating paragraphs (3) through (17) as paragraphs (4) through (18), respectively; (2) by inserting after paragraph (2) the following new paragraph: ``(3) Critical infrastructure.--The term `critical infrastructure' means any systems and assets, whether physical or cyber-based, so vital to the United States that the degradation or destruction of such systems and assets would have a debilitating impact on national security, including, but not limited to, national economic security and national public health or safety.''; and (3) in paragraph (14) (as so redesignated by paragraph (1) of this section), by inserting ``and critical infrastructure protection and restoration'' before the period at the end of the last sentence. SEC. 6. REPORT ON CONTRACTING WITH MINORITY- AND WOMEN-OWNED BUSINESSES. 
(a) Report Required.--Before the end of the 1-year period beginning on the date of the enactment of this Act, the Secretary of Defense shall submit a report to the Committee on Banking, Housing, and Urban Affairs of the Senate and the Committee on Financial Services of the House of Representatives on the extent to which contracts entered into during the fiscal year ending before the end of such 1-year period under the Defense Production Act of 1950 have been contracts with minority- and women-owned businesses. (b) Contents of Report.--The report submitted under subsection (a) shall include the following: (1) The types of goods and services obtained under contracts with minority- and women-owned businesses under the Defense Production Act of 1950 in the fiscal year covered in the report. (2) The dollar amounts of such contracts. (3) The ethnicity of the majority owners of such minority- and women-owned businesses. (4) A description of the types of barriers in the contracting process, such as requirements for security clearances, that limit contracting opportunities for minority- and women-owned businesses, together with such recommendations for legislative or administrative action as the Secretary of Defense may determine to be appropriate for increasing opportunities for contracting with minority- and women-owned businesses and removing barriers to such increased participation. (c) Definitions.--For purposes of this section, the terms ``women- owned business'' and ``minority-owned business'' have the meanings given such terms in section 21A(r) of the Federal Home Loan Bank Act, and the term ``minority'' has the meaning given such term in section 1204(c)(3) of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989. SEC. 7. COMMERCE RESPONSIBILITIES REGARDING CONSULTATION WITH FOREIGN NATIONS. (a) Offsets in Defense Procurements.--Section 123(c) of the Defense Production Act Amendments of 1992 (50 U.S.C. App. 2099 note) is amended to read as follows: ``(c) Negotiations.-- ``(1) Interagency team.--It is the policy of Congress that the President shall designate the Secretary of Commerce to lead, in coordination with the Secretary of State, an interagency team to negotiate with foreign nations the elimination of offset arrangements, industrial participation, or similar arrangements in defense procurement. The President shall transmit an annual report on the results of these negotiations to the Congress as part of the report required under section 309(a) of the Defense Production Act of 1950. ``(2) Recommendations for modifications.--Pending the elimination of the arrangements described in paragraph (1), the interagency team shall submit to the Secretary of Defense any recommendations for modifications of a memorandum of understanding entered into under section 2531 of title 10, United States Code, or a related agreement that the team considers to be an appropriate response to a contractual offset, industrial participation, or similar arrangement that is entered into under the policy to which section 2532 of such title applies. 
``(3) Notification to ustr regarding offsets.--If the interagency team determines that a foreign country is pursuing a policy on contractual offset arrangements, industrial participation arrangements, or similar arrangements in connection with the purchase of defense equipment or supplies that requires compensation for the purchase in the form of nondefense or dual-use equipment or supplies in a value greater than the defense equipment or supplies, the team shall notify the United States Trade Representative of that determination. Upon receipt of the notification, the United States Trade Representative shall treat the policy and each such arrangement as an act, policy, or practice by the foreign country that is unjustifiable and burdens or restricts United States commerce for purposes of section 304(a)(1) of the Trade Act of 1974 (19 U.S.C. 2414(a)(1)), and shall take appropriate action under title III of such Act with respect to such country.''. (b) Report on Effects of Foreign Contracts on Domestic Contractors.--Section 309(d)(1) of the Defense Production Act of 1950 (50 U.S.C. App. 2099(d)(1)) is amended-- (1) in subparagraph (D), by striking ``and'' at the end; and (2) in subparagraph (E), by striking the period at the end and inserting the following: ``; and ``(F) a compilation of data delineating-- ``(i) the impact of foreign contracts that have been awarded through offsets, industrial participation agreements, or similar arrangements, on domestic prime contractors, and at least the first three tiers of subcontractors; and ``(ii) details of contracts with foreign 1st, 2nd, and 3rd tier subcontractors awarded through offsets, industrial participation agreements, or similar arrangements.''.
Defense Production Act Reauthorization of 2003 - Amends the Defense Production Act of 1950 to extend its expiration date and authorization of appropriations through FY 2004. Authorizes the President, under such Act, to: (1) correct the industrial shortfall for radiation-hardened electronics to the extent that such action does not cause the aggregate outstanding amount of all such actions to exceed $200 million; and (2) obtain information in order to perform industry studies assessing capabilities of the U.S. industrial base to support the national defense. Defines "critical infrastructure." Directs the Secretary of Defense to report to the House Financial Services Committee on the extent to which contracts entered into under such Act during the one-year period after the enactment of this Act have been contracts with minority- and women-owned businesses. States as the policy of Congress that the President shall designate the Secretary of Commerce to lead an interagency team to: (1) negotiate with foreign nations the elimination of offset arrangements, industrial participation, or similar arrangements in defense procurement; (2) make recommendations for modifications of memoranda of understanding with respect to such arrangements, pending their termination; and (3) notify the United States Trade Representative if a foreign country pursues a policy of offset or similar arrangements in connection with the purchase of defense equipment or supplies.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Alaska Native Veterans Land Allotment Equity Act''. SEC. 2. OPEN SEASON FOR CERTAIN ALASKA NATIVE VETERANS FOR ALLOTMENTS. Section 41 of the Alaska Native Claims Settlement Act (43 U.S.C. 1629g) is amended-- (1) in subsection (a)-- (A) in the subsection heading, by striking ``In General'' and inserting ``Alaska Native Veteran Allotments''; (B) by striking paragraphs (1) through (4) and inserting the following: ``(1) Allotments.-- ``(A) Eligible recipients.--Any person described in paragraph (1) or (2) of subsection (b) shall be eligible to receive an allotment under the Act of May 17, 1906 (34 Stat. 197, chapter 2469) (as in effect before December 18, 1971), of not more than 2 parcels of Federal land, the total area of which shall not exceed 160 acres. Any person described in paragraphs (1) and (2) of subsection (b) who, prior to the date on which the Secretary promulgates regulations pursuant to section 3 of the Alaska Native Veterans Land Allotment Equity Act, received an allotment that has a total area of less than 160 acres shall be eligible to receive an allotment under the Act of May 17, 1906 (34 Stat. 197, chapter 2469) (as in effect before December 18, 1971), of not more than 1 parcel of Federal land, the total area of which shall not exceed the difference in acres between 160 acres and the total area of the allotment that the person previously received under the Act. ``(B) Rule of construction.--The civil action styled `Shields v. United States' (698 F.2d 987 (9th Cir. 1983), cert. denied (104 S. Ct. 73 (1983))) shall not be construed to diminish or modify the eligibility of any person described in paragraph (1) or (2) of subsection (b). ``(C) Filing deadline.--An allotment shall be filed for an eligible recipient not later than 3 years after the date on which the Secretary promulgates regulations pursuant to section 3 of the Alaska Native Veterans Land Allotment Equity Act. ``(2) Land available for allotments.-- ``(A) In general.--Subject to subparagraph (C), an allotment under this section shall be selected from land that is-- ``(i)(I) vacant; and ``(II) owned by the United States; ``(ii) selected by, or conveyed to, the State of Alaska, if the State voluntarily relinquishes or conveys to the United States the land for the allotment; or ``(iii) selected by, or conveyed to, a Native Corporation, if the Native Corporation voluntarily relinquishes or conveys to the United States the land for the allotment. ``(B) Relinquishment by native corporation.--If a Native Corporation relinquishes land under subparagraph (A)(iii), the Native Corporation may select appropriate Federal land, as determined by the Secretary, the area of which is equal to the area of the land relinquished by the Native Corporation, to replace the relinquished land. ``(C) Exclusions.--An allotment under this section shall not be selected from land that is located within-- ``(i) a right-of-way of the TransAlaska Pipeline; ``(ii) an inner or outer corridor of such a right-of-way; or ``(iii) a unit of the National Park System, a National Preserve, or a National Monument. ``(D) Rule of construction.--The civil action styled `Shields v. United States' (698 F.2d 987 (9th Cir. 1983), cert. denied (104 S. Ct. 73 (1983))) shall not be construed to limit the land that is eligible for allotment under this paragraph. 
``(3) Alternative allotments.--A person described in paragraph (1) or (2) of subsection (b) who qualifies for an allotment under this section on land described in paragraph (2)(C) may select an alternative allotment from land that is-- ``(A) located within the boundaries of land described in paragraph (2)(C); ``(B)(i)(I) withdrawn under section 11(a)(1)(C); and ``(II) not selected, or relinquished after selection, under section 11(a)(3); ``(ii) contiguous to an outer boundary of land withdrawn under section 11(a)(1)(C); or ``(iii) vacant, unappropriated, and unreserved; and ``(C) not a unit of the National Park System, a National Preserve, or a National Monument.''; and (C) by redesignating paragraphs (5) and (6) as paragraphs (4) and (5), respectively; (2) in subsection (b)-- (A) in paragraph (1), by striking subparagraph (B) and inserting the following: ``(B) is a veteran who served during the period beginning on August 5, 1964, and ending on May 7, 1975.''; (B) by striking paragraph (2) and inserting the following: ``(2) Deceased persons.--If an individual who would otherwise have been eligible for an allotment under this section dies before applying for an allotment, an heir of the person may apply for, and receive, an allotment under this section, on behalf of the estate of the person.''; and (C) by striking paragraph (3) and inserting the following: ``(3) Limitations.--No person who received an allotment or has a pending allotment under the Act of May 17, 1906, may receive an allotment under this section, other than-- ``(A) an heir who applies for, and receives, an allotment on behalf of the estate of a deceased person under paragraph (2); and ``(B) a person who, prior to the date on which the Secretary promulgates regulations pursuant to section 3 of the Alaska Native Veterans Land Allotment Equity Act, received an allotment under the Act of May 17, 1906 (34 Stat. 197, chapter 2469), that has a total area of less than 160 acres.''; (3) by redesignating subsections (d) and (e) as subsections (f) and (g), respectively; (4) by inserting after subsection (c) the following: ``(d) Approval of Allotments.-- ``(1) In general.--Subject to any valid right in existence on the date of enactment of the Alaska Native Veterans Land Allotment Equity Act, and except as provided in paragraph (3), not later than 5 years after the date of the enactment of the Alaska Native Veterans Land Allotment Equity Act, the Secretary shall-- ``(A) approve any application for an allotment filed in accordance with subsection (a); and ``(B) issue a certificate of allotment under such terms, conditions, and restrictions as the Secretary determines to be appropriate. ``(2) Notification.--Not later than 2 years after the date of the enactment of the Alaska Native Veterans Land Allotment Equity Act, on receipt of an application for an allotment under this section, the Secretary shall provide to any person or entity that has an interest in land described in subsection (a)(2) that is potentially adverse to the interest of the applicant a notice of the right of the person or entity, by not later than 90 days after the date of receipt of the notice-- ``(A) to initiate a private contest of the allotment; or ``(B) to file a protest against the allotment in accordance with procedures established by the Secretary. 
``(3) Action by secretary.--If a private contest or protest relating to an application for an allotment is initiated or filed under paragraph (2), the Secretary shall not issue a certificate for the allotment under paragraph (1)(B) until a final determination has been made with respect to the private contest or protest. ``(e) Reselection.--A person that selected an allotment under this section may withdraw that selection and reselect land in accordance with this section after the date of enactment of the Alaska Native Veterans Land Allotment Equity Act, if the land originally selected-- ``(1) was selected before the date of enactment of the Alaska Native Veterans Land Allotment Equity Act; and ``(2) as of the date of enactment of that Act, was not conveyed to the person.''; and (5) by striking subsection (f), as designated by paragraph (3) and inserting: ``(f) Definitions.--For the purposes of this section: ``(1) The term `veteran' means a person who served in the active military, naval, or air service, and who was discharged or released therefrom. ``(2) The term `Vietnam era' has the meaning given the term by paragraph (29) of section 101 of title 38.''. SEC. 3. REGULATIONS. Not later than 1 year after the date of enactment of this Act, the Secretary of the Interior shall promulgate, after consultation with Alaska Native organizations, final regulations to carry out the amendments made by section 2. During the consultation process, the Secretary shall, in coordination with Alaska Native organizations and to the greatest extent possible, identify persons who are eligible to receive an allotment under the amendments made by section 2. Upon promulgation of the final regulations, the Secretary shall contact each of these persons directly to provide an explanation of the process by which the person may apply for an allotment under the amendments made by section 2.
Alaska Native Veterans Land Allotment Equity Act This bill amends the Alaska Native Claims Settlement Act to revise provisions regarding land allotments for Alaska Native Vietnam veterans. Eligibility is expanded to include all Alaska Native veterans who served between August 5, 1964, and May 7, 1975. Allotments may be selected from vacant federal lands or lands that have been selected or conveyed to the state of Alaska or an Alaska Native corporation, if the state or corporation relinquishes or conveys the land to the United States for allotment. Land may not be selected from: (1) the right-of-way of the TransAlaska Pipeline; (2) the inner or outer corridor of that right-of-way; or (3) a unit of the National Park System, a National Preserve, or a National Monument. An heir of a deceased eligible veteran, regardless of the cause of death, may apply for and receive an allotment. Alaska Native Vietnam veterans who selected an allotment of land before enactment of this bill and who were not conveyed the allotment before the enactment of this bill may reselect land.
small cell lung cancer ( sclc ) is a highly malignant neoplasm derived from neuroendocrine cells . it represents approximately 15% of all bronchial carcinomas , and this percentage is tending to decrease recently . in most cases , sclc arises in the larger airways and grows rapidly , becoming quite large . it also has a propensity to metastasize widely throughout the body at an early stage in its clinical course . tonsillar metastasis from sclc is extremely rare , and clinically apparent cases are even less common . idiopathic pulmonary fibrosis ( ipf ) is a chronic , progressive form of interstitial lung disease with poor prognosis and is a clinical term of usual interstitial pneumonia of unknown cause . it has been reported to be associated with increased risk of lung cancer . in a study , the incidence of lung cancer was increased 7-fold in the ipf group compared with healthy subjects . although the features of the lung cancer with ipf are similar to the general features of lung cancer , sclc is not common in the fibrotic area of ipf . the present case report describes sclc in an ipf - associated lesion and its tonsillar metastasis , which is rarely seen . a 77-year - old man was admitted to our hospital with a 1-month history of cough and dyspnea . he had a personal history of pulmonary tuberculosis 1 year ago . on physical examination of the thorax , inspiratory dry crackles were heard on both lower lung fields . on inspection of the oral cavity , a large oval mass composed of soft tissue was detected in his throat . the mass was arising from the right palatine tonsil and extending across the midline of the oropharynx ( figure 1a ) . ( a ) in physical examination , a large oval mass composed of soft tissue arose from the right palatine tonsil and extended across the midline of the oropharynx . ( b ) whole - body magnetic resonance revealed intraluminal protruding mass in the right peritonsillar region with heterogeneous enhancement , suggesting malignancy of palatine tonsil . high - resolution computed tomography of chest showed 2 masses in the left lower lobe , 1 mass in the right upper lobe , and multiple enlarged mediastinal lymph nodes of the lung ( figure 2 ) . one of the left lower lobe masses was 4.4 x 4.0 cm sized in superior and lateral segments , and the other was 5.7 x 3.7 cm sized with fibrosis in subpleural region . also , there was a typical honeycomb appearance with traction bronchiectasis and ground - glass opacity pattern , predominantly in subpleural areas of both lower lobes . under suspicion of lung cancer and usual interstitial pneumonia that is the pathological equivalent of ipf , further workup was started to confirm the diagnosis . high - resolution computed tomography showed 1 mass in the right upper lobe ( a ) , 2 masses in the left lower lobe ( b ) , and honeycomb appearance in subpleural area of both lower lobe ( c ) . percutaneous transthoracic needle biopsy for lung mass and punch biopsy for tonsillar lesion were performed . both tumors were composed of nests of small , round , or oval cells with little cytoplasm and hyperchromatic nuclei . the cells from both tumors were positive for cd56 , a glycoprotein expressed on the surface of neurons and neuroendocrine tumors , well known as the neural cell adhesion molecule ( ncam ) . in addition , the cells showed positive staining for synaptophysin and chromogranin a , although the intensity was weaker than cd56 , and it was more distinct in the lung mass than in the tonsillar mass .
based on these pathologic findings and the known fact that sclc belongs to the neuroendocrine lineage of lung cancer , the masses were diagnosed as sclc and tonsillar metastasis ( figure 3 ) . representative h&e sections of lung mass ( a ) and tonsillar mass ( b ) revealed nests of small , round , or oval cells with little cytoplasm and hyperchromatic nuclei in both lesions . in immunohistochemical staining for cd 56 of lung mass ( c ) and tonsillar mass ( d ) , cd = cluster of differentiation , sclc = small cell lung cancer . we performed systemic evaluation using whole - body magnetic resonance imaging , which showed a mass indicating brain metastasis in the body portion of the right corpus callosum and a 2 x 2.8 cm - sized , intraluminal protruding mass in the right peritonsillar region with heterogeneous enhancement , suggesting malignancy of palatine tonsil ( figure 1b ) . we suspected that the patient had ipf based on the clinical findings and radiological patterns . there was no history of exposure to any toxic materials and no clinical symptoms of connective tissue diseases . bronchioloalveolar lavage fluid analysis showed that the percentages of alveolar macrophages , lymphocytes , neutrophils , and eosinophils were 77% , 3% , 15% , and 5% , respectively . chemotherapy with irinotecan and carboplatin for sclc and a standard medication of steroid and acetylcysteine for ipf were applied . however , on follow - up , he expired due to respiratory failure by an acute exacerbation of ipf 3 months after the diagnosis . susceptible organs include the liver , abdominal lymph node , bone , brain , adrenal gland , skin , kidney , and pancreas . a few cases of tonsillar metastasis of sclc have been reported . the palatine tonsil is a rare site in which metastatic tumor deposit is found . according to a study , only 12 tumors ( 0.8% ) were metastatic . the metastatic tumors include carcinoma of the breast , stomach , renal cell carcinoma , seminoma , melanoma , and carcinoma of rectum . furthermore , in a review of 76 cases of primary neoplasm complicated by tonsillar metastasis , only 12 were found to be due to carcinoma of the bronchus . among these 12 metastatic cases , evidence for the metastasis to other tissues was found in 10 of the 12 cases . in all reported cases of tonsillar metastasis of sclc , the metastasis developed following presentations : the mean interval of time between development of the primary bronchogenic cancer and the tonsillar metastasis was 8 months , and the mean time interval between appearance of the tonsillar metastasis and death was 5 months . most patients with tonsillar metastasis are symptomatic , such as difficulty in breathing , sore throat , irritable cough , dysphagia , otalgia , and swallowing pain accompanied by a foreign body - like sensation . however , in our case , the patient with tonsillar metastasis of sclc did not show any of these symptoms . ipf , which is the most frequent type of interstitial lung disease , has been reported as an independent risk factor for lung cancer by epidemiological studies . recent studies have reported that alteration of genes like fragile histidine triad ( fhit ) gene is associated with lung cancer and ipf , supporting that lung cancer can be a result of the occurrence of atypical or dysplastic epithelial changes in fibrosis . ipf is significantly related to the development of lung cancer in peripheral locations of the lower lobes .
adenocarcinoma and squamous cell carcinoma are the most common histological findings among lung cancer patients with ipf . however , sclc , which is a central disease , shows an exceptional tendency to occur in ipf - nonassociated , nonfibrotic lesions . our case is interesting in that the sclc developed from an ipf - associated fibrotic lesion of the left lower lobe . considering our case presented here , physicians should include metastatic sclc in the differential diagnosis for a single tonsillar mass , although the incidence is very low . in addition , this case shows that sclc can develop in an ipf - associated fibrotic lesion , indicating that physicians should consider a possible association between sclc and lung fibrosis such as ipf . the institutional review board of chonbuk national university hospital stated that it is not necessary to obtain irb approval for this case report , and that this report requires obtaining patient consent , because this study dealt only with the patient 's medical record and related images , retrospectively . written informed consent of this case report and accompanying images was obtained from the patient for the publication .
abstract : small cell lung cancer ( sclc ) metastasizes widely , but palatine tonsil is an extremely unusual site for metastasis . idiopathic pulmonary fibrosis ( ipf ) is associated with increased risk of lung cancer . however , the most common histological findings among patients with lung cancer and ipf are known as non - sclc such as adenocarcinoma and squamous cell carcinoma . in addition , the majority of them are located in ipf - associated fibrotic peripheral lesions . a 77-year - old man visited for 1-month persistent cough and dyspnea , with inspiratory dry crackles on both lower lung fields and a large oval mass in his throat . chest computed tomography revealed 2 masses in the left lower lobe , 1 mass in the right upper lobe , and multiple enlarged mediastinal lymph nodes of the lung accompanied by ipf , which were diagnosed as sclc pathologically . very interestingly , the tonsillar mass was also confirmed as the metastatic lesion of sclc . chemotherapy for sclc and medical treatment for ipf were applied . however , on follow - up , he expired due to respiratory failure by an acute exacerbation of ipf 3 months after the diagnosis . in this current report , we describe , for the first time , a case of tonsillar metastasis of sclc with ipf detected simultaneously in a 77-year - old man .
in searching for gravitational wave signals from coalescing binary compact objects , one commonly uses an optimal filtering technique @xcite . this technique consists of the comparison of the output signal of an interferometric gravitational waves detector with a family of expected theoretical waveforms , called templates . each template depends on one or more parameters @xmath0 . the choice of the templates in the @xmath0 parameter space , called placement , is the purpose of this paper . we restrict ourselves to a 2d parameter space , considering spinless templates computed at second post - newtonian order . we will first describe in section [ sec_motivations ] the motivations of our placement technique , comparing it with a simple uniform paving of the parameter space . section [ sec_computation_par ] describes the calculation of the parameters of the parameter space portion covered by a single template . this portion is in our case well approximated by an ellipse . next , section [ sec_triangulation ] treats the triangulation of the parameter space , a step needed by the placement , which is covered by section [ sec_placement ] . finally , performance tests are covered by section [ sec_performance ] , where some real use - cases are considered in the context of the virgo detector @xcite . the comparison of a signal with one template is made through a wiener filter @xcite : @xmath1\ ] ] this is essentially a weighted intercorrelation , @xmath2 being the interferometer output and @xmath3 the template . @xmath4 is the noise power spectral density ( psd ) of the detector , @xmath5 and @xmath6 are the lower and upper limits of the detector spectral window . each template is represented by a point in a multidimensional parameter space . after taking care of most extrinsic parameters ( like time of arrival or initial orbital phase of the system ) by maximizing the output of the optimal filter over them @xcite , there remain only two parameters , that we will call @xmath7 and @xmath8 . those parameters may be the masses of the two bodies but in general , one uses parameters derived from the masses that simplify the calculations . a template corresponding to parameters @xmath9 is sensitive to a signal corresponding to nearby parameters @xmath10 . the difference leads to a decrease in signal over noise ratio ( snr ) with respect to the snr obtained with a signal corresponding to the exact template . for an acceptable loss in snr , each template covers a portion of the two dimensional parameter space . following owen @xcite in a geometrical interpretation of the optimal filtering , one is able to define a distance between two templates as the ambiguity function maximized over extrinsic parameters , called `` match '' . when filtering a signal which has the same shape as a template of parameters @xmath10 with a reference template of parameters @xmath9 , the match is the fraction of the optimal snr obtained when filtering the reference template with a signal identical in shape to itself . given a minimal match @xmath11 , we can define the region of parameter space around a given point corresponding to a template @xmath12 , the match of which , computed with any template corresponding to a point in the region , will be above @xmath11 . we will call the boundary of this region the `` isomatch contour '' . 
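As an illustration of the noise-weighted filtering and of the match maximised over time of arrival and phase, here is a minimal sketch in Python/NumPy; the one-sided frequency-domain conventions, the array names and the overall normalisation are assumptions for the example and not the exact implementation used in the paper.

```python
import numpy as np

def overlap_vs_time(h1_f, h2_f, psd, df):
    """Noise-weighted correlation of two one-sided frequency-domain waveforms,
    evaluated at every discrete time shift via an inverse FFT.
    h1_f, h2_f and psd are assumed to share the same frequency grid of spacing df."""
    integrand = h1_f * np.conj(h2_f) / psd
    # 4 * df * sum_k integrand_k * exp(2*pi*i*k*n/N), one value per time shift n
    return 4.0 * df * integrand.size * np.fft.ifft(integrand)

def match(h1_f, h2_f, psd, df):
    """Fraction of the optimal SNR retained when filtering h2 with template h1,
    maximised over arrival time (via the FFT) and constant phase (via the modulus)."""
    sigma1 = np.sqrt(np.abs(overlap_vs_time(h1_f, h1_f, psd, df)[0]))
    sigma2 = np.sqrt(np.abs(overlap_vs_time(h2_f, h2_f, psd, df)[0]))
    corr = overlap_vs_time(h1_f, h2_f, psd, df)
    return np.max(np.abs(corr)) / (sigma1 * sigma2)
```

With h2_f identical in shape to h1_f the match is 1 by construction; a template with nearby parameters gives a value just below 1, and the minimal match is the lowest such value one is willing to accept.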
the shape of this boundary may be complex , so one generally uses parameters for which it has been shown that , for high values of the minimal match ( @xmath13 ) , the contour is closed and well approximated by an ellipse @xcite . throughout this paper , instead of masses , we will use chirp times @xmath14 and @xmath15 @xcite defined as : @xmath16 in geometrized units ( @xmath17 ) , where @xmath18 is the total mass of the binary system , @xmath19 is the symmetric mass ratio and @xmath20 a fiducial frequency chosen as the lower frequency cutoff of the detector sensitivity . results are properly scaled to restore physical units . the calculation of the parameters of the ellipse may be done analytically for a given spectral density @xcite@xcite . the final goal of our study is to pave the parameter space with isomatch contours in as optimal a way as possible . this is equivalent to finding the minimal set of templates whose isomatch contours pave all the parameter space , without leaving any hole or unpaved region @xcite . one simple solution , already described elsewhere , is to calculate the ellipse parameters for the point in the parameter space where it is known to be the smallest and pave the space with this single ellipse @xcite , obtaining a regular tiling of the parameter space . this is not very different from paving a bidimensional space with circles . as was already noted @xcite , because of the rotational symmetry , the centers of the circles should sit at the vertices of regular polygons which make a regular tiling of the plane . this is only possible for triangles , squares or hexagons . in the first case , the centers of the circles are placed on the corners of an equilateral triangle , as shown in figure [ circle_place ] a ) . it is desirable to have the sparsest possible circles , which means that three circles touch at one single point @xmath21 . the surface region consisting of the points whose closest circle center is @xmath22 is shown in gray . this is also the surface covered on average by one circle . in the triangular case , it is a hexagon . the set of points which belongs to this region is called the voronoi set of @xmath22 . as illustrated in figure [ circle_place ] , in the case of a square tiling , the voronoi set has a square shape and in the case of a hexagonal tiling , the voronoi set has a triangular shape . it has been shown @xcite , as one would intuitively expect , that the most efficient tiling in the case of placement of circles is the triangular one . of course , in our case , the circles are skewed according to the parameters of the initially calculated ellipse . the tiling is extended outside the parameter space to make the coverage complete . the ellipses , the center of which lies in a physically forbidden region ( under the equal mass line ) , are shifted towards the allowed region , staying on the equal mass line , still ensuring the completeness of the coverage . an example is given in fig . [ simple_tile ] , where the ellipse at the extreme right ( smallest masses ) represents the only computed point .
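The regular triangular (hexagonal) tiling described above is easy to reproduce; the sketch below, in Python, lays equal covering circles of radius r on a triangular lattice so that three neighbouring circles meet at exactly one point, which is the construction that the single-ellipse placement then shears by the metric. The rectangle bounds and the function name are illustrative assumptions.

```python
import numpy as np

def triangular_lattice(xmin, xmax, ymin, ymax, r):
    """Centres of equal circles of radius r on a triangular lattice chosen so
    that three neighbouring circles meet at a single point (the circumcentre
    of the equilateral triangle of centres), leaving no hole in the coverage."""
    side = np.sqrt(3.0) * r        # circumradius = r  <=>  side = sqrt(3) * r
    row_height = 1.5 * r           # side * sqrt(3) / 2
    centres, row, y = [], 0, ymin
    while y <= ymax + row_height:              # overshoot so the border is covered
        x = xmin - (side / 2.0 if row % 2 else 0.0)   # alternate rows are offset
        while x <= xmax + side:
            centres.append((x, y))
            x += side
        y += row_height
        row += 1
    return np.array(centres)
```

In the actual single-ellipse tiling, the unit circle is replaced by the smallest isomatch ellipse of the parameter space and the lattice is sheared accordingly, which is why the simple method overcovers wherever the true ellipses are larger.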
a second problem would then arise , since an optimal tiling of the parameter space with varying shapes is far from being obvious . the principle of reconstruction of exact isomatch contours has been described previously @xcite as well as a preliminary placement method . we present in this paper an extension and improvement of this method in the case where the elliptic approximation for isomatch contours is assumed valid . before doing the placement , one should be able to calculate as fast as possible the ellipse parameters at any given point in the parameter space . this is done by * calculating the ellipses at a chosen set of points ( we obtain `` seed ellipses '' ) . * triangulating the parameter space with this set . actually , as we will see , those two steps are closely linked . we give in appendix a short tutorial about triangulation and computational geometry . * interpolate linearly ellipses at any point using the previously calculated seed ellipses . this step is much faster than an analytical computation . the seed ellipses are computed using the algorithm included inside the ligo analysis library ( lal ) @xcite . this algorithm uses the procedure described in @xcite . the metric components used to find the parameters of the ellipse are calculated using the moments of the psd curve . the triangulation of the parameter space deserves hereafter a section by itself . once it is computed , each point @xmath21 in the parameter space belongs to one and only one triangle whose corners are three seed points . one is able to interpolate linearly the shapes ( resp . metric parameters ) of the three seed ellipses to obtain the parameters of the ellipse ( resp . metric ) at point @xmath21 ( see fig . [ interpolation ] ) . the triangulation of the parameter space is done using standard techniques known in computational geometry . the notions necessary to understand the present study are explained in appendix . the base algorithm used is known as the bowyer - watson @xcite@xcite algorithm . the bowyer - watson algorithm is quite simple but needs adaptation to our problem . we need to take care of the fact that the borders of the parameter space are not convex and we need to choose which points to use for the triangulation . the main idea of our adapted algorithm is to start from an existing triangle at the corners of which sit three already calculated ellipses @xmath23 and subdivide it only if necessary , i.e. if for any point @xmath21 inside the triangle , the ellipse linearly interpolated between @xmath23 is different enough from the one calculated using the metric at that point . let @xmath24 be the interpolated ellipse and @xmath25 the calculated one . @xmath26 being the measure of the surface of @xmath24 , @xmath27 the surface of @xmath24 that does not intersect @xmath25 ( fig . [ intersect_ellipses ] ) , the variable describing the difference between @xmath24 and @xmath25 has been chosen as the proportion @xmath28 . it was not deemed necessary to also take into account the surface of @xmath25 that does not intersect @xmath24 , because if @xmath27 is null , the interpolated ellipse is completely inscribed inside the calculated one and we are simply going to make a more dense placement at a later stage . a limit is set on this variable to stop the subdivision of triangles . given an existing triangle , a choice has to be made on the points appropriate for its subdivision . ideally , one would use the points which have the highest proportion @xmath29 .
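A minimal sketch of the linear interpolation step, in Python, assuming the seed ellipses are stored as 2x2 metric matrices attached to the triangle corners; this representation, and the helper names, are assumptions for the example (the paper interpolates either the ellipse shapes or the metric components).

```python
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    m = np.column_stack((np.asarray(a) - np.asarray(c),
                         np.asarray(b) - np.asarray(c)))
    w1, w2 = np.linalg.solve(m, np.asarray(p) - np.asarray(c))
    return np.array([w1, w2, 1.0 - w1 - w2])

def interpolated_metric(p, corners, seed_metrics):
    """Metric (hence isomatch ellipse) at p, linearly interpolated from the
    three seed metrics attached to the corners of the enclosing triangle."""
    w = barycentric_coords(p, *corners)
    return sum(wi * gi for wi, gi in zip(w, seed_metrics))
```

In the usual quadratic approximation the interpolated metric g then defines the isomatch contour at p as the set of parameter offsets d satisfying d^T g d = 1 - MM, which is the ellipse used by the placement at that point.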
it is however impractical , and very expensive in terms of computing power , to test all the points in a triangle to find the one with the highest @xmath29 . we chose to test only the middle points of each segment forming the triangle . each of these three points is inserted and used to subdivide the triangle following a delaunay method , but considering only the triangle , not the adjacent ones that may exist in the ongoing triangulation process . if the middle point of a segment is outside the parameter space , it is replaced by the closest point on the border , perpendicularly to the segment ( fig . [ close_border_point ] ) . some peculiar situations ( two middle segment points outside the parameter space for example ) are taken into account . all subtriangles generated outside the parameter space are removed . the frequency of recomputation of the placement is still under consideration in virgo . it depends on the change rate of the shape of the sensitivity curve over time , the stability of which is not yet fully assessed for future science runs . the numbers given in table [ compute_time ] may seem too large for a frequent recomputation , for instance every 15 minutes , in the case of large volume parameter spaces . though such a frequency is not expected for the final virgo science runs , we may need to consider a parallelization of the algorithm . the part of the algorithm that could be parallelized efficiently is the placement part , but one should not expect more than an estimated factor 2 to 5 improvement in overall computing time , due to the sequential nature of the algorithm . indeed , in one line of ellipses , ellipse number @xmath30 may not be placed before ellipse number @xmath31 . only the placement of complete lines may be somewhat decorrelated . besides very important pioneering efforts @xcite@xcite , the results of which are now widely used , several previous studies were done for the template placement problem . we believe that our method is somewhat complementary to them . for example , the placement algorithm used in @xcite for extended hierarchical searches is based on a square tiling . this is justified in this case by the low minimal match value used ( @xmath32 ) , which gives very irregularly shaped contours . our method could probably be adapted to such a case by applying methods such as in @xcite to determine the shape of the contours , but an important effort has to be made to improve the speed of the shape reconstruction algorithm , which is going to be one of the main limiting factors . another example is the paper of arnaud et al . @xcite where the authors devise a 2d tiling method and test it in the case of supernova ringdown signals . it is very difficult to make a direct comparison between this algorithm and ours . the very large parameter space curvature described by arnaud et al . is likely to bring some holes if we apply our tiling method directly to ringdown signals . this would imply the need for an improvement to our placement procedure . on the other hand , the arnaud 2d tiling method was not yet applied to the case of a @xmath33 inspiral parameter space and it is not clear what would be the result in terms of speed and possible overcoverage . the computational geometry tools that we used are still valid in higher dimensional spaces . it may be tempting to consider the extension of our algorithm to multidimensional searches .
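The subdivision criterion, the fraction of the interpolated ellipse lying outside the exactly computed one at the same test point, can also be estimated numerically; the Monte-Carlo sketch below is only an illustration of the quantity being thresholded (the paper computes the ellipse intersection directly), and it writes both ellipses as d^T g d <= 1 - MM with the metrics g_interp and g_exact as assumed inputs.

```python
import numpy as np

def fraction_outside(g_interp, g_exact, mm, n=20000, seed=0):
    """Monte-Carlo estimate of the fraction of the interpolated isomatch ellipse
    (metric g_interp) lying outside the exact one (metric g_exact), both centred
    on the same test point and defined by d^T g d <= 1 - mm."""
    rng = np.random.default_rng(seed)
    evals, evecs = np.linalg.eigh(g_interp)
    radii = np.sqrt((1.0 - mm) / evals)        # semi-axes of the interpolated ellipse
    u = rng.normal(size=(n, 2))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    u *= np.sqrt(rng.uniform(size=(n, 1)))     # uniform sampling of the unit disc
    d = (u * radii) @ evecs.T                  # map the disc onto the ellipse
    inside_exact = np.einsum('ni,ij,nj->n', d, g_exact, d) <= (1.0 - mm)
    return 1.0 - inside_exact.mean()
```

A triangle is then split whenever this fraction, evaluated at its three edge midpoints, exceeds the chosen limit.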
in that case , the main challenge would be to improve the algorithm speed , since the number of contours in nd is roughly going as @xmath34 where @xmath35 is the number of contours obtained in 2d . this is of course a `` worst case '' scenario where the granularity is the same ( and high ) in all dimensions . we presented a technique for doing the placement of isomatch ellipses on a template parameter space using triangulation and interpolation of seed ellipses . a comparison is done with a simple regular triangular tiling using a single ellipse . this comparison shows an improvement between 6% and 30% depending on the mass range and frequency range . some coverage tests were also performed that show a few percent undercoverage of the parameter space , mainly in the high mass region . this undercoverage seems to come from the miscalculation of the metric for high masses . finally , speed tests were made . we would like to thank all our virgo colleagues from the inspiral data analysis group , and particularly andrea vicer for his help in bringing an important piece of mathematica code and insightful comments . we would also like to acknowledge the use of the ligo analysis library , and thank thomas cokelaer for his precious help . 99 , b.j.owen , physical review d , * 53*(1996 ) 6749 - 6761 virgo coll . , final design report , 1997 , see also http://www.virgo.infn.it/ see , e.g. , n. wiener,_the extrapolation , interpolation and smoothing of stationary time series with engineering applications _ ( wiley , new york , 1949 ) , b.s . sathyaprakash , physical review d , * 50*(1994 ) r7111-r7115 b.f . schutz , in _ the detection of gravitational radiation _ , cambridge university press , cambridge , england , 1989 d.k . churches , t . cokelaer , b.s . package bank _ , lal software documentation , r.p . croce , th demma , v. pierro and i.m . pinto , physical review d , * 65*(2002 ) 102003 , b.j.owen and b.s . sathyaprakash , physical review d , * 60*(1999 ) 022002 , f. beauville et al . , class . quantum grav . , * 20*(2003 ) s789-s801 j. orourke,_computational geometry in c _ ( cambridge university press , cambridge , 1998 ) p.l . george , h. borouchaki _ triangulation de delaunay et maillage _ ( hermes , paris , 1997 ) b. delaunay ( 1934 ) , _ sur la sphre vide _ , bul . urss , class . , 793 - 800 http://www.lsc-group.phys.uwm.edu/lal/ d.f . watson,_computing the @xmath30-dimensional delaunay tessellation with applications to voronoi polytopes _ , the computer journal , * 24*(2 ) ( 1981 ) 167 - 172 . a. bowyer,_computing dirichlet tessellations _ , the computer journal , * 24*(2 ) ( 1981 ) 162 - 166 . schutz , in _ the detection of gravitational waves _ , edited by d.g . blair , cambridge university press , cambridge , england , 1991 , p.406 a. vicer , _ computational costs for coalescing binaries detection in virgo using matched filters _ , virgo note , * vir - not - pis-1390 - 149 * ( 2000 ) . sengupta , s. dhurandhar and a. lazzarini , physical review d , * 67*(2003 ) 082004 , n. arnaud et al . , physical review d , * 67*(2003 ) 102003 since computational geometry is not very commonly used in our field , we will give a very short introduction to the notions useful for the present study . it is in no way exhaustive or pretending to be accurate . more details may be found in @xcite or @xcite . given a set @xmath36 of points in a euclidian space , 2-dimensional in our case , we would like to subdivide the space into a set of triangles , each triangle being formed by three points from @xmath36 . 
any point @xmath21 in the space belongs to ( is included into ) one and only one triangle . this is however not enough and the properties of the set of triangles should be the ones of a _ triangulation_. * the convex hull of a set of points is the minimal convex set containing all the points ( imagine a rubber band stretched so that it encompasses all the points ) . * a _ simplex _ is the convex hull of a set of @xmath39 points ( a line segment in 1d , a triangle in 2d , a tetrahedron in 3d , ... ) . * the set of points that are vertices of the simplices coincides with @xmath36 . * any two simplices in @xmath12 intersect in a common face , only one vertex or not at all . * the convex hull of @xmath36 defines a domain @xmath40 in @xmath38 . if @xmath41 is a simplex , then @xmath42 all triangulations are not equivalent for a given problem . there is a need to define a criterion of suitability . the most commonly used criterion is the delaunay criterion which constraints the compactness of the triangles and will be explained later . it is linked to the so called vorono diagram . given @xmath43 a set of points @xmath44 in a @xmath45-dimensional space , the vorono diagram is the set of cells @xmath46 associated with each point @xmath44 and defined as @xmath47 where @xmath45 is the euclidian distance between two points . in other words , @xmath46 is the locus of points in @xmath38 closer to @xmath44 than to any other point of @xmath43 . it has been shown @xcite that the geometrical dual of the vorono diagram is a triangulation , the delaunay triangulation ( fig . [ voronoi_diagram ] ) . the delaunay criterion states that the open circumdisk ( in 2 dimensions , circumsphere in @xmath30 dimensions ) of a triangle ( simplex ) contains no point from the set . the example in figure [ delaunay_example ] shows a triangulation not satisfying the delaunay criterion . based on the previous definition of the delaunay criterion , it is possible to devise a simple algorithm to compute a triangulation based on a set of points . it is called an incremental algorithm , or bowyer - watson algorithm @xcite@xcite . the algorithm is incremental in the sense that the points of the set @xmath36 are added one by one , recomputing a triangulation at each step . the process starts by the generation of a supertriangle that encompasses all the points in @xmath36 . at the end , all triangles that share one edge with the supertriangle are removed . the addition of one point is illustrated in figure [ incremental_algo ] to add one point @xmath21 , all the triangles whose circumcircle contains @xmath21 are first removed . the resulting hole in the triangulation has a polygonal shape . new triangles are formed between @xmath21 and the outside edges of the polygon .
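for completeness, a minimal python sketch of one insertion step of the bowyer - watson algorithm described above. points are (x, y) tuples, triangles are assumed to be stored with counter-clockwise orientation, and re-orientation of the newly created triangles is omitted for brevity; this is an illustration, not the code actually used in the paper.

```python
import numpy as np

def in_circumcircle(tri, p):
    # delaunay test: is p strictly inside the circumcircle of the
    # counter-clockwise oriented triangle tri?
    (ax, ay), (bx, by), (cx, cy) = tri
    px, py = p
    m = np.array([
        [ax - px, ay - py, (ax - px) ** 2 + (ay - py) ** 2],
        [bx - px, by - py, (bx - px) ** 2 + (by - py) ** 2],
        [cx - px, cy - py, (cx - px) ** 2 + (cy - py) ** 2],
    ])
    return np.linalg.det(m) > 0.0

def bowyer_watson_insert(triangles, p):
    # one incremental step: remove every triangle whose circumcircle contains p,
    # then re-triangulate the polygonal hole by joining p to its boundary edges
    bad = [t for t in triangles if in_circumcircle(t, p)]
    edge_count = {}
    for t in bad:
        for i, j in ((0, 1), (1, 2), (2, 0)):
            key = frozenset((t[i], t[j]))
            edge_count[key] = edge_count.get(key, 0) + 1
    # edges of the hole are those belonging to exactly one removed triangle
    hole = [tuple(k) for k, n in edge_count.items() if n == 1]
    kept = [t for t in triangles if t not in bad]
    return kept + [(a, b, p) for a, b in hole]
```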
in the search for binary systems inspiral signal in interferometric gravitational waves detectors , one needs the generation and placement of a grid of templates . we present an original technique for the placement in the associated parameter space , that makes use of the variation of size of the isomatch ellipses in order to reduce the number of templates necessary to cover the parameter space . this technique avoids the potentially expensive computation of the metric at every point , at the cost of having a small number of `` holes '' in the coverage , representing a few percent of the surface of the parameter space , where the match is slightly lower than specified . a study of the covering efficiency , as well as a comparison with a very simple regular tiling using a single ellipse is made . simulations show an improvement varying between 6% and 30% for the computing cost in this comparison .
accreting pulsars consist of rotating , magnetized neutron stars that accrete matter from a companion ( pringle & rees 1972 ; davidson & ostriker 1971 ) . material may be captured either from the stellar wind of the companion ( wind accretion ) or through roche - lobe overflow of the mass donating star ( disk accretion ) . the flow of material , either radially inward or through an accretion disk , is interrupted when magnetic stresses dominate material stresses at the magnetospheric radius , @xmath5 . here , @xmath6 is the neutron - star magnetic moment , @xmath7 is its mass , @xmath8 is the mass accretion rate , and k is a constant of order unity . for @xmath9 , @xmath10 is equal to the alfven radius for spherical accretion . in the simplest picture of disk accretion , matter becomes attached to magnetic field lines at @xmath11 and transported to the magnetic poles . if the angular momentum of material captured from the disk at @xmath11 is carried to the neutron star , the neutron star experiences an accretion torque @xmath12 a star with moment of inertia @xmath13 subject to the torque in equation ( [ n ] ) will spin up at a rate @xmath14 assuming the gravitational potential energy of the accreted material is converted to x - rays at the neutron - star surface , the x - ray luminosity will be @xmath15 where @xmath16 is the neutron - star radius . from equations ( [ nudot ] ) and ( [ lx ] ) , the rate of spin up is related to the x - ray intensity through @xmath17 if the magnetospheric radius lies outside the corotation radius , @xmath18 , where the keplerian orbital frequency equals the spin frequency of the neutron star , matter that becomes attached to field lines may be expelled from the system . accretion is then centrifugally inhibited . in this `` propellor '' regime , the neutron star may spin down rapidly . neutron stars accreting at a constant rate thus tend toward an equilibrium spin period where @xmath19 , given by @xmath20 the relation between torque and luminosity near equilibrium is expected to be more complicated than equation ( [ n ] ) . magnetic accretion occurs in a variety of astrophysical systems , including magnetic cvs and t tauri stars ( warner 1990 ; konigl et al . 1991 ) . accreting pulsars are well suited for studying accretion phenomena . their small moments of inertia and strong magnetic fields result in measureable changes in the spin frequency , @xmath21 , in hours to days with current instruments ( e.g. nagase 1989 ) . in particular , accreting pulsars open the possibility of probing the interaction between material in the accretion disk and the magnetic field through measurements of how @xmath22 depends upon @xmath23 . a correlation between spin - up rate and x - ray luminosity has been observed in outbursts of 5 transient systems ; between the spin - up rate and the 120 kev flux measured with exosat in ( parmar , white & stella 1989 ; parmar et al . 1989 ; reynolds et al . 1996 ) , and between the spin - up rate and the flux above 20kev ( in some cases the pulsed flux ) measured with batse in ( finger , wilson & chakrabarty 1996 ) , ( bildsten et al . 1997 ; finger , wilson & harmon 1996 ) , ( wilson et al . 1997 ) and ( bildsten et al . 1997 ) . all of these outbursts had luminosities and accretion torques exceding those normally observed in most persistent sources . they almost certainly satisfy @xmath24 during most of the outburst . the case for torque - luminosity correlations in persistent sources is less clear . 
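to fix the orders of magnitude involved in the relations above, the short python sketch below evaluates the magnetospheric radius, the accretion torque, the spin-up rate and the equilibrium spin period for a set of purely illustrative parameters (a 1.4 solar-mass, 10 km neutron star with a 10^12 g surface field accreting at a luminosity of 10^37 erg/s, with k = 1); none of these numbers is taken from the observations discussed in this paper.

```python
import numpy as np

# fiducial (illustrative) parameters in cgs units
G    = 6.674e-8
Msun = 1.989e33
M    = 1.4 * Msun            # neutron-star mass
R    = 1.0e6                 # neutron-star radius, cm
B    = 1.0e12                # surface field, gauss
mu   = B * R**3              # magnetic moment, G cm^3
I    = 1.0e45                # moment of inertia, g cm^2
Lx   = 1.0e37                # assumed x-ray luminosity, erg/s

Mdot  = Lx * R / (G * M)                                   # from L_x = G M Mdot / R
r_m   = (mu**4 / (2.0 * G * M * Mdot**2)) ** (1.0 / 7.0)   # magnetospheric radius, k = 1
N     = Mdot * np.sqrt(G * M * r_m)                        # accretion torque
nudot = N / (2.0 * np.pi * I)                              # spin-up rate, Hz/s
P_eq  = 2.0 * np.pi * np.sqrt(r_m**3 / (G * M))            # equilibrium period (r_m = r_co)

print(f"Mdot ~ {Mdot:.2e} g/s, r_m ~ {r_m:.2e} cm, "
      f"nudot ~ {nudot:.2e} Hz/s, P_eq ~ {P_eq:.1f} s")
```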
the disk - fed systems cen x-3 and gx 1 + 4 both exhibit strong flares lasting several days . in cen x-3 , no correlation between the pulse frequency history and the x - ray flux history has been found ( tsunemi , kitamoto & tamura 1996 ; bildsten et al . 1997 ) . in gx 1 + 4 , there is an _ anti_correlation between the 2050 kev pulsed flux measured with the batse and accretion torque ( chakrabarty et al . 1997b ) . in the wind - fed system gx 3012 , the x - ray flux has been found to vary with orbital phase . however , continuous measurements with batse show no correlation between orbital phase and either the magnitude or the sense of the accretion torque ( koh et al . 1997 ) . in this paper we present measurements of x - ray flux for the accreting pulsars 4u 162667 and gx 3012 made with the all - sky monitor ( asm ) on the _ ginga _ satellite over the course of 4.5 years . we describe the asm in section 2 . although the `` snapshots '' of source flux taken by the asm can not be used to determine the frequency histories of pulsed sources , they are a long term , uniform set of measurements that can be correlated with other measurements . in particular , they complement measurements of pulse frequency made with batse , whose low - energy cutoff of @xmath020kev misses most of the bolometric luminosity of most pulsars , a point we return to in the discussion . the two instruments overlapped from 1991 april october . 4u 162667 is a low - mass , disk - fed accreting pulsar with a 7.6s spin period , a 42 minute orbital period ( middleditch et al . 1981 ; chakrabarty et al . 1997a ) , and a low - mass ( @xmath25 ) helium or carbon - oxygen dwarf companion kz tra ( levine et al . 4u 162667 was observed to be in a state of steady spin up at a rate of @xmath26hzs@xmath27 for nearly two decades since its discovery by _ uhuru _ in 1972 . observations with batse have shown it to be in a state of spin down at a rate of @xmath28hzs@xmath27 since 1991 . quasi - periodic oscillations in x - ray intensity with a frequency of 0.04hz were observed during spin up ( shinoda et al . 1990 ) . during spin down , the qpo frequency was 0.048hz ( angelini et al . 1995 ) . figure 1 shows the frequency history of 4u 162667 . extrapolating the spin - up trend observed from 1975 through 1988 ( levine et al . 1988 ) and the spin - down trend observed by batse from 1991 through the present ( chakrabarty et al . 1997a ) yields a transition from spin - up to spin - down in mid 1990 . we note that both the spin - up and spin - down trends have significant higher - order components ( levine et al . 1988 ; chakrabarty 1997a ) , hence the transition time is approximate . the rate of spin - down shortly after the turnaround , as measured with batse , is roughly 15% slower than the rate of spin up prior to the turnaround as measured by previous instruments . gx 301 - 2 is a high - mass , wind fed accreting pulsar with a 680s spin period , a highly - eccentric ( @xmath29 ) 42 day orbit , and a high - mass ( @xmath30 ) ob supergiant companion wray 977 ( koh et al . daily batse spin - frequency measurements show that most of the time , gx 3012 experiences a rapidly - changing accretion torque with virtually no net change in spin frequency on long time scales . however , batse observed two episodes of steady , rapid spin up from mjd 4844048463 ( orbital phase @xmath31 ) , and mjd 4923049245 ( @xmath32 , with an average spin - up rate of @xmath33hzs@xmath27 . 
the pulsed flux in 2055 kev during the spin - up episodes is @xmath34erg@xmath35s@xmath27 , 50% higher than the average pulsed flux for the same orbital phase , and almost twice as high as the average pulsed flux over all orbital phases ( koh et al . the pulsed flux measured with batse depends strongly on orbital phase , with a peak slightly before periastron and a secondary peak at apastron . the _ ginga _ asm performed well throughout the 1987 february to 1991 october period that _ ginga _ was in orbit . the effective area of the asm was about 420 @xmath4 , with a @xmath36 fwhm fan - beam collimator . details of the asm appear in tsunemi et al . sky - scanning observations with the asm were typically performed at intervals of a few days , when the satellite was rotated around the @xmath37-axis in 20 min . during such scanning observations , 16-channel source spectra were obtained covering the energy range 1 to 20 kev . an exposure time of 318s was obtained for each scan across each observed source , depending on the source s latitude in the spacecraft equatorial ( @xmath38 ) plane . for favorably located sources , the detection limit was about 50 mcrab ( 16 kev ) , at the 5 @xmath39 level , worsening for sources far from the spacecraft equatorial plane . data selection criteria included : ( 1 ) background low and stable , ( 2 ) source unocculted by the earth , and ( 3 ) acceptable spacecraft aspect , such that the source was within @xmath40 of the center of the asm field of view . during the 4.5 year mission , a total of 294 observations of 4u 162667 and 277 observations of gx 3012 satisfied these conditions and were accepted for follow - on analysis . in the discussion we make extensive use of frequencies and pulsed fluxes measured with batse . batse consists of 8 uncollimated detector modules facing outward from the corners of the cgro spacecraft . each of the 8 modules contains a large area detector with a @xmath41sr field of view , sensitive to photons with energies of 201800kev . fluxes measured with batse suffer from the limitation that the bulk of the bolometric flux from most accreting pulsars is in the energy range 120kev . further , the background is large and variable . only the pulsed component of the flux can be measured by epoch folding batse data . batse observations of accreting puplsars are discussed in detail in bildsten et al . _ ginga _ asm light curves of 4u 162667 in 16kev and 620 kev are plotted in figure 2 . each point is the average of 30d ( typically @xmath010 pointings ) . the 16 kev count rate shows a clear drop in mid 1990 . the average counting rate in 16 kev is 0.0443(18)@xmath35s@xmath27 from 1987 february 1990 may , and 0.0124(37)@xmath35s@xmath27 from 1990 june 1991 november . in 620 kev , the average rate is 0.0295(17)@xmath35s@xmath27 from 1987 february 1990 may , and 0.0176(29)@xmath35s@xmath27 from 1990 june 1991 november . count - rate variations are consistent with measurement errors within both intervals . thus , the 16 kev count rate is smaller after 1990 june than before 1990 june by 72% , and the 620 kev count rate by 40% . -0.7 cm -1 cm -0.5 cm figure 3 shows the average energy spectrum of 4u 162667 before and after 1990 june . we fit both spectra by a simple power law model ; @xmath42 . previous instruments have found the column density to be negligible . we verified that n@xmath43@xmath35 using data obtained before and after the turnaround , then fixed the column density at zero . 
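the fractional decreases quoted above follow directly from the mean count rates, as the trivial check below shows.

```python
pre_soft, post_soft = 0.0443, 0.0124     # mean 1-6 kev rates before / after 1990 june
pre_hard, post_hard = 0.0295, 0.0176     # mean 6-20 kev rates before / after 1990 june
print(f"1-6 kev drop:  {1.0 - post_soft / pre_soft:.0%}")   # ~72%
print(f"6-20 kev drop: {1.0 - post_hard / pre_hard:.0%}")   # ~40%
```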
the 90% confidence contours of the best fit parameters are shown in figure 4 . the pre-1990 june photon spectrum can be fit with @xmath44 ( at 1kev ) and @xmath45 . the post - june 1990 spectrum can also be fit using a power law with @xmath46 and @xmath47 . in figure 4 , lines of constant 1 - 20 kev flux , @xmath48 are ploted as dotted lines . the flux before 1990 june is slightly higher than after . the flux in 110kev , 1020kev and 120kev are given in table 1 for pre- and post-1990 june spectra . the 110kev flux is observed to drop by more than 50% , and the 120kev flux by roughly 20% . the decrease in flux is less than the decrease in count rate because the spectrum during spin down is harder . lll + & & + & & + 120 kev & [email protected] & [email protected] + 110 kev & [email protected] & [email protected] + 1020 kev & [email protected] & [email protected] + @xmath50 @xmath51erg@xmath35s@xmath27 + chakrabarty et al . ( 1997a ) have compared the spectra of 4u 162667 measured with a variety of instruments including heao1 , einstein , _ ginga _ lac , and asca . the photon spectral index of 0.41 measured with the _ ginga _ asm after turnaround is the smallest ever measured for this source . to evaluate the possibility that absorption or scattering of low energy photons is responsible for the change in spectrum , we fixed the power - law spectral index at the pre turnaround value and attempted to fit the post turnaround spectrum by varying the absorption . this did not yield an acceptable fit , and a large excess below 3 kev appears in residuals . a partial covering model coupled with a power law did provide a reasonable fit , where 86(6)% of the x - rays are obscured by thick material with a column density of 10@xmath52@xmath35 , and the remaining x - rays are unabsorbed ( @xmath53@xmath35 ) . if this partial covering model is correct , the 120 kev flux , @xmath48 , is is 1.65(47 ) times larger during spin down than during spin - up . _ ginga _ asm light curves of gx 3012 in 16 and 620 kev are plotted in figure 5 . each point corresponds to one scanning observation . since the duration of each observation ( 318s ) is shorter than the spin period of gx 3012 ( @xmath0680s ) , the count rate depends upon the pulse phase at the time of the observation , which introduces scatter into the measurements . the first of the two episodes of rapid spin up observed with batse occurred during _ ginga _ operation . unfortunately , the _ ginga _ asm did not observe gx 3012 during the spin - up episode . however , the periastron passage prior to spin up is well covered . the average 620kev count rate for the four observations within 3d ( @xmath49 0.072 in orbital phase ) of the periastron passage prior to the spin - up episode is 0.37@xmath35s@xmath27 . the average rate for observations within 3d of periastron , over the asm lifetime , is 0.22@xmath35s@xmath27 , with a standard deviation of 0.18@xmath35s@xmath27 . periastron rates comparable to those seen prior to spin up occur about every six orbits . we constructed an average spectrum of all observations within 3d of periastron ( @xmath54 ) , and of observations in the same range of phases during the periastron passage prior to the spin - up episode , both shown in figure 6 . we fit both with a simple power law model with photoelectric absorption . the 90% confidence contours in photon index and normalization are shown in figure 7 . the best fit parameters of the average periastron passare are a = 0.204(11)@xmath55 ( at 1kev ) , @xmath56 , and @xmath57 . 
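for reference, the band fluxes of table 1 follow from the fitted photon power law by a straightforward integration. the sketch below shows this conversion for arbitrary, purely illustrative values of the normalization and photon index; the actual fitted values are those displayed in figure 4.

```python
import numpy as np

KEV_TO_ERG = 1.602e-9

def powerlaw_band_flux(norm, gamma, e_lo, e_hi):
    # energy flux (erg cm^-2 s^-1) of a photon power law dN/dE = norm * E**(-gamma),
    # with norm in photons cm^-2 s^-1 keV^-1 at 1 keV, integrated from e_lo to e_hi (keV)
    if np.isclose(gamma, 2.0):
        flux_kev = norm * np.log(e_hi / e_lo)
    else:
        flux_kev = norm * (e_hi**(2.0 - gamma) - e_lo**(2.0 - gamma)) / (2.0 - gamma)
    return flux_kev * KEV_TO_ERG

# illustrative parameters only (not the fitted values of this paper)
norm, gamma = 0.02, 0.6
for band in [(1.0, 10.0), (10.0, 20.0), (1.0, 20.0)]:
    print(band, f"{powerlaw_band_flux(norm, gamma, *band):.2e} erg cm^-2 s^-1")
```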
the spectral parameters of the periastron passage prior to the spin - up episode are a = 1.1@xmath58@xmath55 ( at 1kev ) , @xmath59 , and @xmath60 . -1 cm -1 cm -1 cm -0.5 cm -1 cm -0.5 cm the spectral shape is consistent in the two fits . lines of constant 620 kev flux are plotted in the figure , and the 120 kev and 620 kev fluxes are given in table 2 . since x - rays below 6 kev are absorbed strongly , the uncertainty in 16kev flux is large . figure 8 shows the _ ginga _ asm flux folded at an orbital period of @xmath61d and referenced to an epoch of periastron of mjd 48,802.85 ( koh et al . 1997 ) , along with the batse pulsed flux folded with the same orbit ( koh et al . the _ ginga _ asm fluxes were determined by folding asm count rates corrected for effective area , then converting to a flux by assuming a constant spectral shape . the _ ginga _ asm and batse fluxes both show a peak shortly before periastron , a broad secondary maximum at apastron , and a minimum at orbital phase @xmath00.2 . the 120kev flux is a factor of @xmath03 larger than the 2055kev pulsed flux , and shows amplitude modulations a factor of @xmath02 larger . lll + & & + & & + 120 kev & [email protected] & [email protected] + 620 kev & [email protected] & [email protected] + @xmath62 @xmath63erg@xmath35s@xmath27 dynamical tests of accretion theory require simultaneous measurements of torque and bolometric luminosity . the observations presented here provide indirect evidence for a connection between accretion torque and luminosity in the persistent accreting pulsars 4u 162667 and gx 3012 . the 120kev flux is 20% lower in 4u 162667 during spin down than during spin up . no other single instrument observed the source in both states . it is notable that the 110kev flux decreased by 55% , compared with a 20% change in the 120kev flux . measurements with the soft x - ray instruments rosat and asca during spin - down both yielded decreases of 60% or larger relative to the flux in the same band measured in 1979 with heao . the change in luminosity measured with the asm is close to the 15% change in the magnitude of the accretion torque measured with batse . we now consider implications of our measurement to models of accretion torque . in the following discussion we will sometimes denote the spin frequency of the neutron star @xmath64 , rather than @xmath21 , to avoid confusion with other frequencies . pulsars can not spin down while accreting in the simple picture of disk accretion outlined in 1 . near equilibrium , however , the accretion torque may depend in detail upon the disk - magnetosphere interaction . spin - down can occur while material continues to accrete if negative , `` non - material '' torques are present . these may result from dragging of the magnetosphere by the disk outside the corotation radius , where the disk rotates more slowly than the neutron star ( ghosh & lamb 1979 ) . alternatively , mass ejection in a magnetohydrodynamic wind may slow the neutron star ( arons & lea 1980 ) . both models predict that a drop in @xmath23 causes a decrease in @xmath65 , and potentially a reversal from spin up to spin down . the model of ghosh & lamb is the most detailed . they derived a modified torque equation given by @xmath66 . the dimensionless function @xmath67 is the same for all accreting pulsars . the `` fastness '' is defined as @xmath68 , and is proportional to @xmath69 . the function @xmath67 increases smoothly with @xmath8 . 
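the orbital folding used for figure 8 amounts to the operation sketched below; the 41.5 d default is only an illustrative value close to the ~42 d orbit quoted earlier (the exact period used in the paper is elided in this extraction), while the periastron epoch is the one given in the text.

```python
import numpy as np

def fold_orbital(mjd, rate, p_orb=41.5, t_peri=48802.85, nbins=20):
    # fold count rates on the orbital period, phase zero at periastron
    mjd, rate = np.asarray(mjd, float), np.asarray(rate, float)
    phase = ((mjd - t_peri) / p_orb) % 1.0
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, edges) - 1
    folded = np.array([rate[idx == k].mean() if np.any(idx == k) else np.nan
                       for k in range(nbins)])
    return 0.5 * (edges[:-1] + edges[1:]), folded
```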
it passes through zero at a critical value @xmath70 , and approaches @xmath71 when @xmath72 . its exact form is somewhat controversial , but it is sufficient for our purposes that it can be written approximately as if the model of ghosh and lamb is correct , we can determine how close 4u 162667 is to equilibrium . in going from spin up to spin down , the change in accretion torque was @xmath74 . assuming @xmath75 , the change in @xmath8 was @xmath76 . in general , @xmath77 and @xmath78 . near equilibrium , where @xmath79 and @xmath80 , we have @xmath81 $ ] . from the measured @xmath82 and @xmath83 we find that during spin up , @xmath84 . we can constrain the distance , @xmath85 , to 4u 162667 , from the luminosity and flux during spin up and from @xmath86 . combining equations [ n ] and [ lx ] and using @xmath87 yields @xmath88 . solving for @xmath11 in terms of @xmath18 and @xmath89 and using @xmath90 then gives us @xmath91 . finally , we assume that the 120 kev flux , @xmath48 , is a fraction , @xmath92 , of the bolometric flux . using @xmath93 , and putting in the measured values of @xmath48 , @xmath94 and @xmath95 during spin up yields @xmath85 must be at least @xmath05kpc for a neutron - star with a mass of @xmath97 , a radius of 10 km and a moment of intertia of @xmath3g@xmath4 . chakrabarty et al . 1997a obtained a lower limit of 3kpc using the higher flux value of @xmath98ergs@xmath35s@xmath27 measured with heao1 ( pravdo et al . 1979 ) . we can go on to determine @xmath89 if we further assume that the qpo observed in 4u 162667 during spin up is a magnetospheric beat - frequency oscillation satisfying @xmath99 ( alpar & shaham 1985 ; lamb et al . 1985 ) . by definition , @xmath100 . substituting @xmath101 , with @xmath102hz and @xmath103hz , and using @xmath104 , yields @xmath105 . equation ( [ d ] ) then yields a distance of @xmath05kpc for @xmath106 of order unity . if @xmath99 , we would have expected @xmath107 to decrease in going from spin up to spin down . in fact the qpo frequency increased from 0.04hz to 0.048hz . one possible explanation is that @xmath108 decreased from @xmath00.17hz to @xmath00.08hz and that @xmath107 changed from @xmath109 to @xmath110 . this interpretation has two problems . first , it requires the magnetospheric radius to move outside the corotation radius . it is not known how accretion can occur when @xmath111 . second , the change in @xmath108 then requires an approximately 80% decrease in @xmath8 . in contrast , we observed a 20% decrease in @xmath112 . it is unlikely that the dependence of @xmath11 on @xmath8 changes significantly near equilibrium since the magnetic energy density is a such strong function of radius ( @xmath113 ) . we think it more likely that the qpo in 4u 162667 is not a magnetospheric beat - frequency oscillation . if not , then we can not estimate @xmath89 from these observations , and equation ( [ d ] ) provides only a lower limit to the distance . a partial covering model provides a reasonable fit to the energy spectrum of 4u 162667 for 1990 june 1991 october , with a 60% higher x - ray flux than 1987 april 1990 june . observationally , partial covering can not be ruled out . however , the flux from 4u 162667 is low enough that even for a distance of 10kpc the luminosity is substantially sub eddington . we do not expect the physical conditions associated with partial absorption in wind - fed systems like as vela x-1 , such as an accretion wake , to be present . 
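for clarity, the beat-frequency arithmetic invoked above is the following: with a 7.6 s spin period, $\nu_s \simeq 0.13$ hz, so during spin up $\nu_K = \nu_{\rm QPO} + \nu_s \approx 0.04 + 0.13 \approx 0.17$ hz, while interpreting the spin-down qpo with the reversed sign gives $\nu_K = \nu_s - \nu_{\rm QPO} \approx 0.13 - 0.048 \approx 0.08$ hz. since $\nu_K(r_{\rm m}) \propto r_{\rm m}^{-3/2} \propto \dot M^{3/7}$, such a drop would require $\dot M_{\rm down}/\dot M_{\rm up} \approx (0.08/0.17)^{7/3} \approx 0.2$, i.e. the roughly 80% decrease in the accretion rate mentioned above, to be compared with the observed 20% decrease in flux.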
in wind - fed systems small , transient accretion disks with frequent reversals in rotational sense are thought to form ( e.g. wang et al . 1981 ; fryxell & taam 1988 ) . wind - fed pulsars , such as vela x-1 and gx 3012 , display erratic spin - frequency behavior that can be described as a random walk in spin frequency ( deeter & boynton 1982 ; bildsten et al . because neither the rate nor the sense of accretion is constant , equilibrium is not a meaningful concept to apply to wind accretors . correlations between torque and luminosity would have to be measured on the time scale of disk formation and reversal , which is thought to be of order hours to days . the two episodes of steady spin up observed with batse in gx 3012 were probably associated with the formation and accretion of a disk . during both episodes , the 2055kev pulsed flux was @xmath050% higher than average . however , since the average 120kev flux is 3 times larger than the average 2055kev pulsed flux ( see figure 8) , a change in spectral shape or pulsed fraction could have caused the enhancement seen by batse . the _ ginga _ asm did not observe gx 3012 during the spin - up episode . however , the _ ginga _ asm flux , folded at the orbital period , is in good agreement with the batse pulsed flux folded with the same orbit . this agreement makes it unlikely that the increase of pulsed flux during the spin - up episode resulted just from a change in spectral shape or pulsed fraction , and strongly supports the correlation between luminosity and accretion torque inferred from the batse measurements . the flux enhancement seen with the _ ginga _ asm during the periastron passage prior to spin - up is further evidence for an increase in total flux during spinup . on the other hand , such an enhancement is seen in other periastron passages , and no such enhancement in the 2055kev pulsed flux is seen with batse during the periastron passage prior to spin up relative to other periastron passages . chichkov et al . see a periastron flare in the 120kev flux with watch / granat , along with evidence of flux enhancement at apastron and other phases . they measure an average peak periastron luminosity of @xmath114ergss@xmath27 for a distance of @xmath115kpc , a factor of 2 larger than the flux from the _ ginga _ asm . their larger flux may be due to the harder spectral shape they assumed in converting from count rate to flux ( white , swank , & holt 1983 ) . our measurements provide indirect evidence for a connection between torque and luminosity in the persistent accreting pulsars 4u 162667 and gx 3012 . they reveal a @xmath020% change in the 120kev x - ray flux of 4u 162667 between spin up and spin down , comparable to the change in @xmath116 . the asm monitored 4u 162667 during the last 3 years of its extended spin up episode and during the first 16months of spin down . the energy spectrum is significantly harder during spin down than during spin up the change in the 110kev flux is @xmath050% . these observations highlight the importance of continuous , broad - band monitoring . we confirm that the x - ray flux of gx 3012 varies with orbital phase , as observed in 2055kev pulsed flux with batse . the shape of the 120kev x - ray light curve shows all the features of the batse light curve . 
although we do not observe the 1991 episode where batse observes an enhancement in the 2055kev pulsed flux accompanied by smooth , rapid spin up , the close similarity between the asm and batse light curves indicate that the batse pulsed flux traces the bolometric flux , supporting the correlation between torque and luminosity suggested by the batse observations . the details of the connection between torque and luminosity in persistent pulsars remain unknown , and will require continued , simultaneous monitoring of the x - ray flux and spin frequency of these objects .
we present x - ray light curves and energy spectra for the persistent accreting pulsars 4u 162667 and gx 301 - 2 measured by the all - sky monitor ( asm ) on _ ginga _ from 1987 march 1991 october . we compare these with simultaneous and near simultaneous measurements of spin frequency and flux by other instruments , principally the burst and transient source experiment ( batse ) on the compton gamma ray observatory ( cgro ) . a dramatic change in the shape of the x - ray spectrum and a @xmath020% decrease in the 120 kev x - ray flux accompany the 1990 transition from steady spin up to steady spin down in 4u 162667 . the _ ginga _ asm is the only instrument to observe 4u 162667 during both spin up and spin down . we show that the distance to 4u 162667 is @xmath15kpc . if 4u 162667 is a near - equilibrium rotator and if the 0.04hz quasi - period oscillations seen during spin up are magnetospheric beat - frequency oscillations , then the distance to the source is @xmath05kpc , assuming a neutron - star mass of 1.4@xmath2 , radius 10 km , and moment of inertia @xmath3g@xmath4 . the x - ray flux of gx 3012 measured with the asm varies with orbital phase . the flux peaks shortly before periastron , with a secondary maximum near apastron . such variations were seen previously in the 2050kev pulsed flux with batse . the asm observations confirm that the 2050kev pulsed flux in gx 3012 is a good tracer of the bolometric flux . the x - ray flux in gx 3012 was a factor of @xmath02 larger than average during the periastron passage prior to an episode of persistent spin up in 1991 july observed with batse that lasted half an orbit and resembled outbursts seen in transient x - ray pulsars . no asm observations were available during the spin - up episode .
Madonna has admitted to being too lenient with her kids but her latest pic of son, Rocco, has fans wondering if she’s gone further than ever in letting them grow up too fast. “The party has just begun! Bring it! 2014,” the singer wrote Saturday night along with a picture of Rocco, 13, and two of his friends, all holding bottles of booze! In the pic, Rocco, Madonna’s son by director Guy Richie, is seen showing off a bottle of Bombay Sapphire gin. His two friends are each cradling bottles of Belvedere vodka. The young teens aren’t seen actually drinking in the pic but the idea that Madonna had the boys pose with the alcohol while tweeting “the party has just begun” has some fans in an uproar. “It is poor judgment to glorify substances to children,” wrote one. The image comes just days after Rocco posted his own New Year’s day pictures of himself holding what appears to be a glass of Champagne and another, posing in front of a liquor cabinet. Two years ago in an interview with Brian Williams, Madonna said of her parenting style, “I’m probably not as tough as I should be,” although she did say she drew the line with her daughter smoking. ||||| Madonna is no stranger to controversy, but it doesn't typically involve her children. The Material Girl shared an Instagram picture of her son Rocco Ritchie holding a bottle of gin on Saturday, Jan. 4. Taken in the Swiss Alps, the snapshot also shows the 13-year-old surrounded by two vodka bottles. "The party has just begun! Bring it! 2014," Madonna, 55, captioned. The legal drinking age in the United States is 21 years old, and in Switzerland, it's 16 years old. The picture generated more than 1,200 comments in less than 24 hours. "Booze is not something to make jokes about. Period!" one user wrote. Another Madonna fan cautioned, "You gotta know what kind of signals it sends not only to her son, but also to kids around the world." Some fans came to the family's defense, however. "Rocco is adorable and I think it's really cool that Madonna, and her family, post any pics. Madonna may be a worldwide icon, but she is human like the rest of us," another user reminded the singer's critics. "The picture is what it is. Let it be. Don't speak." The "Superstar" singer addressed the backlash herself on Sunday, Jan. 5. "No one was drinking we were just having fun! Calm down and get a sense of humor!" Madonna wrote in another Instagram caption. "Don't start the year off with judgement!" During a 2012 appearance on The Ellen DeGeneres Show, Ritchie said Madonna runs a tight ship. "She's a good mother. Yes. That's all I have to say," he revealed. "She's very strict, but in a good way."

– Not a great idea: being an international pop star and posting a photo to your Instagram account showing your 13-year-old son holding a bottle of gin. Yet that's what Madonna did Saturday, along with the caption, "The party has just begun! Bring it! 2014." In addition to her teen son Rocco Ritchie holding the bottle of what appears to be Bombay Sapphire, the picture shows two of his friends holding bottles of vodka, E! reports. The picture was taken in Switzerland, where the drinking age is 16. More than 1,200 people commented in the first day it was posted, many of them unhappy. But Madonna responded yesterday, insisting on another Instagram post, "No one was drinking we were just having fun! Calm down and get a sense of humor! Don't start the year off with judgement!" But Radar notes that days earlier, Rocco had posted a picture of himself with what appeared to be a glass of champagne, plus another picture in which he's standing in front of a liquor cabinet.
Drones aren't something most residents worry about on a day-to-day basis. But they may be flying over the skies of Alameda County soon if Sheriff Gregory Ahern gets his way. Ahern is looking at buying a surveillance drone, an unmanned aircraft system, for search and rescue missions, bomb threats, SWAT operations, marijuana grows, fires and natural disasters. But the proposal has already drummed up backlash from privacy advocates worried that drone technology is outpacing safeguards. So far Alameda County has only tested them. Ahern is eyeing a unit weighing four pounds with a four-foot wing span in the $50,000 to $100,000 price range. He and his deputies will have the chance to test others next week at the Urban Shield regional disaster-preparedness exercises. The department, however, can't buy one until they receive Federal Aviation Administration authorization. The units can be outfitted with high-powered cameras, thermal imaging devices, license plate readers and laser radar. Police and sheriffs already use some of those tools. However, combined with a hard-to-detect drone, they offer authorities unprecedented capabilities for mass surveillance using militarized equipment. "The law hasn't caught up with the technology," said Trevor Timm of the Electronic Frontier Foundation, a privacy rights group. "There are no rules of the road for how they operate these things." The units would be unarmed and, according to Ahern, are cheaper than a helicopter, which are not suited to hover low over a crime scene as drones are. Advertisement The department offered no cost analysis or helicopter usage data. But Sgt. J.D. Nelson said the money would come the Department of Homeland Security, one of the lead agencies pushing the expansion of domestic drones. Training is included in the price of the equipment, Nelson said. He said the department would consider using a drone during mutual aid operations on a "case-by-case basis." The ambiguity alarmed Oakland resident Mary Madden. "I don't want drones flying over my backyard," she said Thursday on the steps of Oakland City Hall. Members of the Electronic Frontier Foundation, Critical Resistance and the ACLU gathered there to challenge Ahern. Opponents said if Ahern does not reconsider, they will go to the courts, City Council members and the Alameda County Board of Supervisors, which have to approve of grants received by the sheriff. Although the issue has not come before the supervisors yet, District 1 Supervisor Scott Haggerty said, especially with budget cuts, drones could be useful for policing rural unincorporated areas like Livermore. But he said in urban areas, particularly, there has to be a process in place to protect people's privacy that involves public input. In the meantime, the ACLU filed a public records request with the sheriff's office seeking information about the department's proposed acquisition. The fundamental question is whether a drone is necessary, ACLU staff attorney Linda Lye said. Occupy Oakland protests showed that when law enforcement has powerful and dangerous tools, they will use them, said Lye, referring to the use of tanks and long-range acoustic devices, capable of intensely loud tones, for crowd control. "The best practices on paper are meaningless if they are violated in the field," she said. About a dozen U.S. law enforcement agencies already have or are using a drone, including the Seattle Police Department. 
Ben Miller, the unmanned aircraft program manager for the sheriff's office of Mesa County, Colo., dismissed the privacy concerns but said his department tried to be transparent with their residents who worried about the use of their Draganflyer X6 and Falcon UAV. They use them in search and rescue missions, as well as in homicide investigations. "A bird's-eye view is huge," Miller said, reflecting pitches developed by the industry, which recognized early on they would have to sell the public on the advantages of domestic drones. The Association for Unmanned Vehicle Systems International formulated a code of conduct. The International Association of Chiefs of Police Aviation Committee also has recommended guidelines for the use of unmanned aircraft, which include community engagement, rules for use, search and seizure guidelines and how data will be retained. ||||| OAKLAND, Calif. (AP) — The Alameda County Sheriff's Department is hoping to become one of the handful of local law enforcement agencies that have received federal clearance to use unmanned aerial drones to fight crime, a goal that already is arousing concerns among privacy advocates. Civil liberties and privacy groups revealed Thursday that Sheriff Greg Ahern is seeking Department of Homeland Security funding to buy a small remote-controlled drone called a Dragon Fly. If the money comes through and the Federal Aviation Administration permits the department to test the device, Alameda would be the first public safety agency in California to deploy technology first developed for spying on U.S. enemies overseas. A memo that one of Ahern's captains prepared over the summer, obtained by the Freedom of Information Act web site MuckRock, says the drone would be equipped with a long-distance camera, live video downlink and infrared sensors that could be used for monitoring bomb threats, fires, unruly crowds, search and rescue operations, and marijuana grows. Only four local law enforcement agencies in the United States have received FAA approval to train officers, deputies and volunteer pilots to operate aerial drones, according to Don Roby, a police captain in Baltimore County, Maryland who chairs the aviation committee of the International Association of Chiefs of Police. They are in Miami, Seattle, Mesa County, Colo., and Arlington, Texas, although only the Colorado agency has permission to use them routinely, Roby said. "There is a lot of interest in it, but more people are taking a wait-and-see-attitude," he said. Congress has ordered the FAA to develop safely regulations that would allow both public agencies and commercial operators to fly unmanned aircraft by 2015. Ahern's spokesman, Sgt. J.D. Nelson, told the San Francisco Chronicle (http://bit.ly/TiWMXv ) that the Dragon Fly model the department tested costs between $50,000 and $100,000 but would save money now spent on spent on sending helicopters into the sky. The Electronic Frontier Foundation and the ACLU of Northern California want the sheriff to provide more details about why the drone is needed and how the department would use it. The San Francisco Police Department also has expressed interest in acquiring a drone, although its grant application was rejected, Electronic Frontier Foundation activist Trevor Timm said. In some communities, the public only has learned about a local agency's plan to acquire a drone by filing Freedom of Information Act requests and then lobbying lawmakers to ask questions, Timm said. 
"This is the kind of pattern we are seeing, and the most effective way to put the brakes on these projects is to get local government involved," he said. Ben Gielow, general counsel for the Association for Unmanned Vehicle Systems International, an unmanned trade group, said opponents of unmanned aerial vehicles should understand that battery-operated drones are incapable of traveling the long distances that would be required to follow a car or a person beyond the limited areas covered under FAA permits. "These are not military systems that stay in the air for extended periods of time," Gielow said. "They are small systems that stay in the back of the squad car and are used like a canine unit." The International Association of Chiefs of Police has developed model guidelines for departments planning to use drones. The guidelines include maintaining a public log of the flight hours drones put in and other steps to ensure officers are not misusing the equipment. A an Associated Press-National Constitution Center poll released last month found that 44 percent of those surveyed supported allowing police to use drones inside the U.S., while 35 percent said they were "extremely concerned" or "very concerned" that domestic use of the technology for law enforcement surveillance would erode personal privacy.
– It's beginning to look like surveillance drones are destined to become a routine part of police operations in the US. Alameda County—home to Oakland and Berkeley—is the latest to sign on, with Sheriff Greg Ahern planning to buy a small, unmanned drone to help with things like search-and-rescue missions, SWAT operations, and pot busts, reports the Oakland Tribune. The cost would be somewhere between $50,000 and $100,000, but the FAA still must approve. The AP says four other police agencies already have gotten FAA approval to train employees to operate drones—Miami, Seattle, Arlington, and Mesa County, Colorado, though only the latter has the green light to use them on a regular basis. The Electronic Frontier Foundation, the ACLU, and other privacy advocates are worried about flying cameras peering down on backyards and want Alameda County to provide more details on how the drone might be used. Congress, meanwhile, has ordered the FAA to come up with regulations by 2015.
the mechanism driving the emergence of a quantum macroscopic order that is able to resist to the decoherence effect of high temperatures remains a major topic of research in condensed matter . the realization of this macroscopic quantum phase in doped cuprates close to the mott insulator regime has stimulated a large amount of investigations on the physics of strongly correlated metals . most of theoretical papers treated models of a homogeneous system made of a single electronic band ( or models of multiple hybridized bands reduced to a single effective band ) , with a large hubbard repulsion . there is a growing agreement that the solution of the problem of high-@xmath2 superconductivity requires the correct description of the normal state where spin , charge , orbital , and lattice degrees of freedoms compete , with the formation of nanoscale puddles of spin density wave stripes , puddles of charge density wave stripes , and/or puddles of ordered mobile oxygen interstitials . a lot of researchers feel very strongly that the minimum model to capture the essential physics of high - temperature superconductors needs to take into account both the presence of two electronic components with different orbital symmetry " @xcite , and a nanoscale phase separation " @xcite involving also the spatial segregation of the charge density , the orbital symmetry , and the lattice local symmetry @xcite . therefore , a multiband model is needed to describe the functional superconducting phase emerging in a complex system with multiple electronic components . @xcite the effects of strong correlations in multiband systems were actively treated using the hubbard model . @xcite a particular interesting feature of the multiband hubbard model is that it predicts the emergence of phase separation . @xcite in 1994 a topological lifshitz transition @xcite was first proposed to appear around 1/8 doping in cuprates @xcite and a theory for high-@xmath2 superconductivity based on the shape resonances between a bcs - like superconducting gap and a second gap in the bec - bcs crossover regime in the new appearing band was formulated . @xcite there is now compelling experimental evidence that the high temperature superconductivity emerges in the proximity to a topological lifshitz transition . @xcite here we provide a theoretical model for the phase diagram region where the nanoscale phase separation emergences in a two - band scenario of two strongly correlated electronic fluids in the proximity of a topological lifshitz transition ( so called 2.5 order transition ) . this simple model captures the key physics of the anomalous normal phase in cuprates exhibiting the phase separation as a function of charge density and the energy splitting between the two bands . this provides an additional insight into specific features of superconducting phases in different cuprate families , i.e. , the new 3d phase diagram where the critical temperature depends on the doping and misfit strain between the active atomic layers and the spacer layers . @xcite there exists an evidence of two types of phase separation in cuprates ( a ) the phase separation in the underdoped regime , near the mott phase , between a hole - poor antiferromagnetic phase and a metallic hole - rich phase and ( b ) the phase separation between two metallic phases , namely , between a hole - poor phase with doping close to 1/8 and a hole - rich phase with doping close to 1/4 . 
the cuprates at optimum doping present the second type of phase separation as we have proposed before . @xcite recently it has been found that some cuprate systems like la@xmath0cuo@xmath1 show scale invariance of the distribution of oxygen interstitials that suggests a scale invariant phase separation typical of a system near the critical point . therefore , it is possible that the criticality in la@xmath0cuo@xmath3 results from a quantum critical point . @xcite we discuss the phase diagram of a two - band system as a function of two variables : the charge density and the energy shift between the two bands . in this phase diagram , we first determine a line of quantum critical points for a lifshitz transition of the type appearing of a spot " of a new sheet of the fermi surface when one more band comes into play . second , we identify the electronic phase separation for two strongly correlated bands in the proximity of the line of lifshitz transition . finally , we identify the critical point , where the phase invariance in the coexistence of the two phases appears . this last point is proposed to be a possible explanation for the regime of scale invariance in nanoscale phase separation in high-@xmath2 superconductors . the existence of the two types of the strongly correlated charge carriers in cuprates can be described in terms of the two - band hubbard model . the hamiltonian of such a system can be written as @xcite @xmath4 here , @xmath5 and @xmath6 are the creation and annihilation operators for electrons corresponding to bands @xmath7 at site @xmath8 with spin projection @xmath9 , and @xmath10 . the symbol @xmath11 denotes the summation over the nearest - neighbor sites . the first term in the right - hand side of eq . corresponds to the kinetic energy of the conduction electrons in bands @xmath12 and @xmath13 with the hopping integrals @xmath14 . in our model , we ignore the interband hopping . the second term describes the shift @xmath15 of the center of band @xmath13 with respect to the center of band @xmath12 ( @xmath16 if the center of band @xmath13 is below the center of band @xmath12 ) . the last two terms describe the on - site coulomb repulsion of two electrons either in the same state ( with the coulomb energy @xmath17 ) or in the different states ( @xmath18 ) . the bar above @xmath19 or @xmath9 denotes _ not _ @xmath19 or _ not _ @xmath9 , respectively . the assumption of the strong electron correlations means that the coulomb interaction is large , that is , @xmath20 . the total number @xmath21 of electrons per site is a sum of electrons in the @xmath12 and @xmath13 states , @xmath22 , and @xmath23 is the fermi energy potential . below , we consider the case @xmath24 relevant to cuprates . the model eq . predicts a tendency to the phase separation in a certain range of parameters , in particular , in the case when the hopping integrals for @xmath12 and @xmath13 bands differ significantly ( @xmath25 ) @xcite . this tendency results from the effect of strong correlations giving rise to dependence of the width of one band on the filling of another band . in the absence of the electron correlations ( @xmath26 ) , the half - width @xmath27 of @xmath12 band is larger than @xmath28 ( @xmath29 is the number of the nearest neighbors of the copper ion ) . due to the electron correlations , the relative width of @xmath12 and @xmath13 bands can vary significantly @xcite . the schematic band structure and all notation are presented in fig . [ figbands ] . 
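for definiteness, a standard form of the two - band hubbard hamiltonian consistent with the description above (this explicit expression is our reconstruction of the equation referred to as @xmath4, so only its structure, not the exact notation, should be attributed to the original) reads
\[
H = -\sum_{\langle ij\rangle \lambda \sigma} t_\lambda\, a^{\dagger}_{i\lambda\sigma} a_{j\lambda\sigma}
    \;-\; \varepsilon \sum_{i\sigma} n_{i b \sigma}
    \;+\; \frac{U}{2} \sum_{i\lambda\sigma} n_{i\lambda\sigma}\, n_{i\lambda\bar{\sigma}}
    \;+\; \frac{U'}{2} \sum_{i\lambda\sigma\sigma'} n_{i\lambda\sigma}\, n_{i\bar{\lambda}\sigma'},
\qquad n_{i\lambda\sigma} = a^{\dagger}_{i\lambda\sigma} a_{i\lambda\sigma},
\]
where the factors 1/2 simply compensate the double counting in the unrestricted sums, and $\varepsilon > 0$ places the center of band $b$ below that of band $a$.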
) and a narrow ( @xmath13 ) correlated ( lower hubbard ) bands shown by the solid cosine - like curves . the half - widths of these bands are @xmath30 ( @xmath31 ) , where @xmath32 are half - widths of the bare ( non - correlated ) bands shown by the dotted cosine curves , and @xmath33 are given by eq . . the center of the wide band is chosen as zero energy . the center of the narrow band is shifted by the value @xmath34 . the lifshitz parameter @xmath35 is defined as the position of the fermi level @xmath23 relative to the bottom of the narrow band @xmath36 ( in units of @xmath37 ) . ] following ref . we considered the limit of strong correlations and introduce the one - particle green s function , @xmath38 where @xmath39 is the time - ordering operator . the equations of motion for the one - particle green s function with the hamiltonian eq . include the two - particle green s functions . however , in the limit of strong on - site coulomb repulsion , the presence of two electrons at the same site is unfavorable , and the two - particle green s function is of the order of @xmath40 , where @xmath41 . in turn , the equation of motion for the two - particle green s functions includes the three - particle terms , which are of the order of @xmath42 and so on . we use for the two - particle green s functions the hubbard i approximation and neglect the terms of the order of @xmath42 . in so doing , we get a closed system for the one- and two - particle green s functions @xcite . this system is solved in a standard manner by passing from the space time @xmath43 to the momentum frequency @xmath44 representation . in the case of superconductors the number of electrons per site @xmath45 . the upper hubbard sub - bands are empty , and we can proceed to the limit @xmath46 . in this case , the one - particle green s function is independent of @xmath47 and can be written in the form @xcite @xmath48 where @xmath49 , @xmath50 , @xmath51 @xmath52 is the average number of electrons per site in the state @xmath53 , and @xmath54 is the spectral function depending on the lattice symmetry . in the main approximation in @xmath40 , the magnetic ordering does not appear and we can assume that @xmath55 and @xmath56 . for simplicity and for more direct comparison with the results of ref . , we use here the dispersion law corresponding to the tight - binding band in the simple cubic lattice , @xmath57/3 $ ] , where @xmath58 is the lattice parameter . we checked that the qualitative results do not significantly affected by the specific choice of the dispersion law . however , for a more detailed comparison of the model predictions with the actual experimental data , it is necessary to use realistic electronic characteristics . this work is now in progress . it follows from eqs . and that the filling of band @xmath12 depends on the filling of band @xmath13 and _ vice versa_. really , using the expression for the density of states @xmath59 , we get the expression for the numbers of electrons in bands @xmath12 and @xmath13 @xmath60 where @xcite @xmath61 and @xmath62d^3\mathbf{k}/(2\pi)^3}$ ] is the density of states for free electrons . the fermi level , @xmath23 , in eq . is found from the equality @xmath63 . in iron - based superconductors , as it was shown in refs . , the region of high @xmath2 appears in the neighborhood of the lifshitz transition where the local fermi surface spot disappears . 
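as an illustration of the non-interacting ingredient entering the filling equations above, the bare density of states of the simple-cubic tight-binding band and the corresponding filling at a given chemical potential can be computed numerically as sketched below; this is a minimal illustration, and the correlation-induced band-narrowing factors discussed in the text, which multiply the band half-widths, are deliberately omitted.

```python
import numpy as np

def cubic_tb_dos(t=1.0, nk=64, nbins=200):
    # bare density of states of the tight-binding band
    # eps(k) = t * (cos kx + cos ky + cos kz) / 3 on the simple cubic lattice
    # (lattice constant set to 1), estimated by histogramming eps over a k-grid
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    eps = t * (np.cos(kx) + np.cos(ky) + np.cos(kz)) / 3.0
    rho, edges = np.histogram(eps.ravel(), bins=nbins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, rho            # rho integrates to one state per site per spin

def bare_filling(centers, rho, mu):
    # number of electrons per site per spin in the bare band at chemical potential mu
    mask = centers <= mu
    return np.trapz(rho[mask], centers[mask])
```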
the lifshitz transition is a common feature of many types of superconductors , and in its neighborhood the standard bcs approach is hardly applicable . the situation here bears a similarity with the bec - bcs crossover widely studied in the physics of ultracold atomic systems . in the specific case of strongly correlated electron systems including two bands ( two types of charge carriers ) , the shift of the chemical potential due to the relative shift of the bands and/or the variation of charge density implies the corresponding renormalization of the effective width of both bands . this strongly nonlinear renormalization leads to the electronic phase separation . since in the high-@xmath2 superconductors an increase of the critical temperature occurs at a substantial distance from the lifshitz transition , it is tempting to associate the region of the phase separation with that corresponding to high values of the critical temperature . the experimental evidence suggests that the phase separation goes together with the high-@xmath2 superconductivity . in this paper , we calculate the region of the phase separation as a function of the lifshitz parameter . the poles of the green's function eq . give the two energy bands of our model . the lifshitz parameter @xmath64 determines how far the fermi level @xmath23 lies from the bottom @xmath36 of the narrow band @xmath13 ( see fig . [ figbands ] ) . for @xmath65 , charge carriers of the @xmath13 type exist in the system . at fixed doping level @xmath66 , the occupation numbers @xmath67 and @xmath68 depend on the value of @xmath35 . the dependence of the filling of bands @xmath12 and @xmath13 on the lifshitz parameter is non - trivial for strongly correlated bands because the widths of these bands , in turn , depend on the fillings @xmath67 and @xmath68 . we calculate the dependence of @xmath67 and @xmath68 on the lifshitz parameter @xmath35 according to the approach developed in refs . . the obtained curves for three different doping levels @xmath69 are shown in fig . [ fig1 ] . these dependences are qualitatively similar . electrons appear in band @xmath13 if @xmath65 . simultaneously , the number of electrons in the @xmath12 band starts to decrease , and it goes to zero at some critical value of the lifshitz parameter . figure caption : the distance between the fermi level and the bottom of the upper narrow band ( lifshitz parameter ) versus the shift @xmath15 between the two bands at different doping @xmath70 . in obtaining the above results , we postulated that the ground state of the system is homogeneous . the analysis performed in refs . shows , however , that this is not so in the general case . indeed , the energy of the system in the homogeneous state , @xmath71 , is the sum of electron energies in all filled bands . we can write @xmath71 in the form @xcite @xmath72 . the analysis of these equations reveals that within a certain @xmath21 range the system compressibility is negative , @xmath73 , @xcite which means that the charge carriers can form two phases with different electron concentrations . the electronic phase separation occurs in a wide range of model parameters and doping levels . at fixed doping , the phase - separated state is the ground state of the system if the lifshitz parameter lies within definite limits @xmath74 ( see vertical lines in figs . [ fig1]a - c ) . the separated phases are @xmath75 , with total ( @xmath12 and @xmath13 ) electron concentration @xmath76 , and @xmath77 , having a different electron concentration @xmath78 .
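the statement that a negative compressibility implies an instability toward a two - phase mixture can be made concrete with a maxwell ( lever - rule ) construction , which is also the energy minimization described in the next paragraph . the following is a minimal sketch in python ; the function e_hom below is a toy energy density chosen only to have a concave region , not the actual two - band hubbard energy @xmath71 :

```python
# Minimal sketch of the Maxwell (lever-rule) construction used to decide
# whether a homogeneous state of density n is unstable toward phase
# separation.  E_hom(n) is a TOY energy density with a concave
# (negative-compressibility) region; it stands in for the model energy
# only to illustrate the procedure: minimize p*E(n_a) + (1-p)*E(n_b)
# subject to p*n_a + (1-p)*n_b = n.
import numpy as np

def E_hom(n):
    # toy homogeneous energy density (illustrative, not from the paper)
    return 0.5 * n**2 - 0.8 * n**3 + 0.45 * n**4

def phase_separated_state(n, n_grid):
    """Return (E_ps, n_a, n_b, p): lowest two-phase energy at total density n."""
    best = (float(E_hom(n)), n, n, 1.0)      # start from the homogeneous state
    E_grid = E_hom(n_grid)
    for i, na in enumerate(n_grid):
        for j in range(i + 1, len(n_grid)):
            nb = n_grid[j]
            if not (na < n < nb):
                continue
            p = (nb - n) / (nb - na)         # lever rule: volume fraction of phase A
            E_ps = p * E_grid[i] + (1.0 - p) * E_grid[j]
            if E_ps < best[0]:
                best = (E_ps, na, nb, p)
    return best

n_grid = np.linspace(0.0, 1.0, 401)
for n in (0.2, 0.45, 0.7):
    E_ps, na, nb, p = phase_separated_state(n, n_grid)
    separated = E_ps < E_hom(n) - 1e-12
    print(f"n={n:.2f}: separated={separated}, n_a={na:.3f}, n_b={nb:.3f}, p={p:.2f}")
```

whenever the lever - rule energy lies below e_hom(n) , the homogeneous state at density n is unstable ; the optimal ( n_a , n_b , p ) then play the role of the coexisting concentrations and the volume fraction discussed below .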
for the phase @xmath12 ( @xmath13 ) the electrons of the @xmath12 ( @xmath13 ) type are dominant , that is , @xmath79 ( @xmath80 ) . the volume fraction @xmath81 of the phase @xmath75 , as well as the concentrations @xmath76 and @xmath78 , can be found by the minimization of the system's energy , @xmath82 , with the condition @xmath83 . the value of @xmath81 decreases from @xmath84 down to zero for @xmath35 changing from @xmath85 to @xmath86 , as shown in fig . [ fig1 ] . the lifshitz parameter depends both on the doping @xmath69 ( via the position of the fermi level ) and on the energy shift between the centers of the two bands @xmath15 . at fixed doping level , there is a one - to - one correspondence between @xmath15 and @xmath35 . typical curves @xmath87 are shown in fig . [ fig2 ] for different @xmath69 . the phase separation exists in the region restricted by the two black dotted curves . in ref . , the phase diagram of the two - band hubbard model in the plane ( @xmath21,@xmath15 ) has been obtained in the limit of large @xmath47 . using these results and the relation between @xmath35 and @xmath15 for different doping levels , we can rebuild this phase diagram in the plane ( @xmath35,@xmath69 ) . the result is shown in fig . the phase separation exists within the region restricted by the ( red ) solid contour . figure caption : the charge neutrality breaking in the phase - separated state substantially reduces this region ; it shrinks with the growth of @xmath88 , that is , with the growth of the long - range coulomb interaction . the phase separation discussed above gives rise to the breaking of the local charge neutrality since the charge carrier concentration is different in the different phases . thus , we should take into account an additional electrostatic contribution to the free energy , @xmath89 , which is governed by the long - range coulomb interaction ( this contribution has been neglected in the above discussion ) . this term was calculated in the wigner - seitz approximation in refs . . if @xmath90 , it can be written as @xmath91 , where @xmath92 , @xmath93 is the characteristic energy of the intersite coulomb interaction , @xmath94 is the elementary charge , @xmath95 is the long - range permittivity , and @xmath96 is the radius of the spherical droplet of the phase @xmath75 surrounded by the shell of the phase @xmath77 . in the case @xmath97 , we should replace @xmath98 and @xmath99 . the value of @xmath89 decreases with decreasing spatial scale of the inhomogeneous state . however , the smaller the characteristic size of the inhomogeneity , the higher the energy of the phase interface @xmath100 . we assumed above that the phase with the lower volume fraction @xmath81 forms spheres of radius @xmath96 located in the matrix of the other phase . in this case , the energy of the phase interface @xmath100 can be written as @xmath101 , where @xmath9 is the interface tension , which we calculate using the balian - bloch perturbative approach @xcite . such calculations are described in detail in ref . . minimizing @xmath102 with respect to @xmath96 , we obtain the characteristic scale of the phase - separated state and get a more realistic estimate for the free energy of the inhomogeneous system @xcite . the optimized value of @xmath103 is given by the following relation @xcite : @xmath104^{1/3} . as follows from this formula , the new contribution to the total free energy depends on the long - range coulomb repulsion parameter as @xmath105 .
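the scaling just quoted can be illustrated with a schematic one - line minimization . assume , per unit volume of the sample , a coulomb cost growing as the square of the droplet radius and a surface cost inversely proportional to it ; here c_1 and c_2 are geometry - dependent constants , u is the coulomb energy scale and \sigma the interface tension . this is a generic estimate in the spirit of the wigner - seitz treatment above , not the paper's exact expressions :

$$
E(R_d) \;\simeq\; c_1\, u\, R_d^{2} \;+\; c_2\, \frac{\sigma}{R_d} ,
\qquad
\frac{dE}{dR_d}=0
\;\;\Rightarrow\;\;
R_d^{*} = \left( \frac{c_2\,\sigma}{2\,c_1\,u} \right)^{1/3} ,
\qquad
E(R_d^{*}) \;\propto\; \sigma^{2/3}\, u^{1/3} .
$$

this reproduces both the 1/3 power appearing in the optimized @xmath103 and the 1/3 - power dependence of the coulomb contribution to the free energy noted above : a stronger long - range repulsion ( larger u ) forces smaller droplets and a larger energy penalty , which is what shrinks the phase - separation region discussed next .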
the region of parameters , where the phase separation is favorable , shrinks with the increase of @xmath103 , that is , with the growth of the long - range coulomb repulsion @xmath106 , and disappears if this value is above some threshold . in other words , the long - range coulomb interaction induces a shrinkage of the phase separation region together with the scale of the phase separation . hence , we can say that here we deal with a frustrated ( or arrested ) phase separation . note that the term " frustrated phase separation " was first introduced by emery and kivelson @xcite for strongly correlated electron systems and is rather widely used in this field ( see , e.g. , refs . ) , whereas the synonym of this term , namely " arrested phase separation " , has been used long before , but mainly in relation to colloidal solutions and gels ( see , e.g. , refs . ) , and now it is used in a more general context . @xcite we believe that the word " arrested " is more appropriate here and prefer to use it . the phase separation region is shown in fig . [ fig4 ] in the plane @xmath107 for different values of @xmath108 . the long - range coulomb repulsion significantly affects the phase separation region ( if @xmath109 for the chosen range of parameters ) . the area of the inhomogeneous state rapidly shrinks ( if @xmath110 in fig . [ fig4 ] ) and totally disappears if @xmath111 ( @xmath112 in fig . [ fig4 ] ) . the values @xmath113 in fig . [ fig4 ] are realistic for high-@xmath2 cuprates . @xcite the phase separation in the two - band model is possible only in the vicinity of the lifshitz transition , that is , in a definite range of the parameter @xmath35 . in fig . [ fig4 ] , the lines of constant @xmath35 are shown by dotted lines . the phase separation is evidently possible only if @xmath65 . in fig . [ fig3d ] , the region of the phase separation is shown in a three - dimensional phase diagram in the space ( @xmath35,@xmath69,@xmath114 ) . this figure summarizes the results of our calculations . the inhomogeneous state exists in a definite region of doping and lifshitz parameter . this region decreases with the increase of the long - range coulomb repulsion parameter @xmath108 and shrinks to zero if @xmath111 . we can say that the shrinkage of the phase - separation region allows the charge carrier densities in the phase - separated state to be closer to the line of the lifshitz transition . the next question is where high @xmath2 occurs in the two - band scenario . a detailed discussion of this issue is given in refs . and . as we move the bottom @xmath36 of the second band relative to the fermi level , we deal with the following two regimes . ( 1 ) the system is in the boson - fermion regime with a low @xmath2 , where a first " bcs condensate " resonates with a " bec condensate " , for negative lifshitz parameter , @xmath115 , where @xmath116 , @xmath117 is the cutoff energy for the pairing interaction , and @xmath37 is the width of the first band . ( 2 ) at the " shape resonance " in the optimum regime , where a first " bcs condensate " in an electron - rich band resonates with a second condensate at the " bec - bcs crossover " occurring for positive values of @xmath35 , the critical temperature starts increasing and attains a maximum at @xmath35 of the order of @xmath118 . now the problem is that in this range of the tuning of the chemical potential , the phase separation also occurs .
moreover , in the oxygen - doped system we have identified where the critical point for phase separation appears , and it is quite near the @xmath119 range . therefore , the distance in energy ( @xmath35 in our notation ) of the critical point from the band edge could be a measure of the unknown energy cutoff for the pairing interaction in cuprates . these ideas are illustrated by the figures presented in the previous section . the undoped state of the cuprates corresponds to one electron per site ( @xmath120 ) in the model used in ref . . the number of itinerant holes @xmath69 is related to @xmath21 as @xmath121 . in general , the relationship between @xmath21 and @xmath69 could be more complicated @xcite ; however , for the present considerations such corrections are not of principal importance . in conclusion , we can say that our simplified model provides a good illustration of the general idea that high-@xmath2 superconductivity is an inherent feature of " functional heterostructures at the atomic limit " made of atomic units , where four essential ingredients are well tuned . ( 1 ) two or more electronic components give multiple fermi surface spots with different symmetry so that ( a ) single - electron interband hopping is forbidden while ( b ) interband exchange - like pair transfer is allowed . ( 2 ) the fermi energy of one of the components is close to the band edge , so that the system is close to the 2.5 - order lifshitz ( metal - to - metal ) transition . ( 3 ) the lattice and electronic structure show the complex granular " superstripes " matter : a nanoscale phase separation made of superconducting puddles coexisting with normal stripes with charge order ( cdw ) and/or magnetic puddles with spin order ( sdw ) , which does not suppress but enhances the stability of the macroscopic quantum order . ( 4 ) intragrain high-@xmath2 superconductivity is controlled by the " shape resonances " between a first bcs condensate and a second condensate in the bec - bcs crossover . therefore , further essential details need to be investigated in the scenario of multi - condensate superconductivity in the regime of percolation superconductivity corresponding to the establishment of long - range coherence in scale - free networks . @xcite in this work , we have shown that the synthesis of a " two - band strongly correlated multi - condensate superconductor " , where a first bcs condensate in a large fermi surface coexists with a second condensate at the bec - bcs crossover in a newly appearing small fermi surface ( as in cuprates and iron - based superconductors ) @xcite , should also exhibit an intrinsic arrested nanoscale phase separation . in fact , this type of complex superconductivity appears in a two - band metal at a critical distance from the topological lifshitz transition . moreover , the control of the long - range coulomb interaction @xcite , determined by the screening in the different materials surrounding the metal units , is a key parameter needed to bring the system to a self - similar phase @xcite , which will also promote @xcite the high-@xmath2 superconductivity . the work was supported by superstripes institute , dutch fom and nwo foundations , and russian foundation for basic research , project nos . 12 - 02 - 00339 and 14 - 02 - 00276 . n.p . acknowledges support from the marie curie intra - european fellowship . bednorz j g and müller k a 1988 _ rev . phys . _ * 68 * 585 bianconi a , castellano a , de santis m , rudolf p , lagarde p , flank a m and marcelli a 1987 _ solid state commun .
_ * 63 * 1009 bianconi a , de santis m , di cicco a , flank a , fontaine a , lagarde p , katayama - yoshida h , kotani a and marcelli a 1988 _ phys . b _ * 38 * 7196 bianconi a , budnick j , chamberland b , clozza a , dartyge e , demazeau g , de santis m , flank a m , fontaine a , jegoudez j , lagarde p , lynds l l , michel c , otter f a , tolentino h , raveau b and revcolevschi a 1988 _ physica c _ * 153 - 155 * 113 bianconi a , de santis m , di cicco a , clozza a , congiu castellano a , della longa s , gargano a , delogu p , dikonimos m t , giorgi r , flank a m , fontaine a , lagarde p and marcelli a 1988 _ physica c _ * 153 - 155 * 115 pellegrin e , ncker , fink j , molodtsov s , gutirrez a , navas e , strebel o , hu z , domke m , kaindl g , uchida s , nakamura y , markl j , klauda m , saemann - ischenko g , krol a , peng j , li z and greene r 1993 _ phys . rev . b _ * 47 * 3354 gorkov l p and teitelbaum g b 2006 _ phys . rev . lett . _ * 97 * 247003 aruta c , ghiringhelli g , dallera c , fracassi f , medaglia p , tebano a , brookes n , braicovich l , balestrino g 2008 _ phys . rev . b _ * 78 * 205120 chen c c , sentef m , kung y f , jia c j , thomale r , moritz b , kampf a p and devereaux t p 2013 . _ phys . b _ * 87 * 165144 bianconi a 2013 _ nature phys . _ * 9 * 536 de mello e v l 2012 _ europhys . lett . _ * 98 * 57008 . de mello e v l and kasal r b 2012 _ physica c _ * 472 * 60 bianconi g 2013 _ europhys . lett . _ * 101 * 26003 bianconi g 2012 _ phys e _ * 85 * 061113 littlewood p 2011 _ nature mater . _ * 10 * 726 mller k a 2007 _ j. phys . : condens . matter _ * 19 * 251002 mller k a and bussmann - holder a , eds . 2005 _ superconductivity in complex systems _ ( springer , berlin / heidelberg ) kresin v , ovchinnikov y and wolf s 2006 _ phys . rep . _ * 431 * 231 bianconi a and missori m 1994 _ solid state commun . _ * 91 * 287 bianconi a , missori m , oyanagi h , yamaguchi h , nishiara y and della longa s 1995 _ europhys . lett . _ * 31 * 411 lanzara a , saini n , brunelli m , valletta a and bianconi a 1997 _ j. supercond . * 10 * 319 bianconi a , saini n , lanzara a , missori m , rossetti t , oyanagi h , yamaguchi h , oka k and ito t 1996 _ phys . _ _ 76 _ 3412 bianconi a , di castro d , bianconi g , pifferi a , saini n l , chou f c , johnston d c , colapietro m 2000 _ physica c _ * 341 - 348 * 1719 bianconi a 2000 _ int . j. mod . b _ * 14 * 3289 fratini m , poccia n and bianconi a 2008 _ j. phys . : conf . ser . _ * 108 * 012036 geballe t h and marezio m 2009 _ physica c _ * 469 * 680 poccia n , fratini m , ricci a , campi g , barba l , vittorini - orgeas a , bianconi g , aeppli g and bianconi a 2011 _ nature mater . _ * 10 * 733 poccia n , chorro m , ricci a , xu w , marcelli a , campi g and bianconi a 2014 _ appl . lett . _ * 104 * 221903 caivano r , fratini m , poccia n , ricci a , puri a , ren z - a , dong x - l , yang j , lu w , zhao z - x , barba l and bianconi a 2009 _ supercond . _ * 22 * 014004 ricci a , poccia n , campi g , joseph b , arrighetti g , barba l , reynolds m , burghammer m , takeya h , mizuguchi y , takano y , colapietro m , saini n l and bianconi a 2011 _ phys . b _ * 84 * 060511 bendele m , barinov a , joseph b , innocenti d , iadecola a , bianconi a , takeya h , mizuguchi y , takano y , noji t , hatakeda t , koike y , horio m , fujimori a , ootsuki d , mizokawa t and saini n rep . 
_ * 4 * 5592 fratini m , poccia n , ricci a , campi g , burghammer m , aeppli g and bianconi a 2010 _ nature _ * 466 * 841 poccia n , ricci a , campi g , fratini m , puri a , di gioacchino d , marcelli a , reynolds m , burghammer m , saini n l , aeppli g and bianconi a 2012 _ proc . _ * 109 * 15685 ricci a , poccia n , campi g , coneri f , caporale a s , innocenti d , burghammer m , zimmermann m and bianconi a 2013 _ sci . rep . _ * 3 * 2383 hirsch j and marsiglio f 1991 _ phys . b _ * 43 * 424 kresin v and wolf s 1992 _ phys . rev . b _ * 46 * 6458 bussmann - holder a , genzel l , simon a and bishop a r 1993 _ z. phys . b _ * 91 * 271 ; ibid . * 92 * 149 golubov a a , dolgov o v , maksimov e g , mazin i i and shulga s v 1994 _ physica c _ * 235 - 240 * 2383 yamaji k , shimoi y and yanagisawa t 1994 _ physica c _ * 235 - 240 * 2221 eskes h and sawatzky g 1991 _ phys . rev . b _ * 44 * 9656 bang y , kotliar g , raimondi r , castellani c and grilli m 1993 _ phys . rev . b _ * 47 * 3323 wagner j , hanke w and scalapino d 1991 _ phys . rev . b _ * 43 * 10517 bulut n , scalapino d and scalettar r 1992 _ phys rev . b _ * 45 * 5577 yu r , trinh k t , moreo a , daghofer m , riera j a , haas s and dagotto e 2009 _ phys . b _ * 79 * 104510 maier t 2012 _ j. supercond . magn . _ * 25 * 1307 su s - q , summers m s and maier t a 2012 _ aps march meeting abstracts _ b23.00005 grilli m , raimondi r , castellani c , di castro c and kotliar g 1991 _ phys . rev . _ * 67 * 259 lorenzana j , castellani c and di castro c 2002 _ europhys . _ * 57 * 704 kugel k i , rakhmanov a l and sboychakov a o 2005 _ phys . lett . _ * 95 * 267210 sboychakov a o , kugel k i and rakhmanov a l 2007 _ phys . * 76 * 195113 kaganov m i and lifshitz i m 1960 _ uspekhi fiz . nauk _ * 129 * 487 [ _ sov . - uspekhi _ * 2 * 831 ] novikov s p and maltsev a y 1998 _ uspekhi fiz . * 168 * 249 [ _ phys . - uspekhi _ * 41 * 231 ] bianconi a and missori m 1994 _ j. phys . i ( france ) _ * 4 * 361 bianconi a 1994 _ solid state commun . _ * 89 * 933 bianconi a 1994 _ solid state commun . _ * 91 * 1 bianconi a , valletta , a , perali a and saini n l 1997 _ solid state commun . _ * 102 * 369 bianconi a 2005 _ j , supercond . * 18 * 625 innocenti d , poccia n , ricci a , valletta a , caprara s , perali a and bianconi a 2010 _ phys . b _ * 82 * 184528 innocenti d , caprara s , poccia n , ricci a , valletta a and bianconi a 2011 _ supercond . * 24 * 015012 perali a , innocenti d , valletta a and bianconi a 2012 _ supercond . techn . _ * 25 * 124002 liu c , palczewski a d , dhaka r s , kondo t , fernandes r m , mun e d , hodovanets h , thaler a n , schmalian j , budko s l , canfield p c and kaminski a 2011 _ phys . b _ * 84 * 020509 borisenko s v , zabolotnyy v b , kordyuk a a , evtushinsky d v , kim t k , morozov i v , follath r and bchner b 2012 _ symmetry _ * 4 * 2514 kordyuk a a 2012 _ fiz . _ * 38 * , 1119 [ _ low temp ( kharkov ) _ * 38 * 888 ] kordyuk a a , zabolotnyy v b , evtushinsky d v , yaresko a n , bchner b and borisenko s v 2013 _ j. supercond . nov . _ * 26 * 2837 ideta s , yoshida t , nishi i , fujimori a , kotani y , ono k , nakashima y , yamaichi s , sasagawa t , nakajima m , kihou k , tomioka y , lee c , iyo a , eisaki h , ito t , uchida s and arita r 2013 _ phys . rev . lett . _ * 110 * 107007 ideta s , yoshida t , nakajima m , malaeb w , kito h , eisaki h , iyo a , tomioka y , ito t , kihou k , lee c h , kotani y , ono k , mo s k , hussain z , shen z x , harima h , uchida s and fujimori a 2014 _ phys . 
b _ * 89 * 195138 lalibert f , chang j , doiron - leyraud n , hassinger e , daou r , rondeau m , ramshaw b j , liang r , bonn d a , hardy w n , pyon s , takayama t , takagi h , sheikin i , malone l , proust c , behnia k and taillefer l 2011 _ nature commun . _ * 2 * 432 leboeuf d , doiron - leyraud n , vignolle b , sutherland m , ramshaw b j , levallois j , daou r , lalibert f , cyr - choinire o , chang j , jo y j , balicas l , liang r , bonn d a , hardy w n , proust c , taillefer l , 2011 _ phys . rev . b _ * 83 * 054506 bianconi a , saini n l , agrestini s , castro d d and bianconi g 2000 . b _ * 14 * 3342 bianconi a , agrestini s , bianconi g , di castro d and saini n l 2001 _ j. alloys comp . _ * 317 - 318 * 537 poccia n , ricci a and bianconi a 2010 _ adv . matter phys . * 2010 * 261849 kugel k i , rakhmanov a l , sboychakov a o , poccia n and bianconi a 2008 _ phys . b _ * 78 * 165124 kugel k i , rakhmanov a l , sboychakov a o , kusmartsev f v , poccia n and bianconi a 2009 _ supercond . * 22 * 014007 sboychakov a o 2013 _ physica b _ * 417 * 49 emery v j and kivelson s a 1993 _ physica c _ * 209 * 597 jamei r , kivelson s and spivak b 2005 _ phys . lett . _ * 94 * 056805 ortix c , lorenzana j and di castro c 2006 _ phys . rev . b _ * 73 * 245117 halperin a 1991 _ macromolecules _ * 24 * 1418 foffi g , de michele c , sciortino f and tartaglia p 2005 _ j. chem . phys . _ * 122 * 224903 zaccarelli e , lu p j , ciulla f , weitz d a and sciortino f 2008 _ j. phys . : condens . matter _ * 20 * 494242 poccia n , campi g , fratini m , ricci a , saini n l and bianconi a 2011 _ phys . b _ * 84 * 100504 poccia n , campi g , ricci a , caporale a s , di cola e , hawkins t a and bianconi a 2014 _ sci . rep . _ * 4 * 05430
the arrested nanoscale phase separation in a two - band hubbard model for strongly correlated charge carriers is shown to occur in a particular range in the vicinity of the topological lifshitz transition , where the fermi energy crosses the bottom of the narrow band and a new sheet of the fermi surface related to the charge carriers of the second band comes into play . we determine the phase separation diagram of this two - band hubbard model as a function of two variables , the charge carrier density and the energy shift between the chemical potential and the bottom of the second band . in this phase diagram , we first determine a line of quantum critical points for the lifshitz transition and find criteria for the electronic phase separation resulting in an inhomogeneous charge distribution . finally , we identify the critical point , in the presence of a variable long - range coulomb interaction , where the scale invariance of the coexisting phases with different charge densities appears . we argue that this point is relevant for the regime of scale invariance of the nanoscale phase separation in cuprates , as first observed in la@xmath0cuo@xmath1 .
despite significant advances in early detection and treatment of many malignancies , pancreatic cancer remains the most lethal form of cancer , with an overall 5-year survival rate of approximately 7% . this dismal survival rate is attributed to several factors , including the lack of effective treatment regimens and inefficient screening technologies for detecting the disease during early stages . however , the overall 5-year survival rate is significantly improved ( 26% ) for patients diagnosed during initial disease stages , when the primary tumor is localized with no metastatic lesions . in addition to inefficient screening techniques , treatment of pancreatic cancer remains elusive as these highly heterogeneous and aggressive tumors swiftly develop resistance to available chemotherapeutics and radiation therapy . while surgical resection offers the best survival rate and the only potential cure , only 15 - 20% of patients are candidates for surgical intervention at the time of diagnosis . for patients presenting with advanced stage disease , treatment options are limited to chemotherapy and radiation therapy , both minimally effective . in 2015 , an estimated 48,960 patients will be diagnosed with pancreatic cancer in the united states , along with 40,560 attributed deaths . for comparison , pancreatic cancer is the fourth leading cause of cancer - related death worldwide , yet the pancreatic cancer action network predicts that pancreatic malignancies will become the second leading cause of cancer - related death by 2020 . most patients are asymptomatic during initial disease stages , contributing to the high percentage of patients diagnosed with advanced disease . currently , there is active research in discovering novel methods for enhancing the early detection of pancreatic malignancies , yet no reliable tools exist at this time . screening of high - risk patients ( e.g. , cigarette smokers , family history of pancreatic cancer , personal history of chronic pancreatitis ) could potentially lower the number of late diagnoses , yet high cost and limited known risk factors have hindered this approach . the purpose of this review article is to examine the recent advancements in molecular imaging of pancreatic cancer for early disease detection and therapeutic monitoring with antibody - based imaging agents . effective imaging techniques facilitate early detection of malignancies and allow for noninvasive monitoring of therapeutic response in real time . thus , there is a dire need for novel imaging contrast agents in the clinic . researchers have applied several strategies for the development of new imaging agents , effectively targeting tumor tissue using small proteins , peptides , viruses , and antibodies , among other targeting entities . historically , the first radiolabeled antibody utilized for cancer imaging was approved by the fda in 1993 for imaging of prostate cancer . highly specific imaging contrast agents are required for noninvasive visualization of biomolecular processes through molecular imaging . traditionally , ex vivo and in vitro techniques have been utilized for assessing protein expression , yet molecular imaging can provide similar details without requiring animal euthanasia or complex cell - based studies . while researchers have designed hundreds of imaging contrast agents for both cancer diagnostics and therapeutic surveillance , many of these novel probes are limited by suboptimal tumor accumulation .
there are several properties that make antibodies suitable molecular imaging probe candidates , including their high specificity for their target antigens , potentially low immunogenicity , and high clinical relevance . currently , there are several fda - approved therapeutic antibodies for cancer treatment , and several other antibody - based treatments are seeking approval . also , antibodies are less likely to cause the off - target toxicity often associated with common chemotherapeutics , due to their high specificity for the protein of interest . while full antibodies are commonly adapted as molecular imaging probes , many studies have noted long blood circulation times and slow tumor accumulation as limiting factors in their potential clinical application . the serum half - life of different immunoglobulin isotypes ranges from 2.5 days for ige to 23 days for igg in humans . for this reason , construction of imaging probes using smaller antibody fragments ( e.g. , fab , scfv , and f(ab)2 ) has become common practice ( figure 1 ) . in addition , combinations of smaller antibody fragments have been constructed for optimized pharmacokinetic profiles . these include diabodies ( divalent sc(fv)2 or trivalent [ sc(fv)2]2 ) , minibodies that consist of two scfv fragments genetically linked to a ch3 domain , and triabodies created through genetically linking two scfv to an fc fragment . antibody fragments often display enhanced pharmacokinetic profiles in comparison to full antibodies , attributed to their shortened serum half - life and faster tumor accumulation . a previous study using a murine antibody clearly displayed the different pharmacokinetic profiles of antibody fragments and full antibodies . it was shown that fab ( 0.2 days ) cleared circulation faster than f(ab)2 ( 0.5 days ) , which were both significantly faster than the whole antibody ( 8.5 days ) . in humans , whole antibodies display circulation times ranging from days to weeks , resulting in optimal tumor accumulation between 2 and 5 days postinjection . while whole antibodies normally result in higher tumor accumulation as compared to fragmented antibodies , the time frame is not optimal for clinical purposes , as nuclear imaging would require multiple patient visits . in general , fragmented antibodies display shorter blood circulation times , with maximum tumor accumulation normally occurring between 2 and 24 h. lastly , several researchers have investigated methods for improving the pharmacokinetics of antibody - based imaging agents , including the development of recombinant bispecific antibody fusion molecules . these imaging agents contain an antibody fragment fused to a protein ( e.g. , albumin ) or two antibody fragments chemically conjugated together . these antibody constructs can display prolonged circulation times in vivo , increased accumulation in tumor tissue , and potentially decreased immunogenicity . figure 1 caption : construction of an antibody - based molecular imaging probe requires a contrast agent specific for the imaging modality . some examples of antibody fragments include f(ab)2 , fab , single - chain variable fragment ( scfv ) , and nanobody ( sdab ) . radioisotopes are employed for positron emission tomography ( pet ) and single - photon emission computed tomography ( spect ) imaging . fluorescent dyes and quantum dots are utilized for optical and photoacoustic ( pa ) imaging . magnetic ( e.g. , iron oxide ) nanoparticles are commonly used in magnetic resonance imaging ( mri ) .
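the practical consequence of the different half - lives quoted above can be illustrated with a simple mono - exponential clearance estimate ( real antibody pharmacokinetics are usually multi - exponential , so this is only a rough sketch using the quoted values ) :

```python
# Rough illustration of why antibody fragments allow earlier imaging:
# fraction of injected agent still circulating, assuming simple
# mono-exponential clearance (real pharmacokinetics are more complex).
import math

half_lives_days = {
    "fab": 0.2,        # serum half-lives quoted above for a murine antibody study
    "f(ab)2": 0.5,
    "whole igg": 8.5,
}

for agent, t_half in half_lives_days.items():
    for t in (1, 2, 5):  # days post-injection
        fraction = math.exp(-math.log(2) * t / t_half)
        print(f"{agent:10s} day {t}: {fraction:6.1%} still in circulation")
```

a whole igg still circulating at a high level days after injection keeps the blood - pool background high , which is why fragment - based tracers reach useful tumor - to - blood contrast at much earlier imaging time points .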
several factors regarding the type of antibody ( i.e. , monoclonal , polyclonal , bispecific ) and the antibody class ( i.e. , igg1 , igg2 ) should be considered before designing an antibody - based imaging agent . monoclonal antibodies are more commonly employed as molecular imaging agents as they are highly monospecific , recognizing a single epitope of an antigen . in comparison , polyclonal antibodies are more rapidly produced , yet lack the purity levels obtained with monoclonal antibodies . also , polyclonal antibodies do not meet the regulatory guidelines set forth for human use . several other molecular constructs of antibodies are used to enhance the pharmacokinetic properties of the antibodies in vivo , including bispecific antibodies , tetrabodies , and diabodies . also , the class of antibody can alter its biodistribution and metabolism in vivo . several characteristics must be considered when designing novel antibody - based imaging agents . first , the antibody should be human monoclonal or humanized to reduce possible immunogenicity . this is accomplished through the transfer of complementarity - determining region residues from the donor mouse antibody to the human antibody template . the binding properties of humanized antibodies are determined through affinity measurements , competitive binding assays , and biosensor analysis methods . antibodies that fail to meet the required binding properties are modified or eliminated , while antibodies that display unaltered binding properties are examined for their biological activity . second , the antibody should display optimal kinetic profiles for targeting and clearance . this may be achieved by using fragmented antibodies or through enhanced neonatal fc receptor ( fcrn ) binding . antibody stability is often modified through stability engineering of constant or variable domains and the addition of charged fusion tags . lastly , the antibody should be bivalent to assist in tissue targeting and retention , if possible . while the characteristics listed above specifically apply to antibody - based imaging agents , some features of optimized molecular imaging probes include rapid clearance from the blood to reduce background signal , high tissue permeability , increased selectivity and specificity for targeted tissues , fast clearance from nontargeted tissues , high reproducibility for clinical purposes , and simple pharmacokinetic profiles to allow for quantitative modeling . in addition to antibodies , there are several other classes of ligands commonly employed for targeting cancer . some examples include viruses , peptides , low molecular weight proteins , and nanoparticles . for example , several cytokines have been investigated as potential imaging agents , as they are small and undergo rapid clearance from circulation . also , peptides and aptamers are commonly employed as targeting ligands for imaging agents , yet glomerular transit and proteolysis often limit their use in preclinical applications . most other targeting ligands are constrained by lower binding affinity and specificity , in comparison to antibodies . lastly , antibody - based imaging agents offer another advantage , as they can be used to help deliver cytotoxic radionuclides to malignancies . currently , there are over 35 antibody - based treatment options approved for use in various cancer types , with a market worth around 20 - 30 billion dollars each year .
the safety profiles of these antibodies have been evaluated at pharmacological doses by the food and drug administration ( fda ) . for this reason , fda - approved antibodies are expected to function as suitable imaging agents , as the doses required for molecular imaging are much lower than therapeutic doses . molecular imaging is the noninvasive examination of cellular function and the monitoring of molecular processes in vivo using specialized imaging agents . nuclear medicine evolved during the late 1950s with a predominant shift from anatomical imaging , using plain films and scintigraphy , to functional and hybrid imaging modalities . for molecular imaging , specific molecular pathways are targeted in vivo . this allows for the noninvasive characterization and monitoring of disease progression , investigation of cellular processes occurring in real time , assessment of drug / receptor interactions , and evaluation of the biodistribution of various compounds . also , molecular imaging may lessen the burden of identifying patients that may benefit from specific antibody treatment regimens , as invasive biopsies are currently used to identify such patients . molecular imaging requires the use of specialized imaging contrast agents with enhanced targeting capabilities to ensure optimal tissue contrast . there are two key components of molecular imaging constructs : a contrast agent for visualization and a tissue - specific ligand for actively targeting the tumor or diseased tissue of interest ( figure 1 ) . the composition of contrast agents varies based upon the imaging modality , yet some common examples include positron - emitting isotopes , fluorescent dyes , and various nanoparticle platforms . in most situations , these imaging agents are targeted to cell surface receptors upregulated in the disease of interest . in this review , we discuss the molecular imaging of pancreatic malignancies with antibody - based imaging constructs ( e.g. , radiolabeled antibodies , antibody - targeted nanoparticles , and fluorescent - labeled antibodies ) . for example , mesothelin is a membrane glycoprotein expressed in more than 90% of pancreatic cancers . cholecystokinin , gastrin , and progastrin have also been shown to be expressed in more than 90% of pancreatic cancers . pd - l1 is another target recently explored for imaging purposes , as it is highly expressed in pancreatic tumor cells and the tumor microenvironment . some imaging agents have been targeted to signaling pathways in the epithelial layer of pancreatic cancer , including the epidermal growth factor receptor ( egfr ) and the insulin - like growth factor 1 receptor ( igf1r ) . targeting of the tumor stroma has also been accomplished through the vascular endothelial growth factor receptor ( vegfr ) , cyclooxygenase-2 ( cox-2 ) , matrix metalloproteinases ( mmps ) , and hedgehog signaling ( through the tumor suppressor patched and the oncogenic protein smoothened ) . other potential targets previously investigated in pancreatic cancer include the urokinase - type plasminogen activator receptor ( upar ) , plectin-1 , and muc1 . several of these targets and others will be discussed in more detail later in this section . for more information regarding potential biological targets in pancreatic cancer , readers are directed to dedicated reviews . imaging of pancreatic cancer is crucial for improving patient survival , as most patients are diagnosed after the disease has metastasized to other organs .
while antibody - based imaging agents may enhance early detection , their use in identifying patients more likely to respond to certain therapeutics and in monitoring treatment response will significantly enhance the current survival rate . molecular imaging utilizes specialized instrumentation for the diagnosis and therapeutic monitoring of disease progression , including pet , single - photon emission computed tomography ( spect ) , mri , optical imaging ( e.g. , bioluminescence and fluorescence ) , and photoacoustic ( pa ) imaging ( figure 2 ) . while this review focuses on detection of pancreatic malignancies , these versatile imaging modalities are commonly utilized for detection of most solid tumors and other diseases . figure 2 caption : five molecular imaging modalities employed for cancer screening and therapeutic monitoring include positron emission tomography ( pet ) , single - photon emission computed tomography ( spect ) , magnetic resonance ( mr ) , optical , and photoacoustic ( pa ) imaging . reprinted with permission from refs ( 208 - 211 ) . copyright 2014 macmillan publishers limited , 2014 american chemical society , 2014 macmillan publishers limited , and 2011 american society of gene & cell therapy . in pet imaging , the administered contrast agent is radiolabeled with an isotope that decays by positron emission . pet detection is based on the coincidence detection of two antiparallel 511 kev gamma photons resulting from positron annihilation . a tomographic reconstruction of all detected lines of response is then performed to obtain an image of the three - dimensional distribution of the tracer . pet imaging provides high sensitivity and excellent tissue penetration , which allows for quantitative detection of pet tracers in the picomolar range . several positron - emitting isotopes have been evaluated as potential radiosynthons for imaging pancreatic malignancies , including o , c , f , cu , cu , and zr . pet tracers are typically generated through covalent attachment of the isotope to an electrophilic group present in the biological molecule of interest , or via coordination with a suitable chelator . targeting of cell surface receptors upregulated in cancer remains the most promising strategy for designing molecular imaging probes . overexpression of grp78 is linked to increased tumor growth , rapid drug resistance , and the development of highly metastatic disease . while grp78 is overexpressed in most pancreatic cancers , it is expressed at low levels in normal pancreatic tissue and precancerous pancreatic lesions . the novel antibody ( mab159 ) was conjugated to cu using the chelator 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid ( dota ) . mab159 was raised against glucose - regulated protein 78 ( grp78 , also known as the immunoglobulin heavy - chain binding protein bip ) and used for specific targeting of grp78-expressing bxpc-3 pancreatic subcutaneous xenograft tumors . peak intratumoral accumulation of 18.3 ± 1.0% id / g was obtained at 48 h postinjection ( figure 3 ) , as shown by pet imaging ( figure 3a ) and biodistribution ( figure 3b ) . for comparison , nontargeted radiolabeled human igg was injected as a control and displayed a tumor accumulation of only 7.5 ± 0.7% id / g ( figure 3c ) . similar upregulated proteins have been investigated as potential targets for pet imaging of therapeutic response . for example , mesothelin is a small glycoprotein highly expressed in the majority of pancreatic adenocarcinomas , yet not expressed in most precancerous lesions . kobayashi et al .
developed an anti - mesothelin antibody ( 11 - 25 ) as a novel agent for pet imaging of mice bearing subcutaneous xenograft tumors derived from three pancreatic cancer cell lines ( bxpc-3 , cfpac-1 , and panc-1 ) with varying levels of mesothelin expression . the mab 11 - 25 was produced in hybridoma cells previously generated by immunizing mice with a recombinant mesothelin protein . cell binding assays showed that the dota-11 - 25 mab and the native antibody displayed similar antigen reactivity , and pet imaging revealed that cu - dota-11 - 25 mab accumulated to higher levels in mesothelin - expressing bxpc-3 and cfpac-1 subcutaneous xenograft tumors . figure 3 caption : ( a ) pet images were decay corrected , with 3 time points shown at 1 , 17 , and 48 h postinjection of cu - dota - mab159 ( targeting grp78 ) or cu - dota - igg ( control ) . ( b ) biodistribution of cu - dota - mab159 and cu - dota - igg , through direct tissue sampling , at 48 h postinjection . ( c ) pet quantification of cu - dota - mab159 and cu - dota - igg in major organs at three imaging time points ( 1 , 17 , and 48 h ) . zr is a relatively new radionuclide that has been employed for pet imaging of multiple cancers , as the isotope has become widely accessible during the past decade with several available chelating agents . this unique isotope was utilized by sugyo et al . to image the transferrin receptor in transferrin receptor - positive tumor - bearing mice using the monoclonal antibody tsp - a01 . the antibody was radiolabeled with zr , using p - isothiocyanatobenzyl - desferrioxamine ( dfo ) as the chelator , and the biodistribution and specificity were determined by pet . the transferrin receptor - positive subcutaneous xenograft tumor model ( miapaca-2 ) was accurately identified using the zr - labeled antibody , with a peak uptake of 12.5 ± 2.3% id / g obtained at 2 days postinjection . this study demonstrated the potential use of this imaging probe for selecting patients that may benefit from anti - transferrin therapy . another study employed zr for imaging of cd147-expressing pancreatic tumors in tumor - bearing mice using an antibody targeting cd147 called 059 - 053 . cd147 , also called emmprin , is an immunoglobulin - superfamily transmembrane protein highly expressed in malignant pancreatic cancer and expressed at low levels in precancerous lesions and pancreatitis . it is involved in lymphocyte activation , induction of monocarboxylate transporters , and induction of several metalloproteinases ( mmps ) . the antibody 059 - 053 was obtained from a large - scale human antibody library constructed using phage display and was shown to inhibit the proliferation of pancreatic cancer cells . miapaca-2 subcutaneous xenograft tumors , shown to highly express cd147 , displayed an uptake of 11.0 ± 1.3% id / g at 24 h postinjection , with a peak uptake of 16.9 ± 3.2% id / g occurring 6 days postinjection . also , an orthotopic mouse model of miapaca-2 was established and displayed an uptake of 8.6% id / g at 6 days postinjection . antibodies are widely employed for the treatment of several other types of cancer and diseases . these fda - approved antibodies are excellent candidates for molecular imaging , as they may be used for concurrent treatment and imaging of disease . for example , boyle et al . examined the potential utilization of panitumumab , an fda - approved human anti - egfr antibody , for imaging of patient - derived pancreatic cancer xenograft and orthotopic tumors .
pancreatic cancer , precancerous lesions , and chronic pancreatitis often overexpress egfr , making it a suitable marker for early disease detection and therapeutic monitoring . to accomplish this task , f(ab)2 fragments of panitumumab were produced through proteolytic digestion before labeling with cu . at 48 h postinjection , tumor uptake values of cu - nota - panitumumab - f(ab)2 were 12.0 ± 0.9% id / g and 11.8 ± 0.9% id / g in xenograft and orthotopic tumor models , respectively . in another study , viola - villegas et al . modified an antibody targeting the tumor - associated cancer antigen 19 - 9 ( ca19.9 ) , known as 5b1 . the antibody 5b1 was previously generated and characterized from blood lymphocytes of patients immunized with the sle - klh vaccine . in this study , 5b1 was radiolabeled with zr using dfo as the chelator and evaluated for the detection and staging of pancreatic cancer . pet imaging revealed that zr-5b1 displayed significantly higher uptake in orthotopically implanted bxpc-3 tumors in comparison to f - fdg , with tumor uptake values of 30.7 ± 6.6% id / g and 4.8 ± 1.3% id / g , respectively , at 48 h postinjection . also , a diabody of anti - ca19.9 was engineered by girgis et al . from the variable regions of the monoclonal murine antibody 116-ns-19 - 9 using the ns116.19.9 hybridoma cell line . the diabody was radiolabeled with i , and tumor uptake was compared between pancreatic subcutaneous xenograft tumors expressing low ( miapaca-2 in the right shoulder ) and high levels ( capan-2 or bxpc-3 in the left shoulder ) of ca19.9 . since the long serum half - life of full antibodies can potentially hinder the contrast between tumor and blood pools , this study employed a smaller antibody fragment ( 55 kda ) . the diabody displayed enhanced tumor accumulation , with positive - to - negative tumor ratios of 11:1 and 6:1 for bxpc-3 and capan-2 tumors at 20 h postinjection , respectively . also , there was 5-fold more radioactivity in the tumor as compared to blood , which was adequate contrast for delineation between tumor tissue and background . while ca19.9 is overexpressed in pancreatic cancer and some precursor pancreatic lesions , overexpression in non - neoplastic conditions , ranging from benign obstructive jaundice to chronic pancreatitis , has limited its use as a diagnostic imaging marker . another group utilized the upregulation of tissue factor in pancreatic cancer as a potential target for molecular imaging . tissue factor is a transmembrane glycoprotein that activates the clotting cascade in nondiseased states , yet is known to promote thrombosis , tumor growth , and angiogenesis in cancerous tissue . tissue factor can be targeted for early detection of pancreatic lesions and monitoring of therapeutic response , as it is highly expressed in precancerous pancreatic lesions , including 77% of pancreatic intraepithelial neoplasias ( panins ) . targeting of tissue factor was accomplished using alt-836 , a chimeric monoclonal antibody developed by altor biosciences , which is currently in human clinical trials ( nct01325558 ) . in bxpc-3-derived subcutaneous xenograft tumor - bearing mice , tumor accumulation of cu - nota - alt-836 reached a peak of 16.5 ± 2.6% id / g at 48 h postinjection . as stated by the authors , this was the first utilization of molecular imaging for visualizing tissue factor expression in vivo .
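tracer uptake throughout this section is reported as the percentage of injected dose per gram of tissue ( % id / g ) . as a point of reference only , the standard calculation from gamma - counter data can be sketched as follows ; the numbers and the 64cu half - life used here are illustrative and are not taken from any of the studies discussed :

```python
# Illustrative %ID/g calculation from ex vivo gamma-counting data.
# All sample numbers are hypothetical; decay correction uses the physical
# half-life of the radionuclide (64Cu shown here as an example).
import math

T_HALF_CU64_H = 12.7          # physical half-life of 64Cu in hours

def decay_correct(counts, elapsed_h, t_half_h=T_HALF_CU64_H):
    """Correct measured counts back to the time of injection."""
    return counts * math.exp(math.log(2) * elapsed_h / t_half_h)

def percent_id_per_gram(tissue_counts, tissue_mass_g,
                        injected_counts, elapsed_h):
    corrected = decay_correct(tissue_counts, elapsed_h)
    return 100.0 * (corrected / injected_counts) / tissue_mass_g

# hypothetical example: a 0.25 g tumor sample counted 48 h after injection
uptake = percent_id_per_gram(tissue_counts=1.6e4, tissue_mass_g=0.25,
                             injected_counts=6.0e6, elapsed_h=48)
print(f"tumor uptake: {uptake:.1f} %ID/g")
```

in practice the injected activity is measured with the same counter geometry ( e.g. , via a counted standard ) so that counts cancel consistently ; pet image - based quantification follows the same normalization idea on a voxel basis .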
carcinoembryonic antigen - related cell adhesion molecule 6 ( ceacam-6 ) is a cell surface glycoprotein known to be highly expressed in most cancers ; thus , researchers have adapted antibodies against it as potential imaging agents for therapeutic monitoring . several studies have demonstrated strong correlations between high ceacam-6 expression and increased rates of tumor metastasis and drug resistance . recently , niu et al . exploited the overexpression of ceacam-6 for molecular imaging of bxpc-3-derived subcutaneous xenograft tumors by employing a full - length antibody , a heavy - chain antibody , and a single - domain antibody , each radiolabeled with cu - dota . the heavy - chain version of the antibody was shown to be far superior to both the whole antibody and the single - domain antibody for imaging purposes , with higher tumor uptake and lower liver uptake of the contrast agent . similarly , the scfv - fc fragment of an antibody targeting carcinoembryonic antigen ( cea ) was investigated by girgis et al . as a potential pet imaging agent , since high expression of cea was found in 84% of human pancreatic cancer specimens . the fragmented antibody displayed a significantly decreased serum half - life in comparison to the full antibody ( 27 h and 10 days , respectively ) . also , a tumor / blood ratio of 4.0 was achieved , which is comparable to clinical studies and allowed for the clear delineation of tumor boundaries . while pet imaging relies upon the detection of positron - emitting isotopes , spect imaging detects single gamma - ray photons using an array of gamma cameras . several 2-d projections of the patient are acquired at multiple angles and later reconstructed using tomographic reconstruction algorithms to form a 3-d image of radiotracer biodistribution . while pet / ct imaging technologies in general offer superior resolution and quantitative capabilities , spect / ct technologies are more accessible in the clinic at a lower cost for patients . also , there is a wider range of approved radiotracers for spect imaging in comparison to pet imaging . some common gamma emitters employed for spect imaging include tc , in , i , and tl . availability of mo / tc generators has significantly improved the accessibility of spect in limited - access areas with no previous access to this imaging modality . incorporation of ct with spect or pet imaging modalities enhances disease detection by accounting for attenuation , resolution effects , and motion artifacts . several studies have revealed synergistic improvements in disease detection and treatment monitoring with combined imaging modalities , as compared to single imaging techniques . currently , spect / ct is not commonly employed for detection of pancreatic malignancies in the clinic , yet improved imaging agents may promote its use in the future . recently , clinical imaging of mesothelin - expressing pancreatic cancer was monitored in six patients using an in - labeled chimeric monoclonal antibody , known as amatuximab . the antibody - based imaging probe , investigated in four patients with malignant mesothelioma and two patients with pancreatic adenocarcinoma , produced a tumor - to - background ratio of 1.2 , sufficient for distinguishing between tumor and normal tissue . furthermore , this was the first clinical trial examining the safety and biodistribution of in - amatuximab , and the imaging tracer displayed a favorable dosimetry profile and was well tolerated in patients . axl receptor tyrosine kinase ( rtk ) - targeted antibodies were evaluated by leconet et al .
as a potential treatment option for pancreatic cancer . since axl rtk is highly expressed in 76% of pancreatic adenocarcinoma patient samples , development of novel antibody - based therapies targeting this receptor could significantly advance the treatment of pancreatic malignancies . the inhibitory effects of these novel antibodies were evaluated using spect / ct imaging with an i - labeled antibody in pancreatic subcutaneous and orthotopic xenograft mouse models . tumor growth and migration were significantly hindered by the antibody in vitro , thus demonstrating that anti - human axl antibodies could be used for simultaneous imaging and immunotherapy of pancreatic malignancies in the future . ferritin is an iron storage protein targeted by sabbah et al . for concurrent imaging and treatment of pancreatic tumors . amb8lk , an antibody targeting ferritin , was conjugated with in for spect / ct imaging using either dota or dtpa as the chelating agent . spect / ct imaging showed high uptake of in - dtpa - amb8lk in mice with capan-1 subcutaneous xenograft tumors , with 23.6 ± 3.9% id / g at 72 h postinjection ( figure 4 ) . in comparison to in - dtpa - amb8lk , in - dota - amb8lk accumulation peaked at 48 h postinjection with 12.6 ± 3.9% id / g ( figure 4 ) . while it was shown in vitro that in - dtpa - amb8lk exhibited higher binding to ferritin and to cells expressing the antigen , in comparison to in - dota - amb8lk , the authors did not provide a reason why the pharmacokinetics differed between dtpa- and dota - labeled amb8lk . another study further explored the use of in for targeting pancreatic malignancies using a murine / human chimeric antibody . nd2 is a murine igg1 antibody produced against the mucin fractions of sw1990-derived xenograft tumors . mucins function by limiting the activation of inflammatory responses , and mucin inhibitors have been shown to block the survival and tumorigenicity of human cancers in mouse models . this study employed the mouse / human chimeric construct of nd2 , known as c - nd2 , to investigate its imaging and therapeutic potential in human pancreatic cancer . as expected , specific uptake of c - nd2 was detected 3 days postinjection in 12 out of 14 patients , resulting in a sensitivity of 85.7% . also , c - nd2 displayed low immunogenicity , with no cases of human antichimeric antibody ( haca ) response in patients , which is known to alter the pharmacokinetic profile of antibodies . figure 4 caption : spect imaging of ferritin expression in pancreatic cancer using the novel antibody amb8lk . capan-1 xenograft mice were injected with in - dtpa - amb8lk and imaged at 1 , 24 , and 72 h postinjection . reprinted with permission from ref ( 90 ) . copyright 2007 elsevier . in another study , claudin-4 was targeted by foss et al . using an antibody conjugated with i for spect / ct imaging , which displayed optimal tumor accumulation 5 days postinjection . claudin-4 is a membrane protein located in the tight junctions of cells and was shown to be overexpressed in most pancreatic cancers and many precancerous pancreatic lesions , making it a suitable biomarker for early disease detection . similarly , an antibody was constructed to recognize and inhibit the adhesion of tumor cells to extracellular matrix proteins , with the overall purpose of inhibiting tumor growth .
the in - dota radiolabeled antibody ( 14c5 ) , targeting the αvβ5 integrin , displayed a tumor uptake of 35.84 ± 8.64% id / g at 48 h postinjection while being investigated as a potential spect imaging agent in nude mice with capan-1-derived subcutaneous xenograft tumors . immunoscintigraphy is an imaging modality similar to spect , using a 2d planar gamma camera . while this technique was widely employed before the advent of spect , several studies have utilized immunoscintigraphy for imaging of pancreatic malignancies using antibody - based imaging agents . for example , an antibody targeting tumor - associated glycoprotein-72 ( tag-72 ) , named b72.3 , was radiolabeled with i for detection of subcutaneous xenografts of human pancreatic carcinomas in nude mice . while previous studies showed promising results , this study revealed insufficient accumulation of the antibody - based probe in tumor tissue . however , a similar study successfully utilized a novel full and fragmented antibody ( a7 ) labeled with i and tc for imaging of nude mice bearing human pancreatic cancer subcutaneous xenograft tumors . the ratio of radioactivity in tumor tissue , as compared to blood , was significantly higher than that in normal tissue , with the full antibody displaying higher tumor uptake as compared to the antibody fragment . magnetic resonance imaging ( mri ) relies on the ability of the magnetic dipoles of water protons to align under the influence of a strong magnetic field . briefly , when a strong magnetic field is applied , typically in the range of 1 - 7 t , proton spins tend to adopt one of two orientations , parallel or antiparallel with respect to the main magnetic field ( b0 ) . given that parallel alignment is slightly energetically favored , a difference in population and energy between the two states is created . to produce an mr signal , the proton ensemble is perturbed from its equilibrium state through the use of radio frequency ( rf ) excitation pulses . upon termination of the excitation pulse , a proton returns to its original state by a process called relaxation , in which energy is released as rf that can be detected by the mr scanner . there are two types of relaxation : longitudinal or spin - lattice relaxation , which is characterized by a t1 time constant , and transversal or spin - spin relaxation , described by a t2 time constant . mr contrast arises from the difference in relaxation times t1 and t2 between various tissues . additionally , contrast agents can manipulate the t1 and t2 times , effectively creating larger contrasts in t1-weighted or t2-weighted images . readers are directed to detailed reviews for more comprehensive coverage of mr physical principles , image acquisition , and processing . a significant advantage of mri , in comparison to ct , is its superiority in soft tissue contrast and its capability to provide additional details regarding tissue function , structure , and blood perfusion . mri is used for diagnosing pancreatic malignancies when confounding results are obtained from standard diagnostic techniques ( e.g. , ultrasound and multidetector computed tomography ) . while mri is effective for imaging pancreatic cancer , the signal - to - noise ratio and the motion artifacts that arise from relatively slow acquisition times remain areas for improvement . more effective targeting strategies that limit the off - target accumulation of imaging probes will enhance the sensitivity of mri .
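the t1 and t2 processes described above are commonly summarized by the standard mono - exponential relaxation expressions following a 90 - degree excitation pulse ( textbook forms , added here for orientation rather than taken from this review ) :

$$
M_{z}(t) \;=\; M_{0}\left(1 - e^{-t/T_{1}}\right),
\qquad
M_{xy}(t) \;=\; M_{xy}(0)\, e^{-t/T_{2}} ,
$$

so a contrast agent that shortens t1 brightens t1 - weighted images , while one that shortens t2 ( e.g. , the iron oxide nanoparticles discussed below ) darkens t2 - weighted images .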
the amount of contrast agent required for mri is dependent upon the tumor model , as orthotopic xenograft models more closely resemble the biologic characteristics ( e.g. , hypovascular tumors ) found in human malignancies . subcutaneously engrafted models tend to underestimate the dose required for obtaining an adequate mri signal , as this hypervascularized model leads to increased intratumoral accumulation of injected agents . as an example , preclinical investigations of superparamagnetic iron oxide nanoparticles ( spions ) for pancreatic cancer imaging have required doses ranging from approximately 2.5 µg of fe / kg to more than 5 µg of fe / kg . while nanoparticles are commonly utilized in drug delivery , novel theranostic nanoparticles allow for concurrent imaging and treatment of disease . for example , deng et al . developed a multifunctional nanoimmunoliposomal platform for simultaneous loading of spions and the anticancer agent doxorubicin . this novel theranostic nanoplatform was targeted to pancreatic malignancies using an anti - mesothelin antibody , and imaging was evaluated in panc-1-derived subcutaneous xenograft tumors . targeted nanoparticles often displayed an enhanced transverse relaxivity that results in enhanced t2-weighted mr contrast . wang et al . further explored the application of spions for imaging pancreatic malignancies using an antibody targeting plectin-1 . antibody - modified spions showed highly specific uptake by panc-1 cells expressing plectin-1 with excellent biocompatibility , serum stability , and high relaxivity in vitro . chemokine receptor 4 ( cxcr4 ) plays a vital role in early embryonic development , yet expression in cancer cells facilitates the growth and spread of tumors . additionally , cxcr4 expression was shown to be specific for pancreatic cancer tissue with minimal expression in normal pancreatic tissue . he and colleagues modified ultrasmall spions for mr imaging of pancreatic cancer using a monoclonal antibody specific for cxcr4 . the targeted probe cxcr4-spio displayed an enhanced t2 ratio in vitro , allowing for semiquantitative assessment of cxcr4 expression in four pancreatic cancer cell lines ( aspc-1 , bxpc-3 , cfpac-1 , and panc-1 ) . as cxcr4 is expressed in over 75% of human panins , this imaging probe could be used for early disease detection and therapeutic monitoring . in a similar study , yang et al . examined the biodistribution and tumor uptake of iron oxide ( io ) nanoparticles modified with an egfr - targeted single - chain antibody ( scfvegfr ) in mice bearing egfr - positive ( miapaca-2 ) orthotopic xenograft tumors ( figure 5 ) . as egfr is commonly overexpressed in most pancreatic malignancies and precursor lesions , egfr - targeted probes could be used for both early disease detection and therapeutic monitoring . the single - chain anti - egfr antibody , consisting of the heavy and light chain variable domains linked by a small peptide , was only 20% the size of a normal antibody ( 25 kda ) , yet the fragment maintained both high binding specificity and affinity for egfr . scfvegfr - ios were synthesized by coating 10 nm io nanoparticles with amphiphilic copolymers containing short polyethylene glycol ( peg ) chains , before the addition of the fragmented antibody ( figure 5a ) . scfvegfr - io accumulation in tumor tissue resulted in enhanced mri contrast at 5 and 30 h postinjection , allowing for delineation of tumor boundaries ( figure 5b ) .
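the transverse relaxivity invoked for the targeted spions above is conventionally defined through the linear dependence of the relaxation rate on the local contrast - agent concentration ; the relation below is the standard definition , not a formula reported by the cited studies .

```latex
\begin{equation}
  \frac{1}{T_2} \;=\; \frac{1}{T_{2,0}} \;+\; r_2\,[\mathrm{CA}],
\end{equation}
```

where $T_{2,0}$ is the transverse relaxation time of the tissue without contrast agent , $[\mathrm{CA}]$ is the local concentration of the agent ( usually in mm of fe ) , and $r_2$ ( in mm$^{-1}$ s$^{-1}$ ) is the transverse relaxivity ; a higher $r_2$ , as reported for the targeted spions , therefore produces stronger t2-weighted signal loss at a given local iron concentration .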
for comparison , nontargeted nanoparticles did not show any mri signal decrease in the tumor after nanoparticle injection ( figure 5b ) , thus proving that scfvegfr - io uptake was dependent upon egfr expression . figure 5 : targeting iron oxide ( io ) nanoparticles with a single - chain egfr ( scfvegfr ) antibody for mri . ( a ) nanoparticles were constructed by coating io nanoparticles with an amphiphilic copolymer containing short polyethylene glycol chains . next , nanoparticles were functionalized with scfvegfr in the presence of 1-ethyl-3-(3-dimethylaminopropyl ) carbodiimide ( edac ) . ( b ) mr images displayed enhanced pancreatic tumor contrast ( yellow arrow ) in mice 5 and 30 h postinjection of scfvegfr - io nanoparticles . also , ex vivo confirmation of cancerous lesions within the pancreas is shown ( blue arrow ) . ( c ) for comparison , minimal contrast differences are seen postinjection of nontargeted io nanoparticles . magnevist ( gadopentetate dimeglumine ) , a commonly utilized paramagnetic imaging agent in cancer diagnostics to visualize lesions with abnormal vascularity , was employed by pirollo et al . in the development of a novel theranostic liposomal nanoplatform for synchronized mri and drug delivery . magnevist was successfully loaded into liposomal complexes targeted with an anti - transferrin receptor single - chain antibody ( tfrscfv ) . in capan-1-derived orthotopic pancreatic tumor models , tfrscfv - targeted nanoparticles loaded with magnevist showed both increased resolution and image intensity , as compared to freely circulating magnevist . in another report , chen et al . targeted neutrophil gelatinase - associated lipocalin ( ngal ) for imaging and therapy of pancreatic cancer by encapsulating gold nanoshells in silica epilayers doped with iron oxide and indocyanine green dye . this novel platform , containing two imaging agents , displayed enhanced contrast for both optical imaging and t2-weighted mri with higher tumor contrast in nude mice bearing aspc-1-derived subcutaneous xenografts , as compared to nontargeted nanoparticles . as ngal is expressed in malignant pancreatic cancers and early dysplastic lesions of the pancreas , newly developed ngal - targeting imaging agents may be employed for both early disease detection and therapeutic monitoring . optical imaging has grown significantly over the past decade as a more cost - efficient molecular imaging modality that utilizes the excitation properties of fluorophores . increased spatial resolution and real - time imaging are the main advantages of optical imaging , in comparison to pet and spect imaging . also , optical imaging does not require administration of ionizing radiation to patients , which eliminates unnecessary radiation exposure and allows for multiple dose administrations . instead , optical imaging utilizes the light properties of fluorescent or bioluminescent compounds for in vivo imaging . while effective for preclinical investigation of pancreatic malignancies , a major drawback for the clinical application of optical imaging is the limited depth penetration into tissue . contrast agents designed for optical imaging are within the wavelength range of 650 - 1450 nm , commonly termed the optical imaging window . the optical imaging window is a spectral region where light can penetrate tissue more deeply , yet is not affected by the autofluorescence of water or other endogenous chromophores ( e.g. , hemoglobin , melanin ) found between 200 and 650 nm .
commonly utilized contrast agents for fluorescence imaging include near - infrared ( nir ) dyes , quantum dots , and gold nanoparticles . while identification of both primary and metastatic disease significantly impacts patient survival , current imaging modalities often fail to provide sufficient visualization of tumor margins . for this reason , pancreatic tumors are often incompletely resected during surgical procedures and many laparoscopies result in incorrect disease staging . to improve visualization of pancreatic malignancies during laparoscopies , many researchers have employed optical imaging agents for assisting surgeons in identifying tumor margins and potentially locating metastatic lesions . for this purpose , cao and collaborators investigated an anti - cea fluorophore - conjugated antibody for detection of both primary and metastatic bxpc-3-derived orthotopic pancreatic xenografts in nude mice using fluorescence laparoscopy ( figure 6 ) . tumors could be identified much faster using fluorescence laparoscopy ( fl ) , as compared to traditional bright field laparoscopy ( bfl ) ( figure 6a , b ) . also , the sensitivity of each platform for detecting metastatic lesions was compared , with fl displaying higher sensitivity in comparison to bfl at 96.3% and 40.4% , respectively . while larger tumors were easily detected by both fl and bfl , fl was superior in detecting metastatic disease or smaller tumors deeper in the tissue ( figure 6c ) , as confirmed by ex vivo studies . figure 6 : ( a ) during laparoscopy , malignancies were easily visualized using the fluorescence mode ( fl ) with a fluorescent - labeled antibody . the visualization of tumors using the bright field ( bfl ) mode was hindered , in comparison to fl . ( b ) time to identify the primary tumor using fl and bfl showed that fl was a much faster technique . ( c ) using fl , both primary and metastatic lesions were easily visualized in each case . the corresponding images of both primary tumors ( 4 and 5 ) and metastatic disease ( 1 , 2 , and 3 ) are shown individually . reprinted with permission ; copyright update medical publishing , athens . in a similar study , boonstra and colleagues exploited the overexpression of cea , found in the majority of pancreatic cancers , for visualizing pancreatic tumors . a novel cea - targeted near - infrared fluorescent tracer was established by attaching a single - chain antibody fragment to the nir dye irdye 800cw . the single - chain variable fragment was constructed from the humanized version of mfe-23 , the first single - chain antibody molecule to be used in clinical trials . single - chain antibody fragments were utilized in this study for their rapid blood clearance through the kidneys and uniform tumor penetration , which allowed for imaging at early time points with high tumor - to - background ratios . they found a peak tumor - to - background ratio of 5.1 ± 0.6 at 72 h postinjection , noted to be suitable for discriminating tumor boundaries in mice bearing bxpc-3-derived orthotopic pancreatic xenografts . similar investigations have described the potential use of cea - targeting antibodies to improve fluorescence - guided surgical resection of pancreatic malignancies . currently , the tumor marker ca19.9 is used to help differentiate between pancreatic malignancies and other diseases ( e.g. , pancreatitis ) , for assessing cancer progression and treatment efficacy , and for monitoring cancer recurrence . additionally , ca19.9 has been investigated as a potential target for molecular imaging . in one study , mcelroy et al .
developed an antibody targeting ca19.9 conjugated with a green fluorophore , for enhancing the intraoperative visualization of primary and metastatic pancreatic lesions in bxpc-3-derived orthotopic tumor models . the fluorescently labeled antibody allowed for clear visualization of the primary tumor at 24 h postinjection . additionally , small metastatic lesions within the spleen and liver were also visualized . in a similar study , hiroshima et al . further evaluated the potential targeting of ca19.9 for imaging of patient - derived orthotopic xenografts during fluorescence - guided surgical procedures . while ca19.9 functions as a tumor marker found in patient serum , it suffers from low sensitivity and high false positives . for these reasons , newer imaging targets are being sought . a potential candidate is muc1 , a membrane - bound glycoprotein expressed in over 90% of pancreatic cancers , commonly associated with increased lethality . park et al . targeted muc1 using a fluorescent antibody , by attaching the antibody ct2 to dylight 550 . the new imaging tracer was successfully employed for optical imaging of both bxpc-3-derived orthotopic and subcutaneous xenograft tumors in nude mice . previously , muc1 was shown to be expressed at low levels in normal pancreatic tissues , high levels in primary and metastatic pancreatic ductal adenocarcinomas , and moderate to high levels in panins . for this reason , muc1-based imaging agents may be potentially utilized for early disease detection and therapeutic monitoring . also , quantum dots have been exploited as potential optical imaging agents for their high quantum yields , in combination with excellent biostability and photostability . for example , yong et al . constructed non - cadmium - based quantum dots modified with anti - claudin 4 for imaging of miapaca-2 cells . non - cadmium - based quantum dots have been shown to be less toxic than commonly utilized cadmium quantum dots , which release cadmium and selenium into the biological environment during degradation . they evaluated the toxicity by incubating varying concentrations of indium phosphide ( core ) / zinc sulfide ( shell ) , or inp / zns , quantum dots with miapaca-2 cells and found the quantum dots to be nontoxic at high concentrations ( i.e. , 10 and 100 mg / ml ) . in another study , a multimodality contrast agent for pet and optical imaging was developed using an antibody against mesothelin , cofunctionalized with 64cu and alexa fluor 750 . as expected , imaging revealed significant fluorescence signal in mesothelin - positive pancreatic subcutaneous xenograft tumors in balb / c nu / nu mice ( panc-1 , cfpac-1 , and bxpc-3 ) , while those models with low mesothelin expression exhibited minimal fluorescence signal . in a similar study , egfr was targeted by kampmeier et al . with a single - chain antibody fragment of cetuximab , constructed using the snap - tag technology , and further functionalized for optical imaging with an nir dye ( bg-747 ) . rapid and highly specific accumulation of the tracer was exhibited at 10 h postinjection , with a tumor - to - background ratio of 33.2 ± 6.3 . the fragmented antibody showed enhanced tumor uptake and faster clearance in comparison to the full - length antibody . compared to other imaging modalities described in this review , pa imaging is considered to be relatively new , as it was first introduced for biomedical imaging purposes in 1981 by theodore bowen . pa imaging is based on the formation of acoustic pressure waves from electromagnetic energy .
briefly , the patient 's tissue is exposed to short laser pulses at several wavelengths , resulting in the formation of ultrasound waves detected by an ultrasonic transducer . the rapid thermoelastic expansion of the tissue caused by the absorbance of laser photons produces the ultrasound waves . similar to optical imaging , exposure to ionizing x - ray radiation is not needed , making it possible to image patients multiple times with no health hazards . there are several advantages to pa imaging as it combines both optical and ultrasound imaging into a single instrument . some of these benefits include high spatial resolution , high tissue contrast , and enhanced spectroscopic - based specificity . recent advances in pa tomography have made whole - body small animal imaging feasible , allowing for real - time tracking of imaging agents in vivo . in addition to imaging of nonendogenous contrast agents , pa imaging offers a unique capability . there are several endogenous chromophores in biological tissue capable of producing pa signals , including hemoglobin , myoglobin , certain lipids , and melanin . for this reason , it is possible to monitor many biological processes in vivo , including angiogenesis during tumor formation , development of intratumoral hypoxia , and visualization of blood flow within tissues . while endogenous chromophores make it possible to visualize tumor vasculature , nonendogenous imaging agents are needed for specifically targeting tumor cells or surrounding vasculature . as a dual imaging modality , pa systems do not rely upon the ballistic photons required for optical imaging . for this reason , previous studies have demonstrated that penetration depths of 4 - 6 cm are feasible , with the use of highly efficacious contrast agents within an optimal wavelength range . similar to optical imaging , nir wavelength range contrast agents allow for optimal tissue depth penetration , as tissue absorption is minimized in this wavelength range . examples of previously developed pa imaging agents include nir dyes , carbon nanotubes , gold nanoparticles , spios , methylene blue , and indocyanine green . since few studies have employed pa imaging for visualization of pancreatic malignancies , this section includes other targeting ligands besides antibodies . recently , lakshman and needles described a methodology for screening and quantifying the tumor microenvironment of orthotopic pancreatic tumors using the vevo pa imaging system . in this study , intratumoral perfusion was investigated using gas - filled microbubbles , with peripheral regions of the tumor showing high perfusion and core regions showing minimal perfusion . in 2012 , homan et al . synthesized antibody - conjugated silver nanoplates using biocompatible chemical reagents ( figure 7 ) . the nanoparticles displayed a maximum peak absorbance near 900 nm , making them optimal for pa imaging . the edge length and thickness of the silver nanoplates were shown to be 128 ± 25.9 nm and 18 ± 2.7 nm through transmission electron microscopy ( tem ) , respectively ( figure 7a ) . an egfr - targeted antibody was attached via the fc portion to the silver nanoplates , allowing for optimal targeting capabilities . dark field microscopy confirmed the targeting efficiency and high specificity between the egfr - nanoplates and pancreatic cancer cells ( mpanc-96 and l3.6pl ) in vitro . cellular uptake of egfr - targeted silver nanoplates was higher than uptake of polyethylene glycol ( peg)-modified nanoplates .
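the thermoelastic signal - generation mechanism described at the start of this passage is usually quantified through the locally generated initial pressure ; the relation below is the standard textbook expression , with symbols defined here for illustration rather than taken from the cited work .

```latex
\begin{equation}
  p_0 \;=\; \Gamma\,\mu_a\,F,
\end{equation}
```

where $p_0$ is the initial acoustic pressure , $\Gamma$ is the dimensionless grüneisen parameter of the tissue , $\mu_a$ is the optical absorption coefficient at the illumination wavelength , and $F$ is the local laser fluence ; contrast agents such as the silver nanoplates raise $\mu_a$ at their absorption peak ( here near 900 nm ) and thereby increase the detected pa signal .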
this further confirmed the high specificity of the antibody - based platform for targeting egfr ( figure 7b ) . a combination of ultrasonography and pa imaging was utilized to acquire images with laser pulses between 740 and 940 nm ( figure 7c ) . multiplex imaging of nonendogenous and endogenous contrast agents was accomplished , with egfr - modified nanoplates depicted in yellow , oxygenated blood shown as red , and deoxygenated blood illustrated as blue . two - dimensional cross sections and 3-d reconstructions were shown , proving that nanoplates selectively accumulated in the tumor and were easily differentiated from endogenous blood components ( figure 7d ) . while it was visually determined that uptake of egfr - targeted silver nanoplates was higher than uptake of polyethylene glycol ( peg)-modified nanoplates in vivo , these data were not quantified . figure 7 : ( a ) the edge lengths of silver nanoplates were 218 ± 35.6 nm . ( b ) darkfield microscopy showed increased cellular uptake of antibody - modified nanoplates ( left ) in comparison to pegylated nanoplates ( right ) . ( c ) two - dimensional cross sections of orthotopic tumors allowed for delineation of organs and produced a photoacoustic signal from antibody - modified silver nanoplates ( yellow ) , oxygenated blood ( red ) , and deoxygenated blood ( blue ) . ( d ) image reconstruction produced a 3-dimensional representation of the orthotopic pancreatic tumor model with the photoacoustic signal . several studies have successfully employed pa imaging for detecting and monitoring pancreatic malignancies using non - antibody - based imaging agents . for example , homan et al . developed a novel metallodielectric nanoplatform , by entrapping silica cores in silver cage nanoparticles , shown to enhance pa imaging contrasts in pancreatic tissues . also , protein - based pa imaging agents have been constructed for targeting egfrb and sigma-2 receptor in pancreatic cancer . in the future , pa imaging may be employed for examining anticancer treatment response using theranostic nanoparticles , in combination with monitoring of the pharmacokinetic properties of diagnostic and therapeutic agents in vivo . as discussed , the high specificity and small size of antibodies make them suitable imaging candidates , yet all imaging agents require optimization before utilization in animal studies . several factors have been shown to influence the pharmacokinetics and targeting efficiency of antibodies for imaging purposes , including the molecular weight , fc domains , valency , and specificity . for example , the presence of fc domains increases the circulation time of antibody - based imaging agents in vivo . while this provides more time for the imaging agent to interact with the target receptor , faster clearance leads to enhanced contrast and sensitivity for molecular imaging purposes . also , antibodies may undergo binding to nontargeted cells , decreasing the amount of imaging agent available for tumor binding . the number of target antigens per cell and the rate of internalization are additional factors known to influence the pharmacokinetics of antibody - based imaging agents . additionally , the imaging agent dosage will need to be adjusted if the target protein is present at low concentrations in the blood , as this may decrease the blood circulation time of the imaging probe .
targeting of tumor cells remains difficult for most antibody - based imaging agents , as the harsh microenvironment of solid tumors may limit the access and binding of imaging agents to tumor cells . previous studies have shown that solid tumors display limited extravasation of molecules across the capillary walls , due to high interstitial fluid pressure . some researchers have attempted to bypass the need for extravasation by selectively targeting the tumor vasculature ( e.g. , cd105 ) . regions of highly heterogeneous pancreatic tumor tissue display various levels of hypoxia and necrosis , which may limit the access of imaging agents to portions of the tumor . also , the highly acidic microenvironment may cause irreversible damage to antibody conformation and function , resulting in decreased binding affinity . in addition to the pharmacokinetic challenges , the high production cost associated with producing monoclonal antibodies is another factor limiting the use of antibody - based imaging agents . companies developing antibodies for clinical applications are required to strictly adhere to several costly procedures and standards . manufacturers must harvest the cell cultures needed for antibody production , before undergoing several steps to ensure the purification standards required for fda approval of monoclonal antibodies . currently , the retail price of therapeutic antibodies ranges from $ 700 for bevacizumab ( 100 mg ) to $ 1700 for eculizumab . while smaller quantities of the antibody are required for molecular imaging in comparison to therapy , manufacturers must consider the expensive production costs associated with radioisotope production and other requirements . as pancreatic cancer is a highly heterogeneous and genetically complex disease , it is difficult to identify potential biomarker targets for molecular imaging of all pancreatic cancer patients . many of the biomarkers currently being investigated are expressed in only a portion of pancreatic cancers , making them unsuitable for the entire population . for this reason , the discovery of biomarkers expressed in the majority of pancreatic cancers is critically needed . also , visualization of pancreatic metastases requires increased presence of antigen on the surface of malignant cells , as compared to the primary tumor . despite effective targeting strategies , antibodies may be hindered by the dense tumor stroma found in pancreatic tumors , consisting of increased amounts of stromal cells and extracellular matrix proteins . for more information regarding the biological barriers of pancreatic cancer , readers are directed to dedicated reviews . as molecular imaging is in the infant stages of development , clinical imaging of pancreatic cancer remains dependent upon standardized procedures . pancreatic cancer is detected using several clinical imaging techniques , often dependent on the expertise of the physician , instrument availability , and patient symptoms . currently , multisectional computed tomography ( ct ) is the most widely employed technique for assessing possible pancreatic disease , as this instrument offers high spatial resolution with moderately fast scan times . newer multislice helical ct scanners have displayed superior detection and staging accuracy of pancreatic cancer , as compared to traditional ct imaging , with detection accuracies of 90 - 95% . the procedure for ct imaging of pancreatic cancer includes the use of oral water as a negative intraluminal contrast and intravenously injected iodinated contrast material .
ultrasonography ( us ) examination is another imaging modality commonly utilized for diagnosing pancreatic cancer , yet this method lacks the sensitivity and reliability needed for staging the disease . us is often the initial test used in symptomatic patients , as it remains inexpensive and highly accessible . for patients with jaundice or biliary ductal dilatation , endoscopic retrograde cholangiopancreatography ( ercp ) may be performed . also , this technique may be used to biopsy the tumor and provide physicians with information for determining treatment plans . endoscopic us ( eus ) is another reliable imaging modality for detecting pancreatic cancer when performed by trained professionals . some studies have suggested that eus may be as useful as ct imaging for detecting and staging pancreatic cancer , with an overall staging accuracy greater than 85% . however , this clinical imaging modality requires highly trained specialists and is not readily accessible worldwide . in combination with eus , fine - needle aspiration ( eus - fna ) is useful for taking biopsies of abnormal pancreatic lesions . if these imaging modalities fail to provide consistent results , mri or pet imaging is employed to confirm the diagnosis and stage of the disease . for mri , patients receive an intravenous injection of gadolinium , as pancreatic cancer is hypointense on gadolinium - enhanced t1-weighted images . this is due to the hypovascularity and increased fibrous stroma found in pancreatic tumors . in addition , diffusion - weighted mri ( dwi ) was shown to accurately differentiate pancreatic cancer from pancreatitis in patients . due to the movement from breathing and bowel peristalsis , motion artifacts have limited the use of mri for clinical imaging of pancreatic cancer . pet imaging using 18f - fdg may be more sensitive for detecting early malignancies , as changes in tissue metabolism ( i.e. , glucose metabolism ) usually predate any structural changes of the pancreas . while newer dual modality pet / ct imaging systems are becoming widely available worldwide , the high cost associated with these instruments remains a limiting factor . recent advances in molecular imaging have altered the way we diagnose and monitor several diseases , including highly metastatic and drug - resistant pancreatic malignancies . while overall survival rates have improved for most cancers , pancreatic cancer remains the most lethal form of cancer in the united states . despite significant research efforts , current treatment strategies remain limited and ineffective in most cases , resulting in a 5-year mortality rate of 93% . this high mortality is attributed to inefficient early detection strategies coupled with ineffective first - line treatments . to assist in the development of novel imaging and therapeutic agents , researchers have evaluated several targeting entities as potential imaging agents , ranging from small molecular weight proteins to highly specific antibodies . advances in molecular imaging of pancreatic cancer may provide important information regarding genotypic and phenotypic properties of the tumor and associated microenvironment . in return , this novel knowledge can be utilized for both enhancing cancer diagnoses and furthering our exploration of therapeutic monitoring in the future . extensive examination of molecular imaging has occurred during the past two decades , yet several challenges in the field remain unsolved .
the key limitation of molecular imaging is the development of exogenous imaging agents , as developing or discovering novel entities for receptor targeting can be both expensive and time - consuming . since molecular imaging relies heavily upon active imaging agents , additional research into the development of novel molecular imaging agents is required . another limitation is the current instrumentation , as both low spatial resolution and low sensitivity can significantly hinder successful disease monitoring , even with effective imaging agents . also , current molecular imaging instrumentation is costly and unavailable to many academic and research facilities . for molecular imaging to become standard practice , these modalities must be accessible to more researchers in the future . lastly , clinical translation of these imaging modalities remains unclear and highly debatable , which may be resolved through added collaborative efforts from researchers in combination with standardization of imaging practices and multicenter clinical trials . in this review , five molecular imaging modalities were examined . when designing future research studies involving molecular imaging , the current limitations of each modality should be considered . a limiting factor of pet imaging is that it requires short - lived radioisotopes that must be created in costly cyclotrons . also , radiation exposure poses health hazards , yet minimization of these risks is accomplished by limiting patient exposure through lower doses of radioactivity . lastly , pet imaging suffers from low spatial resolution , which limits our visualization of malignancies in some instances . for example , the invasion of adjacent structures of pancreatic tissue and vasculature may be unnoticeable with pet imaging , making it difficult for physicians to plan surgical procedures . in comparison to pet , spect is limited by both lower resolution and less sensitivity . compared with pet and spect imaging , mri has several advantages including its unlimited depth penetration , high spatial resolution , and excellent soft tissue contrast , and it does not require radioactive exposure . while a useful imaging modality , mri suffers from poor sensitivity and long acquisition times . next , optical imaging has become widely available in many research institutes in the past decade , yet its clinical translation remains uncertain at present . while optical imaging combines high sensitivity with no ionizing radiation requirement and low cost , this system is limited by light scattering and attenuation at increased tissue depths . pa imaging is a multimodality technique combining optical and ultrasound imaging ; unlike optical imaging , the spatial resolution of pa imaging is not significantly affected by tissue depth . however , in comparison to the limitless penetration of mri , pet , and spect , the limited depth of penetration for pa imaging remains a critical hindrance to potential clinical translation . in the future , both molecular imaging instrumentation and tracers will become more widely accessible for research purposes . as a pathway to personalized medicine , patients at risk for certain diseases may be screened using molecular imaging agents highly specific for certain disease models .
for diseases with high mortality rates attributed to late symptom onset , early screening could substantially improve patient outcomes . the field of molecular imaging is expected to see significant growth , attributed to the development of improved imaging agents and instrumentation .
development of novel imaging probes for cancer diagnostics remains critical for early detection of disease , yet most imaging agents are hindered by suboptimal tumor accumulation . to overcome these limitations , researchers have adapted antibodies for imaging purposes . as cancerous malignancies express atypical patterns of cell surface proteins in comparison to noncancerous tissues , novel antibody - based imaging agents can be constructed to target individual cancer cells or surrounding vasculature . using molecular imaging techniques , these agents may be utilized for detection of malignancies and monitoring of therapeutic response . currently , there are several imaging modalities commonly employed for molecular imaging . these imaging modalities include positron emission tomography ( pet ) , single - photon emission computed tomography ( spect ) , magnetic resonance ( mr ) imaging , optical imaging ( fluorescence and bioluminescence ) , and photoacoustic ( pa ) imaging . while antibody - based imaging agents may be employed for a broad range of diseases , this review focuses on the molecular imaging of pancreatic cancer , as there are limited resources for imaging and treatment of pancreatic malignancies . additionally , pancreatic cancer remains the most lethal cancer with an overall 5-year survival rate of approximately 7% , despite significant advances in the imaging and treatment of many other cancers . in this review , we discuss recent advances in molecular imaging of pancreatic cancer using antibody - based imaging agents . this task is accomplished by summarizing the current progress in each type of molecular imaging modality described above . also , several considerations for designing and synthesizing novel antibody - based imaging agents are discussed . lastly , the future directions of antibody - based imaging agents are discussed , emphasizing the potential applications for personalized medicine .
many authors showed that conventional icl insertion with peripheral iridotomy had no significant effect on postoperative iop and that it resulted in a narrower angle width without increasing trabecular pigmentation ( compared with values after laser iridotomy ) . the v4c model of the icl incorporates the ks - aquaport at the center of the icl optic , which improves aqueous humor circulation between the posterior and anterior chambers . the nonoccludable 0.36 mm centraflow diminishes the risk of postoperative pupillary block that may occur following closure of a peripheral surgical or laser iridotomy . the distance between the posterior icl surface and the anterior crystalline lens pole is termed the icl vault , which is crucial regarding the incidence of anterior subcapsular cataract formation . the ideal postoperative vault must create a space over the entire anterior crystalline lens surface , with a height of 1.00 to 1.50 times the central corneal thickness ( cct ) , on slit lamp examination . whilst a poor vault ( < 250 µm ) increases the risk of cataract development , an excessive vault ( > 750 µm ) may result in pupillary block - angle closure glaucoma . icl vault is determined by patient age , white - to - white ( w - w ) measurement , and icl shape / design . the latter remains the most crucial factor in determining postoperative vault , with the v4 model resulting in a higher vault , compared to the flatter v3 design icl . the central hole in the v4c may affect the amount of piol vault , which plays a vital role in determining the safety of the piol implantation technique . nongonioscopic screening tests of limbal anterior chamber depth for the detection of occludable angles include van herick , ultrasound pachymetry , and optical pachymetry . limitations of van herick include inter- and intraobserver variability in the measurement itself , lack of angle morphology description , and dependence on a clear integral peripheral cornea . our study aims to evaluate the correlation of icl power and vault with postoperative iop and anterior chamber angle width . the icl implantation procedure was performed at aseer magrabi eye hospital ( meh ) in the kingdom of saudi arabia ( ksa ) in the period of december 2012 to november 2013 . a comprehensive discussion with the patients was undertaken before surgery , explaining to them the details of the procedure and its benefits and complications , and an informed written consent was obtained from all patients . a single surgeon , who is the first author , performed all surgeries . the exclusion criteria were an anterior chamber depth ( acd ) less than 3.0 mm , history and/or clinical signs of iritis or uveitis , macular or retinal involvement , glaucoma or pigmentary dispersion , monocular vision , lens opacity , pseudoexfoliation , endothelial cell count less than 2500 cells / mm² , and w - w less than 11.00 mm . all patients were evaluated for iop using a goldmann applanation tonometer and for anterior chamber angle width using both the van herick slit lamp grading system and scheimpflug tomography imaging ( oculus pentacam ) . follow - up of the aforementioned variables was at 1 , 6 , and 18 months postoperatively , together with icl vault measurements . all preoperative and postoperative investigations were performed during the morning shift ( 9 - 12 a.m. ) under mesopic - light illumination conditions .
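the vault criteria quoted earlier in this section ( ideal vault of roughly 1.00 to 1.50 times the cct , poor vault below 250 µm , excessive vault above 750 µm ) can be summarized in a small helper ; this is a minimal illustrative sketch , not a clinical decision rule , and the function name and label wording are assumptions rather than part of the study protocol .

```python
def classify_vault(vault_um: float) -> str:
    """Rough triage of a postoperative ICL vault (in microns) using the
    cut-offs quoted in this section; illustrative only."""
    if vault_um < 250:
        return "low vault - increased risk of anterior subcapsular cataract"
    if vault_um > 750:
        return "excessive vault - risk of pupillary block / angle closure"
    return "acceptable vault"

# Example: a Pentacam-measured vault of 593 microns.
print(classify_vault(593))  # -> "acceptable vault"
```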
the icl vault was estimated clinically relative to the cct ( 1.00x ) using a slit beam , and the central vault of the piol over the crystalline lens was objectively evaluated with a rotating scheimpflug camera ( pentacam hr , oculus optikgeräte gmbh ) at 1 month , 6 months , and 18 months postoperatively . using the image analysis software of this device , an experienced optometrist calculated , without cycloplegia , the amount of the central vault of the piol over the crystalline lens as the distance between the posterior surface of the piol and the anterior surface of the crystalline lens . table 1 shows the foster modification of the van herick grading system for limbal anterior chamber depth , using slit lamp biomicroscopy , applied in our study by the author . preoperative scheimpflug imaging was mandatory in all cases , to measure the anterior chamber depth ( acd ) , central corneal thickness ( cct ) , anterior chamber angle width , and white - to - white ( w - w ) measurement . we calculated the power of the icl ( modified vertex calculation formula ) by entering patients ' acd , manifest refraction spherical equivalent ( mrse ) , back vertex distance , and k readings into the staar company online calculator and ordering system ( ocos ) . for proper piol sizing , the system requires data entry by the user for the w - w measurement , acd , and cct , besides the birth date and any history of previous intervention such as iol implantation . icl size was determined by the manufacturer nomogram ( ocos ) in all cases , without any changes being made by the assigned surgeon . mean w - w measurement was 11.7 ± 0.4 mm , and mean icl diameter was 12.6 ± 0.5 mm . a 3.20 mm temporal tunneled clear cornea incision was created , and the anterior chamber was filled with viscoelastic material ( microvisc 1% ; bohus biotech ab ) . the pc piol ( visian icl v4c ; staar surgical inc . , monrovia , ca ) was loaded into the cartridge and injected very slowly to allow controlled slow lens unfolding ; then an iris manipulator was used to tuck the footplate haptics of the lens within the posterior chamber . no peripheral iridectomy was needed , as the ks - aquaport ensures dynamic regular aqueous flow between the posterior and anterior chambers . postoperatively , patients were prescribed gatifloxacin 0.5% eye drops ( zymar ; allergan , inc . , fort worth , texas ) four times daily and prednisolone acetate 1% ( pred forte ; allergan , inc . , irvine , ca , usa ) four times daily for 2 weeks . for statistical analysis , data were presented as mean ± sd for normally distributed data and medians ( quartiles ) for abnormally distributed data . variables were analyzed in relation to baseline values using analysis of variance ( anova ) for repeated measures . pearson correlation analysis was performed for normally distributed data ; spearman correlation was performed for abnormally distributed data . the study enrolled 54 eyes of 27 patients of mean age 29 ± 2.30 years . the mean baseline iop of 11.69 ± 2.15 mmhg showed a statistically significant ( p = 0.002 ) increase in iop at 1 month postoperatively , which remained nearly unchanged at 6 and 18 months postoperatively , with mean values of 16.07 ± 4.12 , 16.07 ± 4.10 , and 16.07 ± 4.13 mmhg , respectively , as shown in figure 1 . one case presented with toxic anterior segment syndrome ( tass ) 4 days postoperatively , with an iop of 47 mmhg , corneal edema from limbus to limbus , and ucva of light perception . iop after 18 months was controlled to 23 mmhg on medications and reached 34 mmhg on no treatment .
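the statistical workflow described above ( pearson correlation for normally distributed variables , spearman correlation otherwise , and repeated - measures anova against baseline ) can be reproduced with standard scientific - python routines ; the sketch below uses illustrative variable names and invented numbers , not the study 's data .

```python
import numpy as np
from scipy import stats

# Illustrative per-eye measurements (not the study's data).
icl_vault_um = np.array([520, 610, 480, 700, 655, 590])       # Pentacam vault, microns
ac_angle_deg = np.array([27.1, 24.3, 28.6, 22.0, 23.5, 25.2]) # AC angle width, degrees
iop_mmhg     = np.array([15.0, 17.2, 14.1, 18.5, 16.8, 16.0]) # Goldmann IOP, mmHg

# Pearson correlation for normally distributed pairs (e.g., vault vs. angle width).
r_pearson, p_pearson = stats.pearsonr(icl_vault_um, ac_angle_deg)

# Spearman rank correlation for non-normally distributed pairs (e.g., vault vs. IOP).
rho_spearman, p_spearman = stats.spearmanr(icl_vault_um, iop_mmhg)

print(f"Pearson r = {r_pearson:.3f} (p = {p_pearson:.3f})")
print(f"Spearman rho = {rho_spearman:.3f} (p = {p_spearman:.3f})")

# Repeated-measures ANOVA against baseline can then be run on a long-format
# table of (subject, visit, IOP), e.g. with statsmodels' AnovaRM.
```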
after 1 month , the iop was higher than 20 mmhg in 7 eyes ( 4 patients ) during the study . at discharge , apart from the case with tass syndrome , no eye had an iop that was 10 mmhg or more above the preoperative iop measurement 18 months postoperatively . of the 7 eyes with iop higher than 20 mmhg , only 2 eyes with iop of 22 and 23 mmhg required topical beta blockers , which controlled iop over 1 month and were stopped thereafter . the icl vault was estimated clinically on slit lamp relative to the central corneal thickness and measured using scheimpflug tomography in µm . the proper amount of piol vault is considered to be identical to the thickness of the central cornea ( approximately 500 µm ) . the mean icl vault measured at 1 , 6 , and 18 months postoperatively was 1.187 ± 0.279 ( 593 ± 135 µm ) , 1.20 ± 0.275 ( 601 ± 133 µm ) , and 1.274 ± 0.253 ( 637 ± 125 µm ) , respectively . the variance was not statistically significant ( p = 0.076 ) at 6 months and ( p = 0.064 ) at 18 months , but there was a tendency for the piol vault to slightly increase over time . no eye had a postoperative piol vaulting grade of 0 ( piol in contact with the lens ) or 4 ( excessive angular narrowing due to anterior displacement of the iris ) , both of which would have required piol explantation . the distribution of the piol vaults of the 52 eyes over the crystalline lens , relative to the cct , 18 months postoperatively is shown in figure 2(a ) , and the pentacam - measured vault ( in µm ) is shown in figure 2(b ) . a strong correlation was found between slit lamp ( 1x cct ) and pentacam icl vault in microns ( r = 0.81 ) . the increase in mean iop and pentacam icl vault , and the coincident decrease in ac angle width at the same time points , is demonstrated in figure 3 . mean icl power ( in diopters ) of 6.85 ± 2.30 showed no correlation ( p value of 0.131 ; r value 0.212 ) with mean iop at 18 months , as shown in figure 4 . pentacam ac angle width in degrees showed a statistically significant decrease at 1 ( p = 0.025 ) , 6 ( p = 0.016 ) , and 18 ( p = 0.010 ) months postoperatively . the mean preoperative value of 40.14 ± 5.49 decreased to 25.280 ± 5.33 , 25.469 ± 5.44 , and 25.492 ± 5.38 , at 1 , 6 , and 18 months , respectively , as shown in figure 5 . the modified van herick limbal acd grade at baseline ( 3.394 ± 0.069 ) decreased to 2.759 ± 0.065 ( p = 0.741 ) , 2.740 ± 0.066 ( p = 0.689 ) , and 2.692 ± 0.061 ( p = 0.557 ) at 1 , 6 , and 18 months postoperatively , which is considered a nonstatistically significant decrease . pentacam - aided assessment of ac angle width showed no correlation with the modified van herick grading system of limbal anterior chamber depth at all time points . mean pentacam ac angle width at 18 months showed no correlation with iop , as shown in figure 6 . mean icl vault showed moderate correlation with pentacam ac angle width at 1 ( r = 0.435 ) and 6 ( r = 0.424 ) months and weak correlation ( r = 0.271 ) at 18 months . figure 7 clearly demonstrates the ac angle width in patient number 8 of our case series , before and after icl implantation . intraocular pressure elevation following icl implantation may be secondary to pupillary block , pigment dispersion , and steroid use . some authors report that inserting an implantable collamer piol alters the dynamics of the aqueous humor and results in iop elevation . higueras - esteban et al . in 2013 compared iop following icl implantation in v4b and v4c groups . they found neither pupillary block nor significant iop increase 3 months following v4c implantation in 18 eyes .
they reported no intra- or intergroup significant difference in the mean iop between the conventional group and the centraflow group , with mean preoperative iop values of 11.5 ± 2.8 mmhg in the v4b group and 11.9 ± 2.7 mmhg in the v4c group , compared to mean iop after 3 months of 12.4 ± 1.8 mmhg in the v4b group and 13.8 ± 2.2 mmhg in the v4c group . despite the lack of a peripheral iridectomy in the v4c group , spectral - domain oct images showed icl - iris contact in both v4b and v4c icls , without association with pigment dispersion . in another series , baseline iop was 14.6 ± 3.4 mmhg ( range 8 to 26 mmhg ) before surgery . postoperatively , the mean iop was 14.5 ± 4.6 mmhg ( range 6 to 30 mmhg ) at 1 day , 14.2 ± 4.2 mmhg ( range 6 to 29 mmhg ) at 1 week , and 12.3 ± 3.4 mmhg ( range 9 to 24 mmhg ) at 1 month . no statistically significant alterations were detected over time after implantation ( p > 0.2 ) . in the current study , the mean baseline iop of 11.69 ± 2.15 mmhg showed a statistically significant ( p = 0.002 ) increase in iop at 1 month postoperatively , which remained nearly unchanged at 6 and 18 months postoperatively , with mean values of 16.07 ± 4.12 , 16.07 ± 4.10 , and 16.07 ± 4.13 mmhg , respectively . after 1 month , the iop was higher than 20 mmhg in 7 eyes ( 4 patients ) during the study . at discharge , apart from the case with tass syndrome , no eye had an iop that was 10 mmhg or more above the preoperative iop measurement 18 months postoperatively . consistent with other authors , we attribute the early increase in iop during the first month after surgery to the effect of postoperative inflammation , trabeculitis , and topical steroids , which may explain the statistically significant increase in iop 1 month postoperatively in the current study . furthermore , highly myopic patients are more prone to steroid - related increases in iop , especially with steroids of high intraocular penetration such as topical prednisolone acetate . despite the lack of a strong correlation between increased postoperative iop and decreased ac angle width or increased icl vault throughout the 18 months of follow - up , interpretation of figure 3 may attribute the elevated iop following icl implantation to the coincident decreased ac angle width at the same time points . in the current study , the presence of the 0.36 mm aquaport with its known fountain effect on the back of the collamer lens may explain the mild increase in icl vault throughout the 18 months of follow - up . however , the presence of a central hole did not preclude the reported iop elevation and reduced ac angle width at 1 , 6 , and 18 months postoperatively . in our study , the icl vault variance was not statistically significant ( p = 0.076 ) at 6 months and ( p = 0.064 ) at 18 months , but there was a tendency for the piol vault to slightly increase over time . mean icl vault showed moderate correlation with pentacam ac angle width at 1 ( r = 0.435 ) and 6 ( r = 0.424 ) months and weak correlation ( r = 0.271 ) at 18 months . icl vaulting measurements in our study disagreed with kamiya et al . , who reported a nonsignificant decrease over months of follow - up . the increase in follow - up vault in our study may be described as dynamic vaulting , possibly explained by the fountain effect , secondary to a constant aqueous - pushing force on the overlying collamer lens . however , kamiya et al . assumed that the presence of an artificial hole does not significantly affect the amount of the piol vault because of the continuous pressure exerted by the back surface of the iris on the piol .
despite the statistically significant decrease in ac angle width following v4c icl implantation , mean pentacam ac angle width 18 months following surgery showed no correlation with the increase in iop . however , long - term follow - up of ac angle width should be performed to detect potential angle closure . besides , we recommend the addition of ac angle width ( in degrees ) to the preoperative data required by the staar company during online lens calculation and ordering . in a study by chung et al . , one - month postoperative angle opening distance values were significantly smaller than preoperative values by 31.8% ( p < 0.001 ) , but no significant progressive changes were observed thereafter . in our study , the late postoperative increase in icl vault and decrease in ac angle width are not necessarily the result of overestimated preoperative w - w and subsequent large lens diameter calculation . instead of resting on the ciliary sulcus , icl footplate haptics might be supported by the ciliary body , which could lead to narrowing of the ac angle . besides , age - related increase in ciliary muscle thickness might alter the late postoperative icl lens position by a forward shift of the icl , with subsequent vault increase in the later periods after icl implantation , which may add a possible explanation for the vault increase in the current study . no cases of pupillary block due to obstruction of the central port were reported in the current study or in any of the few previously published studies ( theoretical and in vivo ) . the collamer lens power is a reflection of its thickness , which theoretically may result in ac shallowing and iop elevation . however , in our study , mean icl power ( in diopters ) of 6.85 ± 2.30 showed no correlation ( p value of 0.131 ; r value 0.212 ) with mean iop at 18 months . one of the advantages of this study was the dynamic noncycloplegic vault measurements , which did not interfere with accommodation - induced changes . since the piol optic remains in contact with the back surface of the iris , the latter pushes the piol toward the crystalline lens before mydriasis , and the anterior surface of the crystalline lens shifts posteriorly after mydriasis . the limitations of this study include the lack of a comparative study group with a conventional posterior chamber phakic lens and the lack of correlation of icl vault with pupil diameter before and after icl insertion . besides , vaulting and ac angle measurements would have been better triple - checked with an additional imaging modality like anterior segment oct , which permits high - resolution cross - sectional anterior segment imaging with excellent reproducibility of measurements by using the interference profile of the reflections from the cornea , iris , and crystalline lens . in addition , pentacam - aided assessment of ac angle width in our study showed no correlation with the van herick grading system of limbal anterior chamber depth . to our knowledge , this is the first study to assess the correlation between icl vault , ac angle width , and intraocular pressure following v4c lens implantation , with follow - up up to 18 months postoperatively . long - term careful observation is required to compare the piol vault with and without cycloplegia in both v4b and v4c collamer lenses , with correlation to ac angle width .
in summary , our noncomparative study demonstrated that implantation of visian icl with a central hole resulted in a decrease in ac angle width and increase in iop , within acceptable physiological values at all time points , with zero incidence of pupillary block - angle closure glaucoma .
purpose . to assess intraocular pressure ( iop ) , lens vaulting , and anterior chamber ( ac ) angle width following the v4c implantable collamer lens ( icl ) procedure for myopic refractive error . methods . a prospective case series that enrolled 54 eyes of 27 patients that were evaluated before and after v4c phakic posterior chamber collamer lens implantation for correction of myopic refractive error . preoperative measurement of iop was done using a goldmann applanation tonometer and anterior chamber angle width using both the van herick slit lamp grading system and scheimpflug tomography imaging ( oculus pentacam ) . follow - up of the aforementioned variables was at 1 , 6 , and 18 months postoperatively , together with icl vault measurements . results . the mean baseline iop of 11.69 ± 2.15 mmhg showed a statistically significant ( p = 0.002 ) increase after 1 month that remained unchanged at 6 and 18 months postoperatively , with mean values of 16.07 ± 4.12 , 16.07 ± 4.10 , and 16.07 ± 4.13 mmhg , respectively . pentacam ac angle width showed a statistically significant decrease at 1 ( p = 0.025 ) , 6 ( p = 0.016 ) , and 18 ( p = 0.010 ) months postoperatively , with a mean preoperative value of 40.14 ± 5.49 that decreased to 25.28 ± 5.33 , 25.46 ± 5.44 , and 25.49 ± 5.38 , at 1 , 6 , and 18 months , respectively . mean icl vault showed moderate correlation with pentacam ac angle width at 1 ( r = 0.435 ) and 6 ( r = 0.424 ) months . conclusion . v4c icl implantation resulted in a decrease in ac angle width and an increase in iop , within acceptable physiological values at all time points .
the past quarter century has seen the pioneering work of tassiulas and ephremides @xcite , lin and shroff @xcite , lin , shroff and srikant @xcite , and neely , modiano and rohrs @xcite on max - weight and backpressure based scheduling policies for communication networks that are provably throughput optimal , attaining any desired maximal throughput vector on the pareto frontier of the feasible throughput region . in this paper we develop completely decentralized and tractable scheduling policies that achieve any desired maximal throughput vector of packets that meet specified hard end - to - end relative deadlines under average - power constraints on nodes . to see why this is a challenge , one may consider the situation depicted in figure [ fig2 ] . suppose that node @xmath0 needs to decide whether to serve the packet of flow @xmath1 or the packet of flow @xmath2 . packets of flow @xmath1 are experiencing downstream congestion , in contrast to packets of flow @xmath2 which face no downstream congestion . thus node @xmath0 is better off serving the packet of flow @xmath2 rather than flow @xmath1 since the latter would anyway get delayed and not make it to its destination on time . therefore network state is useful information . the central contribution of this paper shows just how to obtain an optimal distributed scheduling policy when nodes face average - power constraints . when nodes face peak - power constraints we provide a distributed approximately optimal policy that approaches optimality in a precisely quantifiable way . the main contribution is the design of completely decentralized optimal routing and scheduling policies which can attain any desired maximal throughput vector of packets that are delivered end - to - end by their stipulated deadlines , when nodes have average - power constraints . that is , the policies are precisely optimal with respect to the throughput delivered under hard end - to - end delay constraints . these results addressing per - packet hard delay bounds are obtained by considering a decomposition of the lagrangian of the constrained network - wide markov decision process that is intrinsically stochastic and different from a fluid - based analysis . we show that a policy where each node makes decisions based only on the age of the packets present at it , and a prior computable price of transmission , oblivious to all else in the network , is optimal . this vastly simplifies the network operation . if the nodes instead have peak - power constraints , then the decentralized policy can be simply truncated to yield a policy that is near - optimal in the same quantifiable sense as whittle s relaxation for multiarmed bandits @xcite . in this paper we address the case where links are unreliable . this is of interest in networks with directional antennas , or networks of microwave repeaters , or even networks composed of unreliable wireline links . in a companion paper @xcite we address the case where the links face interference . delivering packets on time is of great interest in emerging applications such as cyber - physical systems , where control - loops are closed over networks and are sensitive to delays . similarly , quality of service ( qos ) requirements for real - time applications such as video streaming , voip , surveillance , sensor networks , mobile ad - hoc networks ( manets ) , and in - vehicular networks , all entail that packets should be delivered on time @xcite .
in section [ problem - considered ] we describe the problem considered and the main results . in section [ pw ] we summarize previous work and set this work in context . in section [ sm ] we describe the system model . in section [ rr ] we describe the constrained network - wide mdp . in section [ a1 ] we show the packet - level decomposition of the network - wide lagrangian . in section [ spsf ] we describe the single - packet transportation mdp that arises . in section [ pmdp ] we consider the dual problem of the constrained network - wide mdp and establish strong duality . in section [ solution ] we show that the implementation of the packet - level optimal transportation policies yields an overall optimal policy for the entire system . in section [ direct - sol ] we show how the overall optimal policy is obtained through a tractable linear program . in section [ threshold ] we show that there is a further simplifying threshold structure for the optimal policy . in section [ sec : optpol ] we show how the optimal prices can be precomputed . in section [ sec : linkcap ] we address the problem with link capacities or , equivalently , peak - power constraints . in section [ sec : peakpow ] we establish the asymptotic optimality of the truncated policy . in section [ wirelessfading ] we address the problem when the channel condition changes with time . in section [ jointnonreal ] we address the problem where there are both real - time flows as well as non - real - time flows . in section [ examples ] we provide examples showing how the theory can be used to determine optimal distributed policies , and also present a comparative simulation study of the performance of the truncated policy for link - capacity constraints . we conclude in section [ conclusion ] . we consider multi - hop , multi - flow networks in which packets of all flows have a hard end - to - end relative deadlines . ( the relative deadline is the remaining time - till - deadline when a packet arrives ) . nodes can transmit packets at varying power levels . since the wireless channel is unreliable , the outcome of packet transmissions is modeled as a random process . nodes can transmit and receive packets simultaneously . the throughput of packets of a flow that meet the end - to - end relative deadline constraint is called the timely - throughput . we consider the following two types of nodal constraints in this paper : a ) an average - power constraint on each node in the network , or b ) a link - capacity constraint on each network link which bounds the number of concurrent packets that can be transmitted on it at any given time @xmath3 , or , equivalently , a peak - power constraint at each node . our goal is to design a decentralized , joint scheduling , transmission and routing policy , abbreviated as a scheduling policy , " that maximizes the weighted sum of the timely - throughputs of the flows , for any given nonnegative choice of weights . that is , it can attain any point on the pareto frontier of the timely - throughput vector . to be optimal , the policy should be dynamic enough and take into account in an online fashion the following factors : * _ routing _ : the policy will need to dynamically route packets so as to avoid paths that have a higher delay or nodes with lower power budgets . * _ scheduling _ : the policy will need to prioritize packet transmissions based on their age , the channel conditions , and the congestion at the nodes lying on the paths to their destinations . 
* _ energy efficiency and channel reliability _ : it will need to choose the power levels of packet transmissions to balance between reliability and energy consumption . if channel states are time - varying , the policy will have to carry out packet transmissions opportunistically when the channel states are " good , " so that the maximum throughput under deadline constraints is attained in an energy - efficient manner @xcite . this also involves a trade - off between a packet missing its deadline on account of bad channel conditions , and spending more energy to transmit it in a bad channel state so that it reaches the destination within its deadline . the main result is the determination that a markov decision process for a certain " single - packet transportation problem " governs the behavior of each packet , oblivious to all other traffic or network state . in this standalone problem , a single packet optimizes its progress through the network , paying prices to nodes every time it requests transmission , but is compensated with a reward if it reaches its destination prior to the hard deadline . the only manner in which this optimal single - packet transportation problem is coupled to the overall network , nodal power constraints , other flows and other packets , is through predetermined prices for nodal transmissions . the optimal prices can be tractably computed off - line and stored . determining the optimal policies for all packets is also of tractable complexity , involving a linear program with the number of variables equal to the product of the square of the number of nodes , the number of flows and the maximum relative deadline , rather than involving the network state , whose size is exponential in these quantities . thereby , we obtain optimal distributed policies for maximizing the network's timely - throughput of packets meeting hard per - packet deadlines under average - power constraints on the nodes . the key to these results is to pursue a fundamentally stochastic approach that considers the lagrangian of the constrained network - wide markov decision process ( mdp ) governing the entire network , and shows how it decomposes into packet - by - packet decisions . this decomposition approach allows treatment of the intrinsically variability - related aspects such as delay , in sharp contrast to the backpressure approach that considers the decomposition of the lagrangian of the fluid model . thus the approach of this paper is able to address delay rather than just throughput . through this novel decomposition , we can address timely - throughput optimality of packets that meet hard per - packet delay deadlines , rather than just throughput optimality . moreover , it does so while producing a completely distributed and tractable policy , for systems with average - power constraints at nodes and unreliable links . when nodes have peak - power constraints in addition to , or in place of , average - power constraints , one can simply truncate the above optimal policy dynamically to respect the peak - power constraint , and obtain a policy that is quantifiably near - optimal . specifically , it is asymptotically optimal in the same manner as whittle's relaxation for multi - armed bandits @xcite . the exposition proceeds as follows .
we begin with the overall problem with average nodal power constraints and invoke the scalarization principle to pose the problem of maximizing the network's weighted timely - throughput subject to nodal average - power constraints as a constrained network - wide markov decision process ( mdp ) @xcite . we then solve this problem by considering the lagrangian dual of this constrained network - wide mdp . the lagrange multipliers associated with the average - power constraints are interpreted as prices paid by a packet for utilizing energy every time its transmission is attempted by a node . as recompense , a packet collects a reward equal to the weight of its flow when it reaches its destination within its specified hard deadline . this results in a very convenient packet - level decomposition into an optimal single - packet transportation problem . the markov decision process for this problem has a small state - space of size ( number of nodes ) × ( bound on relative deadline ) , much smaller than the exponentially large number of states in the network - wide problem . the prices can be calculated off - line just by price tatonnement that drives the " excess power consumption " at nodes to zero . the optimal policies for all packets of all flows can be determined by solving a linear program with a small number of variables . importantly , the overall approach yields a completely distributed solution , where a node only needs to know the remaining times - till - deadlines of packets present at that node , that is timely - throughput optimal under average - power constraints at nodes . when the constraints are on link - capacity and not its average utilized capacity , or equivalently on peak - power and not average - power , one obtains a near - optimal policy by simply truncating the average - power optimal policy . the result is asymptotically optimal as the network capacity is increased , in the same sense as whittle's indexability approach @xcite is asymptotically optimal as the population of bandits increases in proportion @xcite . over the past twenty - five years there have been several notable advances in the theory of networking . in pioneering work , tassiulas and ephremides @xcite , lin and shroff @xcite , lin , shroff and srikant @xcite , and neely , modiano and rohrs @xcite have shown that scheduling networks based on max - weight and backpressure is throughput optimal . the backpressure policy emerges naturally from a decomposition of the lagrangian for the fluid network . in another breakthrough , jiang and walrand @xcite have designed a novel adaptive carrier sensing multiple access algorithm for a general interference model that achieves maximal throughput through completely distributed scheduling under slow adaptation , without slot synchrony , if packet collisions are ignored . combined with end - to - end control it also achieves fairness among the multiple flows . in another seminal contribution , kelly , maulloo and tan @xcite have shown that the problem of congestion control of the internet can be formulated as a convex programming problem , and have provided a quantitative framework for design based on primal or dual approaches . eryilmaz , ozdaglar and modiano @xcite have developed a throughput optimal randomized algorithm for routing and scheduling of the common two - hop interference model that can be implemented in a distributed way with polynomial complexity .
they have also developed such a policy for inelastic flows that takes flow control into account and results in a fair allocation of the network's capacity . any effort at developing a delay - optimal scheduling policy needs to take into account the time - till - deadline of packets in the network . the csma algorithm does not do so . the backpressure policy schedules packets only on the basis of rate - weighted queue lengths of nodes and provides no delay guarantees . in fact , fluid - based policies such as the backpressure policy should be expected to be , and have been shown to be , throughput optimal , but they should not be expected to provide delay optimality . they can perform poorly with regard to delay performance @xcite . for optimal delay performance , one needs to start with a fundamentally stochastic framework that takes fluctuations into account . this is akin to the difference between the law of large numbers and the central limit theorem . such a stochastic framework is the path that is pursued in this paper . there has been considerable progress on the problem of scheduling an access point , in which multiple one - hop flows with hard relative deadlines share a wireless channel . the pareto optimal frontier of timely - throughput vectors has been characterized , and simple optimal policies have been determined @xcite . li and eryilmaz @xcite consider the problem of scheduling deadline - constrained packets over a multi - hop network ; however , the proposed policies are not shown to have any provable guarantees on the resulting timely - throughput . to the best of the authors' knowledge , mao , koksal and shroff @xcite is the only work which provides a sub - optimal policy with provable guarantees for deadline - constrained networks , though the policies proposed therein guarantee only a fraction , @xmath4 , of the timely - throughput capacity region . we consider networks in which the data - packets have a hard deadline constraint on the time within which they should be delivered to their destination nodes if they are to be counted in the throughput . the communication network of interest is described by a directed graph @xmath5 as shown in figure [ fig1 ] , where @xmath6 is the set of nodes that are connected via communication links . a directed edge @xmath7 signifies that node @xmath0 can transmit data packets to node @xmath8 . we will call this link @xmath9 . ( figure [ fig1 ] caption , partially recovered : ... flows . flow @xmath10 , with source @xmath11 and destination @xmath12 , has several feasible routes . its end - to - end relative deadline is @xmath13 . node @xmath0 has an average - power constraint @xmath14 . a packet transmitted on link @xmath15 has a probability @xmath16 of being successfully received by node @xmath8 . though not so indicated , the probability @xmath17 may depend on the power level of the packet's transmission by node @xmath0 . ) we assume that time is discrete , and evolves over slots numbered @xmath18 . one time - slot is the time taken to attempt a packet transmission over any link in the network . there are a finite number of transmit power levels at which a packet can be transmitted . for convenience , we normalize each time slot to 1 second , so that the power and the energy of a transmission are interchangeable . the outcome of a transmission over a link between any two nodes is allowed to be random , which enables us to model unreliable channels .
if a packet transmission occurs on the link @xmath19 at a certain power level that consumes energy @xmath20 , then the transmission is successful with probability @xmath21 , which is monotone decreasing in @xmath20 . we can model the phenomenon of wireless fading by allowing the success probability @xmath22 to also be a function of time that can be assumed to be governed by a finite - state markov process , whose state is known at the transmitting node . however , for simplicity of exposition , we consider time - invariant @xmath21 s only . in this paper we do not consider contention for the transmission medium ; the case of interference is considered in the companion paper @xcite . the network is shared by @xmath23 flows . packets of flow @xmath10 have source node @xmath11 and destination node @xmath12 . they may traverse one of several alternative routes . the numbers of packet arrivals of a flow at its source node are i.i.d . across time - slots , though the distribution can vary from flow to flow . for simplicity of exposition we suppose that these distributions have bounded support , i.e. , the number of arrivals is bounded , though we can relax this to merely assuming they are finite valued . packets across flows are independent . the analysis below carries over to the case when the arrivals and relative deadlines ( detailed below ) are governed by a finite - state markov process . we will denote the average arrival rate of flow @xmath10 in packets / time slot by @xmath24 . each packet of flow @xmath10 has a " relative deadline , " or " allowable delay , " @xmath13 . if a packet of flow @xmath10 arrives at the network at time @xmath3 , then it needs to be delivered to its destination node by time - slot @xmath25 , or else it is discarded from the network at time @xmath25 if it has not yet reached its destination @xmath12 . we suppose that all relative deadlines of packets are bounded by a quantity @xmath26 . we can allow packets of a flow to have random independent and identically distributed ( i.i.d . ) deadlines , independent across flows ; however , for simplicity of exposition we will suppose that all packets of flow @xmath10 have the same relative deadline @xmath13 . the " timely - throughput " @xmath27 attained by a flow @xmath10 under a policy is the average number of packets delivered prior to deadline expiry per unit time , i.e. , @xmath28 , where the random variable @xmath29 is equal to the number of packets of flow @xmath10 that are delivered in time to their destination at time @xmath3 , with the expectation taken under the policy being applied . the vector @xmath30 is called the " timely - throughput vector . " a timely - throughput vector @xmath31 that can be achieved via some scheduling policy will be called an " achievable timely - throughput vector . " the set of all achievable timely - throughput vectors constitutes the " rate - region , " denoted by @xmath32 . given weights @xmath33 for the timely - throughput of each flow @xmath34 , we will define the " weighted timely - throughput " as @xmath35 , where @xmath36 . in sections [ rr]-[sec : optpol ] we consider an average - power constraint on each node @xmath37 . if the total energy consumed by all the concurrent packet transmissions on link @xmath19 at time @xmath3 is @xmath38 units of energy , then the nodal average - power constraints are given by @xmath39 , where the second summation is taken over all links @xmath19 , i.e. , links @xmath15 for some node @xmath8 .
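for concreteness , the timely - throughput and the nodal average - power constraint just described can be restated in standard long - run - average notation ; the following is a reconstruction from the verbal definitions above , with illustrative symbols rather than the paper's own macros :

D_f \;=\; \liminf_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\left[ d_f(t) \right] , \qquad \limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \; \sum_{j \,:\, (i,j) \in \mathcal{E}} \mathbb{E}\left[ E_{ij}(t) \right] \;\le\; \bar{P}_i \quad \text{for every node } i ,

where d_f(t) is the number of packets of flow f delivered on time in slot t , E_{ij}(t) is the total energy spent on link ( i , j ) in slot t , and \bar{P}_i is the average - power budget of node i .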
for simplicity of exposition , we suppose that the number of power levels available to choose from for any transmission is finite . we note that the above constraint on the average - power allows a node to transmit packets simultaneously over several outgoing links , which can be achieved via employing various techniques such as tdma , ofdma , cdma , etc . we suppose that nodes can simultaneously receive any number of packets while they are transmitting . we will derive completely distributed scheduling policies that maximize the weighted timely - throughput for any given weight vector @xmath40 for the unreliable multi - hop network under the average - power constraint ( [ shannon ] ) on nodes . as an alternative , or in addition , in section [ sec : linkcap ] we will consider peak - power constraints on each link : @xmath41 . alternatively , we can constrain the number of concurrent packets that can be transmitted on a link @xmath19 at each time @xmath3 . for either of these situations , we will obtain quantifiably near - optimal distributed scheduling policies . in order to characterize the network's rate - region @xmath32 , it is sufficient to characterize the set of pareto - optimal timely - throughput vectors , i.e. , those achievable vectors that are not dominated componentwise by any other achievable vector , since @xmath32 is simply its closed convex hull . the problem of obtaining @xmath32 therefore reduces to that of finding scheduling policies which maximize weighted timely - throughputs of the form @xmath35 . the latter problem can be posed as a constrained network - wide markov decision process ( cmdp ) @xcite . the state of an individual packet present in the network at time @xmath3 is described by the flow @xmath10 to which it belongs , and the two - tuple @xmath45 , where @xmath0 is the node at which it is present , and @xmath46 is the time - to - go till its deadline . the state of the network at time @xmath3 , @xmath47 , is described by specifying the state of each packet present in the network at time @xmath3 . since the time spent by a packet in the network is bounded by @xmath26 , and since the number of arrivals in any time slot is also bounded due to the bounded support assumption , the system state @xmath47 takes on only finitely many values , though the number of such values is exponentially large . a scheduling policy @xmath48 has to choose , at each time @xmath3 , possibly in a randomized way , which packets to transmit at each node from the set of available packets , and over which links and at what powers . the link choice allows routing to be optimized . the choice made at time @xmath3 will be denoted @xmath49 . since the probability distribution of the system state @xmath50 at time @xmath51 depends only on @xmath47 and @xmath49 , the problem of maximizing the timely - throughput subject to node - capacity constraints is a cmdp , where a reward of @xmath52 is received when a packet of flow @xmath10 is delivered to its destination before its deadline expires : @xmath53 . in the above , @xmath29 is the number of packets of flow @xmath10 delivered in time to @xmath12 at time @xmath3 . the above cmdp , parameterized by the vector @xmath54 , is optimized by a stationary randomized policy @xcite . since the numbers of states and actions are both finite , there is a finite set @xmath55 of stationary randomized policies such that for each value of @xmath40 , there is a policy that belongs to this set and solves the cmdp @xcite . let @xmath56 be the vectors of timely - throughputs associated with the policies @xmath57 .
we then have the following characterization of @xmath32 : [ lemma1 ] @xmath58 . note that , though finite , the above characterization is intractable to use for computation , since the number of stationary markov policies is exponentially large in the following parameter : ( maximum possible number of packets in the network ) @xmath59 ( maximum path length of the flows ) @xmath59 ( maximum possible relative deadline ) . we therefore seek to design significantly lower complexity decentralized scheduling policies that achieve the region @xmath32 . since we can restrict ourselves to stationary randomized policies , we can replace @xmath60 and @xmath61 by @xmath62 : maximizing the timely - throughput subject to nodal average - power constraints can equivalently be posed as the following cmdp : @xmath63 . defining @xmath64 as the lagrange multiplier associated with the power constraint on node @xmath0 , and @xmath65 , we can write the lagrangian for ( [ op ] ) as @xmath66 , where the expectation is w.r.t . the policy @xmath48 that is being used , the random packet transmission outcomes , and the randomness of the packet arrivals and relative deadlines , if the latter are random . denoting by @xmath67 the amount of energy spent on transmitting the @xmath68-th packet of flow @xmath10 at time @xmath3 on link @xmath19 , we have @xmath69 , and the lagrangian ( [ lagr ] ) reduces to @xmath70 . this can be decoupled completely on a packet - by - packet basis for any fixed value of the vector @xmath71 , as follows . let packets@xmath72 denote the set of all packets of flow @xmath10 , and denote a generic packet by @xmath73 . let packets@xmath74 denote the subset of packets of flow @xmath10 that arrive before time @xmath3 . let @xmath75 denote the total energy consumed by packet @xmath73 at node @xmath0 , let @xmath76 denote the flow that @xmath73 belongs to , and let @xmath77 be the random variable that assumes the value one if packet @xmath73 reaches its destination before its deadline and zero otherwise . since the relative deadlines of packets are bounded , @xmath78 can be rewritten as a sum over packets : @xmath79 . the term corresponding to packet @xmath73 of flow @xmath10 , namely @xmath80 , can be interpreted as follows . the packet incurs a payment of @xmath64 for using unit energy at node @xmath0 , and accrues a reward of @xmath52 if it reaches its destination before its deadline expires . this _ optimal single - packet transportation problem _ is of very low complexity and is addressed further in section [ spsf ] . let @xmath81 denote its optimal expected cost . due to the decomposition of ( [ lagr2 ] ) over packets , we can optimize packet - by - packet . hence we obtain @xmath82 , since @xmath24 is the arrival rate of packets of flow @xmath10 . therefore , for designing the policy @xmath48 that maximizes the lagrangian , we can simply solve the optimal single - packet transportation problem and let each packet make its own decision at each node on whether it wants to be transmitted , and if so , at what power level . no network state knowledge is needed by a packet to determine its optimal decision . importantly , each packet's actions are independent of the actions chosen for all other packets in the network . it is very noteworthy that this results in a completely decentralized scheduling policy . the optimal single - packet transportation problem is described as follows . a single packet of flow @xmath10 is generated at time @xmath83 at source node @xmath11 , with state @xmath84 , where @xmath13 is the time - to - deadline .
at each time step thereafter , the time - to - deadline is decremented by one . if it is not delivered to the destination node @xmath12 by time @xmath85 , then it is discarded from the network . a reward of @xmath52 units is accrued if the packet reaches the destination node @xmath12 by time @xmath13 . a price @xmath64 per unit energy has to be paid by the packet for transmission over an outgoing link at node @xmath0 . with the state of the packet described by the two tuple @xmath45 , where @xmath0 is the node at which it is present , and @xmath46 is the time - till - deadline , we can use dynamic programming to solve for the value function @xmath86 , @xmath87 solving for the maximizer on the rhs yields the optimal action , i.e. , whether to transmit or not , and if so , at what level , for the packet of flow @xmath10 in the state @xmath45 . it is important to note that this is a low complexity problem . each packet s state only consists of the two tuple , ( node it is currently at , time remaining to its deadline ) . this is simply a dynamic programming problem over a time horizon of @xmath26 , with @xmath88 states , in contrast to the exponentially large size , @xmath89 , of the original network s state - space . the dual function is @xmath90 importantly , it can be obtained in a decentralized fashion for any price vector @xmath71 , due to the decomposition into a collection of optimal single - packet transportation problems coupled only through the node prices @xmath64 . the dual problem is : @xmath91 there is no duality gap . the cmdp can equivalently be posed as a linear program , in which the variables to be optimized are the _ occupation measures _ @xcite induced by the policy @xmath48 on the joint state - action space . being a linear program , the duality gap is zero . we now elaborate on how the optimal single - packet transport problem yields the overall network - wide optimal joint routing , scheduling and transmission policy . the key is to use randomization when packets are indifferent to being transmitted at two power levels . this arises when two different choices both attain the maximum of the rhs of the dynamic programming equation ( [ bell ] ) . in such cases , the action taken can be chosen randomly from one of the maximizers . such randomization allows satisfaction of the power constraints with equality , or , to put it another way , it allows us to fully use up all the power that is available at a node if beneficial . let @xmath92 be a price vector . denote by @xmath93 an optimal randomized policy for packets of flow @xmath10 , and by @xmath94 the policy that implements @xmath93 for each packet belonging to flow @xmath10 . suppose that , at every node @xmath0 , either the average - power constraint ( [ shannon ] ) is satisfied with equality by @xmath94 , or @xmath95 . then @xmath94 is optimal for cmdp , and @xmath96 is optimal for the dual problem . the dual function is @xmath97 the result simply follows from complementary slackness @xcite since the primal problem can be written as a linear program over variables that are occupation measures . [ optimality - of - decentralized - policy ] the optimal policy for cmdp is fully decentralized . in order for any node @xmath0 to make a decision regarding a packet @xmath73 present with it at any time @xmath3 , the node only needs to know the flow @xmath10 that the packet belongs to , and the time - to - deadline of the packet . 
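to make the preceding concrete , the single - packet transportation mdp can be solved by straightforward backward induction over its ( node , time - to - deadline ) states . the following is a minimal sketch in python , under the conventions described above ( a reward collected at the destination , a price per unit energy at each node , and a finite menu of power levels with known success probabilities ) ; all names and data structures are illustrative assumptions , not the paper's implementation .

    # backward induction for the single-packet transportation MDP (illustrative sketch).
    # state: (node i, time-to-deadline s); actions: hold, or transmit over an outgoing
    # link (i, j) at energy level e, paying lam[i] * e and succeeding with probability p[(i, j)][e].
    def single_packet_values(nodes, links, p, lam, dest, reward, D):
        """nodes: node ids; links: node -> list of neighbors; p: (i, j) -> {energy: success prob};
        lam: node -> price per unit energy; dest: destination; reward: value of on-time delivery;
        D: relative deadline.  Returns the value function V and a maximizing action per state."""
        V, act = {}, {}
        for i in nodes:
            for s in range(D + 1):
                if i == dest:
                    V[(i, s)] = reward          # reward is collected upon reaching the destination
                    act[(i, s)] = ('stop',)
            V.setdefault((i, 0), 0.0)           # deadline expired elsewhere: packet is discarded
            act.setdefault((i, 0), ('drop',))
        for s in range(1, D + 1):
            for i in nodes:
                if i == dest:
                    continue
                best_val, best_act = V[(i, s - 1)], ('hold',)      # option: do not transmit this slot
                for j in links.get(i, []):
                    for e, succ in p[(i, j)].items():
                        val = (-lam[i] * e
                               + succ * V[(j, s - 1)]
                               + (1.0 - succ) * V[(i, s - 1)])
                        if val > best_val:
                            best_val, best_act = val, ('tx', j, e)
                V[(i, s)], act[(i, s)] = best_val, best_act
        return V, act

with the value function and maximizing actions in hand , a node can read off a packet's decision from the packet's flow and time - to - deadline alone , which is exactly the decentralized structure asserted by the theorem .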
the optimal policy for node @xmath0 simply consists of implementing the policy @xmath98 for packets of flow @xmath10 , where @xmath96 is the optimal price . we may observe the following key features of the solution . in order to solve the primal problem in its original form , the network is required to make decisions based on the knowledge of the network state @xmath47 . the size of the state - space in which the network state @xmath47 resides is exponential , @xmath89 , since there can be @xmath99 packets in the network , with each being in one of @xmath100 states . moreover , the optimal policy requires the entire network state information to be instantaneously known at each node at each slot . indeed , one of the key reasons why optimal policies for communication networks ( and other distributed systems ) are generally intractable is that every decision requires instantaneous knowledge of the complete network state , which is something that cannot be obtained since the entire purpose of determining the optimal policy is to communicate information with deadlines . thus , an approach based on directly solving the constrained network - wide mdp would have been computationally and implementationally futile . these serious limitations have led us to instead formulate the optimal single - packet transportation problem with the nodal transmission energy prices @xmath101 . this reduces the computational complexity from _ exponential _ to _ linear _ . moreover , the resulting solution can be implemented locally at each node . it is highly decentralized with no coupling between flows or nodes or even packets . after establishing the above completely decentralized structure in theorem [ optimality - of - decentralized - policy ] , we can directly obtain the network - wide optimal policy by embedding the very low - dimensional single - packet transportation problems of each flow into one flow - level problem , and then optimally allocating the power at each node among all the flows . we do this by considering the linear program involving " state - action probabilities " @xcite . in this approach we do not need to first explicitly solve for the optimal prices . let us consider a single packet of flow @xmath10 . from theorem [ optimality - of - decentralized - policy ] we can restrict attention to randomized markov policies where the packet is transmitted with a certain probability over a certain outgoing link , or not transmitted at all , with the probabilities depending only on the state @xmath102 of the packet . note also that links are unreliable ; a packet transmitted over link @xmath103 reaches @xmath8 with probability @xmath104 . under a randomized markov policy , the packet moves stochastically through the network . we can delete a packet and remove it from the network as soon as @xmath46 hits 0 . let @xmath105 denote the probability that the packet is transmitted over link @xmath103 when its time - till - deadline is @xmath46 , where we use the convention that @xmath106 is the probability that it is not transmitted , and also define @xmath107 correspondingly . ( note that @xmath108 means that this is the last allowed transmission of the packet , and the packet will be deleted after this transmission . ) the " state - action probabilities " @xmath109 satisfy the constraints @xmath110 , with the initial starting state @xmath11 captured by the equation @xmath111 . the probability that the packet reaches its destination @xmath12 before deadline expiry is @xmath112 .
if this policy is applied to all packets of flow @xmath10 , then the average reward per unit time is obtained by simply multiplying by the arrival rate @xmath24 , i.e. , it equals @xmath113 . the energy consumed by a single packet at node @xmath0 is @xmath114 . the power consumption at node @xmath0 due to the packets of flow @xmath10 is @xmath115 . combining all the flows , we obtain the following _ direct linear program _ to determine the optimal reward and the optimal markov randomized policy : @xmath116 . this linear program directly determines the optimal power allocation over flows at each node , and the optimal transportation policy for each packet . it also randomizes the actions of all packets of a particular flow identically , so that the power available is utilized optimally . this is a low complexity linear program with only @xmath117 variables and @xmath118 constraints . this is a dramatic reduction in complexity , and the problem is eminently tractable , being a linear program . in fact , there is additional structure that further reduces the complexity . for simplicity we illustrate this when there is only one transmit power level that corresponds to a fixed energy usage @xmath20 for any transmission . we show that each packet's decision is simply governed by a threshold on its time - to - deadline . for each flow @xmath10 and node @xmath0 , there is a threshold @xmath119 such that the optimal decision for a packet of flow @xmath10 at node @xmath0 with a time - to - deadline @xmath46 is to be transmitted if @xmath46 is strictly greater than the threshold @xmath119 , and not to be transmitted if @xmath46 is less than or equal to @xmath119 . in a state where the decisions to transmit / not transmit are both optimal ( i.e. , the maximizer on the rhs of the dynamic programming equation ( [ bell ] ) is not unique ) , we choose " not to transmit , " so that we thereby obtain an optimal policy that uniquely assigns an optimal action to each state . we will prove the following property ( p ) of this optimal policy , from which the theorem readily follows : ( p ) if the optimal decision is to not transmit a packet at a node , then it is optimal to never again transmit that packet at that node . the reason is that one can then simply define @xmath119 as the maximum value of @xmath46 at which the decision to not transmit is the optimal action . now we prove property ( p ) by using stochastic coupling . suppose that for a packet of flow @xmath10 at a node @xmath0 it is optimal to not transmit it at time - to - deadlines equal to @xmath120 , but it is optimal to transmit it when its time - to - deadline is @xmath121 . consider a packet , called packet-1 , that follows this optimal policy . it waits for @xmath122 slots at node @xmath0 , and then gets transmitted when its time - to - deadline is @xmath123 . now consider another packet , called packet-2 , that waits no time at node @xmath0 , and is transmitted when its time - to - deadline is @xmath73 . we will couple the subsequent experiences of packet-1 and packet-2 , i.e. , whether each transmission at a link is a failure or a success , after that transmission . then , if packet-1 reaches the destination @xmath124 in time , so does packet-2 . hence the reward accrued by packet-2 is no less than the reward accrued by packet-1 , while its costs are the same . therefore the decision of packet-2 to be immediately transmitted at time - to - deadline @xmath73 is optimal .
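as a small illustration of this threshold structure , the following sketch extracts per - node thresholds for one flow from the value function computed by the backward - induction routine sketched earlier ( the helper single_packet_values is that earlier illustrative function , not part of the paper ) ; it assumes a single energy level e and breaks ties in favor of not transmitting , as in the proof above .

    # extract the per-node threshold tau[i] for one flow: the largest time-to-deadline s
    # at which "do not transmit" is (weakly) optimal at node i, given the prices lam.
    def thresholds_from_values(nodes, links, p, lam, dest, reward, D, e=1.0):
        V, _ = single_packet_values(nodes, links, p, lam, dest, reward, D)
        tau = {}
        for i in nodes:
            if i == dest:
                continue
            tau[i] = 0                                    # at s = 0 the packet is discarded anyway
            for s in range(1, D + 1):
                hold = V[(i, s - 1)]                      # value of not transmitting in this slot
                outgoing = links.get(i, [])
                best_tx = max(
                    (-lam[i] * e + p[(i, j)][e] * V[(j, s - 1)]
                     + (1.0 - p[(i, j)][e]) * V[(i, s - 1)])
                    for j in outgoing
                ) if outgoing else float('-inf')
                if best_tx <= hold:                       # ties are broken toward not transmitting
                    tau[i] = s
            # the packet is transmitted at node i exactly when its time-to-deadline exceeds tau[i]
        return tau

for the two - flow example of section [ examples ] , plugging in the prices obtained there and checking the reported thresholds against the hand computation would be a natural sanity check .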
however , one should not search for an optimal policy by trying to find the optimal thresholds . it is far better to search for the optimal _ prices _ , since they are the _ same _ for all packets of all flows , whereas an optimal threshold is specific to a flow . that is , the prices result in the right trade - off between packets of different flows . however , if one obtains a set of thresholds that is " person - by - person " ( or nash ) optimal for the flows , in the sense that each threshold is optimal for a particular flow when the thresholds of the other flows are fixed , then the entire set of thresholds may not be " team " optimal . now we consider the problem of determining the correct prices @xmath125 to be charged by the nodes . the key point to note is that price - determination can be done offline ; prices can be pre - computed and stored . one method is simply to obtain them as the sensitivities of the power constraints in the direct linear program . this requires a good model of the network and its reliabilities and the demands . however , one can also obtain them by " tatonnement " over a running system . such price discovery is based on the dual function . first we discuss a hybrid of optimization and simulation , and subsequently a purely learning - based approach . price discovery can be performed offline by repeated simulation . since the dual problem is convex , each node can use subgradient descent to converge to the optimal price vector @xmath126 . the sub - gradient iteration is simply walrasian tatonnement @xcite , which drives the " excess power consumption " at each node towards zero . specifically , at the @xmath68-th iterate of the price vector , node @xmath0 chooses @xmath127 , which simply amounts to a standard subgradient iteration @xcite with a step - size @xmath128 . for any fixed price vector @xmath71 one can solve the dynamic programming equations for the optimal packet policy @xmath129 . instead of using simulation - based optimization to determine the optimal prices , one can determine both the optimal prices as well as the optimal policy contingent on those prices by using two time - scale stochastic approximation @xcite . the faster time - scale stochastic approximation for the policy is @xmath130 , where @xmath131 is the maximum , over all links @xmath15 , of @xmath132 . in the above , @xmath133 assumes the value @xmath1 if the packet - state at iteration @xmath68 is @xmath102 . @xmath134 is a positive sequence that satisfies @xmath135 . we can use a slower time - scale stochastic approximation for the prices , @xmath136 , where @xmath137 is the vector consisting of the nodal power bounds , @xmath138 is the average - power utilization at node @xmath0 under @xmath129 , and the sequence @xmath139 satisfies @xmath140 , as well as @xmath141 , where @xmath142 @xcite . the iterations converge to the optimal prices @xmath96 @xcite . when network parameters are not known , one can both solve for the optimal policy @xmath94 as well as the optimal nodal prices @xmath126 in a decentralized manner . one way to achieve this is to perform the value iterations using reinforcement learning for each price vector @xmath71 until convergence , and then to update the price @xmath71 using a gradient descent method . the analysis so far has addressed the nodal _ average _ - power constraints . in this section , we impose more stringent peak link - capacity constraints on the number of concurrent packets that can be scheduled at any given time slot @xmath3 .
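before developing these link - capacity constraints , we pause to record a minimal sketch , in python , of the price tatonnement just described . the step - size rule , the name avg_power , and the assumption that the per - node average power under the price - optimal packet policies can be evaluated ( e.g. , estimated by simulation or computed from the state - action probabilities ) are all illustrative assumptions , not the paper's implementation .

    # price tatonnement via projected subgradient iteration on the dual (illustrative sketch).
    def tatonnement(nodes, p_bar, avg_power, iters=2000):
        """p_bar: node -> average-power budget; avg_power(lam): node -> average power consumed
        when every packet follows its price-optimal single-packet policy under prices lam."""
        lam = {i: 0.0 for i in nodes}                    # initial prices
        for k in range(1, iters + 1):
            used = avg_power(lam)                        # evaluate power usage under current prices
            step = 1.0 / k                               # diminishing step size (an assumption)
            for i in nodes:
                # raise the price where the budget is exceeded, lower it otherwise,
                # and project onto the nonnegative orthant
                lam[i] = max(0.0, lam[i] + step * (used[i] - p_bar[i]))
        return lam

with this aside complete , we return to the link - capacity constrained problem .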
@xmath143 , now re - defined as the number of packets transmitted on link @xmath19 at time @xmath3 , has to satisfy @xmath144 the more stringent problem that results is , @xmath145 we now proceed to construct a distributed policy with a provably close approximation to optimality . we begin with the policy that is optimal for the _ average _ power constraint @xmath146 this is a distributed policy , as we have shown in the preceding sections , and is moreover tractable to compute . however it only ensures that the constraint is met on average , and _ not _ at each time @xmath3 . on the occasions that the number of packets that it prescribes for concurrent transmission does not exceed the constraint , we just transmit all the packets specified by that policy . however , on the occasions that it specifies an excessive number of transmissions exceeding the rhs of , we simply truncate the list of packets in the manner to be specified below , and only transmit a total of @xmath147 of those packets . clearly , this leads to a policy that does satisfy the constraint at each time instant . what we will show is that this policy is nearly optimal in a certain precise sense to be quantified below . we first note the connection of our approach to that advocated by whittle @xcite for multiarmed bandits . since there is no simple index policy @xcite that is optimal when one is allowed to pull @xmath68 arms concurrently , if @xmath148 , whittle has suggested relaxing this constraint for _ each _ time @xmath3 to a constraint that the _ average _ number of arms concurrently pulled is @xmath68 . this relaxed problem has a tractable solution under an indexability " condition @xcite . importantly , it is near - optimal when the number of arms available goes to infinity , with the proportion of arms of each type held constant @xcite . our approach can be regarded as an extension to multi - hop networks . we first take care of one detail . the average - power constraints in are nodal , while the link - capacity constraints in are link - dependent . to reconcile this , we consider the following version of the problem which involves average _ link - wise _ power constraints , @xmath149 the optimal policy for the above problem can be obtained in exactly the same fashion as for the problem , except that now there are link - based prices @xmath150 , instead of nodal prices @xmath64 . now we define the truncation policy for the problem which involves link - capacity constraints @xmath151 : if the policy @xmath152 specifies that node @xmath0 transmit more than @xmath153 packets at some time @xmath3 , then the truncation policy can pick any @xmath153 of these packets and transmit them . moreover , we eject from the network those packets which @xmath152 dictated to be scheduled , but were not picked for transmission . ( discarding the packets is not strictly required , but it simplifies the discussion ) . let us denote this modified policy by @xmath154 . it may be noted that under this policy , the evolution of the network is not independent across different packets , as was the case with @xmath152 . for the policy @xmath152 , let us denote by @xmath155 the probability that under @xmath156 a packet ( of flow @xmath10 ) with age @xmath157 time - slots would be attempted on link @xmath19 . 
since the arrival rate of flow @xmath10 packets is @xmath24 , and since the policy @xmath152 satisfies the average - power constraints @xmath147 imposed by the network , we have [ l4 ] @xmath158 . next , we determine the level of sub - optimality of @xmath159 . we will now obtain lower bounds on the performance of the policy @xmath159 . the following arguments are based on an analysis of the evolution of the policies on an appropriately constructed probability space . let us denote by @xmath160 the ( average ) reward earned by policy @xmath152 . first note that the reward collected by @xmath159 ( denoted by @xmath161 ) does not increase if , instead of dropping a packet because of a capacity - constraint violation , it were to schedule the packet as dictated by @xmath152 but receive no reward when that packet is delivered to its destination node ( the resulting reward is denoted by @xmath162 ) . however , @xmath162 is more than the reward obtained if a penalty of @xmath52 units per packet were imposed for scheduling a packet by utilizing " excess capacity " at some link @xmath163 , while a reward were still given in case this packet reaches the destination node ( denoted by @xmath164 ) . @xmath164 is certainly more than the reward which @xmath152 earns if it is penalized an amount equal to the sum of the excess bandwidths that its links utilize ( denoted by @xmath165 ) multiplied by @xmath52 , since any individual packet may be scheduled multiple times by utilizing excess bandwidth . thus , the difference @xmath166 is less than the sum of the excess bandwidths utilized by the links operating under the policy @xmath152 , scaled by the @xmath167 's . let @xmath168 denote the number of packets of flow @xmath10 that have an age of @xmath157 time - slots , and are served on link @xmath19 at time @xmath3 under the policy @xmath156 . we thus have [ l5 ] @xmath169^{+} . next , we will scale the network parameters in two equivalent ways , and show that the policy @xmath159 is asymptotically optimal . in the discussion below , @xmath170 is a scaling parameter . in the first approach to scaling , the link capacities @xmath147 and the mean arrival rates @xmath24 will be kept fixed , while the size of an individual packet will be scaled as @xmath171 , with the arrivals being i.i.d . with binomial parameters @xmath172 . the second , equivalent , formulation is to keep the size of packets fixed , while the link capacities for the @xmath170-th system are scaled as @xmath173 , with the arrivals being i.i.d . with binomial parameters @xmath174 . in the rest of the discussion , we will confine ourselves to the former formulation ; however , a similar analysis can be performed for the latter case . [ th:2 ] consider the sequence of systems described above , parameterized by @xmath170 , in which the arrivals for the @xmath170-th system are i.i.d . with binomial parameters @xmath172 . the deviation from optimality of the @xmath170-th system in the sequence operating under the policy @xmath159 is @xmath175 , and hence the policy @xmath159 is asymptotically optimal for the joint routing - scheduling problem under hard link - capacity constraints . below , @xmath176 is the mean absolute deviation of @xmath131 with respect to its mean @xmath177 . the deviation from optimality satisfies @xmath178 @xmath179 , where the last equality follows from @xcite .
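to make the truncation concrete , the following sketch shows one slot of the truncated policy for link - capacity constraints : each link collects the transmissions prescribed by the average - constraint - optimal policy , serves at most its capacity , and ejects the overflow , exactly as described above . the data structures ( the list of prescribed ( packet , link ) pairs and the capacity map ) are illustrative assumptions .

    from collections import defaultdict

    # one slot of the truncated policy under per-link capacity constraints (illustrative sketch).
    def truncated_slot(prescribed, capacity):
        """prescribed: list of (packet, link) pairs the average-constraint-optimal policy would
        transmit this slot; capacity: link -> maximum number of concurrent packets on that link.
        Returns (to_transmit, to_eject)."""
        by_link = defaultdict(list)
        for pkt, link in prescribed:
            by_link[link].append(pkt)
        to_transmit, to_eject = [], []
        for link, pkts in by_link.items():
            c = capacity[link]
            to_transmit.extend((pkt, link) for pkt in pkts[:c])   # any c of them may be chosen
            to_eject.extend(pkts[c:])                             # overflow packets are ejected
        return to_transmit, to_eject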
we can similarly consider the problem of maximizing the network's timely - throughput subject to additional bounds on the _ peak _ power that can be utilized by each node @xmath37 . the treatment is similar to that in section [ sec : linkcap ] , in which we designed near - optimal policies under hard constraints on the _ number _ of packets that can be transmitted at any given time @xmath3 . the problem is formally stated as @xmath180 , where @xmath181 ( with value greater than @xmath14 ) is the peak - power constraint on node @xmath0 . denote the policy which maximizes the objective function under the average - power constraints by @xmath156 . now we modify it to a policy @xmath159 described as follows : at each time @xmath3 , each node @xmath37 looks up the decision rule @xmath156 and obtains the optimal power levels at which @xmath156 would have carried out transmissions of the packets available with it . for this purpose , each node @xmath0 only needs to have knowledge of the age of the packets present with it . node @xmath0 then chooses a maximal subset of the packets present with it , such that the transmission power levels assigned to them by @xmath156 sum to less than the bound @xmath181 . one way to choose such a set of packets and associated power levels is as follows : arrange the packets in decreasing order of the transmission power assigned by @xmath156 , and label them ; then @xmath159 schedules all packets up to the largest index such that the energies of the packets up to that index sum to less than @xmath182 . the asymptotic optimality of @xmath159 follows as in theorem [ th:2 ] . consider the sequence of networks operating under the policy @xmath159 , in which the arrivals for the @xmath170-th system are i.i.d . with binomial parameters @xmath172 . the deviation from optimality of the @xmath170-th system in the sequence operating under the policy @xmath159 is @xmath175 , and hence the policy @xmath183 is asymptotically optimal for the peak - power problem . our model allows us to incorporate wireless fading . we model the channel state as a finite - state markov process @xmath184 , with the link transmission success probabilities @xmath185 a function of the channel state @xmath184 and the transmit power level . as before , we assume that the probabilities are monotone decreasing in @xmath20 . the network state is then described by a ) the state of each packet , and b ) the channel state @xmath184 . the optimal policy can be determined along similar lines as before , by augmenting the system state with the channel state @xmath184 . the optimal policy will be of the following form : the decision to be taken by a node @xmath0 at time @xmath3 will depend on the state of the packet and the channel state @xmath184 . the above assumes that the channel condition is known to each transmitter . a simplification is possible if we assume that the process @xmath184 is i.i.d . , which would eliminate the need for communicating @xmath184 . alternatively , it could be deterministically time - varying . a common model which can be approximated is block fading @xcite , under which the channel state need only be communicated periodically . in the previous sections we have considered networks exclusively serving real - time flows , for which the utility of a packet arriving after its deadline is zero . often one is interested in networks that serve a mix of real - time and non - real - time flows @xcite . the system model can be easily extended .
to incorporate this case , we simply set the relative deadlines of the packets belonging to the non - real - time flows as @xmath186 , so that they are never dropped . we first illustrate the amenability of the theory by explicitly hand - computing the optimal distributed policy in two examples . in the second example , the deadlines are slightly more relaxed than in the first example , and we show both how the prices change as a consequence , and how the optimal policy reacts to this . subsequently , we consider a more complex example and provide a comparative simulation illustrating the performance of the asymptotically optimal policy for the case of link - capacity constraints , comparing it with well - known routing / scheduling policies such as the backpressure , shortest path , and earliest deadline first ( edf ) policies . [ ex2 ] consider the system shown in figure [ fig4 ] . it consists of two flows traversing the nodes 1 , 2 , and 3 , in opposite directions . flow 1 , with source node @xmath187 and destination node @xmath188 , has an end - to - end deadline @xmath189 of 2 slots . flow 2 , with source node @xmath190 and destination node @xmath191 , also has an end - to - end deadline @xmath192 of 2 slots . packets cannot afford even one failure on any transmission if they are to reach their destinations in time , since the relative deadlines for the flows are exactly equal to the total number of hops to be traversed . one packet of each flow arrives in every time slot , so @xmath193 . each packet transmission at any node is at 1 watt , so @xmath194 , since all time slots are one second . nodes 1 , 2 and 3 have average - power constraints @xmath195 and @xmath196 watts , respectively . links ( 1,2 ) , ( 2,3 ) , ( 2,1 ) and ( 3,2 ) have reliabilities of @xmath197 and @xmath198 , respectively . denoting by @xmath161 and @xmath162 the timely - throughputs of flows 1 and 2 , we wish to maximize @xmath199 , i.e. , packets of flow 1 are 2.5 times more valuable than packets of flow 2 . so @xmath200 and @xmath201 . the dynamic programming equations for the optimal single - packet transportation problem for flow @xmath1 yield @xmath202 , so @xmath203 and @xmath204^+ . similarly , for flow @xmath2 , @xmath205^+ and @xmath206 . packets of flow @xmath1 at node @xmath2 are more valuable than packets of flow @xmath2 at node @xmath2 , since packets of flow @xmath1 have an expected reward of @xmath207 , while packets of flow @xmath2 have an expected reward of @xmath208 . so we will push as many packets of flow @xmath1 as possible to node @xmath2 . in order for a packet of flow 1 to choose to be transmitted at node 2 , however , the price @xmath209 that it pays needs to be less than the expected reward ( 0.3 ) 5 that it can obtain in the future . hence @xmath210 . similarly , in order for a packet of flow 1 to choose to be transmitted at node 1 , the total expected price it expects to pay , @xmath211 ( since @xmath212 is the price it pays at node 1 , and if it succeeds in reaching node 2 , which happens with probability 0.4 , it then pays a price @xmath209 ) , must be less than the expected reward , which is ( 0.4 ) ( 0.3 ) 5 . hence @xmath213 . but flow @xmath1 can only push @xmath214 of its packets to node @xmath2 , so there is spare capacity at node @xmath2 that flow @xmath2 can use . for flow @xmath2 to use it we need @xmath215 . now , flow @xmath2 needs to utilize the spare capacity of @xmath216 left at node @xmath2 , so it needs to ensure that a flow of @xmath216 reaches node @xmath2 .
to do that it needs to transmit @xmath217 of the packets that arrive , since @xmath218 . so it needs to randomize at node @xmath219 . by complementary slackness , this can only happen if packets at node 3 are indifferent to being transmitted or not . so @xmath220 . since we want to maximize @xmath221 , we choose @xmath222 , and , from ( [ lambda1 ] ) , @xmath212=0.04 . therefore , we arrive at the following solution , where we denote by @xmath223 the probability with which a packet of flow @xmath10 is transmitted at node @xmath0 when the time - to - deadline is @xmath46 : @xmath224 . now we verify that this policy is optimal . @xmath225 implies @xmath226 , since @xmath227 . now @xmath228 implies that @xmath229 and @xmath230 are both optimal , i.e. , a packet is indifferent to them , and so one may randomize between them to satisfy the average - power constraint . similarly , @xmath231 implies that both decisions @xmath232 and @xmath233 are optimal . also , @xmath234 implies that @xmath235 and @xmath236 are both optimal . so we can randomize the transmission of packets of flow 2 in state @xmath237 . the average - power usages are 0.5 watts at node 1 , 0.4 watts at node 2 , and @xmath217 watt at node 3 . the average - power constraints of @xmath238 and @xmath239 at nodes @xmath1 and @xmath2 , respectively , are met with equality . the average - power constraint at node @xmath219 is slack but @xmath240 . so complementary slackness holds . hence the policy is optimal . [ ex3 ] we now consider the same system as in example [ ex2 ] , except that we relax the relative deadlines to @xmath241 , so that every packet can afford to have one hop retransmitted and still make it to its destination in time . consider a packet that has just arrived at node 1 . it can either make it to its destination in two hops if both transmissions are successful the first time they are attempted , or it can fail once at node 1 and then be successful on subsequent transmissions at nodes 1 and 2 , or it can succeed the first time at node 1 , fail once at node 2 , and then succeed at node 2 on the second attempt . if it does reach its destination , it obtains a reward of 5 . hence , taking these possibilities into account , if a packet of flow 1 gets transmitted at every available opportunity , then the expected reward for a packet of flow 1 at its first visit to node @xmath242 , which evaluates to 1.38 . similarly , the expected reward for a packet of flow 2 at its first visit to node @xmath243 , the expected reward for a packet of flow 1 at its second visit to node @xmath244 , the expected reward for a packet of flow 2 at its second visit to node @xmath245 , the expected reward for a packet of flow 1 at its first visit to node @xmath246 , the expected reward for a packet of flow 2 at its first visit to node @xmath247 , and the expected reward for a packet of flow 1 at its second visit to node @xmath248 . the expected reward for a packet of flow 2 at its second visit to node @xmath249 . packets of flow @xmath1 are more valuable at node @xmath2 than packets of flow @xmath2 . so we want to maximize the throughput of packets of flow @xmath1 to node @xmath2 . if we transmit with probability @xmath250 on the first attempt at node @xmath1 , then all power is used up . the maximum power that can be consumed by packets of flow 1 at node 2 is @xmath251 watts . so there is still @xmath252 watts left at node @xmath2 that can be used by packets of flow @xmath2 . after arriving at node 2 for the first time , a packet of flow 2 can use a maximum power of @xmath253 watts .
so flow 2 at node 3 needs to make @xmath254 attempts , which amounts to randomization with probability @xmath255 . in order to transmit a packet of flow 2 on its second visit to node 2 , the price @xmath209 cannot be any more than the expected reward @xmath208 . so we could attempt some of the packets of flow 2 that arrive at node @xmath219 , and transmit some packets of flow @xmath2 that arrive at node @xmath2 . with @xmath231 , @xmath212 needs to satisfy @xmath256 , so @xmath257 . similarly , @xmath258 needs to satisfy @xmath259 , which yields @xmath260 . the power constraint at node @xmath219 is slack , but @xmath240 . so the price vector is @xmath261 . the corresponding probabilities of transmission are @xmath262 . the optimal single - packet transportation dynamic programming equations yield @xmath263 . in all of the below , both choices are again optimal : @xmath264 . note that the power consumptions are @xmath265 + \frac{1}{13} ( 0.6 ) \left[ 1 + ( 0.3 ) \cdot 1 \right] = 0.4 at node 2 , which is tight , and \frac{1}{13} at node 3 . the last constraint is loose , but then @xmath240 , and we still have complementary slackness . so the solution is optimal . now we consider the case of link - capacity constraints ( or , equivalently , peak - power constraints ) . we present a comparative simulation study of the asymptotically optimal policy with respect to the following two policies : a ) earliest deadline first scheduling combined with backpressure routing ( edf - bp ) , and b ) earliest deadline first scheduling combined with shortest path routing ( edf - sp ) , which routes packets along the shortest path from source to destination with ties broken randomly . we consider the systems shown in figures [ fig6 ] and [ fig6.1 ] . all link capacities are just 1 , so the asymptotically optimal policy is noteworthy for its excellent performance , seen below , even in the very much non - asymptotic regime . ( caption fragment for figures [ fig6 ] and [ fig6.1 ] , partially recovered : ... and @xmath266 ; arrivals are deterministic with rates @xmath193 per time - slot ; link capacities are @xmath267 packet / time - slot for all links @xmath103 shown . ) we compare the performance of the asymptotically optimal policy @xmath159 of theorem [ th:2 ] with the following edf - sp policy : the link @xmath9 chosen for scheduling packet transmissions of flow @xmath10 lies on the shortest path that connects the source and destination nodes of flow @xmath10 ; thereafter , on each link @xmath103 , higher priority is given to packets having earlier deadlines , and a maximum of @xmath153 packets are served in decreasing order of priority . we also compare the performance with the edf - bp policy . under the edf - bp policy , each node @xmath0 maintains queues for each flow @xmath10 and possible age @xmath46 . denoting by @xmath268 the queue length at node @xmath0 at time @xmath3 , and by @xmath269 the total number of packets of flow @xmath10 at node @xmath0 at time @xmath3 , the policy functions as follows :
1 . for each outgoing link @xmath15 , edf - bp calculates the backlogs @xmath270 of flow @xmath10 .
2 . on each link @xmath9 it prioritizes packets on the basis of the backlogs associated with their flows ; for packets belonging to the same flow , higher priority is given to packets having earlier deadlines . it then serves a maximum of @xmath153 highest - priority packets from amongst the packets whose flows have a positive backlog @xmath270 .
both edf - sp and edf - bp eject packets that have crossed their deadlines .
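as a small illustration of the deadline - based tie - breaking used by both baselines , the following sketch selects , on one link in one slot , at most @xmath153 packets : edf - sp ranks purely by earliest deadline , while edf - bp ranks by larger flow backlog first and then by earliest deadline within a flow . the tuple layout and names are assumptions for the example , not the simulator used to produce the plots .

    # earliest-deadline-first selection on a single link for one slot (illustrative sketch).
    def edf_select(packets, capacity, use_backpressure=False):
        """packets: list of (packet_id, time_to_deadline, flow_backlog) candidates on this link;
        capacity: maximum number of packets the link can carry this slot."""
        alive = [p for p in packets if p[1] > 0]                    # expired packets are ejected
        if use_backpressure:
            alive = [p for p in alive if p[2] > 0]                  # edf-bp: only flows with positive backlog
            ranked = sorted(alive, key=lambda p: (-p[2], p[1]))     # larger backlog first, then earlier deadline
        else:
            ranked = sorted(alive, key=lambda p: p[1])              # edf-sp: earlier deadline first
        return ranked[:capacity]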
plots [ fig7 ] and [ fig9 ] show the comparative performances of the policies for the networks in fig . [ fig6 ] and [ fig6.1 ] as the relative deadlines of the flows are varied . the performance of the asymptotically optimal policy is superior even in the non - asymptotic regime . plots [ fig8 ] and [ fig10 ] show the comparative performance as network capacities are increased . observe that for the network in figure [ fig6.1 ] , a shortcoming of edf - sp is that it is unable to utilize the path @xmath271 , and therefore performs worse than edf - bp . though it seems that in a general network edf - bp should be able to utilize all source - destination paths , it will neither be able to efficiently prioritize packets based on their age , nor to discover which paths are more efficient at delivering packets within their deadlines . ( figure captions , partially recovered : comparative performance as the relative deadlines of both flows are increased ; as link capacities and arrival rate are scaled , with the relative deadlines for both flows set at @xmath272 time - slots ; as the relative deadlines of the flows are increased , with the relative deadline of flow 1 one more than that of flow 2 ; and as link capacities and arrival rates are scaled , with relative deadlines @xmath272 and @xmath273 for flows 1 and 2 respectively . ) we have addressed the problem of designing optimal distributed policies that maximize the timely - throughput of multi - hop wireless networks with average nodal power constraints and unreliable links , in which data packets are useful only when they are delivered by their deadline . the key to our results is the observation that if the nodes are subject to average - power constraints , then the optimal solution is decoupled not only along nodes and flows , but also along packets within the same flow at a node . each packet can be treated exclusively in terms of its time - to - deadline at a node . the decision to transmit a packet is governed by a " transmission price " that the packet pays at each node , weighed against the reward that it collects at the destination if it reaches it before the deadline expires . the above policies are highly decentralized ; a node's decisions regarding a packet can be taken solely on the basis of its age and flow classification . the nodes need not share any information such as queue lengths , etc . , amongst themselves in order to schedule packets . this approach is notable since obtaining optimal distributed policies for networks has long been considered an intractable problem . thus , our work fills two important gaps in the existing literature on policies for multi - hop networks : a ) hard per - packet end - to - end delay guarantees , and b ) optimal distributed policies . the traditional approach to scheduling has been to consider the lagrangian of the fluid model , and interpret the queue lengths as prices . this addresses throughput optimality , but not delay , as one would expect from any fluid - model - based analysis . the key to our analysis consists of posing the problem of jointly routing and scheduling packets under deadline constraints over a multi - hop network as an intrinsically stochastic problem involving unreliabilities , and considering its lagrangian and the dual . this intrinsically captures the variabilities in packet movement which critically affect delays , and allows us to address the timely - throughput optimality of packets that meet hard end - to - end deadlines .
the lagrange multipliers associated with the average power or rate constraints are then the prices paid by a packet to a node for transmitting its packet , rather than queue lengths . this yields a completely decentralized policy , in which decisions are taken by a packet based solely on its age and location in the network , for which the accompanying dynamic programming equations are very tractable . the overall solution is eminently tractable , being completely determined by a linear program with the number of variables equal to the product of the square of the number of nodes , the number of flows and the maximum relative deadline , rather than exponential in problem size . we also consider the case of peak - power constraints at each node , which may be present in addition to , or as a replacement of , average - power constraints . it is interesting that a minor modification of the optimal policy for the case of average - power constraints is asymptotically optimal as the network capacity is scaled . this approach of dualizing the stochastic problem has broad ramifications , as has been explored in @xcite for problems such as video transmission and energy storage . this paper has considered only the case of unreliable links , which is of interest in networks consisting of microwave repeaters , networks with directed antennas , or even unreliable wireline links . the case of networks with contention for the medium is addressed in a companion paper . l. tassiulas and a. ephremides , `` stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks , '' _ ieee transactions on automatic control _ , vol . 37 , no . 12 , pp . 19361948 , dec 1992 . j. andrews , s. shakkottai , r. heath , n. jindal , m. haenggi , r. berry , d. guo , m. neely , s. weber , s. jafar , and a. yener , `` rethinking information theory for mobile ad hoc networks , '' _ ieee communications magazine _ , vol . 46 , no . 12 , pp . 94101 , december 2008 . kelly , a. k. maulloo , and d. k. h. tan , `` rate control for communication networks : shadow prices , proportional fairness and stability , '' _ the journal of the operational research society _ 237252 , 1998 . l. jiang , m. leconte , j. ni , r. srikant , and j. walrand , `` fast mixing of parallel glauber dynamics and low - delay csma scheduling . '' _ ieee transactions on information theory _ , vol . 58 , no . 65416555 , dec 2012 . l. bui , r. srikant , and a. stolyar , `` a novel architecture for reduction of delay and queueing structure complexity in the back - pressure algorithm , '' _ ieee / acm transactions on networking _ , vol . 19 , no . 6 , pp . 15971609 , dec 2011 . g. gupta and n. shroff , `` delay analysis for multi - hop wireless networks , '' in _ proceedings of twenty - eighth annual joint conference of the ieee computer and communications societies ( infocom ) _ , april 2009 , pp . 23562364 . l. ying , s. shakkottai , a. reddy , and s. liu , `` on combining shortest - path and back - pressure routing over multihop wireless networks , '' _ ieee / acm trans . networking _ , 19 , no . 3 , pp . 841854 , jun . 2011 . , `` admission control and scheduling for qos guarantees for variable - bit - rate applications on wireless channels , '' in _ proceedings of the tenth acm international symposium on mobile ad hoc networking and computing ( mobihoc ) _ , may 2009 . 
, `` scheduling heterogeneous real - time traffic over fading wireless channels , '' in _ proceedings of twenty - ninth annual joint conference of the ieee computer and communications societies ( infocom ) _ , san diego , ca , march 2010 . , `` utility maximization for delay constrained qos in wireless , '' in _ proceedings of twenty - ninth annual joint conference of the ieee computer and communications societies ( infocom ) _ , march 2010 , pp . 19 . , `` utility - optimal scheduling in time - varying wireless networks with delay constraints , '' in _ proceedings of the eleventh acm international symposium on mobile ad hoc networking and computing ( mobihoc ) _ , 2010 , pp . , `` broadcasting delay - constrained traffic over unreliable wireless links with network coding , '' in _ proceedings of the twelfth acm international symposium on mobile ad hoc networking and computing ( mobihoc ) _ , 2011 , pp . 3342 . , `` optimality of periodwise static priority policies in real - time communications , '' in _ proceedings of the 50th ieee conference on decision and control and european control conference ( cdc - ecc ) _ , 2011 , pp . 50475051 . , `` queueing systems with hard delay constraints : a framework and solutions for real - time communication over unreliable wireless channels , '' _ queueing systems : theory and applications _ , vol . 71 , pp . 151177 , 2012 . , `` pathwise performance of debt based policies for wireless networks with hard delay constraints , '' in _ proceedings of the 52nd ieee conference on decision and control ( cdc ) _ , dec . 10 - 13 2013 , pp . 78387843 . , `` fluctuation analysis of debt based policies for wireless networks with hard delay constraints , '' in _ proceedings of thirty - third annual joint conference of the ieee computer and communications societies ( infocom ) _ , april 2014 , pp . 24002408 . r. li and a. eryilmaz , `` scheduling for end - to - end deadline - constrained traffic with reliability requirements in multi - hop networks , '' in _ proceedings of thirtieth annual joint conference of the ieee computer and communications societies ( infocom ) _ , april 2011 , pp . 30653073 . z. mao , c. koksal , and n. shroff , `` optimal online scheduling with arbitrary hard deadlines in multihop communication networks , '' _ networking , ieee / acm transactions on _ , vol . pp , no . 99 , pp . 11 , 2014 . v. s. borkar , `` control of markov chains with long - run average cost criterion , '' in _ the i m a volumes in mathematics and its applications _ , w. fleming and p. lions , eds.1em plus 0.5em minus 0.4em springer , 1988 , pp . d. p. bertsekas , a. e. ozdaglar , and a. nedic , _ convex analysis and optimization _ , athena scientific optimization and computation series.1em plus 0.5em minus 0.4embelmont ( mass . ) : athena scientific , 2003 . j. gittins and d. jones , `` a dynamic allocation index for the sequential design of experiments , '' in _ progress in statistics _ , j. gani , ed.1em plus 0.5em minus 0.4emamsterdam , nl : north - holland , 1974 , pp . 241266 . s. shakkottai and a. l. stolyar , `` scheduling algorithms for a mixture of real - time and non - real - time data in hdr , '' in _ teletraffic engineering in the internet era . proceedings of the international teletraffic congress - itc - i7 _ , ser . teletraffic science and engineering , n. l. d. f. jorge moreira de souza and e. a. de souza e silva , eds.1em plus 0.5em minus 0.4emelsevier , 2001 , vol . 4 , pp . 793 804 . rahul singh received the b.e . 
degree in electrical engineering from indian institute of technology , kanpur , india , in 2009 , the m.sc . degree in electrical engineering from university of notre dame , south bend , in , in 2011 , and the ph.d . degree in electrical and computer engineering from the department of electrical and computer engineering , texas a&m university , college station , tx , in 2015 . he is currently a postdoctoral associate at the laboratory for information and decision systems ( lids ) , massachusetts institute of technology . his research interests include decentralized control of large - scale complex cyberphysical systems , operation of electricity markets with renewable energy , and scheduling of networks serving real - time traffic . p. r. kumar , b. tech . ( iit madras , 73 ) , d.sc . ( washington university , st . louis , 77 ) , was a faculty member at umbc ( 1977 - 84 ) and univ . of illinois , urbana - champaign ( 1985 - 2011 ) . he is currently at texas a&m university . his current research is focused on stochastic systems , energy systems , wireless networks , security , automated transportation , and cyberphysical systems . he is a member of the us national academy of engineering and the world academy of sciences . he was awarded a doctor honoris causa by eth , zurich . he has received the ieee field award for control systems , the donald p. eckman award of the aacc , the fred w. ellersick prize of the ieee communications society , the outstanding contribution award of acm sigmobile , the infocom achievement award , and the sigmobile test - of - time paper award . he is a fellow of both ieee and acm . he was leader of the guest chair professor group on wireless communication and networking at tsinghua university , is a d. j. gandhi distinguished visiting professor at iit bombay , and an honorary professor at iit hyderabad . he was awarded the distinguished alumnus award from iit madras , the alumni achievement award from washington univ . , and the daniel drucker eminent faculty award from the college of engineering at the univ . of illinois .
we consider multi - hop networks serving multiple flows in which packets not delivered to their destination nodes by their deadlines are dropped from the network . the throughput of packets that are delivered within their end - to - end deadlines is called the timely - throughput . we address the design of policies for routing and scheduling packets that optimize any specified weighted average of the timely - throughputs of several flows , under nodal power constraints . we provide a new approach which directly yields an optimal distributed scheduling policy that attains any desired maximal timely - throughput vector ( i.e. , any point on the pareto frontier ) under average - power constraints on the nodes . this completely distributed and tractable solution structure arises from a novel intrinsically stochastic decomposition of the lagrangian of the constrained network - wide markov decision process rather than of the fluid model . the derived policies are highly decentralized in several ways . all decisions regarding a packet's transmission scheduling , transmit power level , and routing are based solely on the age of the packet , not requiring any knowledge of network state or queue lengths at any of the nodes . global coordination is achieved through a " price " for energy usage paid by a packet each time that its transmission is attempted at a node . this price decouples packets entirely from each other . it is different from that used to derive the backpressure policy , where price corresponds to queue lengths . the complexity of calculating the prices is tractable , being related only to the number of nodes multiplied by the relative deadline bound , and is considerably smaller than the number of network states , which is exponentially large . prices can be determined offline and stored . if nodes have peak - power constraints instead of average - power constraints , then the decentralized policy obtained by truncation is near - optimal with respect to the timely - throughput as link capacities increase in a proportional way . ( title : throughput optimal decentralized scheduling of multi - hop networks with end - to - end deadline constraints : unreliable links . index terms : communication networks , wireless networks , delay guarantees in networks , scheduling networks , quality of service . )
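To make the complexity claim concrete, here is a toy back-of-the-envelope comparison between the stated price-computation cost (on the order of the square of the number of nodes times the number of flows times the maximum relative deadline, per the conclusion above) and a crude lower bound on the number of network states. The network sizes are arbitrary illustrative choices, not values from the paper.

```python
# Back-of-the-envelope comparison of the claimed complexity: the LP that fixes
# the prices has on the order of (#nodes)^2 * (#flows) * (max relative deadline)
# variables, versus an exponentially large number of network states.
nodes, flows, max_deadline = 10, 4, 8
lp_variables = nodes**2 * flows * max_deadline
# Even a crude lower bound -- each node either holds or does not hold a packet
# of each (flow, age) class -- is already exponential in the same quantities:
state_space_lower_bound = 2 ** (nodes * flows * max_deadline)
print(lp_variables)              # 3200
print(state_space_lower_bound)   # about 2e96
```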
A 105-year-old man has made history by cycling more than 14 miles round a track in an hour. Robert Marchand set the first hour record in the over-100s category in 2012, then beat it himself two years later at the age of 102, when he covered more than 16 miles. While his distance in Wednesday’s ride was not as great as those two, the new over-105s category had been specially created for him to reflect the magnitude of his feat. Cheered by hundreds of fans, the Frenchman completed 92 laps round the velodrome at Saint-Quentin-en-Yvelines, near Paris. “I did not see the sign warning me I had 10 minutes left,” said Marchand. “Otherwise I would have gone faster, I would have posted a better time. I’m now waiting for a rival.” Marchand’s fans chanted “Robert, Robert” as he neared the end of the ride and gave him a standing ovation at its conclusion, before he was mobbed by TV crews. “He could have been faster but he made a big mistake. He has stopped eating meat over the past month after being shocked by recent reports on how animals are subjected to cruel treatment,” his physiologist, Veronique Billat, told the Associated Press. Marchand was born in Amiens, near the frontline of the first world war, three years before its outbreak. He worked as a firefighter in later life, before going to live in Venezuela and Canada. Back in France in the 1960s, Marchand made a living through various jobs that left him with no time to practise sports. He was 68 when he began a series of cycling feats. While Marchand is not making plans for the future, his coach, Gerard Mistler, said he would not be surprised to see him continue cycling. “Setting goals for himself is part of his personality,” he said. “If he tells me he wants to improve his record, I’ll be game. Robert is a great example for all of us.” ||||| He may not be the fastest cyclist round a velodrome, but he is easily one of the oldest. Robert Marchand has clocked up 105 years and now a new record for the furthest distance cycled in one hour. The French cyclist managed 22.547km (14 miles) at the national velodrome, taking the top spot in a new category - for riders over 105. Mr Marchand already holds the record for those aged over 100 - 26.927km - set in 2012. He "could have done better", he says, but missed a sign showing 10 minutes to go. "My legs didn't hurt," he told BFMTV. "My arms hurt but that's because of rheumatism." To be fair, he had admitted before the event at the Saint-Quentin-en-Yvelines velodrome near Paris that breaking his previous hour record would be tough. "I'm not in such good shape as I was a couple of years back," he told AFP news agency. "I am not here to be champion. I am here to prove that at 105 years old you can still ride a bike," he said. Hundreds of spectators cheered him on trackside. Born on 26 November 1911, Mr Marchand puts his fitness down to diet - lots of fruit and vegetables, a little meat, not too much coffee - and an hour a day on the cycling home-trainer. A prisoner of war in World War Two, he went on to work as a lorry driver and sugarcane planter in Venezuela, and a lumberjack in Canada.
No stranger to sport outside cycling, he competed in gymnastics at national level and has been a boxer. The current men's hour record is held by the UK's Bradley Wiggins - 54.526km - which he set in June 2015. ||||| SAINT-QUENTIN-EN-YVELINES, France (AP) — Nearly a century ago, Robert Marchand was told by a coach that he should give up cycling because he would never achieve anything on a bike. He proved that prediction wrong again on Wednesday. In a skin-tight yellow and violet jersey, the 105-year-old Frenchman set a world record in the 105-plus age category -- created especially for the tireless veteran -- by riding 22.547 kilometers (14.010 miles) in one hour. "I'm now waiting for a rival," he said. Marchand had ridden faster in the past on the boards of the Velodrome National, a state of the art venue used to host the elite of track cycling. But he had warned before his latest attempt that his current form was not as good. "I did not see the sign warning me I had 10 minutes left," Marchand said after his effort. "Otherwise I would have gone faster, I would have posted a better time. I'm not tired. I thought my legs would hurt, but they don't. My arms hurt, you have to hurt somewhere." Three years ago at the same venue, Marchand covered 26.927 kilometers (16.731 miles) in one hour to better his own world record in the over-100s category. Still, impressed fans chanted "Robert, Robert" during the last minutes of his ride. Marchand received a standing ovation once he completed the last of his 92 laps and was then mobbed by dozens of cameramen and TV crews. "He could have been faster but he made a big mistake. He has stopped eating meat over the past month after being shocked by recent reports on how animals are subjected to cruel treatment," Marchand's physiologist, Veronique Billat, told The Associated Press. By way of comparison, the current overall world record for one hour is 54.526 kilometers (33.880 miles) set by British rider Bradley Wiggins in 2015. But Wiggins, who smashed the previous record using the world's leading track cycling equipment, is now retired. Marchand, who lives in a small flat in a Parisian suburb with a meager pension of about 900 euros ($940), keeps pedaling and stretching every day. As if time had no effect on him. "He's got two essential qualities. A big heart that pumps a lot of blood, and he can reach high heart beat values that are exceptional for his age," said Billat, a university professor. "If he starts eating meat again and builds more muscle, he can better this mark." Marchand, a former firefighter who was born in 1911 in the northern town of Amiens, has lived through two world wars. He led an eventful life that took him to Venezuela, where he worked as a truck driver near the end of the 1940s. He then moved to Canada and became a lumberjack for a while. Back in France in the 1960s, Marchand made a living through various jobs that left him with no time to practice sports. He finally took up his bike again when he was 68 years old and began a series of cycling feats.
The diminutive Marchand — he is 1.52 meters (5-foot) tall and weighs 52 kilograms (115 pounds) — rode from Bordeaux to Paris, and Paris to Roubaix several times. He also cycled to Moscow from Paris in 1992 and set the record for someone over the age of 100 riding 100 kilometers (62 miles). "If the president of his teenage club who told him he was not made for cycling because he was too small could see him today, he would kick himself," Marchand's coach and good friend Gerard Mistler told the AP. According to Mistler, the secret behind Marchand's longevity relates to his healthy lifestyle: eating a lot of fruits and vegetables, no smoking, just the occasional glass of wine and exercising on a daily basis. "He never pushed his limits, goes to bed at 9 p.m. and wakes up at 6 a.m., there's no other secret," Mistler said. "If had been doping, he would not be there anymore." To stay fit, Marchand rides every day on his home trainer and puts himself through outdoor training sessions on the road when the weather is good enough. "One needs to keep his muscles working," said Marchand, a faithful reader of communist newspaper L'Humanite. "Reading a lot keeps his mind alert," Mistler said. "He does not watch much TV, apart from the Tour de France stages." At 105, Marchand is not making plans for the future. His coach would not be surprised to see him back on the boards, though. "Setting goals for himself is part of his personality," Mistler said. "If he tells me he wants to improve his record, I'll be game. Robert is a great example for all of us."
– A French cyclist has set a new record for riding 14 miles in one hour. If that doesn't sound so remarkable—the world record is 33.8 miles—consider the category in which Robert Marchand was competing: over 105. The retired firefighter did it the first time riding in under one hour in the over-100 category in 2012, then topped it two years later at age 102, the Guardian reports. He also holds the 100-plus record for fastest 100km (4 hours, 17 minutes). He turned 105 a few months ago and competed Wednesday in the new category created just for him. Marchand circled the Saint-Quentin-en-Yvelines velodrome near Paris 92 times with a crowd cheering him on. Afterward, he was modest about his feat, telling BFMTV via the BBC that he "could have done better" but missed a sign telling him he had 10 minutes to go. He adds, "My legs didn't hurt … My arms hurt, but that's because of rheumatism." Marchand, who cycles daily and once biked from Paris to Moscow, tells the AP he's "now waiting for a rival." Chips in his physiologist: "He could have been faster but he made a big mistake. He has stopped eating meat over the past month after being shocked by recent reports on how animals are subjected to cruel treatment." (This French sailor set an around-the-world record.)
talipes equinovarus is the most common congenital foot deformity diagnosed in newborns . serial manipulative casting by the use of plaster of paris ( pop ) has been accepted as the primary treatment modality to correct this multiplanar deformity . application of serial manipulative pop casts was first introduced and popularized by kite in the literature ; however , the ponseti technique has been the most popular and demonstrated as the most effective treatment approach . using the ponseti technique , the rates of satisfactory results have been reported as more than 90% by various authors in the literature . the ponseti technique requires casting weekly , with each cast being applied just after the previous cast removal . pop can be more difficult for parents and clinical staff to remove than fiberglass casts . some physicians dealing primarily with the treatment of clubfoot deformity , ask the parents of the patient to soak off the old cast the night before the application of the new cast upon arrival at the clinic . in this case , there is a time gap between the cast removal and the application of the new one . this period can compromise the duration of effective splintage . this may lengthen the time needed for correction and increase the number of casts needed . moreover , the removal of the casts can be stressful for the child as well as the parents . semirigid synthetic softcast is a lightweight material that allows rapid application and removal without any soaking or the use of an oscillating saw . it can easily be applied and dried within an acceptable period , allowing for molding . in addition , there is no time interval between cast removal and application . on the other hand , there are limited numbers of reports regarding the use of fiberglass cast materials in the treatment of clubfoot with the ponseti technique . the main purpose of the present study was to comparatively analyze the effectiveness , advantages , and the complications of using semirigid synthetic softcast with respect to pop cast during the treatment of clubfoot deformity . after having approval from selcuk university faculty of medicine ethical research institutional review board , the present study retrospectively evaluated the clinical data of 196 patients who underwent serial manipulative casting treatment using ponseti technique for clubfoot between september 2009 and october 2010 . clubfoot related to neuromuscular disorders such as myelomeningocele , cerebral palsy or arthrogriposis multiplex congenita , the patients who had a past medical history of any failed treatment for clubfoot , and the ones with any accompanying congenital deformity of the lower extremity were excluded from the study . a total of 95 patients ( 133 feet ) treated by an orthopedic referral center using semirigid synthetic softcast were included in group a , whereas the other 101 patients ( 116 feet ) treated by another orthopedic referral center using pop cast were included in group b. group a consisted of 59 boys ( 85 feet ) and 36 girls ( 48 feet ) . group b consisted of 61 boys ( 70 feet ) and 40 girls ( 46 feet ) . the mean age at presentation was 3.44 3 ( range , 014 ) days in group a and 3.59 3.2 ( range , 016 ) days in group b. serial manipulations and cast treatment was started as soon as possible after the first clinical visit ( fig . the feet were scored according to the pirani scoring system at the initial examination and during the last examination before achilles tenotomy . 
manipulations and serial weekly cast applications were begun immediately according to the ponseti technique . the cavus , adductus , and varus components of the deformity were corrected by positioning the foot in supination and then abducting the foot while counter pressure was applied with the thumb over the head of talus . above - the - knee casts made from the semirigid synthetic cast material ( 3 m scotchcast soft cast ; health care , st paul , mn ) for group a and from the pop for group b were applied . 2 ) . the photograph of a 2 days old baby with bilateral clubfeet at the initial examination . the photograph of a baby applied bilateral above knee casts made of semirigid synthetic softcast according to the ponseti method . all the medical records for the patients were reviewed , especially the clinical evaluation at the initial and final examinations , including pirani scores . any skin problems due to the cast itself ( such as pressure wounds ) and cast removal ( such as skin scratches ) were recorded . all the casts were removed in the clinic just before the next manipulation , so that the reduction loss was minimized . when the foot achieved 70 of abduction relative to thigh , percutaneous achilles tenotomy was indicated if there was < 15 of dorsiflexion at the same time . a total of 113 of the 133 feet ( 84.9% ) in group a and 95 of the 116 feet ( 81.9% ) in group b underwent percutaneous achilles tenotomy , followed by 3 weeks of casting . browne splints , as described in the ponseti method , were applied after the removal of the final cast . a final parent satisfaction score was also obtained . according to this 5-point evaluation system ( excellent = 5 , very good = 4 , good = 3 , fair = 2 , and poor = 1 ) , cast convenience , cast weight , infant tolerance , durability , material satisfaction , and likelihood of recommending the material scores were noted at the end of the final cast removal . number cruncher statistical system 2007 statistical software ( ut ) was used for the statistical analyses . the descriptive statistics describing the demographic information were expressed as medians , ranges , and interquartile ranges or in medians and standard deviations . a wilcoxon test was used to compare the initial and the final pirani scores before achilles tenotomy . also , a mann whitney u test was used to compare the variables at the initial and final clinical examinations . the mean pirani sores were significantly improved from the first administration of the patients up to the time before achilles tenotomy in both groups ( table 1 ) . however , no significant difference was detected between the groups according to the change in the mean pirani score ( p = 0.198 ) . the number of casts applied until the patients reached at the achilles tenotomy stage was 3.6 1 ( range , 25 ) in group a and 3.8 0.8 ( range , 25 ) in group b. there was no significant difference between the groups according to the number of casts applied until achilles tenotomy ( p = 0.081 ) . in group a , the mean pirani score at presentation was 5.75 0.51 for the feet required a tenotomy and 4.13 0.96 for those that did not . in group b , the mean pirani score at presentation was 5.56 0.49 for the feet required a tenotomy and 4.04 0.90 for those that did not ( table 2 ) . the pirani scores before achilles tenotomy or at the end of serial manipulative casting for the ones who did not require an achilles tenotomy were significantly lower than the initial scores in all patients ( p = 0.0001 ) . 
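The statistical analysis above names a Wilcoxon test for the paired Pirani scores and a Mann-Whitney U test for the between-group comparisons. A minimal scipy sketch of both tests is given below; the score vectors are synthetic stand-ins, since the per-foot data are not reported in the text.

```python
# Minimal sketch of the two tests named above, using made-up Pirani scores;
# the per-foot data from the study are not available in the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
initial = rng.uniform(4.0, 6.0, size=30)             # Pirani at presentation
pre_tenotomy = initial - rng.uniform(2.0, 4.0, 30)   # scores after casting

# Paired comparison (initial vs. pre-tenotomy scores within the same feet):
w_stat, w_p = stats.wilcoxon(initial, pre_tenotomy)

# Unpaired comparison between the two casting groups (e.g. number of casts):
group_a = rng.integers(2, 6, size=30)
group_b = rng.integers(2, 6, size=30)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"Wilcoxon p={w_p:.4f}, Mann-Whitney p={u_p:.4f}")
```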
comparison of pirani scores at the initial administration and before the achilles tenotomy comparison of pirani scores between the patients required achilles tenotomy or did not minor complications were noted in 12 patients from group a. nine patients had minor skin irritations ( 5 skin scratches in the groin , 3 superficial skin abrasions on the dorsal skin of the foot , and 2 superficial heel ulcerations ) that did not require any treatment or pausing of the cast applications . seven patients had minor skin irritations ( 3 skin scratches in the groin and 4 superficial skin abrasions on the dorsal skin of the foot ) . skin scratches due to oscillating saw during the cast removal were documented in 7 feet . three of those feet required pausing of the cast applications for a 1-week period and thus , increased the total period of serial manipulative casting treatment . the slippage of the cast and skin lesions during the removal of the cast was significantly more common in group b ( p = 0.017 ) . the mean parent satisfaction scores were 4 over 5 for cast convenience , 4.5 for cast weight , 3.9 for infant tolerance , 4.5 for cast durability , 4.4 for material satisfaction , and 4.4 for the likelihood of recommending the cast material in group a. in group b , the mean parent satisfaction scores were 3.6 for cast convenience , 2.9 for cast weight , 3.8 for infant tolerance , 4.2 for cast durability , 3.1 for material satisfaction , and 3.3 for the likelihood of recommending the cast material . the history of the clubfoot treatment was assessed in detail by the iowa clinic in 2000 . surgical treatment techniques had major problems , such as stiffness , small feet , and poor functional results . the ponseti technique has been used with satisfactory results for more than 20 years . the major advantage of this method is achieving good deformity correction without any major operations . with this technique , many authors reported good - to - excellent results . one of the main issue regarding the ponseti technique is the need for close follow - ups . the ponseti technique requires traditional pop for casting material because it can easily be molded . the standard removal techniques for pop casts include holding the extremity in water for approximately 30 minutes and then unwrapping or using a cast saw or a cast knife , which can be very irritating for the baby and the parents . semirigid synthetic softcast may also be the choice of material used during serial manipulative cast applications . the main advantages of fiberglass materials are radiolucency , lighter weight , improved strength , faster curing time , lower risk of thermal burn , cleaner application and removal , and improved durability . a semirigid fiberglass material has been marketed as having additional advantages over classic rigid fiberglass materials , such as molding ability , flexibility , comfort , and ease of removal and unwrapping . semirigid fiberglass cast materials can be removed easily with unwinding alone in a few minutes . on the other hand some studies have been designed to compare semirigid synthetic and traditional pop cast materials in the treatment of clubfoot . one of them described better functional results with pop , although the cast convenience and parent satisfaction parameters were better with fiberglass material . dimeglio scoring system was used for this study and the number of the patients was 34 . 
our study revealed different results compared to pittner et al although we used approximately the same parameters such as complete time , different results could be secondary to the scoring system which was used and low number of the patients . they used below knee soft casts and concluded that using below knee soft cast was comparable to those using above knee pop cast . the similar results compared to our study can be secondary to the same scoring system and similar number of patients . this matched the conclusion drawn by another report showing that fiberglass was statically superior in its durability , performance , and ease of removal ; 94% of parents strongly preferred semirigid fiberglass over pop for their children 's serial casting . other advantages of fiberglass materials include its light weight , lack of soiling , and water resistance . we noted pirani severity scores for each patient during the treatment and follow - up visits . overall , we obtained acceptable correction with serial casting alone in 15% of feet and with percutaneous achilles tenotomy in 85% of feet . parental compliance during the treatment is one of the main factors in the nonsurgical treatment . our results demonstrated that using semirigid synthetic softcast for clubfoot treatment provides higher parent satisfaction with lower rates of cast - related complications . furthermore , the fiberglass cast is durable , and its removal is easier than the pop cast . the main limitation of the current study was the retrospective evaluation of prospectively followed patient groups . the different results of similar studies using the scoring system of dimeglio could not be checked for the efficacy of softcast material . on the other hand , the current study was a comparative analysis between the 2 groups of patients with similar clinical features and treated by 2 orthopedic referral clinics using 2 different types of casting material for the ponseti technique . our cohort was a large series included patients with primary idiopathic clubfoot deformity diagnosed and treated immediately after birth . although we did not apply a priori calculation for the sample size , post - hoc analysis was performed and the statistical power of our study in the aspect of achieving a comparison between the 2 groups according to the 2 types of cast material was 0.97 with an alpha value of 0.05 . the effect of semirigid synthetic softcast on long - term results and recurrence rate was not evaluated in this study , and thus we recommend that further studies are required . the results that we acquired during the present study were similar to those of previous reports on the effectiveness of the ponseti method of manipulation and casting . semirigid softcast material can easily be used with the ponseti technique for clubfoot treatment because it provides higher parent satisfaction levels and is easier for the physicians to apply and remove .
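The limitations paragraph reports a post-hoc power of 0.97 at an alpha of 0.05 for the comparison between the two groups, without stating the effect size that was used. The hedged sketch below shows how such a figure can be reproduced with statsmodels under an assumed standardized effect size of about 0.5; that effect size is my assumption, not a value taken from the paper.

```python
# Hedged sketch of a post-hoc power calculation for comparing the two casting
# groups (133 vs. 116 feet).  The effect size below is an assumption -- the
# paper reports power 0.97 at alpha 0.05 but does not state the effect size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.5,    # assumed standardized difference
                       nobs1=133, ratio=116 / 133, alpha=0.05)
print(f"post-hoc power = {power:.2f}")     # close to the reported 0.97
```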
randomized controlled clinical trial . the main purpose of the present study was to comparatively analyze the effectiveness , advantages , and the complications of using semirigid synthetic softcast with respect to plaster of paris ( pop ) during the treatment of clubfoot deformity . the study group consisted of 196 babies ( 249 feet ) . a total of 133 feet treated by an orthopedic referral center using semirigid synthetic softcast were included in group a , whereas the other 116 feet treated by another orthopedic clinic using pop cast were included in group b. the pirani scores , number of cast applications , time period until achilles tenotomy , and any skin problems due to the cast itself and/or cast removal were recorded . a final parent satisfaction score was also obtained . the mean pirani scores were significantly improved from the first administration to the time before achilles tenotomy in both groups . there was no significant difference according to the number of casts applied until tenotomy . slippage of the cast and skin lesions were significantly more common in group b. higher parent satisfaction levels were detected in group a. semirigid softcast was found to be superior to pop in the aspects of parent satisfaction and cast - related complication rates .
topologically ordered phases of matter attract a great deal of interest currently in condensed matter physics . fractional quantum hall ( fqh ) states @xcite , as one of the most well - known topological states , provide examples of some of the most exotic features such as excitations with a fraction of the electron charge that obey anyonic statistics @xcite and are essential resources in topological quantum computation @xcite . exact diagonalization ( ed ) in small systems has traditionally been the central numerical technique of theoretical research of fqh states . despite its considerable success in studying some of the robust fqh states like the @xmath0 laughlin state @xcite , the largest system size that ed can reach is seriously limited by the exponential growth of the hilbert space . this weakens its capability in understanding some more complex fqh systems , for example those with non - abelian anyons and landau level mixing . therefore , new algorithms , like density - matrix - renormalization - group ( dmrg ) @xcite , were applied to fqh systems and the computable system size was increased by almost a factor of two compared with ed @xcite . very recently , it was realized that some model fqh states and their quasihole excitations have exact matrix - product - state ( mps ) representation @xcite . motivated by this , and considering that the dmrg algorithm can be apparently formulated in the mps language @xcite , here we report a mps code for finite fqh systems on the cylinder geometry , which is different from the one developed for infinite systems @xcite . we will show the structure of our code , explain how to use it to search the fqh ground states , compare the performance of our code with ed and traditional two - site dmrg algorithm , and discuss possible generalizations . we show the structure of our code in fig . [ structure ] . the most basic part is the implementation of general tensors and @xmath1 quantum numbers . based on that , we can construct tensors that conserve @xmath1 quantum numbers @xcite . each site in the mps ( mpo ) is a special case of this kind of tensor with three ( four ) indices . then we can implement the chains of these sites , namely mps and mpo , and the contraction between them . finally , with the mpo representation of the physical hamiltonian as an input , we can do variational procedure ( single - site update @xcite and density matrix correction @xcite ) to minimize the expectation value of the energy @xmath2 . the structure of our mps code . the most basic part is located on the top , based on which we can implement the whole mps algorithm step by step to search the ground state and ground energy . the arrows on the bonds of tensors represent @xmath1 currents . the concepts of @xmath1-symmetric tensors and their graphical representation are explained in ref . the details of the variational procedure can be found in ref . for a fqh system with @xmath3 electrons and @xmath4 flux in a single landau level on the cylinder of circumference @xmath5 ( in units of the magnetic length ) , any translational invariant two - body hamiltonian can be written as @xmath6 where @xmath7 ( @xmath8 ) creates ( annihilates ) an electron on the landau level orbital @xmath9 , and @xmath10 and @xmath11 determine the range of @xmath12 . the total electron number @xmath13 and total quasi - momentum @xmath14 are @xmath1 good quantum numbers . 
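Since everything in fig. [structure] is built on tensors that conserve U(1) quantum numbers, a minimal block-sparse sketch of that idea may help: only blocks whose leg charges, weighted by the arrow directions, sum to zero are stored. This is an illustration of the concept only, not the data structure used in the code described here.

```python
# Minimal sketch of the block-sparse storage behind U(1)-symmetric tensors:
# only blocks whose leg charges add up to zero (with signs given by the arrow
# directions) are kept.  Illustration of the idea, not the actual code.
import numpy as np

class U1Tensor:
    def __init__(self, arrows):
        self.arrows = arrows          # +1 outgoing, -1 incoming U(1) current
        self.blocks = {}              # {(q1, q2, ...): ndarray}

    def add_block(self, charges, data):
        flux = sum(a * q for a, q in zip(self.arrows, charges))
        if flux != 0:
            raise ValueError("block violates U(1) charge conservation")
        self.blocks[tuple(charges)] = np.asarray(data)

# A 3-leg MPS site tensor: left bond (in), physical leg (in), right bond (out).
site = U1Tensor(arrows=(-1, -1, +1))
site.add_block((0, 1, 1), np.ones((2, 1, 3)))   # occupied orbital raises charge
site.add_block((0, 0, 0), np.ones((2, 1, 3)))   # empty orbital, charge unchanged
print(list(site.blocks))
```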
in our code , the procedure to search for the ground state of @xmath12 in a fixed @xmath15 sector is as follows : * construct an initial mps state @xmath16 ( either a random or a special state ) with fixed @xmath15 . * construct a mpo representation of @xmath12 , which can be generated by the finite state automaton @xcite . there are two choices when applying the finite state automaton method . the simplest one is to keep all @xmath17 satifying @xmath18 larger than some threshold ( for example @xmath19 ) and @xmath20 , where @xmath21 is the truncation of the interaction range . the bond dimension of the mpo , @xmath22 , obtained in this way is proportional to @xmath23 @xcite . the other choice is numerically much more efficient . for each @xmath24 , we first select all @xmath17 satisfying @xmath18 larger than some threshold ( for example @xmath19 ) , and then approximate them by an exponential expansion @xmath25 , with an error @xmath26 . here @xmath27 , @xmath28 and @xmath29 are real @xmath30 , @xmath31 and @xmath32 matrices , respectively , whose optimal values can be found by the state space representation method in the control theory @xcite . this exponential expansion of @xmath17 can reduce the bond dimension of mpo by a factor of two compared with the first choice and is especially suitable for long - range interactions @xcite . we find that typically @xmath33 and @xmath34 are enough for a good representation of @xmath12 . * do the variational procedure with the mpo representation of @xmath12 for the initial mps state @xmath16 . in the density matrix correction @xmath35 @xcite , @xmath36 should be a small number ( we choose @xmath37 in the following ) . eigenstates of the corrected reduced density matrix @xmath38 with eigenvalues larger than @xmath39 are kept . the smallest and largest allowed number of kept states can also be set . when the energy goes up , the density matrix correction is switched off . after the energy converges , we get a candidate of the ground state of @xmath12 . * try different values of @xmath36 and @xmath39 to make sure we are not trapped in local energy minimum . in this section , we study a system of @xmath40 electrons at filling @xmath0 with @xmath41 and @xmath42 , which is easily accessible for ed . by comparing the mps results with ed , we demonstrate that there are two error sources in our mps code : one is the quality of the mpo representation of the hamiltonian , the other is the density matrix truncation . we use haldane s @xmath43 pseudopotential @xcite as the hamiltonian , for which @xmath44}.\ ] ] the ground state in the @xmath45 sector is the exact laughlin state with exactly zero ground energy . the orbital - cut entanglement spectra obtained by ed ( blue dashes ) and mps algorithm ( red dots ) for @xmath40 electrons at @xmath0 with @xmath42 . @xmath46 in ( a ) and @xmath47 in ( b ) . the difference from the ed results , which are caused by nonzero @xmath48 and @xmath39 , are indicated by the green circles in ( a ) . ] in fig . [ mps_ed ] , we use the orbital - cut entanglement spectrum @xcite , which is a fingerprint of the topological order in fqh states , to analyze the error sources of our mps algorithm . we observe two kinds of difference between the entanglement spectra obtained by ed and mps algorithm , as indicated by the green circles in fig . [ mps_ed](a ) . 
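Step 3 of the procedure keeps eigenstates of the corrected reduced density matrix whose eigenvalues exceed a threshold, between a smallest and largest allowed number of kept states. The sketch below shows one way such a truncation step can look in NumPy; the correction term, the mixing weight, the cutoff, and the kept-state bounds are placeholder numbers standing in for the quantities denoted by @xmath in the text.

```python
# Hedged sketch of the truncation step in the variational sweep: eigenstates of
# the corrected reduced density matrix with weight above a cutoff are kept,
# subject to a minimum and maximum number of states.  All numbers are
# illustrative placeholders.
import numpy as np

def truncate(rho, correction, alpha=1e-4, cutoff=1e-9, min_keep=16, max_keep=2000):
    rho_corr = rho + alpha * correction
    rho_corr = 0.5 * (rho_corr + rho_corr.T.conj())        # keep it Hermitian
    rho_corr /= np.trace(rho_corr)
    evals, evecs = np.linalg.eigh(rho_corr)
    order = np.argsort(evals)[::-1]                         # descending weight
    evals, evecs = evals[order], evecs[:, order]
    n_keep = int(np.sum(evals > cutoff))
    n_keep = max(min_keep, min(n_keep, max_keep, len(evals)))
    return evecs[:, :n_keep], evals

dim = 64
rng = np.random.default_rng(1)
psi = rng.normal(size=(dim, dim))
rho = psi @ psi.T
rho /= np.trace(rho)
basis, weights = truncate(rho, correction=np.eye(dim))
print(basis.shape)
```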
the extra levels in the left circle are caused by the relatively poor quality ( large @xmath48 ) of the mpo representation of the hamiltonian , and the missing levels in the right circle are caused by the relatively large density matrix truncation error @xmath39 . after improving the quality of the mpo representation ( reduce @xmath48 ) and increase the accuracy of the density matrix truncation ( reduce @xmath39 ) , we find the difference between mps and ed results essentially disappears , as shown in fig . [ mps_ed](b ) . we now apply our mps code to systems at @xmath0 with sizes beyond the ed limit . again , we choose the interaction between electrons as @xmath43 pseudopotential , so the ground state in the @xmath45 sector is the exact laughlin state with exactly zero ground energy . in fig . [ mps ] , we fix the accuracy of the mpo ( @xmath21 and @xmath48 ) and the density matrix truncation ( @xmath39 ) , and study the convergence of the ground energy on various square samples with @xmath49 . we choose the root configuration at @xmath0 @xcite as the initial state @xmath16 . the number of kept states in the density matrix truncation is controlled by @xmath39 and changes during the sweeps . this is a little different from the usual dmrg algorithm where a fixed number of kept states is usually set at the beginning . however , in our mps code , we can still track the number of kept states in each sweep and select the maximal one , which is also shown in fig . [ mps ] . with the increase of the system size from @xmath50 electrons to @xmath51 electrons , the entanglement in the system grows due to the increase of circumference @xmath5 . thus the computational cost of the mps simulation also increases , reflected by the fact that the maximal number of kept states grows by a factor of @xmath52 . for the largest system size ( @xmath51 electrons ) , our computation takes @xmath53 days by @xmath54 cpu cores on a computer cluster with @xmath55 gb memory . the final energy that we obtain is very close to the theoretical value @xmath56 for each system size , but grows from roughly @xmath57 for @xmath50 electrons to @xmath58 for @xmath51 electrons . this is because higher accuracy is needed for larger system size and cylinder circumference . the ground energy at @xmath0 with @xmath59 versus the number of sweeps obtained our mps algorithm for various system sizes . we consider square samples , namely the circumference of the cylinder @xmath49 . the maximal number of kept states is given in the bracket for each system size . ] we also want to compare our mps code with the traditional two - site dmrg code . we consider @xmath60 electrons at @xmath0 with @xmath43 interaction and study the convergence of the ground energy for different density matrix truncations @xmath39 ( fig . [ mps_dmrg ] ) . the number of kept states in the two - site dmrg algorithm is set to be equal with the maximal number of kept states in the mps sweeps . with the decrease of @xmath39 , the ground energy for both of the mps algorithm and two - site dmrg algorithm goes closer to @xmath56 . however , the mps algorithm can reach lower energy than two - site dmrg . the ground energy of @xmath61 electrons at @xmath0 with @xmath62 versus the number of sweeps for our mps algorithm and traditional two - site dmrg algorithm . the number of kept states in the dmrg algorithm is set to be equal with the maximal number of kept states in the mps sweeps . 
] when the interaction goes beyond the pure @xmath43 pseudopotential , the ground state at @xmath0 is no longer the exact laughlin state with exactly zero ground energy . to achieve this , we use a combination of haldane s @xmath43 and @xmath63 pseudopotentials with @xmath64 . considering the strength of @xmath63 is much smaller than @xmath43 , we expect that the ground state at @xmath0 is still in the laughlin phase , although not the exact laughlin state . because @xmath63 pseudopotential is a longer - range interaction than @xmath43 pseudopotential , the bond dimension of the mpo is larger than that of pure @xmath43 pseudopotential . we calculate the ground - state entanglement spectrum of @xmath60 electrons at filling @xmath0 , and find that the low - lying part indeed matches that of the exact laughlin state , although generic levels appear in the high energy region ( fig . [ v1_v3 ] ) . the orbital - cut entanglement spectra obtained by mps algorithm for pure @xmath43 ( green dashes ) and @xmath65 interactions ( red dots ) for @xmath61 electrons at @xmath0 with @xmath66 . ] in this work , we here reported a mps code for finite fqh systems on the cylinder geometry . by comparing with ed and traditional two - site dmrg , we show its capability of searching for the fqh ground states . compared with the mps code for infinite fqh systems @xcite , our code is more suitable to study the physics in finite systems , such as edge effects . there are several possible directions for the future work . we can generalize our code to the infinite cylinder , as in ref . @xcite , where a multi - site update was used . it would be interesting to see the performance of single - site update and density matrix correction in the that case . we can also go beyond short - range haldane s pseudopotential to deal with some long - range hamiltonians , such as dipole and coulomb interactions . however , the bond dimension of mpo increases fast with the interaction range . therefore , we will need to truncate the interaction differently and study results as a function of the truncation length . finally , because the only system - dependent part of our code is the mpo representation of the physical hamiltonian , it can readily be used in other many - body systems such as spin chains and lattice models ( such as fractional chern insulators ) @xcite .
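The comparisons in figs. [mps_ed] and [v1_v3] rest on the orbital-cut entanglement spectrum. As a reminder of what is being plotted, here is a minimal sketch that extracts such a spectrum from a Schmidt decomposition of a toy dense state; in the actual MPS calculation the Schmidt values are read off the bond at the cut, and the levels are further resolved by momentum and particle-number sectors, which this sketch omits.

```python
# Minimal sketch of an orbital-cut entanglement spectrum: bipartition the chain
# of orbitals, Schmidt-decompose, and report the levels xi_i = -ln(lambda_i^2).
# The state here is a dense toy vector, not an FQH ground state.
import numpy as np

def entanglement_spectrum(state, dim_left, dim_right):
    psi = state.reshape(dim_left, dim_right)
    schmidt = np.linalg.svd(psi, compute_uv=False)
    schmidt = schmidt[schmidt > 1e-14]
    return -2.0 * np.log(schmidt)        # eigenvalues of -log(reduced rho)

rng = np.random.default_rng(2)
toy = rng.normal(size=2**10)             # 10 orbitals, each empty or occupied
toy /= np.linalg.norm(toy)
xi = np.sort(entanglement_spectrum(toy, dim_left=2**5, dim_right=2**5))
print(xi[:5])
```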
exact diagonalization is a powerful tool to study fractional quantum hall ( fqh ) systems . however , its capability is limited by the exponentially increasing computational cost . in order to overcome this difficulty , density - matrix - renormalization - group ( dmrg ) algorithms were developed for much larger system sizes . very recently , it was realized that some model fqh states have exact matrix - product - state ( mps ) representation . motivated by this , here we report a mps code , which is closely related to , but different from traditional dmrg language , for finite fqh systems on the cylinder geometry . by representing the many - body hamiltonian as a matrix - product - operator ( mpo ) and using single - site update and density matrix correction , we show that our code can efficiently search the ground state of various fqh systems . we also compare the performance of our code with traditional dmrg . the possible generalization of our code to infinite fqh systems and other physical systems is also discussed .
ATHENS, Greece (AP) — Suspected domestic terrorists exploded a car bomb outside a Bank of Greece building in the heart of Athens Thursday, causing damage but no injuries in a brazen attack hours before a landmark bond issue by the financially struggling country. No group claimed responsibility for the 6 a.m. (0300GMT) explosion, which shattered windows in the central bank branch and buildings up to 200 meters away, and left the charred remnants of a car with only two wheels still recognizable. The attack hit a largely commercial zone with a large shopping mall and banks, a few blocks from the Greek parliament. It came one day ahead of a visit to Athens by German Chancellor Angela Merkel, whose country is the largest single contributor to Greece's bailout program. Police said two anonymous calls — to a news website and a newspaper — gave a 45-minute warning. Apart from a couple of security guards, nobody else was in the immediate area, which police swiftly cordoned off. Forensic experts began combing through the blast site, while gaggles of office workers unable to get to their offices gathered in nearby cafes. An officer said the size and composition of the bomb were still unclear, and the anti-terrorism squad was taking over the case. He spoke on condition of anonymity, as the investigation is still at a very early stage. The news website that received one of the anonymous calls at 5:11 a.m. local time (0211 GMT) said the caller warned a bomb containing 75 kilograms (165 pounds) of explosives had been planted in a car. The attack comes as Greece returns to borrowing on the international bond market. The country announced Wednesday it was issuing a five-year bond, its first since it became locked out of international markets in 2010. The government has hailed the return to the bond market as proof that the country is emerging from its deep financial crisis. "The evident target of the attackers is to change this image, and change the agenda," government spokesman Simos Kedikoglou said on an early morning television news show. "We will not allow the attackers to achieve their aim."
Since 2010, Greece has relied on funds from an international bailout, in return for which it has imposed deeply resented spending cuts, tax hikes and labor market reforms. Greece's economy has shrunk by a quarter, while unemployment hovers at 28 percent. Greece has a long history of domestic militant groups who plant usually small bombs late at night that rarely cause injuries. Although the country's deadliest terrorist group, November 17, was eradicated and its members jailed in the early 2000s, several newer groups are still active. One of the November 17 members vanished while on furlough from prison in January. Two suspected members of a different group, Revolutionary Struggle, vanished during their trial in 2012. That group is best known for firing a rocket into the U.S. Embassy in Athens and bombing the Athens Stock Exchange. ||||| ATHENS (Reuters) - A car bomb exploded outside a central bank building in Athens early on Thursday, smashing windows but causing no injuries, just hours before Greece was due to make its first foray into the international bond markets in four years. The dawn blast, which police believe was carried out by leftist or anarchist guerrilla groups, also came a day before a planned visit to Athens by German Chancellor Angela Merkel. An anonymous caller warned a newspaper about 45 minutes before the explosion just before 6 a.m. (0300 GMT), saying it contained about 70 kg (155 pounds) of explosives, a police official said on condition of anonymity. A news website also received a warning call. Witnesses saw debris strewn across the street in one of the busiest parts of central Athens that is lined with banks, shops and a mall. A second police official, who also declined to be named, said the force had yet to determine the amount and type of explosives used in the attack. "It is clear that the attackers are trying to set the agenda," Government Spokesman Simos Kedikoglou told Skai TV. "We will not allow the terrorists to succeed." Makeshift bomb and arson attacks have escalated since Greece adopted unpopular austerity measures in exchange for multi-billion euro bailouts by the European Union and International Monetary Fund from 2010. Athens, which is seeking to send a strong political and economic signal that it is finally exiting its debt crisis, is expected to tap bond markets later on Thursday. But public anger remains high after six years of recession that has sent unemployment to record highs of nearly 28 percent, eroded living standards and shut down thousands of businesses. Greeks have had their incomes slashed by almost a third since 2010 and strikes against austerity are frequent. Attacks against banks, government buildings, politicians, journalists and businesspeople are fairly common in Greece, which has a long history of political violence. In 2009, a powerful car bomb exploded at the Athens stock exchange, slightly injuring one person and damaging the building. One of Greece's most militant guerrilla groups, Revolutionary Struggle, claimed responsibility for that attack. (Additional reporting by Harry Papachristou and Alkis Konstantinidis; Editing by Gareth Jones)
– Greece is set to return to the international bond market today after four years of financial struggle, but not everyone is happy about it—especially not the suspected domestic terrorists who set off a large car bomb in front of the country's central bank hours before the bond issue. The huge blast early in central Athens this morning shattered windows at the bank and at buildings up to 700 feet away but didn't cause any injuries, reports the AP. Two warning calls were made 45 minutes before the explosion, which comes a day before German Chancellor Angela Merkel, who imposed tough conditions on the Greek bailout, is due to visit the country. After six years of recession, public anger at austerity measures in Greece runs deep, with unemployment at 28% and incomes down around a third over the last four years, Reuters notes. Authorities, who believe a leftist or anarchist group is behind the bombing, say the bond issue issue is a sign that Greece is finally emerging from its debt crisis. "It is clear that the attackers are trying to set the agenda," a government spokesman says. "We will not allow the terrorists to succeed."
hysterectomy is the second most frequently performed surgery , after cesarean delivery , among women of reproductive age in the united states . over 600 000 hysterectomies are performed annually in the united states . from 1994 through 1999 , 1 in every 9 women aged 35 to 45 years was estimated to have had a hysterectomy . the most frequent indications for hysterectomy during this time were uterine leiomyoma , endometriosis , and uterine prolapse . chronic pelvic pain ( cpp ) was the primary indication for 10% to 12% of all hysterectomies performed . although surgery provides some pain relief in a majority of patients , pain persists in a considerable proportion of patients . hysterectomy for cpp should be considered after exclusion of other gynecologic and nongynecologic diagnoses and after a trial of nonsurgical treatment . interstitial cystitis ( ic ) is a clinical syndrome of the bladder characterized by pelvic pain and urinary urgency and frequency in the absence of an identifiable cause . the diagnosis of ic is based on history and physical examination ; no definitive test for it exists . diagnosis of ic can be elusive , because the symptoms of ic are variable and can mimic those of other urologic and gynecologic conditions . the difficulty in accurately identifying ic may result in unnecessary hysterectomies to treat pelvic pain associated with ic . in the interstitial cystitis database , interstitial cystitis should be considered in women who present with symptoms of cpp , along with dyspareunia and/or irritative voiding symptoms . several studies have found a high rate of hysterectomy in patients subsequently diagnosed with ic . in some cases , the hysterectomies may have been unnecessary , as the pelvic pain was solely due to ic . in other cases , undiagnosed and untreated ic may have coexisted with the condition that was the indication for hysterectomy . it is also possible that the hysterectomy itself may have played a role in precipitating the neurogenic inflammation and visceral pain associated with ic . driscoll et al conducted a retrospective review of 45 patients with ic to determine the presentation and history of their disease . patients had been symptomatic for a median of 5 years before being diagnosed with ic . the diagnosis of ic was based on national institute of diabetes and digestive and kidney diseases ( niddk ) criteria or by clinical suspicion and a positive potassium sensitivity test ( pst ) . the initial presentation of ic was highly variable : only 7% of patients presented with simultaneous symptoms of urinary urgency and frequency , nocturia , and pain . most patients ( 89% ) presented with only 1 symptom initially and then progressed to the full spectrum of symptoms over a mean time of 5 years . among the 41 women in the study , hysterectomy had been performed in 11 patients with a gynecologic diagnosis , including 8 with a surgical indication of nonspecific pelvic pain . another study looked specifically at the rates of pelvic surgeries in women with ic by comparing responses from a survey of 215 ic patients , with 823 women serving as a community - based control group ( both groups , mean age 51 years ) . women with an established diagnosis of ic were drawn from a database at a referral center . diagnosis of ic was based on nih criteria , which include positive cystoscopic findings in addition to symptoms . in this study , women with ic were twice as likely as controls to have had a hysterectomy ( 42.3% vs 21.4% ; p<0.001 ) . 
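as a rough cross - check of the comparison just quoted ( 42.3% of 215 women with ic vs 21.4% of 823 controls having had a hysterectomy ) , the sketch below reconstructs approximate counts from the published percentages and recomputes a two - by - two comparison in python . the counts , the choice of a chi - square test , and the odds - ratio calculation are illustrative assumptions ; the original study 's exact statistical method is not restated here .

```python
# Illustrative re-check of the reported hysterectomy-rate comparison.
# Counts are reconstructed from the published percentages, so they are
# approximate; this is not the original study's analysis.
from scipy.stats import chi2_contingency

ic_total, ctrl_total = 215, 823
ic_hyst = round(0.423 * ic_total)      # ~91 IC patients with a hysterectomy
ctrl_hyst = round(0.214 * ctrl_total)  # ~176 controls with a hysterectomy

table = [[ic_hyst, ic_total - ic_hyst],
         [ctrl_hyst, ctrl_total - ctrl_hyst]]
chi2, p, dof, _ = chi2_contingency(table)
odds_ratio = (ic_hyst / (ic_total - ic_hyst)) / (ctrl_hyst / (ctrl_total - ctrl_hyst))
print(f"chi2 = {chi2:.1f}, p = {p:.1e}, odds ratio ~ {odds_ratio:.2f}")
```

the recomputed p - value is far below 0.001 and the odds ratio is roughly 2.7 , consistent with the reported p<0.001 and the approximately twofold difference described above .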
although the higher rates of hysterectomies in women with ic could have been due to a higher prevalence of concomitant conditions requiring this procedure , it is notable that 68.4% of the hysterectomies were done before the diagnosis of ic , 10.5% were performed in the same year as the diagnosis of ic , and 21.1% were performed after the diagnosis of ic . the authors concluded that some of the hysterectomies may have been performed for an indication of pelvic pain that was in fact due to undiagnosed ic . a third study evaluated 111 women who had persistent or recurrent pelvic pain after a hysterectomy for cpp . patients were evaluated with a symptom questionnaire , a physical examination ( to identify tenderness at the bladder base or anterior vaginal wall ) , a pst , and optional cystoscopy with hydrodistention . irritative voiding symptoms were present in 104 ( 94% ) patients , and 88 ( 79% ) had a positive pst . of 66 patients who had cystoscopy with hydrodistention , 61 ( 92% ) had cystoscopic evidence of ic . in this study , patients were treated with dietary modification alone ( n=33 ) or in combination with pentosan polysulfate and/or cystoscopy with hydrodistention ( n=78 ) . after 6 months of treatment , the diet - alone group showed a mean improvement of 15.4% in score on the pelvic pain and urgency / frequency ( puf ) questionnaire , and the diet plus other treatment group showed a mean improvement of 34.2% . this study showed that not only was ic diagnosed at a high rate among women whose cpp was not relieved by hysterectomy but also that these patients responded favorably to treatment for ic . another recent study characterized a clinical cohort of 87 women with ic or painful bladder syndrome ( pbs ) who were referred to a pelvic pain and sexual - health program . the diagnosis of ic had been made by board - certified urologists based on niddk criteria . the patients had an average of 4.4 previous pelvic surgeries , with 23% having had 3 surgeries or more . almost half ( 48% ) had had a hysterectomy , two - thirds of which were performed prior to the diagnosis of ic . the high rate of hysterectomies among women later diagnosed with ic may suggest a potential cause of ic . pelvic surgeries including hysterectomy may have a negative effect on the innervation , musculature , or vasculature of the bladder . neural crosstalk between the afferents of irritated or damaged pelvic organs may adversely affect other organs , perhaps resulting in changes such as neurogenic inflammation in the secondary organ . in the last study discussed above , 59% of patients reported that their pain was initiated by a specific event , including surgery ( 25% ) or a bladder infection ( 23% ) , suggesting that pelvic insults may have played a causative role . even when the indication for hysterectomy ( such as endometriosis ) is clear , patients may also have ic . endometriosis and ic often coexist in the same patient . in 2 different studies of select groups of patients with cpp ( who exhibited both bladder base and uterine tenderness [ n=178 ] or in whom all other nongynecologic or nonurologic causes had been ruled out [ n=162 ] ) , 75% to 76% of patients were found to have endometriosis , 82% to 89% had ic , and 65% to 66% had both conditions concomitantly . interstitial cystitis should be considered in any patient who presents with cpp , even if another condition is diagnosed that could be partly responsible for the symptoms . 
the cause of ic is unknown although several potential mechanisms have been proposed , including abnormal permeability of the bladder epithelium , neurogenic inflammation , an autoimmune or allergic response , and occult infection . it is likely that several of these factors interact to produce the clinical picture of ic ( figure 1 ) . furthermore , the cause of this condition may not be the same for all patients . ( figure 1 : a proposed model of the etiology of interstitial cystitis . ) a large body of evidence indicates that the symptoms of ic may be caused by abnormal permeability of the urothelium due to a defect in the glycosaminoglycan ( gag ) layer lining the bladder surface . abnormalities in the gag layer may be causative or the result of poor healing of the epithelium in patients following an injury to the bladder , eg , from an infection or pelvic surgery . the increased permeability is thought to allow irritating solutes and toxins , such as potassium from the urine , to contact the underlying epithelium , triggering the symptoms of pain and urinary urgency and frequency . in response to toxic stimuli and the resulting pain , bladder biopsies from patients with ic show increased numbers of degranulated mast cells , as well as increased numbers of substance p - positive nerve fibers near the mast cells . the continued inflammation and nerve - fiber activation can lead to neural upregulation and hyperalgesia .
injury and irritation in one organ can contribute to hyperalgesia in other organs through viscerovisceral crosstalk . the diagnosis of ic is one of exclusion based on history , physical examination , laboratory studies , symptom questionnaires , and other optional tests . the niddk created criteria to ensure inclusion of comparable groups of patients in studies of ic . although the niddk criteria clearly identify a subgroup of patients with ic , they also exclude a substantial proportion of patients ( up to two - thirds ) who were clinically diagnosed with ic . in a diagnostic workup for ic , the history should address the initial presentation and progression of symptoms as well as any factors that trigger or worsen the symptoms , such as allergies , certain foods , the menstrual cycle , and sexual activity . patients often present initially with only 1 or 2 mild symptoms then progress over time to the full symptom complex of urinary urgency and frequency , nocturia , and pelvic pain . the pain caused by ic is most commonly felt in the suprapubic region but can be referred to other areas , including other areas of the pelvis and the thighs . patients may not report symptoms such as dyspareunia , because they do not connect it with their urinary symptoms . two commonly used questionnaires are the puf questionnaire and the o ' leary - sant ( ols ) interstitial cystitis symptom and problem indices ( icsi and icpi ) . both questionnaires address the characteristic symptoms of ic as well as the degree of bother that patients associate with each symptom . the puf questionnaire was designed primarily as a clinical screening tool , whereas the ols indices were designed for disease follow - up . the physical examination in a diagnostic workup for ic should include a bimanual assessment for pain or tenderness at the bladder base or along the urethra . tenderness along the anterior vaginal wall or at the bladder base can help establish a diagnosis of ic . bladder cancer must be ruled out if the patient has hematuria or is at risk for bladder cancer ( history of smoking , age > 40 years , occupational or other risk factors ) . cytology of bladder - washing specimens in combination with cystoscopy is the gold standard for detection of urothelial neoplasia . cytology of voided urine samples is less sensitive and should be used as an adjunct to cystoscopy . noninvasive urine - based immunoassays such as the nuclear matrix protein 22 test and the bard tumor antigen test may also be used to screen for bladder cancer ; in combination , results from these 2 tests approach the accuracy of cystoscopy . both tests are approved by the fda as an adjunct to but not as a replacement for cystoscopy . urodynamics may be helpful in determining whether detrusor instability exists , although this does not rule out the diagnosis of ic . voiding diaries can also be helpful in identifying frequency , as not all patients recognize their symptoms of frequency . cystoscopy with hydrodistention is not required for the diagnosis of ic , except to rule out bladder cancer . among patients with ic , those with positive findings of cystoscopy ( glomerulations or hunner 's ulcers ) are more likely to have symptoms of ic ( although glomerulations may also be present in women with normal bladders ) ; however , not all patients with ic will have positive cystoscopy . 
the potassium sensitivity test ( pst ) identifies patients with abnormal urothelial permeability by their provoked response of pain and urgency to intravesical instillation of potassium . the pst will be positive in any patient with increased epithelial permeability of the bladder , including uti or radiation cystitis . the pst may be useful to help identify the bladder as the source of pain . a test using intravesical instillation of anesthetics plays a similar role to that of the pst , but without provoking symptoms of pain and urgency . relief of pain following instillation of a solution of bupivacaine and/or lidocaine can identify the bladder as the source of pelvic pain in patients with ic . this test can not be used in patients without significant pain , and the lack of a positive result does not rule out ic as the cause . once ic is diagnosed , it is important for the clinician to counsel the patient and help set treatment expectations . patients may feel frustration after years of experiencing symptoms and not receiving an accurate diagnosis or effective therapy . a clinician should acknowledge the patient 's frustration , and reassure her that , once identified , the condition can be effectively managed . it is important to let the patient know that it may take several months for treatment to reach full efficacy and that adherence to the treatment regimen is crucial to reducing symptoms . patients should be educated about the condition as well as diet and lifestyle changes that can help avoid symptom flares . pentosan polysulfate sodium ( pps ; elmiron , ortho - mcneil janssen scientific affairs , llc , titus - ville , nj ) is the only fda - approved oral agent for the treatment of ic symptoms and is first - line therapy for most patients with this condition . pps is a heparin - like compound that is thought to act by replenishing the gag layer on the bladder surface , thus restoring urothelial impermeability . patients should be counseled to stay on pps therapy for 3 to 6 months to see a clinical response . some patients on pps continue to experience improvement in symptom relief for up to 36 months . pharmacologic therapies for interstitial cystitis pps = pentosan polysulfate sodium ; dmso = dimethyl sulfoxide . may be used in combination with each other . other pharmacologic agents ( although not fda - approved for ic treatment ) may be added to pps as needed for symptom relief . antihistamines , such as hydroxyzine , may be used to reduce mast cell activity , and have been shown to have some efficacy for ic symptom relief , particularly in patients who are prone to allergies . hydroxyzine should be taken at bedtime to minimize sedation side effects at doses up to 75mg / day . tricyclic antidepressants ( tcas ) , such as amitriptyline , also benefit many patients with ic . these drugs have mild antihistamine , analgesic , and anticholinergic effects , and are widely used in the treatment of neuropathic pain because of their effects on cns pain transmission . anticholinergic side effects like dry mouth occur in the majority of patients and may lead to treatment discontinuation for some patients . careful dosing ( 25mg / day to 50mg / day at bedtime ) can minimize these side effects . anticonvulsants , such as gabapentin and pregablin , have been used to treat neuropathic pain and may be of benefit for ic patients with severe pain . gabapentin should be dosed from 300mg / day to 2400mg / day and requires careful dose titration . 
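the oral - agent options above can be collected into a compact reference ; the sketch below only restates the agents and dose ranges given in the text and is not prescribing guidance . the dictionary layout and field names are hypothetical .

```python
# Summary of the oral therapies and dose ranges mentioned in the text above.
# Illustrative restatement only -- not prescribing guidance.
ORAL_THERAPIES = {
    "pentosan polysulfate sodium": {
        "fda_approved_for_ic": True,
        "dose_mg_per_day": None,          # dose not restated in the text above
        "note": "first-line; allow 3-6 months to judge clinical response",
    },
    "hydroxyzine": {
        "fda_approved_for_ic": False,
        "dose_mg_per_day": (0, 75),       # up to 75 mg/day, taken at bedtime
        "note": "antihistamine; most helpful in allergy-prone patients",
    },
    "amitriptyline": {
        "fda_approved_for_ic": False,
        "dose_mg_per_day": (25, 50),      # at bedtime to limit side effects
        "note": "tricyclic antidepressant used for neuropathic pain",
    },
    "gabapentin": {
        "fda_approved_for_ic": False,
        "dose_mg_per_day": (300, 2400),   # requires careful titration
        "note": "anticonvulsant; consider for severe pain (as is pregabalin)",
    },
}

for drug, info in ORAL_THERAPIES.items():
    print(drug, info["dose_mg_per_day"], "-", info["note"])
```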
several intravesical therapies are also used for ic , often in combination with oral agents . cocktails consisting of a local anesthetic ( bupivacaine or alkalinized lidocaine ) and heparin ( as a bladder surface coating agent ) may be administered as initial therapy while oral agents take effect or can be used for treatment of symptom flares . dimethyl sulfoxide ( dmso ) is approved by the fda as an intravesical agent for the relief of ic symptoms . in addition to pharmacologic agents , nonpharmacologic approaches to therapy can provide symptom relief for patients with ic . dietary modification is appropriate for every patient with ic : a range of foods has been associated with symptom flares , particularly foods or drinks that contain caffeine or alcohol , are spicy , or have a low ph . patients may already be aware of some of their triggers ; use of a diary can help track others . symptom flares are also associated with stress , which patients can reduce by using stress - reduction techniques . for patients who experience dyspareunia , use of lubricants , lidocaine jelly , or premedicating with antispasmodics or muscle relaxants can reduce pain during sex . bladder training is an option to reduce urination frequency in patients who experience only mild pain on bladder filling . the patient slowly lengthens the intervals between voids , increasing the interval by up to 15 minutes every week . high - tone pelvic floor dysfunction ( pfd ) involving pelvic floor muscle tenderness and spasm is commonly found in patients with ic . patients with high - tone pfd respond favorably to physical therapy , including realignment of the sacrum and ilium , myofascial release , and overall strengthening and stretching . internal ( thiele ) massage can be effective for the relief of ic symptoms in patients with high - tone pfd or in patients who have not responded to medical therapy . for patients who do not respond to pharmacologic and behavioral therapies , sacral neuromodulation has been shown to be beneficial in several studies , with improvements in pain and urinary - symptom scores , as well as decreased need for narcotics . patients should be reevaluated 1 month to 3 months after therapy is initiated and therapy adjusted as needed . patients should be told to contact their physician as soon as possible when symptom flares occur ; flares can be treated immediately with intravesical instillation of an anesthetic cocktail .
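the bladder - training protocol above lends itself to a short worked example . the only element taken from the text is that the voiding interval is lengthened by up to 15 minutes per week ; the starting interval and target below are hypothetical .

```python
# Toy schedule for the bladder-training approach described above.
# Start and target intervals are hypothetical example values.
def bladder_training_schedule(start_min=60, target_min=180, weekly_increment_min=15):
    """Yield (week, voiding interval in minutes) until the target is reached."""
    week, interval = 0, start_min
    while interval < target_min:
        week += 1
        interval = min(interval + weekly_increment_min, target_min)
        yield week, interval

for week, interval in bladder_training_schedule():
    print(f"week {week}: void every {interval} min")
```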
clinicians should consider all sources of pelvic pain before performing a hysterectomy for the primary indication of cpp . patients with documented endometriosis may still have concomitant ic and should be evaluated and treated for ic prior to hysterectomy . with the appropriate diagnostic tools , ic can be identified early in the progression of the disease . an individualized , multimodal approach to treatment including both pharmacologic and nonpharmacologic therapies can provide symptom relief in the majority of patients with ic .
background : interstitial cystitis is a clinical syndrome characterized by symptoms of pelvic pain , urinary urgency and frequency , and nocturia . it can be difficult to accurately identify interstitial cystitis because the symptoms overlap many other common gynecologic and urologic conditions . patients with undiagnosed interstitial cystitis may undergo unnecessary procedures , including hysterectomy . methods : a pubmed literature search for articles dating back to 1990 was conducted on the topics of interstitial cystitis and hysterectomy . further references were identified by cross - referencing the bibliographies in articles of interest . results : the literature review found that hysterectomy is performed more often in patients with undiagnosed interstitial cystitis than in patients with a confirmed diagnosis . interstitial cystitis often coexists with conditions like endometriosis , for which hysterectomy is indicated . many patients subsequently diagnosed with interstitial cystitis continue to experience persistent pelvic pain despite having had a hysterectomy for chronic pelvic pain . careful history and physical examination can identify the majority of interstitial cystitis cases . conclusion : interstitial cystitis should be considered prior to hysterectomy in women who present with pelvic pain or who experience pelvic pain after a hysterectomy . if interstitial cystitis is diagnosed , appropriate therapy may eliminate the need for hysterectomy .
– Jon Stewart worked himself up to a fever pitch last night over the Rush Limbaugh/Sandra Fluke controversy, wondering how exactly Limbaugh got "from 'young woman trying to get a private institution to cover contraception' to 'prostitution slut having constant sexy-sex on my dime.'" But, Stewart noted, "Personally, I don't get too worked up about the things Rush Limbaugh says. Because he is, and has been for many years, a terrible person." Even without the "slut" comment, there are so many problems with Limbaugh's (and the GOP presidential candidates', and Fox News') argument, Stewart said, he couldn't cover them all. Bottom line: "To the people who are upset about their hard-earned tax money going to things they don't like, 'Welcome to the f***ing club.'" Stephen Colbert followed with his own take on the issue, explaining that Limbaugh is right: "That's how the pill works. The more sex you have, the more birth control pills you have to take. It's one for each sperm. They act like little baby deflectors! And Rush knows what he's talking about, because every time he's had sex with a woman, he's had to slip her a pill first." But he couldn't agree with Limbaugh's decision to apologize, particularly since it came only after sponsors started leaving his show. "I don't think Rush should have apologized for calling her a prostitute. I mean, it takes one to know one," Colbert concluded. "And remember, he only apologized to keep his advertisers, proving Rush will do anything with his mouth for cash."
SECTION 1. SHORT TITLE. This Act may be cited as the ``Alaska Hero's Card Act of 2011''. SEC. 2. PILOT PROGRAM ON PROVISION OF HEALTH CARE TO VETERANS RESIDING IN ALASKA AT NON-DEPARTMENT OF VETERANS AFFAIRS MEDICAL FACILITIES. (a) In General.--The Secretary of Veterans Affairs shall establish a pilot program to assess the feasibility and advisability of carrying out a program by which a covered veteran can, except as provided in subsection (f), receive necessary hospital care or medical services for any condition at any hospital or medical facility or from any medical provider eligible to receive payments under-- (1) the Medicare program under title XVIII of the Social Security Act (42 U.S.C. 1395 et seq.); (2) the Medicaid program under title XIX of such Act (42 U.S.C. 1396 et seq.); (3) the TRICARE program; or (4) the Indian health program. (b) Covered Veteran.--For purposes of this section, a covered veteran is any veteran who-- (1) is entitled to hospital care or medical services under laws administered by the Secretary of Veterans Affairs; (2) is located in the State of Alaska; and (3) resides at a location that is located in-- (A) such State; and (B) a town, village, or other community that is not accessible by motor vehicle (as defined in section 30102 of title 49, United States Code). (c) Duration of Pilot.--The pilot program shall be carried out during the two-year period beginning on the date of the enactment of this Act. (d) Cost of Care and Service.-- (1) In general.--The cost of any hospital care or medical service provided under the pilot program shall be borne by the United States from amounts other than amounts appropriated or otherwise made available for an Indian health program. (2) No billing of veterans.--The Secretary shall take measures to ensure that covered veterans are not billed for the hospital care and medical services they receive under the pilot program. (e) Alaska Hero Card.--In carrying out the pilot program, the Secretary shall issue to each covered veteran a card to be known as an ``Alaska Hero Card'' that such veteran may present to an authorized provider to establish the covered veteran's eligibility for hospital care and medical services under the pilot program. (f) Authorized Providers.--The Secretary may establish a list of authorized providers from whom a covered veteran may receive hospital care and medical services under the pilot program. (g) Measures To Ensure Quality and Safety of Care.-- (1) In general.--The Secretary shall take such measures as may be necessary to ensure that the quality and safety of care provided to veterans under the pilot program is equal to or better than the quality and safety of care otherwise provided by the Department of Veterans Affairs. (2) Specific measures.--The measures described in paragraph (1) may include requirements relating to the following: (A) Credentialing and accreditation of providers of hospital care or medical services. (B) Timely reporting of access to care. (C) Timely reporting of clinical information to the Secretary. (D) Reporting safety issues, patient complaints, and patient satisfaction. (E) Robust quality programs, including peer review and compliance with industry standards and requirements. 
(3) Providers certified by indian health service.--For purposes of the pilot program, the Secretary shall consider the quality and safety of care provided by a provider described in subsection (a)(2) who is certified by the Indian Health Service as a community health aide pursuant to section 119 of the Indian Health Care Improvement Act (25 U.S.C. 1616l) and who is providing services within the scope of such certification as being equal to or better than the quality and safety of care otherwise provided by the Department. (h) Savings.--Nothing in this section shall be construed to limit any right of recovery available to an Indian health program under the provisions of section 206 or 405(c) of the Indian Health Care Improvement Act (25 U.S.C. 1621e and 1645(c)), or any other Federal or State law. (i) Definitions.--In this section: (1) Hospital care and medical services.--The terms ``hospital care'' and ``medical services'' have the meanings given such terms in section 1701 of title 38, United States Code. (2) Indian health program.--The term ``Indian health program'' has the meaning given such term in section 4 of the Indian Health Care Improvement Act (25 U.S.C. 1603). (3) Service-connected.--The term ``service-connected'' has the meaning given such term in section 101 of such title. (4) TRICARE program.--The term ``TRICARE program'' has the meaning given such term in section 1072 of title 10, United States Code.
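For readers tracing the statutory logic, the sketch below encodes the three-part "covered veteran" test of section 2(b) as a simple predicate. The data model and field names are hypothetical illustrations, not an official eligibility determination, and the sketch ignores the authorized-provider list permitted under subsection (f).

```python
# Minimal sketch of the "covered veteran" test in section 2(b) of the bill.
# Field names are hypothetical; this is not an official eligibility check.
from dataclasses import dataclass

@dataclass
class CandidateVeteran:
    entitled_to_va_care: bool                    # section 2(b)(1)
    located_in_alaska: bool                      # section 2(b)(2)
    resides_in_alaska: bool                      # section 2(b)(3)(A)
    community_reachable_by_motor_vehicle: bool   # negation of section 2(b)(3)(B)

def is_covered_veteran(v: CandidateVeteran) -> bool:
    """Return True only if every condition of section 2(b) is satisfied."""
    return (v.entitled_to_va_care
            and v.located_in_alaska
            and v.resides_in_alaska
            and not v.community_reachable_by_motor_vehicle)

print(is_covered_veteran(CandidateVeteran(True, True, True, False)))  # True
```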
Alaska Hero's Card Act of 2011 - Directs the Secretary of Veterans Affairs (VA) to establish a two-year pilot program assessing the feasibility and advisability of carrying out a program by which certain veterans entitled to VA services residing in communities in the state of Alaska that are inaccessible by motor vehicle can, subject to exceptions, receive necessary hospital care or medical services at any hospital or medical facility or from any medical provider eligible to receive payments under: (1) titles XVIII (Medicare) or XIX (Medicaid) of the Social Security Act, (2) the TRICARE program (a Department of Defense [DOD] managed health care program), or (3) the Indian health program. Requires the cost of any hospital care or medical service provided under the pilot program to be borne by the United States from amounts other than amounts appropriated or otherwise made available for an Indian health program. Directs the Secretary to take measures ensuring that covered veterans are not billed for hospital care and medical services received under the pilot program. Requires the Secretary, in carrying out the pilot program, to issue to each covered veteran a card to be known as an "Alaska Hero Card" that such veteran may present to an authorized provider to establish the covered veteran's eligibility for hospital care and medical services under the pilot program. Authorizes the Secretary to establish a list of authorized providers from whom a covered veteran may receive hospital care and medical services under the pilot program.
new observations of thermal radiation from isolated middle - aged neutron stars ( e.g. , pavlov , zavlin & sanwal 2002 , pavlov & zavlin 2003 ) initiated further development of the cooling theory of these objects . its main aim is to interpret the data and constrain still poorly known properties of dense matter in neutron star cores , such as the composition , the equation of state and nucleon superfluidity ( e.g. , yakovlev & pethick 2004 , page et al . 2004 and references therein ) . it is well - known ( e.g. , yakovlev & pethick 2004 ) that theoretical models of non - superfluid neutron stars which possess nucleon cores and cool via the modified urca process of neutrino emission can not explain the observations . some neutron stars ( e.g. , rx j08224300 and psr b105552 ) are much warmer than predicted by these theories , while others ( e.g. , the vela pulsar or the compact source in cta 1 ) are much colder . warmest objects can be treated as relatively low - mass neutron stars with strong proton ( e.g. , kaminker , haensel & yakovlev 2001 ) or neutron ( e.g. , gusakov et al . 2004b ) pairing in their cores . strong pairing suppresses the modified urca process and makes the stars warmer . coldest stars should have higher neutrino emission than the emission provided by the modified urca process . they are usually treated as massive neutron stars which cool either via the powerful direct urca process in nucleon ( or nucleon / hyperon ) matter or via similar processes in kaon - condensed , pion - condensed , or quark matter in their inner cores . recently page et al . ( 2004 ) and gusakov et al . ( 2004a ) proposed new scenarios of neutron star cooling which involve only standard physics of neutron star interiors . the neutron star cores are assumed to contain nucleons ( no exotic forms of matter ) with the forbidden direct urca process . some enhancement of the cooling can be provided by neutrino emission due to cooper pairing of nucleons . page et al . ( 2004 ) called their cooling scenario the `` minimal cooling model '' ( for its simplicity ) . we will also use this very properly chosen name for the scenario of gusakov et al . ( 2004a ) that is based on the same assumptions ( but differs in their realization ; see below ) . according to our previous paper ( gusakov et al . 2004a ) the enhanced cooling is produced by the neutrino emission due to cooper pairing of neutrons in the cores of massive neutron stars , while warmest objects are thought to be low - mass stars with strong proton pairing in their cores . we assumed a phenomenological model of strong density - dependent singlet - state proton pairing with the critical temperature @xmath3 that has the maximum value @xmath4 k. we also assumed a phenomenological model of moderate triplet - state neutron pairing @xmath5 with the maximum critical temperature @xmath6 k shifted to higher @xmath7 , where proton pairing dies out . we were able to interpret all the data but under stringent constraints on the density dependence of @xmath5 . the present paper extends our previous analysis . we use the same equation of state of matter in neutron star interiors ( douchin & haensel 2001 ) and the same model of triplet - state neutron pairing . however , in addition , we take into account the effects of surface layers of light ( accreted ) elements ( h and/or he ) , as well as singlet - state neutron pairing @xmath2 in the stellar crust . the effects of accreted envelopes allow us to lower proton pairing ( @xmath8 k ) required to explain the data . 
this weaker proton pairing is consistent with recent microscopic calculations of proton critical temperatures by zuo et al . ( 2004 ) and takatsuka & tamagaki ( 2004 ) ( although some other calculations predict much stronger proton pairing ; e.g. , lombardo & schulze 2001 ; also see references in yakovlev , levenfish & shibanov 1999 , and a recent paper by tanigawa , matsuzaki & chiba 2004 ) . let us emphasize the difference between the cooling scenarios of page et al . ( 2004 ) and gusakov et al . ( 2004a ) . in particular , page et al . ( 2004 ) used several selected models of triplet - state neutron pairing provided by microscopic theories . corresponding cooling curves do not depend sensitively on neutron star mass and do not allow the authors to explain all the data in the frame of one physical model of neutron star interiors . in contrast , gusakov et al . ( 2004a ) used phenomenological models of triplet - state pairing and succeeded in explaining all the data ( although under stringent constraints on these models ; see their paper for details ) . note that page et al . ( 2004 ) analyzed the effect of accreted envelopes on their minimal cooling models but our models are different and require separate analysis . our main aim is to interpret all the data assuming the same physics ( equation of state and superfluid properties ) in the interiors of all neutron stars . table 1 summarizes observations of isolated neutron stars , whose thermal surface radiation has been detected or constrained . we present the estimated stellar age @xmath9 , the effective surface temperature @xmath10 and the surface thermal luminosities @xmath11 ( as detected by a distant observer ) . the data on @xmath9 and @xmath10 are described by gusakov et al . ( 2004a ) in more detail , with two exceptions . first , following slane et al . ( 2004a ) , we slightly lower the upper limit on the surface temperature @xmath10 of psr j0205 + 6449 in the supernova remnant 3c 58 ( @xmath12 mk instead of @xmath13 mk ) . second , we include into consideration the central x - ray source rx j0007.0 + 7303 in the supernova remnant cta 1 . for psr j0205 + 6449 we adopt the age of the historical supernova sn 1181 ( @xmath14820 yr ) . note , however , that chevalier ( 2004 , 2005 ) recently presented arguments in favor of a larger age of the pulsar wind nebula in 3c 58 ( @xmath152400@xmath16 yr ) . were this the actual age of the neutron star , its interpretation would be easier . for the source rx j0007.0 + 7303 we adopt the age of its host supernova remnant cta 1 ( g119.5 + 10.2 ) . according to slane et al . ( 2004b ) , the age is @xmath17 kyr . following halpern et al . ( 2004 ) we assume the neutron star age limits @xmath18 kyr @xmath19 kyr . as for rx j0205 + 6449 , the crab pulsar and rx j0007.0 + 7303 , no thermal radiation component has been detected from these objects , and only the upper limits on @xmath10 have been set ( slane et al . 2004a , weisskopf et al . 2004 , slane et al . 2004b , halpern et al . 2004 ) . [ table 1 ] the surface temperatures of some sources from table 1 ( labeled by @xmath20 ) have been obtained by fitting their thermal radiation spectra with hydrogen atmosphere models . such models are more consistent with other information on these sources ( e.g. , pavlov & zavlin 2003 ) than the blackbody model . for other sources ( e.g.
, for the geminga pulsar and psr b105552 , labeled by @xmath21 ) we present the values of @xmath10 inferred using the blackbody spectrum because this spectrum is more consistent for these sources . the surface temperature of rx j1856.43754 is still uncertain . following gusakov et al.(2004a ) we adopt the upper limit @xmath22 mk . finally , @xmath10 for rx j0720.43125 is taken from motch et al . ( 2003 ) , who interpreted the observed spectrum with a model of a hydrogen atmosphere of finite depth . note also the new results by kargaltsev et al . ( 2005 ) for geminga presented in table 1 . these authors confirm the observational value of @xmath10 reported by zavlin & pavlov ( 2004 ) . taking into account systematic uncertainties of @xmath10 discussed by kargaltsev et al . ( 2005 ) we retain @xmath23 errorbars adopted by gusakov et al . ( 2004a ) and erroneously referred to @xmath24 confidence level in their table 1 . following gusakov et al . ( 2004a ) , the same @xmath23 errorbars will be adopted for psr j0538 + 2817 , psr b105552 , and rx j0720.43128 . as noted by several authors ( e.g. , page et al . 2004 ) , it may be instructive to compare the cooling theory with measured values of stellar thermal surface luminosities @xmath11 , rather than with @xmath10 . the data on @xmath11 are also collected in table 1 . the luminosity is related to the effective surface temperature via @xmath25 where @xmath26 is the stefan - boltzmann constant , @xmath27 is the so called apparent radius of a neutron star ( as would be detected by a distant observer if a telescope could resolve the star ) , @xmath28 is the circumferential radius , and @xmath29 the gravitational stellar mass . thus , the luminosity is determined by the effective temperature and neutron star radius ; an uncertainty in @xmath11 is produced by uncertainties in @xmath10 and @xmath30 . we have already described the values of @xmath10 . as for the values of @xmath30 , we vary them ( with two exceptions indicated below ) within the reasonable theoretical interval for neutron star radii , @xmath30=1116 km ; while translating @xmath28 into @xmath30 we always set @xmath31 . in table 1 the upper limits on @xmath11 for psr j0205 + 6449 , the crab pulsar , rx j0007.0 + 7303 , and rx j1856.43754 were obtained assuming @xmath32 km . the luminosities of rx j08224300 and 1e 1207.45209 have been calculated from the values of @xmath10 obtained by zavlin et al . ( 1999 ) and zavlin et al . ( 2004 ) , respectively . we have taken the same fixed radius @xmath33 km ( @xmath34 km ) which was used by the cited authors to fit the observed spectra with the hydrogen atmosphere models . all other values of @xmath11 in table 1 have been obtained by varying @xmath30 within the interval @xmath351116 km . the central values of @xmath11 have been calculated taking into account the central values of @xmath10 from table 1 and the values of @xmath28 ( or @xmath30 ) obtained in cited papers from spectral fits , except for the vela pulsar , where we set @xmath34 km . for psr b170644 , psr j0538 + 2817 , and rx j0720.43125 these values of @xmath28 have been taking 12 km , 10.5 km , and 10 km , as suggested by mcgowan et al . ( 2004 ) , zavlin & pavlov ( 2004 ) , and motch et al . ( 2003 ) , respectively . for the geminga pulsar we have used the value @xmath36 km from zavlin & pavlov ( 2004 ) , and for psr b105552 we set @xmath37 km from pavlov & zavlin ( 2003 ) . in all the cases , the limits of @xmath11 presented in table 1 seem to be rather uncertain . 
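the luminosity relation described above lost its inline formula ( the @xmath25 placeholder ) in extraction . the standard redshifted blackbody form consistent with the surrounding definitions is l = 4 pi sigma ( r_inf )^2 ( t_inf )^4 with r_inf = r / sqrt ( 1 - 2 g m / ( r c^2 ) ) ; the sketch below evaluates it for illustrative inputs ( r = 13 km , m = 1.4 solar masses , t = 1 mk ) that are assumptions , not entries of table 1 .

```python
# Redshifted blackbody relation consistent with the description above:
#   L_inf = 4*pi*sigma_SB*(R_inf)**2*(T_inf)**4,  R_inf = R/sqrt(1 - 2GM/(R c^2)).
# The example inputs (R = 13 km, M = 1.4 M_sun, T_inf = 1 MK) are illustrative.
import math

SIGMA_SB = 5.6704e-5   # erg cm^-2 s^-1 K^-4
G = 6.674e-8           # cm^3 g^-1 s^-2
C_LIGHT = 2.998e10     # cm s^-1
M_SUN = 1.989e33       # g

def apparent_radius_cm(radius_km, mass_msun):
    r = radius_km * 1e5
    return r / math.sqrt(1.0 - 2.0 * G * mass_msun * M_SUN / (r * C_LIGHT**2))

def surface_luminosity_erg_s(radius_km, mass_msun, t_inf_K):
    r_inf = apparent_radius_cm(radius_km, mass_msun)
    return 4.0 * math.pi * SIGMA_SB * r_inf**2 * t_inf_K**4

print(f"{surface_luminosity_erg_s(13.0, 1.4, 1.0e6):.2e} erg/s")  # ~1.8e33 erg/s
```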
although , in principle , the luminosities @xmath11 can be measured / constrained more accurately than @xmath10 ( by exact measuring the distance and the bolometric thermal flux ) , it is not so for the sources collected in table 1 mainly due to large uncertainties in measured distances to the sources ( see , e.g. , page et al . nevertheless , comparing observed and theoretical luminosities of cooling neutron stars seems to be useful . our limits of @xmath11 are in reasonable agreement with corresponding limits given by page et al . the main differences refer to the geminga pulsar and 1e 1207.45209 . in the first case the limits of @xmath11 presented by page et al . ( 2004 ) correlate with too low apparent radius of the star , @xmath38 km , for the temperature limits adopted in their paper . in the second case page et al . used @xmath39 with @xmath40 erg s@xmath41 from zavlin , pavlov & trmper ( 1998 ) . the value of @xmath42 was possibly underestimated by zavlin et al . ( 1998 ) , because their value of @xmath43 was indicated later by zavlin et al . ( 2004 ) as @xmath10 . also , let us note that the radii of our neutron star models used for the cooling calculations presented below are consistent with the radii used for the interpretation of the data . the cooling calculations have been done using our general relativistic cooling code ( gnedin , yakovlev & potekhin 2001 ) . at the initial cooling stage ( @xmath44 yr ) the main cooling mechanism is the neutrino emission but the stellar interior stays highly non - isothermal . at the next stage ( @xmath45 yr @xmath46 yr ) the neutrino emission is dominant but the stellar interior is isothermal . later ( @xmath47 yr ) the star cools predominantly through the surface photon emission . following gusakov el al . ( 2004a ) we adopt the moderately stiff equation of state for the neutron star matter suggested by douchin & haensel ( 2001 ) . in this case a neutron star core ( a region of density @xmath48 g @xmath49 ) consists of neutrons with the admixture of protons , electrons and muons . all constituents exist everywhere in the core , except for muons which appear at @xmath50 g @xmath49 . the most massive stable star has the ( gravitational ) mass @xmath51 , the central density @xmath52 g @xmath49 , and the ( circumferential ) radius @xmath53 km . the parameters of neutron stars with some other masses are given by gusakov et al . ( 2004a ) . the employed equation of state forbids the powerful direct urca process of neutrino emission ( lattimer et al . 1991 ) in all stable neutron stars ( @xmath54 ) . accordingly , a non - superfluid neutron star of any mass in the range @xmath55 ( without any accreted envelope ) will have almost the same ( universal ) cooling curve @xmath56 ( the dotted curve in the right panel of fig . [ fig1 ] ) . at the neutrino cooling stage , this curve is determined by the modified urca process and is almost independent of the equation of state in the stellar core ( see , e.g. , yakovlev & pethick 2004 and references therein ) . as seen from fig . [ fig1 ] , this universal cooling curve can not explain the data . we will show that all the data can be explained assuming nucleon superfluidity in the internal layers of neutron stars and the presence of accreted envelopes ( of light elements ) . 
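the neutrino - dominated and photon - dominated stages described above can be illustrated with a toy thermal - balance integration , c ( t ) dt / dt = - ( l_nu + l_gamma ) . every normalization and exponent below is an illustrative guess ( a modified - urca - like l_nu proportional to t^8 , a crude envelope scaling t_s proportional to t^0.55 , blackbody - like photon losses ) ; none of these are the inputs of the cooling code cited in the text .

```python
# Toy isothermal cooling model: C(T) dT/dt = -(L_nu + L_gamma).
# All normalizations and exponents are illustrative guesses, not paper inputs.
def toy_cooling_curve(t_end_yr=1e6, T0=1e9):
    YEAR = 3.156e7                          # seconds per year
    T, t = T0, 0.0
    out, next_out = [], YEAR                # log-spaced output, first point at 1 yr
    while t < t_end_yr * YEAR:
        L_nu = 1e40 * (T / 1e9) ** 8        # erg/s, modified-Urca-like scaling
        T_s = 1e6 * (T / 1e8) ** 0.55       # K, crude envelope (surface) scaling
        L_gamma = 1e33 * (T_s / 1e6) ** 4   # erg/s, blackbody-like photon losses
        C = 1e39 * (T / 1e9)                # erg/K, degenerate-core heat capacity
        dTdt = -(L_nu + L_gamma) / C
        dt = 0.01 * T / abs(dTdt)           # step small enough for ~1% change in T
        T += dTdt * dt
        t += dt
        if t >= next_out:
            out.append((t / YEAR, T, T_s))
            next_out *= 2.0
    return out                              # (age [yr], internal T [K], surface T_s [K])

for age, T_int, T_surf in toy_cooling_curve():
    print(f"t = {age:9.1f} yr   T = {T_int:.2e} K   T_s = {T_surf:.2e} K")
```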
following the standard procedure ( gudmundsson , pethick & epstein 1983 ) our code calculates heat transport in the neutron - star interior ( @xmath57 g @xmath49 ) and uses the predetermined relation between the effective surface temperature @xmath43 and the temperature @xmath58 at the bottom of the surface heat - blanketing envelope ( @xmath59 ) . we use the relation calculated by potekhin , chabrier , & yakovlev ( 1997 ) and updated by potekhin et al . we will employ the models of blanketing envelopes made of iron ( which is the standard assumption ) and envelopes containing light elements . the detailed description of these models is given by potekhin et al . ( 2003 ) . the thermal energy in the heat - blanketing envelope is mainly conducted by electrons . the thermal conductivity of electrons which scatter off lighter ions in the accreted envelope is higher than the conductivity in the iron envelope . this means that the accreted envelope is more heat transparent than the iron one , resulting in higher @xmath43 for the same @xmath58 . this rise of the surface temperature depends on @xmath58 and @xmath60 , the mass of light elements ( hydrogen and/or helium , with a possible carbon / oxygen layer at the bottom of the accreted envelope as a result of nuclear burning of lighter elements ) . potekhin et al . ( 1997 , 2003 ) varied the boundaries of layers containing different elements within physically reasonable limits and found that the resulting relation between @xmath43 and @xmath58 is remarkably insensitive to these variations and depends mainly on @xmath60 . however , @xmath60 can not exceed @xmath61 , because at higher @xmath60 the bottom density of the accreted envelope would exceed @xmath62 g @xmath49 . at such high densities , light elements ( including carbon / oxygen ) would rapidly transform into heavier ones . at the neutrino cooling stage @xmath58 is governed by the neutrino emission from the stellar interior and is almost independent of conductive properties in the heat - blanketing envelope . in contrast , at the photon cooling stage the star with the accreted envelope has lower @xmath58 and , consequently , lower @xmath43 due to higher heat transparency of the surface layers . this leads to faster photon cooling through the surface ( for not too cold stars ; see , e.g. , potekhin et al . 1997 ) . the cooling of a neutron star is sensitive to superfluidity of nucleons in the stellar core and to superfluidity of free neutrons in the inner stellar crust . any superfluidity is characterized by its own density - dependent critical temperature @xmath63 . microscopic theories predict mainly ( i ) singlet - state ( @xmath64s@xmath65 ) pairing of neutrons ( @xmath66 ) in the inner crust and the outermost core ; ( ii ) @xmath64s@xmath65 proton pairing in the core ( @xmath67 ) ; and ( iii ) triplet - state ( @xmath68p@xmath69 ) neutron pairing in the core ( @xmath70 ) . these theories give a large scatter of critical temperatures , from @xmath71 k to @xmath72 k and lower , depending on a nucleon - nucleon interaction model and a many - body theory employed ( e.g. , lombardo & schulze 2001 , yakovlev et al . 1999 ; also see recent papers by schwenk & friman 2004 , takatsuka & tamagaki 2004 , zuo et al . 2004 , tanigawa et al . because of these huge theoretical uncertainties , we will not rely on any specific microscopic results but will treat @xmath0 and @xmath73 as phenomenological functions of @xmath7 ( which can be varied in physically reasonable limits ) . 
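Since the critical temperatures are treated as free phenomenological functions of density, any re-implementation needs some parametrized curve with an adjustable peak height, peak position and width. The bell shape below is only a generic stand-in (the text does not give the functional form actually used), and the example parameters are invented for illustration rather than taken from the p1/p2/p3, nt1 or ns1 models of fig. [fig1].

import numpy as np

def tc_profile(rho, tc_peak, rho_peak, width_dex):
    """Hypothetical bell-shaped critical temperature Tc(rho):
    a Gaussian in log10(density) with peak value tc_peak (K),
    peak density rho_peak (g/cm^3) and a width in dex."""
    x = (np.log10(rho) - np.log10(rho_peak)) / width_dex
    return tc_peak * np.exp(-0.5 * x**2)

def is_superfluid(rho, temperature, **pars):
    """Matter at density rho is superfluid once temperature < Tc(rho)."""
    return temperature < tc_profile(rho, **pars)

# invented parameters, loosely in the spirit of a 'moderately strong' proton model
p2_like = dict(tc_peak=3e9, rho_peak=5e14, width_dex=0.5)
print(is_superfluid(5e14, 1e9, **p2_like))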
our aim will be to constrain these functions by comparing theoretical cooling curves with the observations . superfluidity of nucleons affects the heat capacity and suppresses neutrino processes such as the urca and nucleon - nucleon bremsstrahlung processes ( as reviewed , e.g. , by yakovlev et al . ) . it also introduces an additional neutrino emission mechanism associated with cooper pairing of nucleons ( flowers , ruderman & sutherland 1976 ) . all these effects of superfluidity are incorporated into our cooling code . while calculating the neutrino emission due to cooper pairing of protons we use phenomenological values of weak - interaction parameters renormalized by many - body effects ( the same as in gusakov et al . 2004b ) . in the left panel of fig . [ fig1 ] we plot the models for nucleon pairing adopted in our calculations : one model ns1 of strong singlet - state pairing of neutrons ( with the peak of @xmath2 approximately equal to @xmath74 k ) ; three models of proton pairing , strong p1 , moderately strong p2 , and moderate p3 ( @xmath75 k , @xmath76 k , and @xmath77 k , respectively ) ; and one model nt1 of moderate triplet - state neutron pairing ( @xmath78 k ) . models p1 and nt1 are the same as in gusakov et al . ( 2004a ) ; now we add models of weaker proton pairing ( particularly , p2 ) . strong proton pairing has been predicted in a number of publications ( e.g. , tanigawa et al . 2004 ) while other publications predict much weaker proton pairing ( e.g. , zuo et al . 2004 , takatsuka & tamagaki 2004 ) . as seen from the right panel of fig . [ fig1 ] , proton pairing p2 combined with strong crustal superfluidity of neutrons ns1 results in too cold low - mass neutron stars . the neutrino emission due to cooper pairing of protons in the core and of neutrons in the inner crust ( see section [ low - mass ] ) accelerates the cooling and does not allow us to explain the observations of the young and hot neutron stars , rx j08224300 and 1e 1207.45209 . however , this cooling scenario is consistent with the observations of the old and warm neutron stars , psr b105552 and rx j0720.43125 . accreted envelopes can raise the surface temperatures of middle - aged neutron stars and explain the observations of rx j08224300 and 1e 1207.45209 . this is demonstrated by the dot - and - dashed cooling curve for the low - mass star with the accreted envelope of the mass @xmath79 . our interpretation of the neutron stars coldest for their age ( psr j0205 + 6449 in 3c 58 , rx j0007.0 + 7303 in cta 1 , the vela and geminga pulsars ) remains the same as in gusakov et al . ( 2004a ) . these objects can be treated as massive neutron stars ( @xmath80 ) with moderate triplet - state neutron pairing nt1 in their inner cores where proton pairing p1 ( as well as p2 and p3 ) dies out ( the left panel of fig . [ fig1 ] ) . our phenomenological pairing model nt1 seems rather specific ( shifted to too high densities @xmath7 ) ; however , similar models have been obtained from microscopic theories ( e.g. , see the curve @xmath81 in fig . 1 of takatsuka & tamagaki 1997 ) . in this way we come to the same three distinct classes of cooling neutron stars as in gusakov et al . ( 2004a ) ( and generally as in kaminker et al . ) . the first class contains low - mass stars whose surface layers are composed either of iron or of light elements ( solid or dot - and - dashed cooling curves , respectively , for the @xmath82 star in fig . [ fig1 ] ) .
another class contains high - mass stars which show _ enhanced _ cooling ( the solid curve for the @xmath83 star ) produced by the neutrino emission due to cooper pairing of neutrons . finally , there is a class of medium - mass neutron stars ( the solid curve for the @xmath84 star ) which show intermediate cooling . their cooling curves fill in the space between the upper curve for low - mass stars and the lower curve for high - mass stars . these curves explain the observations of psr b170644 , psr j0538 + 2817 , and rx j1856.43754 . as has been shown in section [ physics ] , the presence of light elements on the surfaces of the younger and hotter neutron stars , rx j0822 - 4300 and 1e 1207.4 - 5209 , can allow us to explain their observations if we assume moderately strong proton pairing p2 in their interiors . this pairing is also consistent with the observations of the old and warmest sources , psr b105552 and rx j0720.43125 . we interpret all these sources as low - mass neutron stars . let us analyze the main cooling regulators of such stars . in our case , triplet - state neutron pairing in low - mass stars is weak . for the adopted equation of state of douchin & haensel ( 2001 ) , this implies @xmath85 k at @xmath86 g @xmath49 . under this condition , neutron pairing does not affect the cooling of low - mass stars ( @xmath87 ) at least at the neutrino cooling stage . the thin short - dash line in the left panel of fig . [ fig2 ] shows that ( in the absence of crustal pairing ) strong proton pairing p1 is needed to explain the data on all neutron stars hottest for their age ( gusakov et al . in contrast , cooling curves for moderately strong proton pairing p2 ( the thin solid line ) and moderate pairing p3 ( the thin long - dashed line ) go essentially lower than the curve for pairing p1 , being inconsistent with the observations of rx j08224300 and 1e 1207.45209 . more rapid cooling for these two models of proton superfluidity is provided by the neutrino emission due to cooper pairing of protons which occurs at @xmath88@xmath89 yr . thick lines in the left panel of fig . [ fig2 ] demonstrate the additional effect of neutron pairing ns1 in the crust . comparing three thick lines , one can see that crustal neutron pairing noticeably accelerates only very slow cooling of low - mass neutron stars with strong proton pairing p1 in their cores ( yakovlev et al . 2001 , 2002 ) . in that case the neutrino luminosity due to cooper pairing of neutrons in the stellar crust at @xmath90 yr may dominate the total neutrino luminosity of the stellar core . moreover , at @xmath91 yr crustal neutron pairing reduces the heat capacity of the crust . both effects accelerate the cooling and decrease @xmath10 , violating the interpretation of the two hottest sources , rx j08224300 and 1e 1207.45209 . any model of weaker crustal superfluidity will only bring cooling curves closer to thin ones and simplify the interpretation of the observations . on the other hand , for moderately strong ( p2 ) or moderate ( p3 ) proton pairing in the core , the effects of strong crustal neutron pairing on the cooling of middle - aged neutron stars ( @xmath92 yr @xmath93 yr ) are almost negligible . the neutrino emission due to crustal cooper pairing of neutrons can noticeably accelerate the cooling and decrease @xmath10 only during the internal thermal relaxation stage ( @xmath44 yr ) . the right panel of fig . 
[ fig2 ] demonstrates that the observations of rx j08224300 and 1e 1207.45209 can be explained by adopting any model of proton pairing ( p1 , p2 or p3 ) , model ns1 of crustal superfluidity , and the presence of an accreted envelope of the mass @xmath94 ( thin lines ) . let us note , that the upper dot - and - short - dashed cooling curve goes higher than is needed to interpret the observations of the young and hottest source rx j08224300 . accordingly , following yakovlev et al . ( 2002 ) ( also see potekhin et al . 2003 ) , we may assume the presence of a thinner accreted envelope ( e.g. , @xmath95 ) to interpret the observations of rx j08224300 and 1e 1207.45209 ( for the combination of p1 and ns1 pairing ) . the stronger proton core superfluidity , the less massive accreted envelope is needed for the interpretation of the data for these two stars . in order to explain the old and warmest sources , psr b105552 and rx j0720.43125 , we will treat them as low - mass stars with the iron surface and proton pairing p2 in the core ( or similar model of pairing with the peak of critical temperature @xmath8 k ) . moreover , the presence of any crustal neutron pairing ( for example , ns1 ; thick solid lines in fig . [ fig2 ] ) , does not violate the interpretation of these sources . note that proton pairing p3 ( thick long - dashed lines ) is less appropriate for the interpretation of these sources than pairing p2 . therefore , we adopt proton pairing p2 as the basic model for a new cooling scenario . obviously , any model of stronger proton pairing ( with higher @xmath0 ) is better consistent with the observations . figure [ fig3 ] illustrates the effects of the accreted envelopes of the mass @xmath94 on the cooling of neutron stars with different masses and the same nucleon pairing ( models p2 , nt1 , and ns1 ) . for comparison , we present also the cooling curves for stars with iron surface ( thick solid lines ) and the same nucleon superfluidity ( also see the right panel of fig . [ fig1 ] ) . note that the effect of crustal superfluidity on the cooling of such stars is unimportant . = in the left panel of fig . [ fig3 ] we present our traditional cooling curves @xmath56 and compare them with the data on the surface temperatures . on the right panel we show the temporal evolution of the surface thermal luminosity @xmath96 and compare it with the data ( table 1 ) . both representations of the same cooling processes are seen to be in a reasonably good agreement although the data on @xmath11 are generally less certain and seem to be currently less conclusive ( because , as a rule , the luminosity of the selected sources is determined less accurately than their surface temperature as discussed in section [ observations ] ) . figure [ fig3 ] shows a strong rise of cooling curves for neutron stars with accreted envelopes at the neutrino cooling stage ( @xmath97 yr ) and their steep decrease at the photon cooling stage . their photon stage starts earlier than for stars with the iron surface . assuming the presence of accreted envelopes , we can explain the observations of the young and hottest neutron stars , rx j08224300 and 1e 1207.45209 , treating them either as low - mass or as medium - mass stars . in contrast , the observations of the old and warmest objects , psr b105552 and rx j0720.43125 , can be explained only by treating them as low - mass stars with the iron surfaces and with moderately strong ( or strong ) proton pairing inside . 
it was shown by chang & bildsten ( 2003 , 2004 ) that the mass of light elements may decrease with time , particularly due to diffusive nuclear burning . the characteristic burning time @xmath98 can be considered as an additional cooling regulator . following chang & bildsten ( 2003 , 2004 ) and page et al . ( 2004 ) we assume that the mass of light elements decreases with time as @xmath99 , where @xmath100 is the initial mass . = figure [ fig4 ] illustrates the effect of variable mass of the accretion envelope on the cooling of the @xmath101 neutron star . all cooling curves are calculated assuming nucleon pairing p2 , nt1 , and ns1 . the thick solid line is our typical cooling curve for a low - mass superfluid star without any accreted envelope . we use two values of the initial mass of light elements @xmath102 and @xmath103 and present thus three pairs of cooling curves for three characteristic times @xmath98 . when @xmath98 is lower than the time of the transition from the neutrino cooling stage to the photon stage ( @xmath104 yr ) we obtain a smooth transition of the cooling track from the regime of highest temperatures in young stars to the regime of lower temperatures in old stars ( cf . the curves for @xmath82 in figs . [ fig3 ] and [ fig4 ] ) . this effect has been pointed out by page et al . ( 2004 ) . at @xmath105 yr cooling curves merge into the _ limiting _ curve obtained for constant @xmath106 . in the intermediate case of @xmath107 yr @xmath108 yr the cooling curves gradually approach this limiting curve with the increase of @xmath98 . as seen from fig . [ fig4 ] , by assuming any @xmath98 in the range @xmath92 yr @xmath109 yr one can explain the observations of all neutron stars hottest for their age by one cooling curve . note also that the value @xmath102 is too small to explain the observations of young and hottest neutron stars , especially rx j08224300 , at any @xmath98 . as remarked by chang & bildsten ( 2004 ) , an accreted envelope of a pulsar can become thinner owing to the excavation of ions from the stellar surface by a pulsar wind at a rate @xmath110 , where @xmath111 is the pulsar spin frequency , @xmath112 is the magnetic moment , and @xmath113 is the ion mass . for an ordinary pulsar with the spin period @xmath114 s , @xmath115 g cm@xmath68 , and helium surface we would have the surface mass loss @xmath117 in @xmath118 yr , too small to affect the cooling of a star with the initial helium layer of @xmath119 . for a pulsar with much higher magnetic field and/or faster rotation the effect may be stronger and affects the cooling . we have extended the scenario of neutron star cooling proposed by gusakov et al . ( 2004a ) taking into account the effects of accreted envelopes and crustal singlet - state pairing of neutrons . as stressed in section [ introduction ] , this scenario is different from the minimal cooling scenario of page et al . ( 2004 ) . the general idea of the minimal cooling scheme is that the enhanced neutrino emission , required for the interpretation of observation of neutron stars coldest for their age , is provided by the neutrino emission due to cooper pairing of neutrons . in this case the direct urca process or similar enhanced neutrino processes in kaon - condensed , pion - condensed , or quark matter can be forbidden in neutron stars of all masses . as in gusakov et al . 
( 2004a ) , the proposed cooling scenario imposes stringent constraints on the density dependence of the critical temperature @xmath73 for triplet - state neutron pairing in the stellar core . they result from the comparison of theoretical cooling curves with the data on the three most important `` testing sources '' , psr j0205 + 6449 , rx j0007.0 + 7303 , and the vela pulsar ( sect . [ physics ] ) . by tuning our phenomenological model of triplet - state neutron pairing in the stellar core we obtain a noticeable dependence of the cooling on neutron star mass . it enables us to explain all the data by single combination of models for nucleon superfluidity . assuming the presence of accreted envelopes we obtain two additional parameters to regulate the cooling , which are the initial envelope mass @xmath100 and its characteristic burning time @xmath98 ( chang & bildsten 2003 ) . our interpretation implies the presence of moderately strong proton pairing ( @xmath8 k ) and moderate triplet - state neutron pairing ( with @xmath120 k ) in neutron star cores . also , we have taken into account the effect of strong singlet - state neutron pairing ( @xmath121 k ) in the stellar crust . however , as shown in sects . [ low - mass ] and [ accr ] , the effect of crustal superfluidity is unimportant for cooling middle - aged neutron stars with moderately strong proton pairing in their cores . we need proton superfluidity to explain the observations of the neutron stars hottest for their age . however , in contrast to the cooling scenario of gusakov et al . ( 2004a ) , our new cooling scenario does not require too strong proton pairing . in fact , we can explain the observations of the old and warmest stars , psr b105552 and rx j0720.43125 , by treating them as low - mass neutron stars ( without accreted envelopes ) with moderately strong proton pairing in their cores . such phenomenological models for proton pairing are consistent with recent microscopic calculations of proton critical temperatures by zuo et al . ( 2004 ) and takatsuka & tamagaki ( 2004 ) . the young and hottest neutron stars , rx j08224300 and 1e 1207.45209 , can also be treated as low - mass stars with the same moderate proton superfluidity in their cores but assuming the presence of accreted envelopes . the smaller the mass of the envelope , required for the interpretation of these sources , the stronger proton pairing should be assumed . as discussed above , we need neutron pairing nt1 ( or similar ) to explain the observations of the stars coldest for their age . however , as has been demonstrated by gusakov et al . ( 2004b ) , cooling curves are not too sensitive to exchanging neutron and proton superfluidities ( @xmath122 ) in neutron star cores . therefore , we would also be able to explain the data in the scenario with moderately strong neutron and moderate proton pairing in the stellar cores . neutron star cooling can also be affected by surface magnetic fields and by some reheating mechanisms in neutron star interiors . we have not discussed the effects of magnetic fields ( although they are incorporated in our cooling code ) . the main reason is that these effects are weaker than the effects discussed above ( for ordinary cooling isolated neutron stars of non - magnetar type ; see , e.g. , yakovlev et al . 2002 for a detailed discussion of this point ) . internal reheating mechanisms ( see , e.g. 
, page 1998a , b , and references therein ) , for instance , the reheating due to the viscous dissipation of differential rotation , are relatively weak and model dependent ; they become important at the photon cooling stage . no reheating is required to explain the data in our cooling scenario . more importantly , the most elaborate model equations of state of dense matter ( akmal , pandharipande & ravenhall 1998 ) predict the operation of the direct urca process in the most massive stable neutron stars . this should lead to the existence of new classes of cooling neutron stars . the scenario with the open direct urca process ( which can be called the _ extended minimal cooling scenario _ ) has been studied by gusakov et al . ( 2005 ) . it is important that the same physics of neutron star interiors , which is tested by observations of isolated ( cooling ) neutron stars , can also be tested by observations of accreting neutron stars in soft x - ray transients ( e.g. , yakovlev , levenfish & haensel 2003 ) , based on the hypothesis of deep crustal heating of such stars ( brown , bildsten & rutledge 1998 ) by pycnonuclear reactions in accreted matter ( haensel & zdunik 1990 ) . the observations of soft x - ray transients in quiescent states indicate ( yakovlev , levenfish & gnedin 2005 ) the existence of rather cold neutron stars ( first of all , sax j1808.43658 ) inconsistent with the model of neutron star structure proposed in the present paper . however , these observational indications are currently inconclusive ( e.g. , yakovlev et al . ) . if confirmed in future observations , they could give stronger evidence against the proposed scenario than new observations of cooling neutron stars . in this case the extended minimal cooling scenario may turn out to be more promising . we are grateful to yurii shibanov for very fruitful discussions and critical remarks , and to patrick slane for useful discussion of the observational data . this work was supported partly by the rfbr ( grants 03 - 07 - 90200 , 05 - 02 - 22003 , 05 - 02 - 16245 ) , the russian leading science schools ( grant 1115.2003.2 ) , the russian science support foundation , and by the intas ( grant ysf 03 - 55 - 2397 ) .
page d. , lattimer j.m . , prakash m. , steiner a.w . , 2004 , apjs , 155 , 623
pavlov g.g . , zavlin v.e . , 2003 , in bandiera r. , maiolino r. , mannucci f. , eds . , texas in tuscany . xxi texas symposium on relativistic astrophysics , singapore , world scientific publishing , p. 319
slane p. , helfand d.j . , van der swaluw e. , murray s.s . , 2004a , apj , 616 , 403
slane p. , zimmerman e.r . , hughes j.p . , seward f.d . , gaensler b.m . , clarke m.j . , 2004b , apj , 601 , 1045
takatsuka t. , tamagaki r. , 1997 , prog . theor . phys . , 97 , 345
yakovlev d.g . , kaminker a.d . , gnedin o.y . , 2001 , a&a , 379 , l5
yakovlev d.g . , gnedin o.y . , kaminker a.d . , potekhin a.y . , 2002 , in becker w. , lesch h. , trümper j. , eds . , 270 we - heraeus seminar on neutron stars , pulsars and supernova remnants , mpe , garching , p. 287
we study the `` minimal '' cooling scenario of superfluid neutron stars with nucleon cores , where the direct urca process is forbidden and the enhanced cooling is produced by the neutrino emission due to cooper pairing of neutrons . extending our previous consideration ( gusakov et al . 2004a ) , we include the effects of accreted envelopes of light elements . we employ phenomenological density - dependent critical temperatures @xmath0 and @xmath1 of singlet - state proton and triplet - state neutron pairing in a stellar core , as well as the critical temperature @xmath2 of singlet - state neutron pairing in a stellar crust . we show that the presence of accreted envelopes simplifies the interpretation of observations of thermal radiation from isolated neutron stars in the scenario of gusakov et al . ( 2004a ) and widens the class of models for nucleon superfluidity in neutron star interiors consistent with the observations . [ firstpage ] stars : neutron evolution .
Everyone seems to be at it these days - Volkswagen , Toyota, Porsche, BMW, Nissan, the list goes on and on. Those global giants have recalled millions of vehicles over the past few years. But there are exceptions. And with so many companies asking for drivers to return their cars so faults can be fixed, it appears that Rolls-Royce was feeling a bit left out. So, the luxury car maker has joined in the latest trend by issuing its own recall - for one car. The recall notice In a letter issued by the US National Highway Traffic Safety Administration, Rolls - owned by BMW - announced it was recalling a single Ghost, made in 2014, because of an issue with the airbags. "Rolls BMW of North America is recalling one model year 2015 Rolls-Royce Ghost manufactured on January 23, 2014," the letter reads. "The affected vehicle has thorax air bags fitted to both front seats that may fail to meet the side impact performance requirements for the front seat occupants. As such, this vehicle may fail to comply with Federal Motor Vehicle Safety Standard (FMVSS) number 214, 'Side Impact Protection'." A Rolls-Royce dealer "will replace the driver-side and passenger-side thorax air bag modules, free of charge", the letter adds - It's probably the least they can do, seeing as the car costs £231,730. According to the Financial Times, which first reported the story, the affected car had left its factory in Goodwood, East Sussex, in January 2014 but its North American owner had not yet taken delivery. The issue “was due to the incorrect labelling on one of the airbags”, a Rolls spokesman told the FT. Rolls, which sold 4,000 cars last year, officially unveiled the Ghost in 2009. The 2014 model boasts a 6.6-litre twin-turbo V12, eight-speed automatic gearbox and can reach 62mph in just 4.9 seconds. The entry-level car has an electronically-limited top speed of 155mph, which is probably why you need airbags that work...
– The only single-vehicle recalls you'll usually see are for trucks, fire engines, and school buses. So the owner of a Rolls-Royce made in England can feel pretty special that his or her sedan is the subject of a rare single-car recall that affects that car alone, the Financial Times reports. The affected 2015 luxury sedan, which Syracuse.com notes typically costs at least $300,000, has front-seat air bags that may not meet side-impact requirements (which, as the Telegraph notes, isn't ideal for a car that can reach a top speed of 155mph). It's apparently the result of "the incorrect labeling on one of the airbags," a rep tells the Times. No worries for the owner of the car, which was manufactured on January 23, 2014—the recall notes a Rolls-Royce dealer can hook the vehicle up with new air bags free of charge. (Maybe the owner can soup up the car with some of these insane options while it's in the shop.)
this is a retrospective study of consecutive patients with saccular unruptured ias in the anterior circulation that were treated with the ped ( covidien vascular therapies , mansfield , ma , usa ) from september 2008 to december 2011 . 26 of the patients had been included in a previous study that reported on 178 ias treated at seven neurosurgical centers in this locality . this earlier report , however , did not include risk factor analysis for treatment failure , which would be the focus of the present study . patients received oral clopidogrel ( 75 mg / day ) and aspirin ( 100 mg / day ) for three days before ped placement . clopidogrel and aspirin were continued after surgery for at least three and six months , respectively . adjuvant coiling and additional ped placement were used at the discretion of the interventionists when there was significant persistent contrast - filling within the aneurysm sac . follow - up imaging studies with computerized tomography angiography ( cta ) , magnetic resonance angiography ( mra ) or digital subtraction angiogram ( dsa ) were performed at 6- and 18-months . successful flow diverter treatment was defined as total exclusion of the aneurysm from the circulation with no residual neck on angiogram . secondary outcome measures included periprocedural complications , such as ischemic infarction , transient ischemic attack ( tia ) , intracerebral hemorrhage ( ich ) as well as stent migration or thrombosis . we analyzed the following potential predictors for treatment failure : gender , ia location , size ( the largest dome diameter ) , height , aspect ratio ( the ratio of aneurysm height to neck width ) , wide - neck lesions ( defined as neck width > 4 mm ) , prior endovascular treatment , and the number of ped used . these variables were studied using binary logistic regression against ia occlusion with spss software ( version 20 , ibm , new york , usa ) .
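The SPSS analysis described above can be mirrored in Python for anyone re-running it on their own series. The sketch assumes a per-aneurysm table with one column per candidate predictor; the file name and column names are placeholders (the patient-level data are not reproduced here), and with only 29 aneurysms the predictors would realistically be screened one at a time, as in the loop below, rather than forced into a single joint model.

import pandas as pd
import statsmodels.formula.api as smf

# hypothetical per-aneurysm table; column names are placeholders
df = pd.read_csv("aneurysms.csv")

predictors = ["C(gender)", "C(location)", "size_mm", "height_mm",
              "aspect_ratio", "C(wide_neck)", "C(prior_treatment)", "n_ped"]

for term in predictors:
    fit = smf.logit(f"occluded ~ {term}", data=df).fit(disp=0)
    print(term, fit.pvalues.round(3).to_dict())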
table 1 shows the characteristics of the treated ias . there were 23 small and six large ias , with a mean size of 6.99 mm ( range : 2.1 to 22 mm ) . wide - neck ias accounted for 48.2% ( n=14 ) , and 55.1% ( n=16 ) of the ias had an aspect ratio of less than 1.5 . four ias arose from the fetal - type posterior communicating artery ( pcoma ) origin , three from other segments of the supraclinoid internal carotid artery ( ica ) , five from the cavernous ica and 17 from the paraclinoid ica . one ia had previous stent - assisted coiling , while three others received previous coiling only . out of 29 aneurysms , six did not occlude , yielding an overall occlusion rate of 79.3% ( n=23 ) . these included all four ias that were arising at the pcoma origin with persistent fetal - type circulations . curve reformation angiography showed that the residual aneurysm necks in these lesions were all incorporating a portion of the fetal - type pcoma after ped placement . the mean follow - up duration was 270 days ( range : 150 to 800 ) . lower occlusion rates were found with wide - neck lesions ( 64.2% ) , ias with aspect ratios of less than 1.5 ( 75.0% ) , and those with prior treatment ( 50% ) , but these did not reach statistical significance . only gender was found to have a significant correlation : female patients had a higher occlusion rate compared to men ( 87.5% vs. 40% , p = 0.033 ) ( table 2 ) . the overall symptomatic complication rate was 10.3% ( n = 3 ) , including two tias . one patient suffered from left frontal parenchymal hemorrhage one day after operation , secondary to rupture of an underlying arterio - venous malformation , which was an incidental finding . craniotomy and clot evacuation were performed , and he subsequently recovered to functional independence ( modified rankin scale 2 ) . a 60-year - old woman had a 7.6 mm wide - neck right pcoma ia . it received feeding from both the ica and the fetal - type pcoma . one ped was placed initially , but the aneurysm received persistent inflow from the fetal - type pcoma . a second ped was placed 15 months later . follow - up angiography showed a persistent aneurysm despite flow diverters at 18 months after the second ped ( fig . 1b ) . a 73-year - old man had a 4.4 mm right pcoma lesion that reconstituted after initial coil embolization . cta and dsa again showed the presence of persistent fetal - type circulation , and the ia was fed by both the ica and the pcoma . a 68-year - old man had an 11 mm left pcoma lesion supplied from both the ica and the fetal - type pcoma . there was reconstitution despite three sessions of coil embolization , the last being stent - assisted .
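Returning to the gender comparison above, the published summary numbers constrain the underlying counts: the only split of 29 aneurysms compatible with occlusion rates of 87.5% and 40% and 23 occlusions in total is 21 of 24 in women versus 2 of 5 in men. Those denominators are reconstructed, not stated in the text, and the quoted p = 0.033 comes from logistic regression; the Fisher exact test below is only a rough stand-in for readers who want a quick cross-check.

from scipy.stats import fisher_exact

# counts reconstructed from the reported rates; not stated explicitly in the paper
table = [[21, 3],   # women: occluded, not occluded
         [2, 3]]    # men:   occluded, not occluded
odds_ratio, p_value = fisher_exact(table)
print(f"occlusion: women {21/24:.1%}, men {2/5:.1%}, Fisher p = {p_value:.3f}")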
the ped was approved by the united states food and drug administration ( fda ) in 2011 for the treatment of a selected group of anterior circulation ias . a multi - center study involving 178 ias treated in our locality reported occlusion rates of 81% at 12 months , and 84% at 18 months .
currently , only a limited number of studies have investigated predisposing factors for incomplete occlusion after ped placement . the female gender ( or = 0.52 , p = 0.03 ) and prior treatment ( or = 0.36 , p = 0.002 ) were associated with lower success rates . in mcauliffe 's series of 57 ias , those without prior treatment attained an occlusion rate of 92.5% , compared with 80% in those previously treated with coiling or clipping , and 50% in those with previous stenting or stent - assisted coiling . the authors postulated that the presence of a prior stent may compromise the apposition of the ped to the arterial wall , thus creating endoleak and persistent filling of the aneurysm . we found a similar trend , though it did not reach statistical significance possibly due to the small number of patients with prior treatments in our cohort . notably , we found that fetal - type pcoma ias were associated with a particularly poor occlusion rate in our series . pcoma ias are the second most common aneurysms overall , accounting for 25% of all ias and 50% of all ica ias . the most common type is the one in which the neck of the ia originates from the ica and partially incorporates the pcoma . a pcoma with persistent fetal pattern has the same caliber as the p2 segment of the posterior cerebral artery ( pca ) , and is associated with an atrophic p1 segment . their incidence rates range from 4 to 29% for unilateral , and 1 to 9% for bilateral fetal pcoma . this was not due to case selection bias , as we did not treat any other unruptured pcoma lesions by other means during the study period . pcoma ias are potentially difficult lesions to treat . in the international subarachnoid hemorrhage trial ( isat ) , on reviewing the outcome of coil - embolized pcomas , songsaeng et al found five morphological factors that were predictive of initial aneurysm occlusion and long - term stability . they were small ia size , dome - to - neck ratio < 2 , small size of the pcoma , ica - fundus angle of 160 to 180 degrees , and posteroinferior dome orientation . , lesions associated with fetal - type circulation may pose difficulties , in that backflow from a large pcoma may persistently fill the ia even after a temporary clip was in place . it is critical but not always possible to preserve adequate flow in these large pcomas during clipping . zada et al described a patient who underwent clipping of an ia with fetal variant . the fetal - type pcoma was sacrificed resulting in an occipital infarction . in the context of treatment with the ped , all four of our fetal - type pcoma ias persisted and accounted for two - thirds of all failures . we surmised that while a ped placed within the ica would protect the ia from ica inflow , the dominant and large caliber fetal pcoma will continuously sump blood from the ica across the stent , resulting in diminished flow - diverting effect and therefore persistent aneurysm flow . backflow from the pca territory through the fetal - type pcoma may also contribute to persistent aneurysm perfusion . on the other hand , in the presence of fetal pcoma , increasing flow diversion by using multiple ped across the pcoma ostium may jeopardize the perfusion to the pca territory and may lead to ischemic complications . therefore , based on our limited experiences , we do not find the ped a suitable treatment for pcoma ias with persistent fetal - type circulation , especially when an ia incorporates a significant portion of the pcoma . 
adjuvant coiling may help but can be technically difficult for a wide - neck lesion located at bifurcations . the use of multiple peds and adjuvant coiling in our treatment was not protocol - driven . incomplete occlusion was observed in all cases of pcoma ias associated with persistent fetal - type circulation treated with flow diverters .
purposethe pipeline embolization device ( ped ) is a flow diverter that has shown promise in the treatment of intracranial aneurysms . close to one - fifth of aneurysms , however , fail to occlude after ped placement . this study aims to identify anatomical features and clinicopathologic factors that may predispose failed aneurysm occlusion with the ped.materials and methodswe retrospectively reviewed all anterior circulation unruptured saccular aneurysms treated with the ped in a single - center . the primary outcome measure was angiographic occlusion . anatomical features and potential predictors , including gender , aneurysm location , size , height , aspect ratio , neck width , prior treatment and the number of ped , were studied using binary logistic regression.results29 anterior circulation unruptured saccular aneurysms with a mean size of 6.99 mm treated with the ped in a single center were retrospectively studied . the overall occlusion rate was 79.3% after a mean follow - up of 9.2 months . four aneurysms were related to the fetal - type posterior communicating artery ( pcoma ) , and all were refractory to flow diverter treatment . female gender was significantly associated with a higher occlusion rate . we present the anatomical features and propose possible pathophysiological mechanisms of these pcoma aneurysms that failed flow diverter treatment.conclusiona pcoma aneurysm with persistent fetal - type circulation appears to be particularly refractory to flow diverter treatment , especially when the aneurysm incorporates a significant portion of the pcoma . our experience suggested that flow diverting stents alone may not be the ideal treatment for this subgroup of aneurysms , and alternative modalities should be considered . female patients were found to have a significantly higher rate of treatment success .
multiplicity is common for both stars and protostars ( e.g. , * ? ? ? * ; * ? ? ? while several scenarios have been proposed to explain the origin of multiplicity , fragmentation at early phases is generally regarded as the main mechanism @xcite . in particular , the turbulent fragmentation scenario , which proposes that multiplicity results from turbulent perturbations in a bound core , typically produces wide binaries with separation larger than @xmath11000 au ( e.g. , * ? ? ? in contrast , the disk fragmentation scenario , which proposes that fragmentation occurs in gravitationally unstable protostellar disks , produces relatively close binaries with separation typically within a few hundred au ( e.g. , * ? ? ? * ) . a number of theoretical works have studied the alignment between the spin ( rotation ) axes of binary / multiple ( hereafter referred to as multiple for simplicity ) components and the binary orbital axis in protostars and stars @xcite . during the early stages of star formation , misaligned systems can be produced by turbulent fragmentation where the distribution of angular momentum is complex in the initial core , by dynamical capture in a small cluster , or by ejections in a multiple system @xcite . at later stages , misaligned systems can also be produced by effects that alter angular momentum , such as stellar encounters or precession @xcite . alternatively , aligned systems can form from a large co - rotating structure in a massive disk / ring ( e.g. , * ? ? ? * ) or by fragmentation of a core whose angular momentum vectors are aligned . aligned systems can also be produced via tidal effects during subsequent evolutionary phases @xcite . while ( mis)alignment at late stages alone can not provide clear clues in discerning between formation mechanisms , it provides a clearer discriminant at early stages . 
clccccc 1 & per16 & 2.2 & @xmath0co(2 - 1 ) , ( 1 ) & 86 & @xmath2 ( -15.4@xmath3 ) & ( 03:43:51.0 , 32:03:16.7 ) + & per28 & 2.2 & @xmath0co(2 - 1 ) , ( 1 ) & 86 & @xmath2 ( -15.4@xmath3 ) & ( 03:43:51.0 , 32:03:16.7 ) + 2 & per26 & 2.7 & @xmath0co(2 - 1 ) , ( 2 ) & 139 & @xmath4 ( -13.9@xmath3 ) & ( 03:25:39.0 , 30:44:02.0 ) + & per42 & 2.7 & @xmath0co(2 - 1 ) , ( 2 ) & 139 & @xmath4 ( -13.9@xmath3 ) & ( 03:25:39.0 , 30:44:02.0 ) + 3 & per11 & 6.6 & @xmath0co(2 - 1 ) , ( 3 ) & 81 & @xmath5 ( -14.6@xmath3 ) & ( 03:43:56.9 , 32:03:04.6 ) + 4 & per33 & 3.4 & @xmath0co(2 - 1 ) , ( 4 ) & 60 & @xmath6 ( 86.5@xmath3 ) & ( 03:25:36.5 , 30:45:22.3 ) + 5 & b1-bn & 2.8 & @xmath0co(3 - 2 ) , ( 5 ) & 315 & @xmath7 ( -24.6@xmath3 ) & ( 03:33:21.0 , 31:07:23.8 ) + & b1-bs & 2.8 & @xmath0co(3 - 2 ) , ( 5 ) & 315 & @xmath7 ( -24.6@xmath3 ) & ( 03:33:21.0 , 31:07:23.8 ) + & per41 & 2.8 & @xmath0co(3 - 2 ) , ( 5 ) & 315 & @xmath7 ( -24.6@xmath3 ) & ( 03:33:21.0 , 31:07:23.8 ) + 6 & per8 & 1.9 & @xmath0co(2 - 1 ) , ( 6 ) & 104 & @xmath8 ( -77.9@xmath3 ) & ( 03:44:43.6 , 32:01:33.7 ) + & per55 & 1.9 & @xmath0co(2 - 1 ) , ( 6 ) & 104 & @xmath8 ( -77.9@xmath3 ) & ( 03:44:43.6 , 32:01:33.7 ) + 7 & per12 & 12.7 & @xmath0co(3 - 2 ) , ( 8) & 197 & @xmath9 ( -17.2@xmath3 ) & ( 03:29:10.5 , 31:13:31.0 ) + & per13 & 11.5 & @xmath0co(3 - 2 ) , ( 7 ) & 254 & @xmath10 ( -23.4@xmath3 ) & ( 03:29:12.0 , 31:13:01.5 ) + 8 & per18 & 4.7 & @xmath0co(3 - 2 ) , ( 10 ) & 322 & @xmath11 ( -31.0@xmath3 ) & ( 03:29:11.0 , 31:18:25.5 ) + & per21 & 4.7 & @xmath0co(3 - 2 ) , ( 10 ) & 322 & @xmath11 ( -31.0@xmath3 ) & ( 03:29:11.0 , 31:18:25.5 ) + & per49 & 2.7 & @xmath0co(3 - 2 ) , ( 9 ) & 327 & @xmath10 ( -31.3@xmath3 ) & ( 03:29:12.9 , 31:18:14.4 ) + 9 & per44 & 3.7 & @xmath0co(3 - 2 ) , ( 11 ) & 350 & @xmath12 ( 35.9@xmath3 ) & ( 03:29:03.4 , 31:15:57.7 ) + & svs 13b & 3.7 & @xmath0co(3 - 2 ) , ( 11 ) & 350 & @xmath12 ( 35.9@xmath3 ) & ( 03:29:03.4 , 31:15:57.7 ) + & svs 13c & 2.6 & @xmath0co(3 - 2 ) , ( 12 ) & 312 & @xmath12 ( 35.8@xmath3 ) & ( 03:29:02.0 , 31:15:38.1 ) [ tbl : data ] therefore , investigating the alignment between the spin axes of multiple components provides important guidance on the formation mechanism of multiple systems . among early spectral type , main sequence binaries , most close binaries have aligned spin axes , while wide binaries exhibit misaligned spin axes @xcite . in t tauri disks , a mixture of aligned and misaligned spin axes are observed in wide binaries @xcite . in the protostellar stage , jet / outflow orientations provide important information for disk orientation since jets are always launched perpendicular to disks while disks are still deeply embedded in envelopes . however , compared to the studies at later stages , only a few studies have discovered misaligned jets in the youngest objects . to our knowledge , there have been no systematic and statistical studies of the ( mis)alignment of protostellar outflows in proto - binary / multiple systems using high - resolution , interferometric observations . in this letter we investigate molecular outflows in nine wide multiple systems ( projected separation @xmath13 au ) located in the perseus molecular cloud ( distance = 230 pc , * ? ? ? the data are from a large program with the submillimeter array ( sma ) : mass assembly of stellar systems and their evolution ( masses ; pi : michael dunham , * ? ? ? 
these nine systems cover all the wide systems in the current masses sample and cover 70% of all of the known wide class 0 multiple systems ( some of these systems have class i components ) in perseus @xcite . with outflows from 23 protostellar objects in these nine systems , they currently provide the largest , unbiased , interferometric sample of outflows in proto - multiple systems observed in the same molecular cloud complex with similar sensitivity , angular resolution , and spectral line coverage . we present data from the subcompact configuration with the sma . the observations were carried out between november 2014 and november 2015 . the observations were obtained in good weather conditions with the zenith opacity at 225 ghz around 0.1 . we observed molecular lines and the continuum at 231.29 ghz and 356.72 ghz simultaneously using the dual receiver mode . the continuum measurements at the two different frequencies each have an effective bandwidth of 1312 mhz considering the upper and lower sidebands . high spectral resolution channels were configured for molecular line observations ; smoothed velocity resolutions for lines presented in this letter are the following : 0.5 km s@xmath14 for @xmath0co(2 - 1 ) ( 230.53796 ghz ) and @xmath0co(3 - 2 ) ( 345.79599 ghz ) , 0.2 km s@xmath14 for c@xmath15o(2 - 1 ) ( 219.56036 ghz ) and n@xmath16d@xmath17(3 - 2 ) ( 231.32183 ghz ) . we also used the @xmath0co(2 - 1 ) from the extended configuration published in @xcite for more clear outflow morphologies in per33 . the @xmath18 rms sensitivities of the 230 ghz continuum and @xmath0co observations are summarized in table [ tbl : data ] . we used the mir software packagecqi / mircook.html ] for data calibration and data reduction . the uncertainty in the absolute flux calibration was estimated to be @xmath19% . we used the miriad software package @xcite for data imaging . the synthesized fwhm beams for the subcompact data are about @xmath20 at 230 ghz and @xmath21 at 345 ghz ( see table [ tbl : data ] for details ) . these resolutions are sufficient to resolve wide multiples ( separation @xmath13 au ) at the distance of 230 pc to perseus . more details about masses observations , correlator setup , calibration , and imaging can be found in @xcite . 
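The quoted smoothed velocity resolutions translate into correlator channel widths through dv / c = dnu / nu0. The rest frequencies below are the ones listed in the text; the conversion itself is generic rather than anything specific to the MASSES correlator setup.

C_KMS = 299792.458   # speed of light, km/s

REST_FREQ_GHZ = {    # rest frequencies quoted in the text
    "12CO(2-1)": 230.53796,
    "12CO(3-2)": 345.79599,
    "C18O(2-1)": 219.56036,
    "N2D+(3-2)": 231.32183,
}

def channel_width_mhz(line, dv_kms):
    """Channel width giving velocity resolution dv: dnu = nu0 * dv / c."""
    return REST_FREQ_GHZ[line] * 1e3 * dv_kms / C_KMS

for line, dv in [("12CO(2-1)", 0.5), ("12CO(3-2)", 0.5),
                 ("C18O(2-1)", 0.2), ("N2D+(3-2)", 0.2)]:
    print(f"{line}: {channel_width_mhz(line, dv):.3f} MHz for {dv} km/s")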
p0.5inp1inccr@@xmath22lr@@xmath22lr@@xmath22llr@@xmath22lc per16 & & 03:43:51.00 & + 32:03:23.91 & 21.4&2.3 & 72.5&7.8 & 0.026&0.003 & 0@xmath23 & 7&1(1 ) & [email protected] + per28 & & 03:43:50.97 & + 32:03:08.01 & 16.7&3.0 & 35.8&6.4 & 0.013&0.002 & 0@xmath24 & 112&2(1 ) & [email protected] + per26 & & 03:25:38.87 & + 30:44:05.31 & 200.7&11.4 & 310.4&17.6 & 0.113&0.006 & 0@xmath23 & 162&1(1 ) & [email protected] + per42 & & 03:25:39.12 & + 30:44:00.45 & 24.2&6.7 & 58.6&16.2 & 0.021&0.006 & i@xmath25 & 43&2(1 ) & [email protected] + per11 & ic348 mms1 & 03:43:57.06 & + 32:03:04.66 & 260.8&15.8 & 437.7&26.5 & 0.159&0.01 & 0@xmath23 & 161&1(1 ) & [email protected] + & ic348 mms2 & 03:43:57.74 & + 32:03:10.08 & 43.1&4.6 & 124.4&13.3 & 0.045&0.005 & 0@xmath26 & 36&12(1 ) & [email protected] + per33 & l1448n - b & 03:25:36.33 & + 30:45:14.81 & 423.2&4.6 & 922.4&45.0 & 0.335&0.016 & 0@xmath27 & 122&15(1,2 ) & [email protected] + & l1448n - a & 03:25:36.48 & + 30:45:21.70 & 66.6&7.6 & 274.4&29.2 & 0.100&0.011 & 0/i@xmath27 & 218&10(1,2 ) & [email protected] + & l1448n - nw & 03:25:35.66 & + 30:45:34.26 & 68.1&5.5 & 209.4&18.6 & 0.076&0.007 & 0@xmath27 & 128&15(1,2 ) & [email protected] + b1-bn & & 03:33:21.20 & + 31:07:43.92 & 166.0&6.2 & 209.3&7.8 & 0.076&0.003 & fhsc@xmath28 & 90&1(3 ) & [email protected] + b1-bs & & 03:33:21.34 & + 31:07:26.44 & 308.6&12.9 & 353.1&14.8 & 0.128&0.005 & fhsc@xmath28 & 112&6(3 ) & [email protected] + per41 & & 03:33:20.34 & + 31:07:21.36 & & & & & & & i@xmath25 & 30&5(1 ) & + per8 & & 03:44:43.98 & + 32:01:34.97 & 125.9&7.2 & 159.8&9.1 & 0.058&0.003 & 0@xmath25 & 15&5(4 ) & [email protected] + per55 & & 03:44:43.30 & + 32:01:31.24 & & & & & & & i@xmath25 & 115&2(1 ) & [email protected] + per12 & & 03:29:10.50 & + 31:13:31.33 & 2722.0&82.6 & 4200.0&127.5 & 1.524&0.046 & 0@xmath23 & 19&5(5,6 ) & [email protected] + per13 & ngc1333 iras4b & 03:29:11.99 & + 31:13:08.14 & 915.0&32.5 & 1202.0&42.7 & 0.436&0.015 & 0@xmath23 & 176&2(1 ) & [email protected] + & ngc1333 iras4b & 03:29:12.82 & + 31:13:07.00 & 330.1&30.0 & 419.5&38.1 & 0.152&0.014 & 0@xmath23 & 90&1(1,7 ) & [email protected] + per18 & & 03:29:11.26 & + 31:18:31.33 & 130.2&7.7 & 178.4&10.6 & 0.065&0.004 & 0@xmath23 & 150&1(1 ) & [email protected] + per21 & & 03:29:10.69 & + 31:18:20.11 & 71.3&6.2 & 154.6&13.4 & 0.056&0.005 & 0@xmath23 & 48&13(1 ) & [email protected] + per49 & & 03:29:12.90 & + 31:18:13.87 & 14.8&2.4 & 23.2&3.8 & 0.008&0.001 & i@xmath25 & 27&2(1 ) & + per44 & svs 13a1 & 03:29:03.75 & + 31:16:03.59 & 424.6&19.8 & 525.7&24.5 & 0.191&0.009 & 0/i@xmath29 & 130&5(8 ) & + & svs 13a2 & 03:29:03.40 & + 31:16:00.10 & 80.06&13.3 & 232.7&38.7 & 0.084&0.014 & 0/i@xmath29 & & & + svs 13b & & 03:29:03.04 & + 31:15:51.47 & 332.5&20.8 & 663.9&41.5 & 0.241&0.015 & 0@xmath29 & 170&10(1,9 ) & + svs 13c & & 03:29:02.00 & + 31:15:38.31 & 69.9&7.8 & 139.7&15.6 & 0.051&0.006 & 0@xmath29 & 0&1(1 ) & [ tbl : continuum ] table [ tbl : continuum ] lists the properties of the observed sources derived from the 230 ghz continuum . we observed 19 protostellar sources ( listed as source " in table [ tbl : continuum ] ) including 15 sources identified by a 1.1 mm bolocam continuum and _ spitzer _ survey @xcite , svs 13b and svs 13c @xcite , and two first hydrostatic core candidates b1-bn and b1-bs . some of these sources contain multiple sources as revealed by higher angular resolution observations , and we list those multiples as object " in table [ tbl : continuum ] . in total , there are 24 protostellar objects . 
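The mass estimate described above, M = F_nu d^2 / (kappa_nu B_nu(T_d)) for optically thin dust emission, is straightforward to script. In the sketch below the distance and dust temperature are the adopted values from the text (230 pc, 30 K), while the opacity per gram of gas plus dust is a placeholder of roughly the right order rather than the paper's exact kappa; as the text stresses, plausible changes to kappa, T_d or beta shift the answer by factors of a few, so the output is not expected to reproduce table [tbl:continuum] exactly.

import numpy as np

H = 6.626e-27        # Planck constant, erg s
K_B = 1.381e-16      # Boltzmann constant, erg/K
C_CM = 2.998e10      # speed of light, cm/s
PC = 3.086e18        # parsec, cm
MSUN = 1.989e33      # solar mass, g
JY = 1.0e-23         # Jansky, erg s^-1 cm^-2 Hz^-1

def planck(nu_hz, t_k):
    """Planck specific intensity B_nu(T)."""
    return 2.0 * H * nu_hz**3 / C_CM**2 / np.expm1(H * nu_hz / (K_B * t_k))

def envelope_mass_msun(flux_mjy, d_pc=230.0, t_dust=30.0, nu_ghz=230.0,
                       kappa_cm2_per_g=0.009):
    """Optically thin dust mass M = F_nu d^2 / (kappa_nu B_nu(T_d)).
    kappa is per gram of gas + dust (gas-to-dust = 100) and is a placeholder,
    not necessarily the opacity adopted in the paper."""
    flux = flux_mjy * 1e-3 * JY
    d = d_pc * PC
    return flux * d**2 / (kappa_cm2_per_g * planck(nu_ghz * 1e9, t_dust)) / MSUN

print(f"{envelope_mass_msun(300.0):.2f} Msun for a generic 300 mJy source")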
our 230 ghz continuum observations detected 22 of these 24 protostellar objects ( except for per41 and per55 ) . these objects form nine wide multiple systems with separations larger than 1000 au . several objects ( per18 , l1448n - b , l1448n - nw , svs 13a1 , per12 , ic348 mms ) contain close binaries with separations less than a few hundred au @xcite . in this letter we focus on the properties of the wide systems and regard each close system as one source with properties based on our 230 ghz continuum observations . the position , peak intensity , and total flux density of each object were obtained by fitting a gaussian to the 230 ghz continuum images . we derived the masses of the envelopes using the equation ( assuming the 230 ghz dust emission is optically thin ) : @xmath30 where @xmath31 is the total flux density at 230 ghz , @xmath32 is the distance to the object ( 230 pc ) , @xmath33 is the dust opacity derived from @xmath34 cm@xmath24 g@xmath14 @xcite with an assumed gas - to - dust ratio of 100 , and @xmath36 is the blackbody intensity at dust temperature t. we assumed a dust temperature of 30 k and @xmath37 ( e.g. , * ? ? ? * ; * ? ? ? the masses range from 0.01 m@xmath38 to 1.5 m@xmath38 ( table [ tbl : continuum ] ) . the uncertainties of these mass estimates are at least a factor of 2 due to the choices of dust opacity , dust temperature , the gas - to - dust ratio , the beta value , and resolved - out contributions from emission at larger scales ( e.g. , * ? ? ? * ) . figure [ fig : outflow ] shows outflows from the nine wide multiple systems . we inspected both @xmath0co(2 - 1 ) and @xmath0co(3 - 2 ) maps for each object and show the molecular line transition that presents the clearest outflow morphologies . we identified outflows primarily based on our @xmath0co data ; we also investigated _ spitzer - irac _ 4.5 @xmath39 images @xcite to examine scattered light from outflow cavities and confirm the identifications . the majority of the objects have clear blue- and red - shifted emissions offset from the protostar in position ( hereafter blue - lobe " and red - lobe " ) including per16 , per28 , per26 , per42 , ic348 mms , l1448n - b , l1448n - a , l1448n - nw , per41 , ngc1333 iras4b , and per49 . ic348 mms2 shows detection in the blue lobe toward the north - east , a feature also observed in @xcite with a consistent position and velocity range . ngc1333 iras4b shows a weak outflow in the e - w direction , consistent with the detection in @xcite . the outflow identification of per18 is based on the strong red lobe as a jet - like morphology is also observed in the _ irac _ 4.5 @xmath39 image with an orientation consistent with the red lobe . the blue lobe of per21 exhibits an arc shape , while the red lobe is less clear and is not symmetric about the source . the _ irac _ 4.5 @xmath39 image of per21 shows clear outflow structures that agree with the identified orientation based on @xmath0co . svs 13b shows a red lobe approximately in the n - s direction , and observed a blue lobe in the velocity range from -17 to 6.5 km s@xmath14 with a consistent orientation . svs 13c has overlapping blue and red lobes along the line of sight , suggesting that the outflow is pole - on . we used identifications from the literature for a few objects where we do not observe clear outflow morphologies in the @xmath0co maps . b1-bn and b1-bs exhibit complicated co outflows , particularly in the redshifted emissions between b1-bn and b1-bs @xcite . 
we instead used h2co and ch3oh observations from the literature, which showed two clear outflows in approximately e-w directions. per12 (ngc1333 iras4a) contains a close binary, 4a1 and 4a2, and each source drives an outflow @xcite. we used the position angle of the outflow from 4a2, since it is stronger than the outflow from 4a1. for svs 13a1, we used an identification from the literature, which was based on a 12co(2-1) outflow sensitive to extremely high velocities and on the chain of herbig-haro objects hh 7-11. table [tbl:continuum] lists the position angles of the identified outflows associated with each object. we used c18o(2-1) and n2d+(3-2) data to obtain the velocity of each object. our data show that the n2d+ peaks coincide well with the continuum peaks of the fhsc candidates, while the c18o peaks coincide well with the continuum peaks of the class 0 and i objects. this is expected from chemistry: in the more evolved sources enough co has evaporated to be detectable and destroys n2h+/n2d+, whereas for the fhsc candidates insufficient co has evaporated to be detectable. therefore, we used n2d+ to obtain the source velocities of b1-bn and b1-bs and used c18o for the rest of the objects. each source velocity was obtained by fitting a gaussian profile to the spectrum averaged over one synthesized beam at the continuum peak. the source velocities are listed in table [tbl:continuum].

table [tbl:pair]. projected separations, escape velocities, velocity differences, and outflow orientation differences for the pairs.

pair | projected separation (au) | escape velocity (km/s) | velocity difference (km/s) | outflow orientation difference (deg)
per16+per28 | 3658 | 0.14 ± 0.006 | 0.06 ± 0.04 | 75 ± 2
per26+per42 | 1863 | 0.36 ± 0.011 | 0.12 ± 0.02 | 61 ± 2
ic348 mms+mms2 | 2347 | 0.39 ± 0.011 | < 0.01 ± 0.04 | 55 ± 12
l1448n-b+n-a | 1646 | 0.68 ± 0.015 | 0.43 ± 0.01 | 84 ± 18
l1448n-b+n-nw | 4895 | 0.38 ± 0.008 | 1.32 ± 0.02 | 6 ± 21
l1448n-a+n-nw | 3776 | 0.29 ± 0.011 | 1.75 ± 0.02 | 90 ± 18
b1-bn+b1-bs | 4042 | 0.30 ± 0.004 | 0.47 ± 0.08 | 22 ± 6
b1-bn+per41 | 5777 | -- | -- | 60 ± 5
b1-bs+per41 | 3176 | -- | -- | 82 ± 8
per8+per55 | 2166 | -- | 0.72 ± 0.06 | 80 ± 5
ngc1333 iras4a+4b | 6912 | 0.71 ± 0.009 | 0.06 ± 0.01 | 23 ± 5
ngc1333 iras4a+4b | 8841 | 0.58 ± 0.008 | 0.29 ± 0.04 | 61 ± 5
ngc1333 iras4b+4b | 2463 | 0.65 ± 0.011 | 0.24 ± 0.04 | 86 ± 2
per18+per21 | 3079 | 0.26 ± 0.007 | 0.70 ± 0.04 | 78 ± 13
per18+per49 | 6285 | 0.14 ± 0.004 | -- | 78 ± 13
per21+per49 | 6671 | 0.13 ± 0.005 | -- | 21 ± 13
svs 13a1+13b | 3486 | 0.47 ± 0.009 | -- | 40 ± 11
svs 13a1+13c | 7774 | 0.23 ± 0.005 | -- | 50 ± 5
svs 13b+13c | 4309 | 0.35 ± 0.010 | -- | 10 ± 10

dynamical interactions play a crucial role in the alignment of multiple systems @xcite. to better understand the dynamics of our sample, we investigated the gravitational boundedness of each system. we calculated the escape velocity for each binary pair treated as a two-body system, v_esc = sqrt(2 g (m_1 + m_2) / r), where g is the gravitational constant, m_1 and m_2 are the masses of the two objects in a pair, and r is the separation between the two objects. we used the masses calculated in table [tbl:continuum] for m_1 and m_2. the projected separations and the resulting escape velocities are listed in table [tbl:pair]. the velocity difference (delta v) between the two objects in a pair is calculated from the source velocities in table [tbl:continuum]; the results are also listed in table [tbl:pair]. these results show that the majority of the pairs have v_esc larger than delta v by a factor of @xmath49, implying that most systems are bound. however, this comparison is highly uncertain due to several factors.
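the boundedness test described above is simple enough to reproduce; the sketch below evaluates v_esc = sqrt(2 g (m_1 + m_2) / r) for one pair, using the envelope masses and projected separation quoted in the tables above. treating the projected separation as the true separation and the line-of-sight velocity difference as the full velocity difference are the same simplifications discussed in the text.

import numpy as np

G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_sun = 1.989e33    # solar mass, g
au = 1.496e13       # astronomical unit, cm

def escape_velocity_kms(m1_msun, m2_msun, sep_au):
    """two-body escape velocity v_esc = sqrt(2 G (m1 + m2) / r), in km/s."""
    v_cms = np.sqrt(2.0 * G * (m1_msun + m2_msun) * M_sun / (sep_au * au))
    return v_cms / 1.0e5

# l1448n-b + l1448n-a: envelope masses 0.335 and 0.100 m_sun,
# projected separation 1646 au (values taken from the tables above)
v_esc = escape_velocity_kms(0.335, 0.100, 1646.0)
dv = 0.43   # line-of-sight velocity difference from the pair table, km/s
print(round(v_esc, 2), "km/s;", "bound" if dv < v_esc else "possibly unbound")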
first, the separations used in the formula are projected separations, which likely underestimate the actual separations; with larger actual separations, the systems would be closer to being unbound. in addition, the derived delta v only includes the component along the line of sight; components in other directions would increase the velocity differences, again pushing the systems toward being unbound. furthermore, the envelope mass estimates are uncertain by at least a factor of a few due to the choices of dust temperature and dust opacity (sect. [sect:continuum]), and the masses of the central protostars are not included in the envelope masses. including the protostellar masses could increase the total masses by a factor of two or more. the mass estimates also suffer from spatial filtering and do not include contributions from large scales. we argue that the levels of emission resolved out by the interferometric observations are similar for both m_1 and m_2, so our v_esc estimates based on these masses would be lower limits in this respect. an increase in the masses would make the systems more strongly bound. given these uncertainties, the escape velocity estimates can easily change by a factor of a few. considering these uncertainties, we speculate that the systems in which the velocity difference is comparable to or larger than the escape velocity may be soft binaries (loosely bound) or intermediate binaries (between loosely bound and tightly bound; @xcite). soft binaries are likely to be destroyed by an encounter @xcite. intermediate binaries may be disrupted or may survive depending on the details of their individual dynamical histories @xcite. when comparing the differences in outflow orientations, we consider all possible pairs in each system with separations larger than 1000 au and smaller than 10000 au @xcite. table [tbl:pair] lists the difference in outflow orientation for each pair derived from the data in table [tbl:continuum]. to investigate whether the distribution of observed outflow orientation differences, which are projected on the plane of the sky, reflects a particular intrinsic distribution in 3d space, we performed monte carlo simulations considering three 3d distributions: tightly aligned (outflow orientation differences less than 20 deg), random, and preferentially anti-aligned (outflow orientation differences between 70 deg and 90 deg). we then projected the outflow vectors generated in 3d onto the plane of the sky. figure [fig:ks] shows the cumulative distribution functions of the projected outflow orientation differences for the three 3d distributions. the black solid line shows the observed outflow orientation differences from table [tbl:pair]. we performed kolmogorov-smirnov (k-s) tests with three null hypotheses: that the observed distribution is drawn from the tightly aligned, random, and preferentially anti-aligned distributions, respectively. the p-values from the k-s tests are @xmath52 for the tightly aligned distribution, 0.18 for the random distribution, and 0.5 for the anti-aligned distribution. adopting a significance level of 0.05, we reject the null hypothesis that the observed distribution is drawn from the tightly aligned distribution, but we cannot reject the other two null hypotheses. this result suggests that the outflows in these multiple systems are misaligned.
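the monte carlo comparison described above can be sketched as follows: draw pairs of random outflow axes in 3d, keep the subsets that are tightly aligned (3d difference less than 20 deg) or preferentially anti-aligned (70-90 deg), project everything onto the plane of the sky, and compare the projected orientation differences with the observed values using a kolmogorov-smirnov test. the implementation below is only illustrative (it uses scipy's two-sample test and a simple isotropic prior), so its p-values will not exactly reproduce the numbers quoted above.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def random_axes(n):
    """isotropically distributed unit vectors representing outflow axes."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def angle_3d_deg(a, b):
    """angle between two sets of axes in 3d, folded into 0-90 deg."""
    return np.degrees(np.arccos(np.clip(np.abs(np.sum(a * b, axis=1)), 0.0, 1.0)))

def projected_angle_deg(a, b):
    """difference of position angles after projecting the axes onto the x-y
    plane (taken here as the plane of the sky), folded into 0-90 deg."""
    pa = np.degrees(np.arctan2(a[:, 1], a[:, 0]) - np.arctan2(b[:, 1], b[:, 0]))
    d = np.abs(pa) % 180.0
    return np.minimum(d, 180.0 - d)

n = 200000
a, b = random_axes(n), random_axes(n)
ang3d = angle_3d_deg(a, b)

models = {
    "tightly aligned": projected_angle_deg(a[ang3d < 20], b[ang3d < 20]),
    "random": projected_angle_deg(a, b),
    "anti-aligned": projected_angle_deg(a[ang3d > 70], b[ang3d > 70]),
}

# observed projected outflow orientation differences (deg) from the pair table
observed = np.array([75, 61, 55, 84, 6, 90, 22, 60, 82, 80,
                     23, 61, 86, 78, 78, 21, 40, 50, 10])

for name, sample in models.items():
    stat, p = ks_2samp(observed, sample)
    print(f"{name}: p = {p:.3g}")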
provided that most of our objects are at the youngest, class 0 stage, our observations are likely the best available probe of the initial conditions of wide multiple formation. our k-s test results suggest that the members of these wide multiple systems do not come from the same co-rotating structures, or from an initial cloud with aligned angular momentum vectors. the results suggest that these wide multiple systems likely formed in environments where the distribution of angular momentum was complex and disordered. one major possibility for such an environment is turbulent fragmentation, where the distribution of angular momentum has spatial variations @xcite; in this case misaligned outflows are expected in wide systems @xcite. another possibility is dynamical interactions such as dissipative star-disc encounters via capture of a passing object, typically with a different direction of angular momentum (e.g., @xcite). however, this is less likely since the frequency of such favorable encounters is low. this work is based primarily on observations obtained with the sma, a joint project between the smithsonian astrophysical observatory and the academia sinica institute of astronomy and astrophysics, funded by the smithsonian institution and the academia sinica. the authors thank the sma staff for executing these observations as part of the queue schedule, and charlie qi and mark gurwell for their technical assistance with the sma data. k.i.l. acknowledges support from nasa grant nnx14ag96g. m.m.d. acknowledges support from nasa adap grant nnx13ae54g and from the submillimeter array through an sma postdoctoral fellowship. t.l.b. also acknowledges partial support from nasa adap grant nnx13ae54g. e.i.v. acknowledges support from the russian ministry of education and science grant 3.961.2014/k. s.s.r.o. acknowledges support from the national aeronautics and space administration under grant no. 14-atp14-0078 issued through the astrophysics theory program. support from grant 639.041.439 from the netherlands organisation for scientific research (nwo) is also acknowledged.
we investigate the alignment between outflow axes in nine of the youngest binary/multiple systems in the perseus molecular cloud. these systems have typical member spacing larger than 1000 au. for outflow identification, we use 12co(2-1) and 12co(3-2) data from a large survey with the submillimeter array: mass assembly of stellar systems and their evolution with the sma (masses). the distribution of outflow orientations in the binary pairs is consistent with random or preferentially anti-aligned distributions, demonstrating that these outflows are misaligned. this result suggests that these systems are possibly formed in environments where the distribution of angular momentum is complex and disordered, and that these systems do not come from the same co-rotating structures or from an initial cloud with aligned vectors of angular momentum.
the emerging census of extrasolar planets has revealed an abundance of exoplanets , ranging from super - earths to super - jupiters . mayor et al . ( 2009 ) suggest that @xmath4 30% of solar - type stars have short period ( less than 100 days ) super - earths with masses less than 30 @xmath5 . the estimated frequency of giant planets with masses in the range from 0.3 to 10 @xmath6 ( jupiter - masses ) inside @xmath7 au is @xmath4 10% to @xmath4 20% ( cumming et al . gravitational microlensing detections imply an even higher frequency of giant planets orbiting beyond 3 au , about 35% ( gould et al . giant planet formation thus appears to be a reasonably common outcome of the low - mass star formation process . while core accretion continues to be the most popular mechanism for giant planet formation ( e.g. , johnson et al . 2010 ) , disk instability seems to be necessary as well , at least in order to explain the formation of gas giant planets orbiting at great distances . hr 8799 , e.g. , appears to have a system of three giant planets , orbiting at distances of 24 , 38 , and 68 au , with masses of 10 , 10 , and 7 @xmath6 , respectively ( marois et al . core accretion appears to be unable to form gas giants beyond @xmath4 35 au even in the most favorable circumstances ( e.g. , levison & stewart 2001 ; thommes , duncan , & levison 2002 ; chambers 2006 ) , and gravitational scattering outward of planets formed closer in does not seem to lead to stable wide orbits ( dodson - robinson et al . 2009 ; raymond , armitage , & gorelick 2010 ) . disk instability appears to be the more likely mechanism for forming wide gas giant planets ( boss 2003 , 2010 ; dodson - robinson et al . 2009 ; boley 2009 ) , while its utility for forming planets much closer in continues to be debated ( e.g. , boss 2009 ) . at a minimum , the disk instability mechanism require two conditions to be met in order to produce giant planets : a disk sufficiently massive and cold enough to be gravitationally unstable , and the ability to radiate away enough energy produced by compressional heating to allow any clumps that form to contract toward planetary densities ( e.g. , helled , podolak , & kovetz 2006 ; helled & bodenheimer 2010 ) . the latter question has been a particular focus of study , with much effort devoted to simplified models where disk cooling occurs over a timescale @xmath8 . gammie ( 2001 ) found that fragmentation should occur in two dimensional ( razor - thin ) disks with @xmath9 , where @xmath10 , with @xmath11 being the disk s angular frequency . rice et al . ( 2003 ) found that @xmath12 led to fragmentation in their three dimensional disk simulations . boss ( 2004 ) estimated that @xmath13 characterized his three dimensional disk instability models with radiative transfer that resulted in clump formation . more recently , meru & bate ( 2010 ) have performed a detailed study of the effects of @xmath14 on disk models with varied surface density and temperature profiles , disk masses and radii , and stellar masses , finding that a single critical value of @xmath15 is not always able to predict whether or not fragmentation occurs . in a similar vein , a recent analysis by nero & bjorkman ( 2009 ) found that their analytical cooling time estimates were over an order of magnitude shorter than those calculated by rafikov ( 2005 ) , and hence considerably more supportive of fragmentation . a similar conclusion was found by boss ( 2005 ) . 
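the cooling-time arguments summarized above are usually phrased through the dimensionless product of the local cooling time and the orbital angular frequency; the sketch below just evaluates that product for illustrative numbers, with the critical value left as an explicit assumption since, as noted above, no single threshold appears to be universal.

import numpy as np

def keplerian_omega_per_yr(r_au, m_star_msun=1.0):
    """keplerian angular frequency omega = 2*pi / P, with P in years from
    kepler's third law (r in au, stellar mass in solar masses)."""
    period_yr = np.sqrt(r_au**3 / m_star_msun)
    return 2.0 * np.pi / period_yr

def cooling_parameter(t_cool_yr, r_au, m_star_msun=1.0):
    """dimensionless cooling parameter: t_cool * omega."""
    return t_cool_yr * keplerian_omega_per_yr(r_au, m_star_msun)

beta = cooling_parameter(t_cool_yr=10.0, r_au=8.0)   # illustrative numbers only
beta_crit = 3.0   # assumed threshold; the literature quotes a range of values
print(round(beta, 2), "fragmentation plausible" if beta < beta_crit else "likely stable")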
here we completely avoid the debate over @xmath15 by directly calculating disk cooling through the inclusion of radiative transfer . we then use this brute force approach to attack the other pre - condition for a disk instability leading to fragmentation , namely the disk mass . recent observations of low- and intermediate - mass pre - main - sequence stars imply that their disks form with masses in the range from 0.05 @xmath16 to 0.4 @xmath16 ( isella , carpenter , & sargent 2009 ) . these observed disk masses form one of the primary constraints on disk instability models . previous disk instability models by boss ( 2002 ) for solar - mass protostars assumed disk masses of 0.091 @xmath16 from 4 to 20 au , while those by mayer et al . ( 2004 ) had disk masses ranging from 0.075 to 0.125 @xmath16 inside 20 au . we present new results here for even lower mass protoplanetary disks ( 0.043 @xmath16 ) , to learn if the disk instability mechanism for giant planet formation can continue to operate in such a low mass disk around a solar - mass protostar . the calculations were performed with a numerical code that solves the three dimensional equations of hydrodynamics , including the energy equation , along with radiative transfer in the diffusion approximation and poisson s equation for the gravitational potential . compressional heating and radiative cooling are thus included . the same basic code has been used in all of the author s previous studies of disk instability . the code is second - order - accurate in both space and time . a complete description of the code and of the numerous tests it passed during its development may be found in boss & myhill ( 1992 ) . more recently , the radiative transfer solution technique has been shown to be highly accurate in relaxing to , and maintaining , analytical solutions for the temperature and radiative flux profiles for both spheres and disks of gas ( boss 2009 ) . both the jeans length ( e.g. , boss et al . 2000 ) and the toomre length ( nelson 2006 ) criteria are monitored throughout the runs to ensure that any clumps that might form are not numerical artifacts . the disks initially have the density distribution ( boss 1993 ) of an adiabatic , self - gravitating , thick disk in near - keplerian rotation about a stellar mass @xmath17 @xmath18,\ ] ] where @xmath19 and @xmath20 are cylindrical coordinates , @xmath21 is the midplane density , and @xmath22 is the surface density . the adiabatic constant is @xmath23 ( cgs units ) and @xmath24 for the initial model ; thereafter , the disk evolves in a nonisothermal manner governed by the energy equation and radiative transfer ( boss & myhill 1992 ) . the radial variation of the initial midplane density is a power law that ensures near - keplerian rotation throughout the disk : @xmath25 , where @xmath26 g @xmath27 , and @xmath28 au . the surface density used to define the density distribution is : @xmath29 , where @xmath30 g @xmath31 . the use of this analytical surface density in the above density distribution results in an initial disk surface density distribution with @xmath32 to @xmath33 in the inner disk , steepening to @xmath34 in the outer disk ( boss 2002 ) . regions where the disk density falls to small values are considered to be in the infalling envelope with a density @xmath35 , where @xmath36 g @xmath27 . with @xmath37 , the disk mass is @xmath38 from 4 to 20 au , a mass roughly half that of the otherwise identical disk models in boss ( 2002 ) . 
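a quoted disk mass can be checked against an assumed surface-density profile by integrating sigma(r) over the relevant annulus; the sketch below does this numerically for a generic power law between 4 and 20 au. the index and normalization here are free parameters chosen only to give a mass of order 0.01 m_sun, not the exact values of the models above.

import numpy as np
from scipy.integrate import quad

M_sun = 1.989e33   # solar mass, g
au = 1.496e13      # astronomical unit, cm

def disk_mass_msun(sigma0_cgs, r0_au, p, r_in_au, r_out_au):
    """integrate a power-law surface density sigma = sigma0 * (r0/r)**p
    over 2*pi*r dr between r_in and r_out (radii given in au)."""
    def integrand(r_au_):
        # ring mass element 2*pi*r*sigma(r) dr, converted to cgs
        return 2.0 * np.pi * (r_au_ * au) * sigma0_cgs * (r0_au / r_au_) ** p * au
    mass_g, _ = quad(integrand, r_in_au, r_out_au)
    return mass_g / M_sun

# illustrative: ~100 g/cm^2 at 10 au with an r^-1/2 profile gives ~0.01 m_sun
print(round(disk_mass_msun(sigma0_cgs=100.0, r0_au=10.0, p=0.5,
                           r_in_au=4.0, r_out_au=20.0), 4))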
four different models have been computed with the above disk density distribution and with different combinations of outer disk temperature @xmath39 ( 20 and 25 k ) and envelope temperature @xmath40 ( 30 and 50 k ) . model a had @xmath39 = 20 k and @xmath40 = 50 k , model b had @xmath39 = 25 k and @xmath40 = 50 k , model c had @xmath39 = 20 k and @xmath40 = 30 k , and model d had @xmath39 = 25 k and @xmath40 = 30 k. the initial disk temperatures inside 7 au are those computed by boss ( 1996 ) for this disk density distribution , yielding a midplane temperature of @xmath41 = 339 k at 4 au and decreasing monotonically to @xmath41 = 100 k at 7 au ; thereafter , @xmath41 is assumed to decrease smoothly to @xmath2 = 20 or 25 k. [ in order to err on the side of stability , the temperature is not allowed to drop below this initial distribution . ] these choices lead to initial toomre ( 1964 ) @xmath42 gravitational stability criteria decreasing monotonically outwards from values greater than 10 inside 5 au to minimum @xmath42 values @xmath43 = 1.74 for models a and c and 1.95 for models b and d at the outer grid boundary of 20 au . higher initial @xmath42 values are expected to stifle disk fragmentation , so models b and d are intended to test the robustness of any fragmentation obtained in models a and c. all four models were run initially with @xmath44 , @xmath45 ( effectively ) , @xmath46 and @xmath47 for about 100 yr of evolution . during this time period , all four models evolved in a similar manner , forming multiple trailing spiral arms that interacted with each other . the spiral arms formed throughout the disks , but were most pronounced inside @xmath4 10 au . in models a and c , the spiral arm interactions would occasionally lead to the formation of transient clumps . however , analysis of these clumps did not reveal any that were massive enough to be considered self - gravitating and hence candidates for possible giant planet formation . in models b and d , the spiral arms that formed were not as vigorous as those in models a and c , as expected given their slightly higher initial outer disk temperatures , and again self - gravitating clumps did not occur . after this initial phase of evolution , all four models were doubled in their @xmath48 grid resolution and run further with @xmath49 and @xmath50 , effectively quadrupling the computational load by doubling the number of grid points while halving the time step . in order to maintain numerical stability for the energy equation solution , the time steps used were always small fractions of the maximum permissible explicit time differencing time step ( @xmath51 ) , often as small as 0.01 @xmath51 . this resulted in painfully slow execution of the models , each of which required approximately three years of continuous processing on a dedicated carnegie alpha cluster node . figure 1 shows the equatorial density distribution of model a after 129 yr of evolution . strong spiral arms are apparent from the inner boundary at 4 au out to @xmath4 10 au , as well as a number of clumps , often still aligned with their parental spiral arms . figure 2 depicts the midplane temperature distribution , which rises rapidly inside @xmath4 7 au to a maximum of @xmath4 340 k at 4 au . 
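the toomre criterion used above to characterize the initial gravitational stability can be evaluated directly from a local temperature and surface density; the inputs below are illustrative placeholders rather than the actual model profiles.

import numpy as np

G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
k_B = 1.380649e-16  # boltzmann constant, erg/K
m_H = 1.6726e-24    # hydrogen mass, g
M_sun = 1.989e33    # solar mass, g
au = 1.496e13       # astronomical unit, cm

def toomre_q(T_K, sigma_cgs, r_au, m_star_msun=1.0, mu=2.33):
    """q = c_s * kappa / (pi * G * sigma); for a near-keplerian disk the
    epicyclic frequency kappa is approximated by the angular frequency omega."""
    c_s = np.sqrt(k_B * T_K / (mu * m_H))                      # isothermal sound speed
    omega = np.sqrt(G * m_star_msun * M_sun / (r_au * au)**3)  # keplerian omega
    return c_s * omega / (np.pi * G * sigma_cgs)

# illustrative: 25 k and 150 g/cm^2 at 20 au around a 1 m_sun star gives q ~ 2
print(round(toomre_q(T_K=25.0, sigma_cgs=150.0, r_au=20.0), 2))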
comparison of figures 1 and 2 shows that the clumps form in the region most advantageous for their formation : just outside @xmath4 7 au , where the disk midplane temperatures begin to moderate , yet as close to the center as possible , where the orbital periods are shortest , as expected for a dynamical instability linked to the rotation period . for model a at 129 yr , the maximum midplane density of @xmath52 g @xmath27 occurs for the clump seen at about 6 oclock in figure 1 . figure 3 presents the midplane density and temperature as a function of disk radius for an azimuthal profile that passes through the clump at @xmath4 6 oclock in figure 1 . figure 3 shows that the maximum density occurs at a radius of @xmath4 8 au , just at the radius where the temperature profile begins to rise rapidly inward . the mass of the clump at this time is @xmath53 , slightly above the jeans mass of @xmath54 at the mean density ( @xmath55 g @xmath27 ) and mean temperature ( 26 k ) of the clump . this mass estimate implies that the clump is self - gravitating and could be expected to contract to higher densities if permitted by the spatial resolution of the grid . at the radial distance of the model a clump ( 7 au ) , @xmath42 has dropped from an initial value of 2.7 to 1.9 , allowing marginal clump formation . the tidal radius for the clump is 0.34 au , similar to the radial half - extent of the clump seen in figure 3 . note from figure 3 that at this early phase , the clump has not begun to undergo any significant self - heating due to contraction . upward convective - like motions are present in model a , but their vigor may not be sufficient to permit cooling on an orbital timescale , compared to disk models with twice the disk mass , i.e. , model hr of boss ( 2004 ) . boss ( 2004 ) estimated an effective global value of @xmath13 for model hr ; the reduced convective - like motions in model a imply a value of @xmath56 . the clump orbits on a trajectory equivalent to a keplerian orbit with a semimajor axis of 7.3 au and an eccentricity of 0.05 . the 6 oclock clump shown in figures 1 , 2 , and 3 first appeared roughly 1/4 of an orbital rotation earlier , and persists for another @xmath4 1/2 orbital rotation before the calculation was ended after a total of 143 yr of evolution ( 143 yr equals @xmath4 19 inner orbital rotation periods , as the disk s orbital rotation period is 7.7 yr at 4 au ) . at that final time , the estimated clump mass had increased slightly to @xmath57 , again above the jeans mass of @xmath58 at the mean density ( @xmath59 g @xmath27 ) and the slightly higher mean temperature ( 30 k ) of the clump . this suggests that a protoplanet with an initial mass of at least @xmath57 should form from this clump . the clump s orbital eccentricity has increased to 0.09 by this time , while its semimajor axis has decreased to 6.8 au . a second distinct clump seen at about 10 oclock in figure 1 is not likely to form a protoplanet , however . at a time of 129 yr , the clump s estimated mass is @xmath60 , well below its jeans mass of @xmath61 at this mean density ( @xmath62 g @xmath27 ) and mean temperature ( 46 k ) . the other clumps evident in figure 1 suffer from the same fate of not being massive enough to be self - gravitating . hence model a seems able to lead to only a single giant protoplanet . figure 4 and 5 present the midplane densities and temperatures for model b after 119 yr of evolution . 
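the self-gravity test applied to the clumps above compares a clump's mass with the jeans mass at its mean density and temperature; a minimal sketch is given below. the numerical prefactor depends on the exact definition adopted, and the example density is a placeholder, so the output should only be read as being of order a jupiter mass.

import numpy as np

G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
k_B = 1.380649e-16  # boltzmann constant, erg/K
m_H = 1.6726e-24    # hydrogen mass, g
M_sun = 1.989e33    # solar mass, g

def jeans_mass_msun(T_K, rho_cgs, mu=2.33):
    """one common form: m_j = (5 k T / (2 G mu m_H))**1.5 * (3 / (4 pi rho))**0.5."""
    a = 5.0 * k_B * T_K / (2.0 * G * mu * m_H)
    return a**1.5 * np.sqrt(3.0 / (4.0 * np.pi * rho_cgs)) / M_sun

# illustrative: a 26 k clump with a mean density of 1e-12 g/cm^3
m_j = jeans_mass_msun(26.0, 1.0e-12)
print(f"{m_j:.2e} m_sun (~{m_j * 1047:.1f} jupiter masses)")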
figures 4 and 5 are similar to figures 1 and 2 , although the spiral arms are not quite as robust in model b as in model a , as seen in either the density distributions of figures 1 and 4 or the temperature distributions of figures 2 and 5 . the most promising clump in model b at 119 yr occurs at 5 oclock in figures 4 and 5 . the estimated clump mass is @xmath63 , below the jeans mass of @xmath57 at the mean density ( @xmath64 g @xmath27 ) and mean temperature ( 40 k ) of the clump . the clump at 7 oclock in figures 4 and 5 suffers from the same problem ; model b appears to be close to , but not quite capable of forming self - gravitating clumps . models c and d are identical to models a and b except for having envelope temperatures of 30 k instead of 50 k. while the envelope temperature has some effect on the outcome of the evolutions , after 137 yr model c was only able to form a single self - gravitating clump with a mass of @xmath65 and another clump that did not exceed the jeans mass , as was the case for model a. similar to model b , model d was unable to form a single self - gravitating clump after 113 yr of evolution . the four models clearly show that low mass disks orbiting solar - mass protostars are less able to form self - gravitating clumps that might go on to form giant protoplanets than more massive disks . boss ( 2002 ) presented a suite of solar - mass protostar models with disk masses of @xmath66 that are otherwise much the same as the present models , with the exception of starting their evolutions with outer disk temperatures ranging from 20 k to 50 k , resulting in initial minimum toomre @xmath42 values ranging from 0.94 to 1.5 . all of these boss ( 2002 ) disk models formed multiple self - gravitating clumps ( see , e.g. , figure 3 of boss 2002 ) . the present models thus suggest that the ability of disk instability to form self - gravitating clumps is severely compromised as the disk mass is lowered to @xmath3 . the results are consistent with those obtained by mayer et al . ( 2007 ) , who studied disks extending from 4 to 20 au in orbit around a solar - mass protostar using an sph code with diffusion approximation radiative transfer . mayer et al . ( 2007 ) found that when the disk mass was taken to be @xmath67 , the toomre @xmath42 was below 2 in the outer disk and strong spiral arms appeared . however , fragmentation occurred in some of their models only when the disk mass was increased to @xmath68 , with fragmentation depending on their choice of the mean molecular weight of the disk gas and of the ability to cool from the surface of the disk . given that the mayer et al . ( 2007 ) disks were assumed to have outer disk temperatures of 40 k , considerably warmer than the values of 20 k and 25 k studied here , the requirement of a disk mass higher than @xmath67 for fragmentation to occur in their models is consistent with the present models , as well as with those of boss ( 2002 ) , where fragmentation occurred in similar models with disk masses of @xmath66 and outer disk temperatures as high as 50 k. while envelope temperatures of 30 to 50 k appear to reasonable bounds for a solar - mass protostar during quiescent periods ( chick & cassen 1997 ) , the primary question arising from these four models is what is the proper outer disk temperature ? is @xmath69 20 k or 25 k beyond @xmath4 7 au a realistic assumption ? dalessio et al . 
( 2006 ) presented t tauri disk models with midplane temperatures of @xmath4 30 to 40 k at 10 au , depending on the dust grain population . observations of the dm tau outer disk , on scales though of 50 to 60 au , imply midplane temperatures of 13 to 20 k ( dartois , dutrey , & guilloteau 2003 ) . observations of cometary ices imply disk temperatures of @xmath4 28 k at their formation locations ( kawakita et al . 2001 ) . the composition of the giant planets suggests that solids formed at 5.2 au and beyond at temperatures of no more than 30 to 40 k ( owens & encrenaz 2006 ) . the present models suggest that outer disk temperatures must be as low as @xmath4 20 k in order for disk instability to have a chance to form giant protoplanets in these relatively low mass disks , and it is unclear at present if such low outer disk temperatures are realistic or not . boss ( 2002 ) found that robust disk instablities could occur inside 20 au in disks with a mass of @xmath70 . the present models show that when the disk mass inside 20 au is halved , the ability of disk instability to produce viable , self - gravitating clumps is signficantly compromised , when self - consistently - calculated disk cooling rates are employed . disk instability thus appears to be only a marginally effective process in a disk with @xmath71 , and is unlikely to lead to giant planet formation around solar - mass protostars with disks significantly less massive than @xmath71 . clearly core accretion remains as the favored formation mechanism for giant planets in such lower mass disks . i thank the referee for a number of perceptive comments , sandy keiser for computer systems support and john chambers for advice on orbit determinations . this research was supported in part by nasa planetary geology and geophysics grant nnx07ap46 g , and is contributed in part to nasa astrobiology institute grant nna09da81a . the calculations were performed on the carnegie alpha cluster , the purchase of which was partially supported by nsf major research instrumentation grant mri-9976645 .
forming giant planets by disk instability requires a gaseous disk that is massive enough to become gravitationally unstable and able to cool fast enough for self - gravitating clumps to form and survive . models with simplified disk cooling have shown the critical importance of the ratio of the cooling to the orbital timescales . uncertainties about the proper value of this ratio can be sidestepped by including radiative transfer . three - dimensional radiative hydrodynamics models of a disk with a mass of @xmath0 from 4 to 20 au in orbit around a @xmath1 protostar show that disk instabilities are considerably less successful in producing self - gravitating clumps than in a disk with twice this mass . the results are sensitive to the assumed initial outer disk ( @xmath2 ) temperatures . models with @xmath2 = 20 k are able to form a single self - gravitating clump , whereas models with @xmath2 = 25 k form clumps that are not quite self - gravitating . these models imply that disk instability requires a disk with a mass of at least @xmath3 inside 20 au in order to form giant planets around solar - mass protostars with realistic disk cooling rates and outer disk temperatures . lower mass disks around solar - mass protostars must rely upon core accretion to form inner giant planets .
cells need to regulate the expression of a gene in a specific level at specific time and space in order to fulfill specific task . gene regulation is a ubiquitous phenomenon and is critical in every biological process ( 1 ) . mechanisms of gene regulation include the regulations of transcription , rna processing and translation . in higher eukaryotes , pre - mrna splicing plays an important role in gene regulation . the inclusion of different exons in mrna alternative splicing ( as)enables one single gene to produce multiple different mrnas , which can be further translated into different proteins called splice variants ( 2,3 ) . new high - throughput sequencing technology has revealed that > 90% of human genes undergo as a much higher percentage than anticipated ( 4 ) . and recent genome - wide analyses have indicated that almost all primary transcripts from multi - exon human genes undergo alternative pre - mrna splicing ( 5 ) . therefore , rna splicing greatly increases the genomic complexity of higher eukaryotes ( 6 ) . rna splicing is tissue specific and studies highlight differences in the types of as occurring commonly in different tissues . for example , the frequencies of alternative 3 splice site and alternative 5 splice site usage are 50100% higher in liver than in other investigated tissues ( 7 ) . the importance of splicing is emphasized by its presence in species throughout the phylogenetic tree . evolutionary studies , which have revealed the formation of de novo alternative exons and the evolution of exon intron architecture , highlight the importance of as in the diversification of the transcriptome , especially in humans ( 8) . as we stated earlier , rna splicing is critical in many biological processes . splicing of rna is regulated by complicated mechanisms involving numerous rna - binding proteins and the intricate network of interactions among them . splicing in general , and as in particular , if disrupted , can lead to disease . therefore , mutations in cis - acting splicing elements or splicing machinery and the regulatory proteins which could compromise the accuracy of either constitutive or alternative splicing would have a profound impact on human pathogenesis . defects in pre - mrna splicing have been shown as a common disease - causing mechanism in several studies ( 911 ) . as an example , a point mutation in exon 7 of smn2 gene leads to exon 7 skipping and a truncated protein , which causes decreased effective rate of smn protein production and motor neuron degenerative disease ( 12 ) . other studies indicate trans - acting mutations affect rna - dependent functions and cause disease ( 9,13 ) . a number of bioinformatics resources for rna splicing have been developed during the past decade including databases and tools ( table 1 ) . for example , human splicing finder is a tool to predict the effects of mutations on splicing signals and can identify splicing motifs in human sequence ( 14 ) . these resources have provided great help in the study and analysis of rna splicing . 
table 1. databases and tools of splicing mutation and alternative splicing.

resource | description | url
hgmd (15) | the human gene mutation database (hgmd) constitutes a comprehensive core collection of data on germ-line mutations in nuclear genes underlying or associated with human inherited disease | www.hgmd.org
dbass5 (16,17) | a database of aberrant 5' splice sites | http://www.dbass.org.uk/
dbass3 (17,18) | a database of aberrant 3' splice sites | http://www.dbass.org.uk/
asdb (19) | database of alternatively spliced genes | http://cbcg.nersc.gov/asdb
sssnptarget (20) | a genome-wide splice-site single nucleotide polymorphism database | http://sssnptarget.org
eusplice (21) | a splice-centric database which provides reliable splice signal and as information for 23 eukaryotes | http://66.170.16.154/eusplice
asmamdb (22) | an alternative splice database of mammals | http://166.111.30.65/asmamdb.html
alternative splicing database (23) | an alternative splicing database based on publications | http://cgsigma.cshl.org/new_alt_exon_db2/
tassdb2 (24) | a database of subtle alternative splicing events | http://www.tassdb.info
isis (25) | an intron information system | http://isis.bit.uq.edu.au/
aspicdb (26) | a database of annotated transcript and protein variants generated by alternative splicing | http://www.caspur.it/aspicdb/
steps (27) | a database of splice translational efficiency polymorphisms | http://dbstep.genes.org.uk/
human splicing finder (14) | a tool to predict the effects of mutations on splicing signals or to identify splicing motifs in any human sequence | http://www.umd.be/hsf/
spliceminer (28) | a high-throughput database implementation of the ncbi evidence viewer for microarray splice variant analysis | http://discover.nci.nih.gov/spliceminer
intronerator (29) | exploring introns and alternative splicing in caenorhabditis elegans | http://www.cse.ucsc.edu/~kent/intronerator
webscipio (30) | tool for predicting mutually exclusive spliced exons based on exon length, splice site and reading frame conservation, and exon sequence homology | http://www.webscipio.org
isoem (31) | tool for the estimation of alternative splicing isoform frequencies from rna-seq data | http://dna.engr.uconn.edu/software/isoem/
maistas (32) | a tool for automatic structural evaluation of alternative splicing products | http://maistas.bioinformatica.crs4.it/
hmmsplicer (33) | a tool for efficient and sensitive discovery of known and novel splice junctions in rna-seq data | http://derisilab.ucsf.edu/software/hmmsplicer
sfmap (34) | a web server for motif analysis and prediction of splicing factor binding sites | http://sfmap.technion.ac.il

the above evidence has shown the increasing importance of connecting rna splicing and disease. for this reason, a high-quality database linking rna splicing and splicing mutations with disease will be of great help and is urgently needed in the study of both rna splicing and disease. although the human gene mutation database (hgmd, http://www.hgmd.org/) integrates this kind of data, there are substantial differences between hgmd and the splicedisease database (15): hgmd only provides information on point mutations in intronic sequence for splicing mutations, and it does not provide detailed descriptions of the relationship among gene mutations, splicing defects and diseases.
on the other hand, the splicedisease database is a free and comprehensive database containing cis-acting splicing sequence mutations and trans-acting splicing mutations that cause disease. splicedisease integrates detailed descriptions of the relationship among gene mutation, splicing defect and disease, and it provides direct links to entrez gene, the genome browser, the respective location of the mutation on the gene, and pubmed for each literature reference. at present the splicedisease database is at its first step, and it will be a valuable ongoing resource for the study of rna splicing and disease. rna splicing- and disease-related literature was acquired by pubmed searches using the keyword "spliced"; literature with titles including "mutation spectrum", "mutational spectrum", "mutation analysis" and "mutation screening" was also obtained. we then curated the data manually and retrieved the association between rna splicing and splicing mutations in the gene and disease of interest. we standardized the disease names and gene names based on the nlm mesh browser and entrez gene. each gene was linked to ncbi for comprehensive annotations and to the ucsc genome browser for genomic sequence. the mutations were annotated as well, including the nucleotide change and its location on the sequence. we used the nomenclature for the description of sequence variants and exon/intron numbering according to den dunnen and antonarakis (35): "c." for a cdna sequence, "ivs" for intron sequence, and substitutions designated by a ">". in the sequence file, the intron/exon containing the mutation is highlighted in yellow and the specific nucleotide is marked in red. more importantly, we curated a detailed description of the relationship among gene mutation, splicing defect and disease for each entry. as a result, we manually curated 2337 splicing mutation-disease entries including 303 genes and 370 diseases from 898 publications. of the 2337 entries, 89% are point mutations (figure 1a), among which >50% are mutations between g and a (36.5% g>a and 14.6% a>g) (figure 1b). figure 1. distribution of mutation type and distribution of point mutation type in the splicedisease database. (a) splicing mutation type: point, point mutation; ins, insertion mutation; del, deletion mutation; other, other types. (b) the axes of the histogram represent the proportions of the different nucleotide substitutions among all point mutations. the website is presented using apache tomcat 7.0, a jsp & java web framework, and is available at http://cmbi.bjmu.edu.cn/sdisease/. splicedisease is a user-friendly database, and the homepage has been designed to provide an organized venue to access all data. when a user performs a search in splicedisease, they can use the browser to select the disease or gene of interest or use the search function, which supports fuzzy queries. the result page contains nine items: disease name and gene symbol, gene entrez id (linked to the ncbi gene database), chromosome location of the genomic sequence (linked to the ucsc genome browser), mutation, mutation location (a direct link to the respective position of the mutation in the genome browser), organism, description and reference (linked to the pubmed database) (figure 2).
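the cdna-based nomenclature described above ("c." positions, "ivs" intron numbering, substitutions written with ">") lends itself to simple machine parsing; the snippet below is a purely illustrative parser for a few common splice-site mutation strings and is not part of the database code itself.

import re

# matches strings such as "c.123+1G>A", "c.456-2A>G" or "IVS4+1G>T"
PATTERN = re.compile(
    r"^(?:c\.(?P<pos>\d+)(?P<offset>[+-]\d+)?"
    r"|IVS(?P<intron>\d+)(?P<ivs_offset>[+-]\d+))"
    r"(?P<ref>[ACGT])>(?P<alt>[ACGT])$"
)

def parse_splice_mutation(name):
    """return a dict describing a cdna/ivs-style substitution, or None."""
    match = PATTERN.match(name.strip())
    return match.groupdict() if match else None

for example in ["c.123+1G>A", "IVS4+1G>T", "c.456-2A>G", "not a mutation"]:
    print(example, "->", parse_splice_mutation(example))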
(a) once a user runs a search, the result summary page appears, which includes the nine items described above. the sequence of each exon is shown in upper case and each intron in lower case, and one fasta record per region (exon, intron) is used in the sequence file. the intron/exon containing the mutation is highlighted in yellow and the specific nucleotide is marked in red. these data will facilitate the study of splicing mutational mechanisms, aid the understanding of rna biology and help to discover new therapeutic targets. the splicedisease database is in the first step of the project and further extensions will be developed. as described earlier, a number of bioinformatics resources for rna splicing have been developed; as data accumulate, we will add more trans-acting splicing mutations that cause disease. finally, splicedisease will be continuously updated. funding for open access charge: national natural science foundation of china.
rna splicing is an important aspect of gene regulation in many organisms. splicing of rna is regulated by complicated mechanisms involving numerous rna-binding proteins and the intricate network of interactions among them. mutations in cis-acting splicing elements or their regulatory proteins have been shown to be involved in human diseases, and defects in the pre-mrna splicing process have emerged as a common disease-causing mechanism. therefore, a database integrating rna splicing and disease associations would be helpful for understanding not only rna splicing but also its contribution to disease. in the splicedisease database, we manually curated 2337 splicing mutation-disease entries involving 303 genes and 370 diseases, which have been supported experimentally in 898 publications. the splicedisease database provides information including the change of the nucleotide in the sequence, the location of the mutation on the gene, the reference pubmed id and a detailed description of the relationship among gene mutations, splicing defects and diseases. we standardized the names of the diseases and genes and provided links for these genes to ncbi and the ucsc genome browser for further annotation and genomic sequences. for the location of the mutation, we give direct links from the entry to the respective position/region in the genome browser. users can freely browse, search and download the data in splicedisease at http://cmbi.bjmu.edu.cn/sdisease.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Protecting Tenants at Foreclosure Act of 2009''. SEC. 2. EFFECT OF FORECLOSURE ON EXISTING TENANCY. (a) In General.--In the case of any foreclosure on any dwelling or residential real property, any immediate successor in interest in such property pursuant to the foreclosure shall assume such interest subject to-- (1) the provision, by such successor in interest, of a notice to vacate to any bona fide tenant at least 90 days before the effective date of such notice; and (2) the rights of any bona fide tenant, as of the date of such notice of foreclosure-- (A) under any bona fide lease entered into before the notice of foreclosure to occupy the premises until the end of the remaining term of the lease, except that a successor in interest may terminate a lease effective on the date of sale of the unit to a purchaser who will occupy the unit as a primary residence, subject to the receipt by the tenant of the 90 day notice under paragraph (1); or (B) without a lease or with a lease terminable at will under State law, subject to the receipt by the tenant of the 90 day notice under subsection (1), except that nothing under this section shall affect the requirements for termination of any Federal- or State- subsidized tenancy or of any State or local law that provides longer time periods or other additional protections for tenants. (b) Bona Fide Lease or Tenancy.--For purposes of this section, a lease or tenancy shall be considered bona fide only if-- (1) the mortgagor under the contract is not the tenant; (2) the lease or tenancy was the result of an arms-length transaction; and (3) the lease or tenancy requires the receipt of rent that is not substantially less than fair market rent for the property. SEC. 3. EFFECT OF FORECLOSURE ON SECTION 8 TENANCIES. Paragraph (7) of section 8(o) of the United States Housing Act of 1937 (42 U.S.C. 
1437f(o)(7)) is amended-- (1) in subparagraph (C), by inserting before the semicolon at the end the following: ``, and in the case of an owner who is an immediate successor in interest pursuant to foreclosure-- ``(i) during the initial term of the tenant's lease having the property vacant prior to sale shall not constitute good cause; and ``(ii) in subsequent lease terms, having the property vacant prior to sale may constitute good cause if the property is unmarketable while occupied, or if such owner will occupy the unit as a primary residence''; (2) in subparagraph (E), by striking ``and'' at the end; (3) by redesignating subparagraph (F) as subparagraph (G); and (4) by inserting after subparagraph (E) the following: ``(F) shall provide that in the case of any foreclosure on any residential real property in which a recipient of assistance under this subsection resides, the immediate successor in interest in such property pursuant to the foreclosure shall assume such interest subject to the lease between the prior owner and the tenant and to the housing assistance payments contract between the prior owner and the public housing agency for the occupied unit; if a public housing agency is unable to make payments under the contract to the immediate successor in interest after foreclosure, due to action or inaction by the successor in interest, including the rejection of payments or the failure of the successor to maintain the unit in compliance with paragraph (8) or an inability to identify the successor, the agency may use funds that would have been used to pay the rental amount on behalf of the family-- ``(i) to pay for utilities that are the responsibility of the owner under the lease or applicable law, after taking reasonable steps to notify the owner that it intends to make payments to a utility provider in lieu of payments to the owner, except prior notification shall not be required in any case in which the unit will be or has been rendered uninhabitable due to the termination or threat of termination of service, in which case the public housing agency shall notify the owner within a reasonable time after making such payment; or ``(ii) for the family's reasonable moving costs, including security deposit costs; except that this subparagraph and the provisions related to foreclosure in subparagraph (C) shall not affect any State or local law that provides longer time periods or other additional protections for tenants.''.
Protecting Tenants at Foreclosure Act of 2009 - States that any immediate successor in interest to residential property in foreclosure assumes such interest subject to: (1) giving an existing tenant at least 90-day notice to vacate; and (2) specified rights of such tenant to occupy the premises until the end of the lease. Amends the United States Housing Act of 1937 to require a housing assistance payment contract to provide that in the case of an owner who is an immediate successor in interest pursuant to foreclosure: (1) during the initial term of the lease vacating the property prior to sale shall not constitute other good cause for termination of the lease; but (2) in subsequent lease terms vacating the property prior to sale may constitute good cause if the property is unmarketable while occupied, or if such owner will occupy the unit as a primary residence. Authorizes: (1) a housing assistance payment contract entered into by the public housing agency and the owner of a dwelling unit to provide that the immediate successor in interest to property in foreclosure in which a housing assistance recipient resides assumes such interest subject to the lease between the prior owner and the tenant, and subject to the housing assistance payments contract between the prior owner and the public housing agency for the occupied unit; and (2) the public housing agency, where the successor owner cannot be identified, to use rental funds to pay for the property's utilities if owed by the owner or for reasonable moving costs, including security deposits.
the amplatzer septal occluder (aso; st. jude medical inc., st. paul, minnesota, usa) has been widely used to occlude secundum atrial septal defects (asds) (1, 2). aso migration or embolization after deployment is rare, with a reported rate ranging from 0.5% to 3% (3, 4), and it occurs more frequently in cases of asd without sufficient surrounding rims. once it arises, it may lead to an emergency surgical approach; however, the unique design of the aso allows for percutaneous retrieval of the migrated device. herein, we report a case of percutaneous retrieval of an aso that migrated into the right atrium and its successful redeployment without any complications. a 45-yr-old woman with no previous illnesses presented with exercise intolerance on 7 march 2013. physical examination showed a normal blood pressure of 110/70 mmhg with a regular heart rate of 68 beats/min and an oxygen saturation of 97% on room air. an oval-shaped secundum-type asd with a left-to-right shunt was detected on transthoracic echocardiography (tte). the defect size was 12 mm x 18 mm on 3d-transesophageal echocardiography (tee), and the defect was surrounded by a thin, floppy interatrial septum (ias) with a deficient aortic rim (fig. 1a-c). on cardiac catheterization, the calculated qp/qs and rp/rs were 1.88 and 0.03, respectively. we decided to close the defect percutaneously with the aso under tee and fluoroscopic guidance. since the balloon stretched diameter was 19 mm, we chose a 20 mm aso. the device was loaded onto an 8-french (fr) delivery system (aga medical, golden valley, minnesota, usa) and advanced into the left atrium. at first, improper alignment of the aso with the plane of the ias and the flimsy rim prevented the left atrial (la) disc from holding the inferior rim firmly, resulting in herniation into the right atrium. to overcome these difficulties, we used the left upper pulmonary vein technique: we opened the la disc partially in the left upper pulmonary vein and released the waist and the right atrial (ra) disc quickly before the alignment of the system changed significantly. with this maneuver we performed the "minnesota wiggle" to ensure a firm position of the device. however, 3d-tee just after detachment of the aso from the delivery cable showed that the device had settled in an oblique position with its superior portion prolapsing into the right atrium (fig.). although more than 75% of the rim was trapped by the discs, the device was slipping down into the right atrium with the heart movements, resulting in a left-to-right shunt (fig. 1e). given the high risk of migration, we captured the device using a gooseneck snare, paying special attention not to lose access to the snared aso. the attempt to reposition the prolapsed part of the disc using a 7-fr internal mammary catheter via another approach through the femoral vein failed, and the device migrated completely into the right atrium but was still held by the snare (fig.). we pulled the snared aso back into the femoral vein because the 8-fr delivery sheath could not accommodate it. at that position, the delivery sheath was replaced by a 12-fr femoral sheath, through which the device was re-snared and retrieved (fig.). the device was remounted on the cable and the defect was closed successfully in the same manner (fig.). the occluder has remained in the correct position without a residual shunt on doppler echocardiography at 30 days after the procedure.
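the qp/qs value reported from catheterization above is conventionally derived from the fick principle using oxygen saturations; the sketch below shows that standard calculation with made-up saturation values chosen only to give a ratio near 1.9; these are not the patient's measurements.

def qp_qs(sat_aorta, sat_mixed_venous, sat_pulm_vein, sat_pulm_artery):
    """fick shunt ratio: qp/qs = (SaO2 - SmvO2) / (SpvO2 - SpaO2)."""
    return (sat_aorta - sat_mixed_venous) / (sat_pulm_vein - sat_pulm_artery)

# hypothetical saturations (%) for a left-to-right shunt of roughly this size
print(round(qp_qs(sat_aorta=97.0, sat_mixed_venous=70.0,
                  sat_pulm_vein=98.0, sat_pulm_artery=83.5), 2))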
the overall incidence of asd occluder complications has been reported as ranging from 5.2% to 8.6% ( 1 ) . among them , device embolization or migration is the most frequent major complication , with an incidence from 0.5% to 3% ( 2 ) . a deficient aortic rim and a thin and floppy posterior rim are known risk factors for device migration . one of the solutions to this problem is slight oversizing - that is , using a device 2 to 4 mm larger than the stretched diameter . however , because of concerns about erosion , we selected a device only 1 mm larger in this case . although we could achieve an acceptable position wherein more than 75% of the rim was trapped around the defect , the hypermobile atrial septum could not offer enough support for device stabilization , leading to migration . once the device has migrated , adjusting it into a safe position to prevent further migration can be achieved with supporting tools , including a stiff wire or bioptome for stabilization and a snare loop , tulip - shaped snare , basket , or alligator clamps to grab the device ( 4 , 7 , 8) . when this step fails or is not suitable , emergency surgical correction should be considered . if the device is recaptured , it can be retrieved into the delivery sheath , and this procedure can be done more easily with a sheath 2 fr larger than the size recommended for device delivery . we believe that when it is difficult to pull the snared device back into the delivery sheath and the size of the device is considered suitable enough not to cause vascular or any surrounding structural injury , it is prudent to place the device in the femoral vein , from where it can be easily retrieved through another large - sized femoral sheath , as in the present case . our case demonstrated several points : 1 ) 3d - tee can provide additional information on the anatomic relationship between the device and the asd ; 2 ) a device slipping at one portion and sitting on a floppy septum should be rectified ; and 3 ) moving the migrated device into the femoral vein can be an alternative method for safe retrieval of a moderately sized aso in such a situation .
percutaneous device closure for secundum atrial septal defects ( asds ) has been performed commonly and safely with high success rates . however , it is still challenging to close asds that are surrounded by deficient or hypermobile rims , and such closures can be compromised by an unexpected migration of the device . we report a case of percutaneous amplatzer septal occluder ( aso ; st . jude medical inc . , st . paul , minnesota , usa ) device closure for an asd with a thin and floppy interatrial septum , in which the device immediately migrated into the right atrium and could not be pulled back into the delivery sheath . to our knowledge , this is the first report on a successful percutaneous retrieval and redeployment of the device in such a situation , preventing any vascular injury or unplanned emergency open heart surgery .
of the @xmath2150 globular clusters associated with the milky way galaxy , 12 have been seen to harbor a bright ( @xmath4 ergss@xmath5 ) , or transient , low - mass x - ray binary ( lmxb ) @xcite . these binaries are presumably formed through stellar encounters in the dense cores of the clusters ; such events play an important role in the dynamical evolution of the clusters , as the formation of a single lmxb can impart enough kinetic energy to the surrounding stars to terminate a core collapse . at the same time , the globular cluster lmxbs provide a unique opportunity to study lmxbs at a well - known distance with a well - known ( and usually very poor ) metalicity level . the x - ray source x1832@xmath0330 in the globular cluster ngc 6652 is a lesser known example of this class . although the error box for the _ heao-1 _ source , h1825@xmath0331 , contained this cluster , it was originally not considered to be a secure identification because the error box covered a 2.7 deg@xmath6 area in sagittarius @xcite . the first secure detection of x1832@xmath0330 as a globular cluster lmxb was made during the course of the _ rosat _ all - sky survey @xcite . more recently , it was detected in pointed _ rosat _ observations @xcite , and two type i x - ray bursts from this source , as well as the persistent emission , have been detected using the wide field camera of the _ bepposax _ satellite @xcite . thus there is now a strong circumstantial evidence that the _ heao-1 _ source h1825@xmath0331 and x1832@xmath0330are the same source ; here we have adopted this identification as a working assumption . in the abovementioned papers , the distance to x1832@xmath0330 was assumed to be @xmath214.3 kpc . however , the first published color - magnitude diagram of this cluster @xcite has led to the re - evaluation of the distance to @xmath29.3 kpc , based on the measured v magnitudes v@[email protected] of its horizontal branch ( as well as the interstellar reddening , e@[email protected] ) . moreover , ngc 6652 appears to be significantly younger than the average globular clusters @xcite . thus the lmxb x1832@xmath0330 in ngc 6652 may provide an important comparison with other globular cluster sources , due to the relative youth and the relatively high metal content ( [ fe / h][email protected] ) of this cluster ( though not the highest among globular clusters with lmxbs ) . a search for the optical counterpart has recently been carried out using new ground - based data along with archival hst data @xcite . although the archival hst observations do not completely cover the x - ray error circle , the most promising candidate for the optical counterpart is their star 49 , which is relatively faint ( @xmath9=+5.5 ) compared with those of other globular cluster lmxbs . in this paper , we present our analysis of a serendipitous _ asca _ observation of x1832@xmath0330 ; in comparing the previous observations with the _ asca _ data , we have recalculated the previously - published source luminosities for a new fiducial distance of 9.3 kpc . the region of the sky containing ngc 6652 was observed with the japanese x - ray satellite , _ asca _ @xcite between 1996 apr 6 20:12ut and apr 7 19:00 ut ( seq no 54016000 ) . this observation was part of a program to observe diffuse galactic emission , and only serendipitously included x1832@xmath0330 in its field of view . 
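as a quick check on the luminosity rescaling mentioned above : luminosity scales with the square of the assumed distance , so revising the distance from 14.3 kpc to 9.3 kpc lowers all previously published luminosities by a fixed factor . a minimal sketch ( the flux used in the example is an illustrative round number , not a measured value ) :

    # Rescaling X-ray luminosities when the assumed cluster distance changes.
    # L = 4 * pi * d^2 * F, so L_new / L_old = (d_new / d_old)**2.
    import math

    KPC_CM = 3.086e21  # centimetres per kiloparsec

    def luminosity(flux_cgs, distance_kpc):
        """Isotropic luminosity in erg/s from a flux in erg/cm^2/s."""
        d_cm = distance_kpc * KPC_CM
        return 4.0 * math.pi * d_cm ** 2 * flux_cgs

    scale = (9.3 / 14.3) ** 2
    print(f"luminosities quoted for 14.3 kpc shrink by a factor of {scale:.2f} at 9.3 kpc")

    # example with an illustrative persistent flux of 1.5e-10 erg/cm^2/s:
    print(f"L(9.3 kpc) = {luminosity(1.5e-10, 9.3):.2e} erg/s")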
there are four co - aligned x - ray telescopes on - board _ asca _ , two with sis ( solid - state imaging spectrometers , using ccds ) detectors and two with gis ( gas imaging spectrometers ) detectors . however , little useful data were taken with gis-2 , due to a problem with its on - board processor ( there are roughly 40 min of useful gis-2 data , which we have chosen not to analyze ) . no such problems exist for the sis data ; however , the observation was done with both sis detectors in 4-ccd mode , with x1832@xmath0330 near the center of the field of view . this pointing direction minimizes the vignetting , but the photons from x1832@xmath0330 are spread over all 4 chips , complicating the analysis . moreover , 4-ccd observations suffer most severely from the cumulative effect of radiation damage . as a consequence , events below @xmath20.7 kev had to be discarded on - board to avoid telemetry saturation due to flickering pixels , and the spectral resolution and the quantum efficiency are both severely degraded @xcite . the degradation is believed to be due largely to residual dark distribution ( rdd ) : a significant fraction of the ccd pixels now show elevated levels of dark current , the histogram of which is strongly skewed . when rdd - affected data are processed ( on - board or on the ground ) assuming a gaussian distribution of dark current , this leads to incorrect pulse heights or spurious rejection of x - ray events as particle events . the current version of the response generator has a model of the degrading spectral resolution , but not one of the degrading quantum efficiency . we have used the ftool , correctrdd , which partially recovers the detection efficiency ; however , this algorithm is not 100% effective . moreover , the calibration of rdd - corrected data is uncertain . therefore , we have primarily relied on the gis data , cross - calibrated the rdd - corrected sis data against the gis data , and used the sis data only when gis data were unavailable . since a bright source is clearly detected ( see below ) , we have opted to use loose sets of screening criteria . for the gis , we use non - saa , non - earth - occult ( elv@xmath10 ) data at the standard high - voltage setting , and only exclude regions where the cut - off rigidity is less than 4 gev / c ; note that , for safety reasons , the high voltage is reduced well before the satellite enters the saa . after screening , we are left with @xmath242 ksec of good gis-3 data . for the sis , we use the additional criteria that the line of sight must be @xmath1120@xmath12 away from the sunlit earth , the time after day / night and saa transitions must be @xmath11128 s , and the pixl monitor counts for the ccd chips must be within 3@xmath13 of their mean value . moreover , we have imposed the condition that the data must have been taken in the faint mode . to correct for rdd and dfe ( dark frame error , the variation in the mean dark level of all the pixels due primarily to scattered light on the ccd ) , we have applied faintdfe to the original faint mode data first , followed by correctrdd , before converting to bright2 mode , to minimize the interference between the dfe and rdd corrections . this resulted in @xmath221 ksec of good sis data . we have tested the calibration of the rdd - corrected sis data by performing simultaneous fits to the gis-3 and sis data .
we find that , even after the rdd correction , the best - fit sis model contains a spurious excess n@xmath14 of 1.6@xmath15 @xmath16 as well as a normalization below that of the gis-3 data by a factor of 1.17 . in fig . 1 , we have plotted the background - subtracted light curve of x1832@xmath0330 in 128 s bins . the source appears variable on short timescales ( from about a few bins of this diagram down to 4 sec ) : a straight line fit to a 4-s bin light curve , after removing the longer - term trend ( by subtracting a 256-s running average of itself ) , yields a @xmath17 of 1.114 for 10479 degrees of freedom , meaning that the source is variable at a formal confidence level of @xmath18 . however , some caution is required at this level : although the background is negligible , there may be a systematic contribution to this apparent variability from , e.g. , attitude jitter or the imperfect correction of the time - dependent detector gain . moreover , a fourier transform did not reveal a periodicity in the range 8 s to @xmath21 hr ( with an rms amplitude of @xmath20.5% in this range ) ; the highest peaks in the periodogram in this range have semi - amplitudes of 1.6 % , while a signal would have to have an amplitude of @xmath112 % to be detected at @xmath1199% confidence . although there are some possible peaks in the periodogram at longer periods , we consider these to be rather unreliable , since they can be explained as due to an interplay of the quasi - regular data gaps and the increased count rate between 0 and 2 ut on apr 7 ( see fig . 1 ) ; this flare - like event may well be part of the aperiodic variability . we do not see spectral changes during this flare - like event ( such as might be expected were it the tail of a type i burst ) . the highest peaks in the periodogram are at p=46600 s ( @xmath11half the duration of the observation ) with a 4.5% amplitude , and at 17400 s ( 2.8% ) . although non - sinusoidal modulation ( e.g. , dipping activity ) with certain periods ( e.g. , near the 96 min spacecraft orbit ) may have eluded detection , this would seem to require an unfortunate coincidence . in fig . 2(a ) , we present the average gis-3 spectrum of x1832@xmath0330 with the best - fit power - law model . the fit is poor , with @xmath17 = 1.81 ; the parameters are photon index @xmath19 = [email protected] and n@xmath14=3.6@xmath20 cm@xmath5 , and a 2 - 8 kev flux of 1.54@xmath22 ergs@xmath16s@xmath5 . note that the fitted n@xmath14 is considerably greater than that estimated from the optical extinction ( @xmath23 @xmath16 ) or the value derived from the _ rosat _ pspc spectral fit ( @xmath24 @xmath16 ; see also the bottom panel of fig . 2 ) . moreover , the inferred photon index @xmath19 is radically different between the _ asca _ gis ( 1.75 ) and _ rosat _ pspc ( 1.07 ) observations . we therefore must conclude that the spectrum of x1832@xmath0330 is highly variable , more complex than a simple power law , or both . as a likely candidate for the complex spectral shape , we have fitted a power law modified by a partial covering absorber ( with a fixed interstellar absorption of @xmath23 @xmath16 ) to the gis-3 data . this has markedly improved the fit ( to @xmath17=1.15 ; @xmath19=1.86@xmath25 , with @xmath267% covering by a n@xmath14=7.6@xmath26 @xmath16 absorber ) . moreover , this model provides a plausible description of the spectrum in a simultaneous fit to the _ asca _ gis and _ rosat _ pspc spectra ( fig . 2 ) .
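to make the spectral model just described concrete , the sketch below writes out the usual partial - covering - absorber times power - law form ; the photoelectric cross - section is a crude stand - in ( real fits use the tabulated cross - sections built into the fitting package ) , and the parameter values are only illustrative , chosen near the numbers quoted above .

    # Schematic partially covered, absorbed power law:
    #   M(E) = exp(-N_gal*sigma(E)) * [ f*exp(-N_pc*sigma(E)) + (1 - f) ] * K * E**(-Gamma)
    # sigma(E) below is a rough E**-3 stand-in for the photoelectric cross-section.
    import numpy as np

    def sigma_pe(energy_kev):
        """Very rough photoabsorption cross-section per H atom (cm^2), for illustration only."""
        return 2.0e-22 * energy_kev ** -3.0

    def partial_covering_powerlaw(energy_kev, norm, gamma, n_gal, n_pc, cov_frac):
        absorption_gal = np.exp(-n_gal * sigma_pe(energy_kev))
        absorption_pc = cov_frac * np.exp(-n_pc * sigma_pe(energy_kev)) + (1.0 - cov_frac)
        return absorption_gal * absorption_pc * norm * energy_kev ** (-gamma)

    energies = np.linspace(0.5, 10.0, 50)  # keV grid
    model = partial_covering_powerlaw(energies, norm=1e-2, gamma=1.9,
                                      n_gal=1e21, n_pc=8e22, cov_frac=0.67)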
we therefore conclude that the x - ray spectrum of x1832@xmath0330 is not a simple power law . however , given the energy range of the data and the current level of calibration uncertainties , we can not say for certain whether this description of the spectral shape is unique or preferred . [ table 1 : long - term variability of x1832@xmath0330 . @xmath27 measured or inferred flux in the 2 - 8 kev band , in ergs @xmath16s@xmath5 . @xmath28 inferred 2 - 8 kev luminosity in erg s@xmath5 for an assumed distance of 9.3 kpc . @xmath29 source identification remains tentative . ] in table 1 , we have summarized the long - term history of the x - ray luminosity of x1832@xmath0330 . for the time intervals when both gis-3 and the sis instruments were taking data , the latter add little . however , we have discovered a type i x - ray burst from x1832@xmath0330 in the section of sis data for which there is no gis-3 coverage . the light curves in three energy bands are shown in fig . 3 . the reason why this burst was not covered by the gis-3 data is that it happened just before _ asca _ went into the saa ; the high voltage level of the gis had already been reduced as a precaution . this segment of data ends when the sis also stopped taking data , as _ asca _ approached the saa . we have therefore examined the housekeeping as well as the scientific data carefully to ascertain that this event is not an instrumental artefact . however , the radiation belt monitor counts indicate that the particle background was @xmath3010 cts@xmath5 , i.e. , at the quiescent ( non - saa ) level ( all data with monitor rates up to 500 cts@xmath5 have routinely been included in gis data analysis , with no obvious ill effects ) . the monitor rate exceeded 1000 cts@xmath5 @xmath2200 s after the end of the sis data ( in contrast , the radiation belt monitor rates exceed 10,000 in the heart of the saa ) . moreover , the image of the burst is identical , to within statistics , to the quiescent image ( i.e. , it has a distribution consistent with the xrt point spread function ) . thus , we believe that the burst originates from the same point - like source as the quiescent emission , i.e. , most likely x1832@xmath0330 in ngc 6652 . the longer duration at lower energies , shown in fig . 3(a ) , is what is expected in a type i x - ray burst , as the neutron star cools . to further investigate the spectral evolution , we have performed spectral fitting of the 4 time intervals indicated in fig . 3 . we have used the combined sis-0/sis-1 data , and the quiescent sis spectrum as the background . we present the results of the blackbody fits in fig . 4 . for interval 1 , we find that a significant n@xmath14 is required to fit the data adequately ; for the other intervals , the fitted n@xmath14 values are consistent with the interstellar n@xmath14 ( @xmath25@xmath31 @xmath16 ) , once the systematic offset of 1.6@xmath15 @xmath16 ( see sect . 2 ) has been taken into account . as is typical of type i bursts , the color temperature shows a significant decline during the decay of the burst . the inferred radius of the blackbody emitter ( we have used the distance of 9.3 kpc and included the normalization correction factor of 1.17 ) also shows behavior typical of type i bursts , although it may be on the small side . the inferred bolometric flux during interval 1 is 2.03@xmath32 ergs@xmath16s@xmath5 ; thus the bolometric luminosity is 2.1@xmath33 ergs s@xmath5 .
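the interval-1 numbers quoted above follow from the standard relations l = 4 pi d^2 f and f = sigma t^4 ( r / d )^2 ; the short sketch below reproduces the luminosity and shows how a blackbody radius would follow , with the colour temperature set to a hypothetical 2 kev because the fitted temperatures themselves are only shown in the figure .

    # Bolometric luminosity and blackbody radius implied by the interval-1 burst fit.
    # L = 4*pi*d^2*F and F = sigma_SB*T^4*(R/d)^2  =>  R = d*sqrt(F/(sigma_SB*T^4)).
    import math

    KPC_CM = 3.086e21      # cm per kpc
    SIGMA_SB = 5.670e-5    # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
    KEV_TO_K = 1.1605e7    # kelvin per keV

    d_cm = 9.3 * KPC_CM    # adopted distance to NGC 6652
    f_bol = 2.03e-9        # interval-1 bolometric flux, erg cm^-2 s^-1 (quoted above)
    kt_kev = 2.0           # hypothetical colour temperature, for illustration only

    l_bol = 4.0 * math.pi * d_cm ** 2 * f_bol
    t_k = kt_kev * KEV_TO_K
    r_bb = d_cm * math.sqrt(f_bol / (SIGMA_SB * t_k ** 4))

    print(f"L_bol ~ {l_bol:.1e} erg/s")    # ~2.1e37 erg/s, as quoted in the text
    print(f"R_bb  ~ {r_bb / 1e5:.1f} km")  # radius implied by the assumed kT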
this may underestimate the true peak flux / luminosity somewhat , due to the limited time resolution of our data ; judging by the light curve , the true peak values are unlikely to be greater by @xmath111.5 compared with the interval 1 averages . the burst fluence ( integrated over the 160 s interval for which we have data ) is estimated to be 1.45@xmath34 ergs@xmath16 ( equivalently , a total burst energy of 1.45@xmath35 ergs ) ; the fact that we did not see the return to quiescence may have resulted in underestimating this by @xmath210% . thus the average duration @xmath36 of the burst was @xmath271 s. the quiescent x - ray spectrum of x1832@xmath0330 appears to have a complex shape . a pointed x - ray observation of x1832@xmath0330 with wide spectral coverage appears worthwhile : if the complex shape is indeed due to a partial covering absorber , we then need to understand where it could be located , particularly if x1832@xmath0330 is a low inclination system . we have observed a type i x - ray burst ; although this is not the first from this system to be reported @xcite , ours is the first time - resolved spectral analysis of a burst from x1832@xmath0330 . the spectral cooling we observe is typical of type i bursts , and can be considered the definitive evidence that x1832@xmath0330 is a neutron star binary . the burst appeared to have peaked at around @xmath210% of the eddington luminosity , but with a typical total fluence . we have approximately 160 s of data after the onset of the burst , and x1832@xmath0330 clearly had not completed its return to the quiescent state by the end of this data segment ; the duration @xmath36 was about 70 s. while this duration is relatively long among all x - ray bursts , it is actually typical of systems with low persistent luminosities ( @xmath37 , the ratio of persistent flux to eddington luminosity , is about 1% for x1832@xmath0330 ) : @xmath36 ranges from 30 s to a few minutes at @xmath38 @xcite . we conclude that the _ asca _ burst was a typical type i event . @xcite have recently suggested a relatively faint star , star 49 , as a possible optical counterpart . this faintness may be intrinsic , or geometric : since most of the optical light from an lmxb is due to reprocessing in the @xmath2flat accretion disk , a high binary inclination can lead to an apparently faint optical counterpart . we can comment on this possibility , as the gis-3 light curve probably provides the most suitable x - ray data for an orbital period search ever obtained for x1832@xmath0330 . since we do not detect orbital modulations , such as eclipses , dips , or quasi - sinusoidal modulations , we conclude that x1832@xmath0330 is unlikely to be a high inclination system . x1832@xmath0330 was seen at x - ray luminosity levels ( @xmath210@xmath39 ergs s@xmath5 ) typical of x - ray bursters during the _ rosat _ and _ asca _ observations . this lends additional support against x1832@xmath0330 being a high - inclination , accretion disc corona source . moreover , we ( as well as @xcite ) observed what appears to be a typical type i x - ray burst , suggesting that we do directly observe the neutron star in x1832@xmath0330 . these provide additional arguments against x1832@xmath0330 being a high inclination system . if this lmxb is at a low inclination , then a natural explanation for the optical faintness would be that it is ultra - compact , perhaps similar to x1820@xmath0303 in ngc 6624 @xcite .
since optical luminosity is dominated by reprocessing , smaller systems tend to be optically fainter . we consider this to be a circumstantial evidence for x1832@xmath0330being an ultracompact binary , joining those in ngc 6624 , ngc 6712 , and perhaps ngc 1851 @xcite . anderson , s.f . , margon , b. , deutsch , e.w . , downes , r.a . & allen , r.g . 1997 , , 482 , l69 . chaboyer , b. , demarque , p. & sarajedini , a. 1996 , , 459 , 558 . deutsch , e.w . , anderson , s.a . , margon , b. & downes , r.a . 1996 , , 472 , l97 . deutsch , e.w . , margon , b. & anderson , s.a . 1998 , , 116 , 1301 . dotani , t. , yamashita , a. , rasmussen , a. & the sis team 1995 , _ ascanews _ 3 , 25 . hertz , p. & wood , k.s . 1985 , , 290 , 171 . homer , l. , charles , p.a . , naylor , t. , van paradijs , j. , aurier , m. & koch - miramond , l. 1996 , , 282 , 37 . int zand , j.j.m . , verbunt , f. , heise , j. , muller , j.m . , bazzano , a. , cocchi , m. , natalucci , l. & ubertini , p. 1998 , , 329 , l37 . johnston , h.m . , verbunt , f. & hasinger , g. 1996 , , 309 , 116 . ortolani , s. , bica , e. & barbuy , b. 1994 , , 286 , 444 . predehl , p. , hasinger , g. & verbunt , f. 1991 , , 246 , l21 . stella , l. , white , n.e . & priedhorsky , w. 1987 , , 312 , l17 . tanaka , y. , inoue , h. & holt , s.s . 1994 , , 46 , l37 . van paradijs , j. , pennix , w. & lewin , w.h.g . 1988 , , 233 , 437 . van paradijs , j. 1995 , in _ x - ray binaries _ , ed . w.h.g . lewin , j.van paradijs & e.p.j . van den heuvel ( cambridge : cambridge univ . press ) , 536 .
the low mass x - ray binary ( lmxb ) x1832@xmath0330 in ngc 6652 is one of 12 bright , or transient , x - ray sources to have been discovered in globular clusters . we report on a serendipitous _ asca _ observation of this globular cluster lmxb , during which a type i burst was detected and the persistent , non - burst emission of the source was at its brightest level recorded to date . no orbital modulation was detected , which argues against a high inclination for the x1832@xmath0330 system . the spectrum of the persistent emission can be fit with a power law plus a partial covering absorber , although other models are not ruled out . our time - resolved spectral analysis through the burst shows , for the first time , clear evidence for spectral cooling from [email protected] kev to [email protected] kev during the decay . the measured peak flux during the burst is @xmath210% of the eddington luminosity for a 1.4m@xmath3 neutron star . these are characteristic of a type i burst , in the context of the relatively low quiescent luminosity of x1832@xmath0330 .
SECTION 1. SHORT TITLE. This Act may be cited as the ``National Hate Crimes Hotline Act of 2009''. SEC. 2. FINDINGS. Congress makes the following findings: (1) On December 7, 2008, Jose Sucuzhanay, an Ecuadorian- born real estate agent and father of two, was beaten to death in Brooklyn while walking with his brother, who was visiting from Ecuador. Three men with baseball bats attacked the brothers while shouting anti-gay and anti-Hispanic slurs. (2) Marcelo Lucero, 37 years of age, came to the United States from Ecuador in 1993. He settled in Patchogue, New York, a middle-class village in central Long Island. He worked in a dry cleaning store and sent his savings home to his mother, a cancer survivor, whom he had not seen since he left 16 years ago. On the night of November 8, 2008, shortly before midnight, seven teenagers got out of their car and taunted Lucero with racist slurs as he walked home. They then beat and murdered Marcelo Lucero. According to the indictment, the boys set out that night to find someone of Hispanic heritage to assault. (3) The number of hate groups in the United States has increased by 54 percent over the past 8 years. (4) In 2008, the Federal Bureau of Investigation reported a 6 percent rise in the number of hate crimes against gay, lesbian, and transgender people. (5) According to the Federal Bureau of Investigation, attacks on Hispanics grew 40 percent from 2003 to 2007, even though the Hispanic population only grew 16 percent in the same time period and the total number of hate crimes has remained steady. SEC. 3. NATIONAL HATE CRIME HOTLINE AND HATE CRIME INFORMATION AND ASSISTANCE WEBSITE. (a) In General.--The Attorney General may award one or more grants to private, nonprofit entities-- (1) to provide for the establishment and operation of a national, toll-free telephone hotline to provide information and assistance to victims of hate crimes (hereafter in this section referred to as the ``national hate crime hotline''; and (2) to provide for the establishment and operation of a highly secure Internet website to provide that information and assistance to such victims (hereafter in this section referred to as the ``hate crime information and assistance website''). (b) Duration.--A grant under this section may extend over a period of not more than 5 years. (c) Annual Approval.--The provision of payments under a grant awarded under this section shall be subject to annual approval by the Attorney General and subject to the availability of appropriations for each fiscal year to make the payments. (d) Hotline Activities.--An entity that receives a grant under this section for activities described, in whole or in part, in subsection (a)(1) shall use funds made available through the grant to establish and operate a national hate crime hotline. In establishing and operating the hotline, the entity shall-- (1) contract with a carrier for the use of a toll-free telephone line; (2) employ, train, (including technology training), and supervise personnel to answer incoming calls and provide counseling and referral services to callers on a 24-hour-a-day basis; (3) assemble and maintain a current database of information relating to services for victims of hate crimes to which callers throughout the United States may be referred; (4) publicize the national hate crime hotline to potential users throughout the United States; and (5) be prohibited from asking hotline callers about their citizenship status. 
(e) Secure Website Activities.-- (1) In general.--An entity that receives a grant under this section for activities described, in whole or in part, in subsection (a)(2) shall use funds made available through the grant to provide grants for startup and operational costs associated with establishing and operating a hate crime information and assistance website. (2) Availability.--The hate crime information and assistance website shall be available to the entity operating the national hate crime hotline. (3) Information.--The hate crime information and assistance website shall provide accurate information that describes the services available to victims of hate crimes, including health care and mental health services, social services, transportation, and other relevant services. (4) Rule of construction.--Nothing in this section shall be construed to require any shelter or service provider, whether public or private, to be linked to the hate crime information and assistance website or to provide information to the recipient of the grant described in paragraph (1) or to the website. (f) Application.--The Attorney General may not award a grant under this section unless the Attorney General approves an application for such grant. To be approved by the Attorney General under this subsection an application shall-- (1) contain such agreements, assurances, and information, be in such form, and be submitted in such manner, as the Attorney General shall prescribe through notice in the Federal Register; (2) in the case of an application for a grant to carry out activities described in subsection (a)(1), include a complete description of the applicant's plan for the operation of a national hate crime hotline, including descriptions of-- (A) the training program for hotline personnel, including technology training to ensure that all persons affiliated with the hotline are able to effectively operate any technological systems used by the hotline; (B) the hiring criteria for hotline personnel; (C) the methods for the creation, maintenance, and updating of a resource database; (D) a plan for publicizing the availability of the hotline; (E) a plan for providing service to non-English speaking callers, including service through hotline personnel who speak Spanish; and (F) a plan for facilitating access to the hotline by persons with hearing impairments; (3) in the case of an application for a grant to carry out activities described in subsection (a)(2)-- (A) include a complete description of the applicant's plan for the development, operation, maintenance, and updating of information and resources of the hate crime information and assistance website; (B) include a certification that the applicant will implement a high level security system to ensure the confidentiality of the website, taking into consideration the safety of hate crime victims; and (C) include an assurance that, after the third year of the website project, the recipient of the grant will develop a plan to secure other public or private funding resources to ensure the continued operation and maintenance of the website; (4) demonstrate that the applicant has recognized expertise in the area of hate crimes and a record of high quality service to victims of hate crimes, including a demonstration of support from advocacy groups; (5) demonstrate that the applicant has a commitment to diversity, and to the provision of services to ethnic, racial, religious, and non-English speaking minorities, in addition to older individuals, individuals with 
disabilities, and individuals of various gender, gender identity, and sexual orientation; and (6) contain such other information as the Attorney General may require. (g) Hate Crime Defined.--For purposes of this Act, the term ``hate crime'' means a crime in which the defendant intentionally selects a victim, or in the case of a property crime, the property that is the object of the crime, because of the actual or perceived race, color, religion, national origin, ethnicity, gender, gender identity, disability, or sexual orientation of any person. (h) Authorization of Appropriations.-- (1) In general.--There is authorized to be appropriated to carry out this section $3,500,000 for each of fiscal years 2010 through 2014. (2) Website.--Of the amounts appropriated pursuant to paragraph (1) for a year, not less than 10 percent shall be used for purposes of carrying out subsection (a)(2). (3) Availability.--Funds authorized to be appropriated under paragraph (1) may remain available until expended. SEC. 4. LOCAL LAW ENFORCEMENT EDUCATION AND TRAINING GRANT PROGRAM. (a) In General.--The Attorney General may award grants to eligible State and local law enforcement entities for educational and training programs on solving hate crimes (as defined in section 1(g)) and establishing community dialogues with groups whose members are at-risk of being victims of such hate crimes. (b) Eligibility.--To be eligible to receive a grant under subsection (a), a State or local law enforcement entity must be in compliance with reporting requirements applicable to such entity pursuant to the Hate Crimes Statistics Act (28 U.S.C. 534 note). (c) Authorization of Appropriations.--There is authorized to be appropriated to carry out this section such sums as are necessary for fiscal year 2010 and each succeeding fiscal year. SEC. 5. LOCAL RESOURCES TO COMBAT HATE CRIMES GRANT PROGRAM. (a) In General.--The Attorney General shall establish a grant program within the Office for Victims of Crime in the Office of Justice Programs, under which the Attorney General may award grants to local community based organizations, nonprofit organizations, and faith-based organizations to establish or expand local programs and activities that serve targeted areas and that provide legal, health (including physical and mental health), and other support services to victims of hate crimes (as defined in section (1)(g)). Grant funds may be used for activities including hiring counselors and providing training, resources, language support services, and information to such victims. (b) Targeted Area Defined.--For purposes of this section, the term ``targeted area'' means an area with a demonstrated lack of resources, as determined by the Attorney General, for victims of hate crimes. (c) Funding Restriction.--None of the funds from a grant made under this section may be used-- (1) by an organization that discriminates against an individual on the basis of religion; or (2) for purposes of promoting religious beliefs or views. (d) Authorization of Appropriations.--There is authorized to be appropriated to carry out this section such sums as are necessary for fiscal year 2010 and each succeeding fiscal year.
National Hate Crimes Hotline Act of 2009 - Authorizes the Attorney General to award grants to: (1) private, nonprofit entities to establish and operate a national, toll-free telephone hotline and an Internet website to assist victims of hate crimes; and (2) state and local law enforcement entities for educational and training programs on solving hate crimes and establishing dialogues with members of communities who are at-risk of being victims of hate crimes. Directs the Attorney General to establish a program for awarding grants to local organizations to establish or expand programs that provide services to victims of hate crimes.
pelvic organ prolapse ( pop ) is extremely common , affecting up to 50% of parous women . a reoperation rate within 10 years of the primary prolapse surgery has been reported to be as high as 17% . the unacceptably high surgical failure rate has led surgeons to reinforce native tissue repairs with biological grafts or synthetic mesh . level 1 evidence has shown that the use of synthetic mesh increases the anatomical cure rate in anterior vaginal wall repair but not in posterior vaginal wall repair . yet mesh - related complications , including dyspareunia , mesh exposure , and mesh erosion , are being reported with increasing frequency and negatively impact patient quality - of - life . these findings suggest that urogynecologists should strike a balance between anatomical cure and patient quality - of - life when using synthetic mesh to repair the pelvic floor . improvement in patient quality - of - life involves functional recovery , such as improvement of urinary , bowel , and sexual function . however , few studies have looked at the effect of posterior vaginal wall repair with mesh on bowel function . the aims of the study were to use anorectal manometry to compare the bowel function outcome in pop patients who underwent prolapse repair with or without trans - vaginal synthetic mesh in the posterior vaginal compartment , and specifically to determine whether the use of mesh was better for the retention and improvement of anorectal function . between december 2011 and may 2012 , 22 women were referred to our outpatient clinic at the peking union medical college hospital for surgical correction of severe symptomatic pop ( overall stage iii or iv ) using trans - vaginal mesh ; all 22 were enrolled in the study . exclusion criteria were gynecological pathology in addition to prolapse and previous prolapse surgery or hysterectomy . in addition , patients would be excluded from the study if they had medical conditions , such as diabetes mellitus , hypothyroidism , or irritable bowel syndrome , that could affect anorectal physiology or cause bowel symptoms . the study was approved by the medical ethics board at the peking union medical college hospital ( s-453 ) . preoperative evaluations included medical history , the pop - quantification ( pop - q ) score to determine prolapse severity , a chinese - validated quality - of - life questionnaire ( the pelvic floor impact questionnaire short form-7 [ pfiq-7 ] ) , and anorectal manometry . the two surgeons performing all the procedures were blinded to the allocation , and another urogynecologist , who neither participated in the surgeries nor knew the mesh status , obtained the anorectal measurements , pfiq-7 scores , and pop - q scores pre- and post - operatively . the data were finally collected and analyzed by the investigator who knew the group information . anorectal manometry was performed using the solar gi pressure measurement system ( mms , enschede , the netherlands ) . the rectum was emptied before anorectal manometry , and the patient was put in the left lateral decubitus position . during the test , a catheter with four water - perfusion channels was inserted into the anus and placed in the zone of the anal canal with the highest pressure . the maximal anal resting pressure ( marp , a function of the internal anal sphincter ) and the maximal anal squeeze pressure ( masp , a function of the external anal sphincter ) were measured by asking the subjects to rest for 2 min and then to voluntarily contract the anal sphincter for as long as possible .
the subject was then instructed to attempt voluntary bowel movements ( straining test ) so that we could measure the intra - rectal pressure and the anal canal residual pressure during defecation . while straining to defecate , the maximal intra - rectal pressure ( a ) must clearly exceed the anal residual pressure ( b ) for a bowel movement to occur , so the pressure difference ( p ) between a and b reflects defecation physiology . the straining test result was considered a dyssynergic defecation pattern if there was an inappropriate increase in anal pressure or if the relaxation was < 20% of the basal resting pressure . the operative methods included posterior compartment repair with or without synthetic mesh along with anterior and apical compartment reconstruction if necessary . the choice of operation method was made based on each patient 's pop - q stage , age , and sexual function , and the patient 's preference was considered as well . based on the use of mesh , the patients were divided into two groups , that is , a mesh group and a nonmesh group . in the mesh group , we performed total pelvic floor reconstruction with commercial mesh kits such as total prolift ( ethicon , somerville , nj , usa ) or prosima ( ethicon , somerville , nj , usa ) . for the nonmesh group , we performed traditional colporrhaphy for the posterior pelvic floor , but reinforced the anterior and apical pelvic floor using the modified pelvic floor reconstructive surgical method or sacrospinous ligament fixation ( sslf ) . modified pelvic floor reconstructive surgery is a method for repairing the anterior and apical vaginal wall using two pieces of mesh that are cut from one 15 cm x 10 cm piece of gynemesh ( ethicon , somerville , nj , usa ) . the same subjective and objective assessments of surgical outcome were repeated 3 months after surgery . anorectal manometry values were also reexamined , including the anal canal resting and maximal squeeze pressures and the rectal and anal pressure changes during defecation . pop - q stage ii or greater prolapse in any compartment postoperatively was defined as failure of the procedure . a two - tailed t - test was used for comparison of continuous data on posterior repair between the mesh and nonmesh groups . the two - tailed , paired t - test was used to calculate probability values for changes from baseline to the 3-month postoperative follow - up . a value of p < 0.05 was considered statistically significant . all statistical analyses were performed using statistical software ( spss version 17.0 ; spss inc . , chicago , il , usa ) .
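as a concrete illustration of the manometric criteria and the statistics described above , the sketch below classifies a straining test and runs the paired pre / post comparison with scipy ; all pressure values are invented for illustration and are not the study 's data .

    # Illustration of the straining-test evaluation and the paired pre/post comparison.
    # The pressure values below are invented; they are not the study's data.
    from scipy import stats

    def straining_test(resting_p, residual_p, rectal_p):
        """Return the recto-anal pressure difference and a dyssynergia flag.

        Dyssynergia is flagged if anal pressure rises during straining or if
        relaxation is less than 20% of the basal resting pressure.
        """
        pressure_diff = rectal_p - residual_p
        relaxation = (resting_p - residual_p) / resting_p
        dyssynergic = (residual_p > resting_p) or (relaxation < 0.20)
        return pressure_diff, dyssynergic

    diff, flag = straining_test(resting_p=40.0, residual_p=45.0, rectal_p=38.0)
    print(f"pressure difference = {diff:.1f} mmHg, dyssynergic = {flag}")

    # paired two-tailed t-test, baseline vs 3-month follow-up (invented values, mmHg):
    baseline = [5.0, -2.0, 10.0, 3.0, -8.0, 12.0, 1.0]
    followup = [20.0, 15.0, 28.0, 9.0, 11.0, 30.0, 14.0]
    t_stat, p_value = stats.ttest_rel(baseline, followup)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")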
in this study , 22 patients underwent surgery for pop . of these , 17 patients were available for a 3-month follow - up examination . of the 17 patients , 5 underwent pelvic floor reconstruction with prosima , 2 with total prolift , 8 had modified pelvic floor reconstructive surgery , and 2 underwent sslf . concomitantly , we performed vaginal hysterectomies in all 17 patients [ figure 1 ] . based on whether mesh was used to reinforce the posterior compartment , the subjects were divided into two groups , the mesh group ( n = 7 ) and the nonmesh group ( n = 10 ) . the baseline patient characteristics are shown in table 1 ; none of the characteristics were significantly different between the groups ( p > 0.05 for all comparisons ) . a total of 3 patients ( 42.9% ) in the mesh group and 5 patients ( 50% ) in the nonmesh group had constipation according to the rome iii criteria . [ table 1 : patient characteristics . sd : standard deviation ; bmi : body mass index ; pop - q : pelvic organ prolapse - quantification . ] there were no significant differences between the mesh group and the nonmesh group in terms of the pop - q measurements and pfiq scores either at baseline or at the 3-month follow - up . at follow - up , the pop - q measurements aa , ba , ap , bp , c , and d had improved significantly in both groups compared to baseline ( p < 0.05 ) [ table 2 ] . the pfiq-7 scores were lower at the 3-month follow - up than at baseline in both groups , but the difference was only significant for the nonmesh group ( p < 0.05 ) . [ table 2 : objective and subjective measures before and 3 months after surgery . sd : standard deviation ; pfiq-7 : pelvic floor impact questionnaire short form-7 ; preop : preoperative ; postop : postoperative ( 3 months after surgery ) . values are reported as mean ± sd . ] the anorectal manometry results are shown in table 3 . for the mesh group , the preoperative maximum anal resting pressure ( marp ) and masp values were 38.27 ± 19.56 mmhg and 85.29 ± 37.88 mmhg , respectively . for the nonmesh group , the preoperative marp and masp values were 39.61 ± 11.36 mmhg and 93.78 ± 20.67 mmhg , respectively . postoperatively , the marp and masp values increased slightly from the baseline levels in both groups , but the differences were not significant ( p > 0.05 ) .
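the between - group baseline comparability reported above can be checked directly from the published summary statistics ; the sketch below does this for the resting pressure with scipy 's summary - statistics t - test ( whether the original analysis pooled the variances is an assumption here , not stated in the paper ) .

    # Two-sample t-test on baseline MARP, reconstructed from the reported mean +/- SD and group sizes.
    from scipy import stats

    # mesh group: 38.27 +/- 19.56 mmHg (n = 7); nonmesh group: 39.61 +/- 11.36 mmHg (n = 10)
    t_stat, p_value = stats.ttest_ind_from_stats(
        mean1=38.27, std1=19.56, nobs1=7,
        mean2=39.61, std2=11.36, nobs2=10,
        equal_var=True,  # assumption: the paper does not say whether variances were pooled
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # clearly non-significant, consistent with p > 0.05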
[ table 3 : anorectal manometry outcomes before and 3 months after surgery . marp : maximum anal resting pressure ; masp : maximum anal squeeze pressure ; p : anorectal pressure difference during defecation . * p < 0.05 compared with preoperative marp for the nonmesh group . ] at baseline , the anal residual pressure during defecation was significantly higher than the maximal resting level ( p < 0.05 ) for the nonmesh group . postoperatively , the anal residual pressure had decreased significantly compared with the preoperative level ( p < 0.05 ) and was no longer significantly different from the resting level ( p > 0.05 ) . there was a statistically significant increase in rectal pressure in the mesh group 3 months after surgery ( p < 0.05 ) . the postoperative p increased from 8.12 ± 22.19 mmhg to 25.71 ± 27.20 mmhg and from 0.87 ± 25.47 mmhg to 17.66 ± 16.19 mmhg in the mesh and nonmesh groups , respectively ; only the difference in the nonmesh group was significant ( p < 0.05 ) . before the surgery , 2 of the 7 patients ( 28.6% ) in the mesh group had dyssynergic defecation , and this had not improved at all 3 months after the surgery ; however , in the nonmesh group , the percentage of patients with defecation dyssynergia decreased sharply from 80.0% ( 8/10 ) to 20.0% ( 2/10 ) . in china , posterior colporrhaphy is considered the standard procedure for correcting prolapse of the posterior compartment . however , this procedure carries a high risk of failure , especially for advanced pop . to date , a number of synthetic meshes have been used in posterior pelvic compartment repair to reinforce the intrinsic tissue defect . notably , repairs that use synthetic mesh have an 81 - 97% anatomical success rate at > 1-year follow - up and have lower recurrence rates compared with traditional vaginal colporrhaphy . although the evidence shows better anatomical cure of prolapse with the use of synthetic mesh , the short- and long - term functional effects after repair must also be considered in clinical practice , as mesh - related complications can develop . huang et al . reported that trans - vaginal pelvic reconstructive surgery using the prolift kit improves urogenital distress inventory-6 scores significantly after a median of 24.5 months of postoperative follow - up . another study demonstrated that vaginal repair using the prosima system offers significant improvements in pelvic symptoms , quality - of - life , and sexual function 1 year after surgery . however , a cochrane review that was updated in 2010 and that included 40 randomized or quasi - randomized controlled trials concluded that there was no improvement in functional and patient - centered outcomes , as measured by validated pelvic floor questionnaires , for anterior compartment repair using polypropylene mesh . research related to improved function or improved quality - of - life outcomes after posterior vaginal wall repair is sparse , and very few manometry studies have evaluated anorectal function after pop correction surgery . to our knowledge , this is the first study to compare the bowel functional outcome of posterior vaginal wall repair with mesh versus without mesh using both subjective and objective measures . we found that the nonmesh group had a significantly lower anal residual pressure and a significantly increased rectum - anus p postoperatively .
these changes suggest that when self - tissue rather than synthetic mesh is used in posterior vaginal wall repair , the anal sphincter can relax more during evacuation , helping the rectum squeeze its contents out of the anus and improving defecation coordination toward normal physiology . in the mesh group , however , relaxation of the anus remained impaired after the operation , as there was no significant alteration in anal pressure during defecation . these patients had to increase rectal pressure to compensate in order to maintain the anorectal p and therefore achieve a bowel movement . this is why the percentage of patients with defecation dyssynergia decreased sharply from 80.0% to 20.0% in the nonmesh group , while no such decrease was seen in the mesh group . these findings indicate that posterior vaginal wall repair without mesh could improve the anorectal motor function of pop patients better than repair using mesh . there are two possible explanations for the differences we found between mesh and nonmesh repairs . first , a bridge repair that reconstructs the posterior vaginal wall using self - tissue has less of an effect on anorectal physiology compared to repair with synthetic mesh , which involves the placement of foreign material into the retro - vaginal space . in fact , some patients in the mesh group complained that they felt incapable of straining during evacuation after undergoing posterior pelvic repair with mesh . second , the vaginal bridge repair benefited from apical support with mesh even when there was no synthetic mesh to reinforce the posterior pelvic floor . these outcomes illustrate petros ' integral theory , according to which there is a complex interplay - that is , interdependence as well as divergence - among the different pelvic compartments and delancey levels . the support that was given to the most important level , the vaginal apex , was strong and sufficient ; therefore , this support indirectly helped reinforce the other levels . however , our measurements did not show corresponding changes in the maximal anal resting and squeeze pressures following anatomical repair , either with or without mesh . previous work has also reported that there is no relationship between preoperative manometry pressures and postoperative surgical outcome . taken together , these data suggest that factors other than structural changes are involved in the development of pelvic floor dysfunction , and may further suggest that the anorectal p might be a better index than the resting and squeeze pressures for predicting the effects of posterior pelvic floor repair . repair with and without mesh both successfully cured pop , and no surgical failure was found in either group at the 3-month follow - up . furthermore , the pfiq-7 scores for both groups improved postoperatively , but only the nonmesh group showed a significant change ( p < 0.05 ) . these results indicate that as long as there is adequate apical / middle compartment support , posterior repair without mesh can achieve the same anatomic recovery as repair with mesh , along with better improvement in pelvic floor symptoms . moreover , repairs without the use of mesh avoid mesh - related complications and provide greater improvement in anorectal function . this is rarely mentioned in other reports , which simply note that these symptoms improve satisfactorily with the use of the prolift and prosima systems .
one possible explanation for the difference we saw in anorectal function is that subjective symptom improvement does not necessarily mirror changes in objective parameters ; in addition , mesh exposure or erosion can impair bowel function recovery . one strength of this study was the use of an objective measure to assess the recovery of bowel function after pelvic floor reconstructive surgery , thus avoiding bias from subjective measures . the major drawbacks are the relatively small sample size and the relatively short follow - up time of 3 months ; a larger sample size and/or a randomized study with longer follow - up is needed . in addition , the two study groups were not the same size . despite the limitations , these objective anatomical and functional data will be useful for physicians to consider when counseling patients about the procedures used to treat posterior vaginal wall prolapse . in conclusion , as long as there is sufficient support for the anterior wall and apex of the vagina with synthetic mesh , posterior vaginal compartment repair without mesh may be as effective as repair with mesh for anatomical recovery while providing better postoperative anorectal motor function and avoiding mesh - related complications .
background : although mesh - augmented repair has been shown to be superior for anatomical and functional recovery after anterior compartment reconstruction , data about the posterior compartment are scarce . the aim of this study was to compare the bowel functional outcome of posterior vaginal compartment repair with and without mesh in patients with pelvic organ prolapse ( pop ) . methods : this was a prospective , double - blind , clinical pilot study of 22 postmenopausal women with symptomatic pop ( overall pop - quantification [ pop - q ] stage iii - iv ) who underwent total pelvic floor reconstruction . patients were grouped according to the use of mesh for posterior vaginal compartment repair : a mesh group and a nonmesh group . pop - q stage , the pelvic floor impact questionnaire short form-7 ( pfiq-7 ) , and anorectal manometry were evaluated before and 3 months after surgery . anatomical success was defined as pop - q stage ii or less . a t - test was used to compare preoperative with postoperative data in the two groups . results : in total , 17 patients ( 71% ) were available for the follow - up . pop - q measurements improved significantly compared to baseline ( p < 0.05 ) in both groups . no recurrence was observed . subjects in both groups reported improvement in pelvic floor symptoms , and there was no significant difference in the pfiq-7 score between groups at follow - up ( p > 0.05 ) . compared with baseline , the nonmesh group exhibited a statistically significant decrease in anal residual pressure , a significant increase in the anorectal pressure difference during bowel movement , and a reduced rate of a dyssynergic defecation pattern ( p < 0.05 ) . conclusions : provided there is sufficient support for the anterior wall and apex of the vagina with mesh , posterior compartment repair without mesh may be as effective as repair with mesh for anatomical recovery while providing better anorectal motor function .
in @xcite a three - loop computation of the ghost propagator was presented . as announced , we now report on a similar computation of the gluon propagator . the full propagators of gluons and ghosts calculated on the lattice encode information about the non - perturbative vacuum properties of qcd and of pure yang - mills theory @xcite . this requires a non - perturbative extension of the landau gauge . although brst invariance is essential for other non - perturbative approaches to non - abelian gauge theory , there are difficulties in principle in reconciling it with present - day lattice gauge - fixing technology ( as applied for calculating gauge - variant objects like gluon and ghost propagators ) in a way that avoids the neuberger problem ( see @xcite and references therein ) . so far , there is no generally accepted way to deal with the gribov ambiguity in lattice simulations . other non - perturbative methods for calculations in the landau gauge , like schwinger - dyson ( sd ) equations @xcite or the functional renormalization group ( frg ) @xcite , are not explicitly taking into account ( and seem not to be affected by ) the complication , compared to the standard faddeev - popov procedure , that the functional integration should be restricted to the gribov horizon . zwanziger has argued @xcite that the form of the sd equations will not be affected ; rather , supplementary conditions restricting the solutions should account for it . the so - called scaling solution @xcite has been an early result of the sd approach obtained for power - like solutions . the power behavior has been further specified in @xcite and confirmed by ( time - independent ) stochastic quantization @xcite . concerning lattice simulations , zwanziger proposed @xcite to add nonlocal terms to the action that should constrain the simulation to stay within the gribov horizon . in the context of the schwinger - dyson approach this possibility has first been discussed analytically in ref . @xcite . standard monte carlo simulations without these refinements have failed to reproduce the theoretically preferred far - infrared asymptotics of the scaling solution and have supported instead the so - called decoupling solution ( see ref . @xcite and references therein ) . this solution has later been shown to be possible in the sd and frg approaches with suitable boundary conditions @xcite , but at the expense of a conflict with global brst invariance . although we can assume that nspt remains in the vicinity of the trivial vacuum , we have to understand the present situation with respect to monte carlo results for gluon and ghost propagators . fortunately , the momentum range we are interested in , and where we are going to compare with monte carlo lattice results , is not influenced by the gribov ambiguity and the way of gauge fixing @xcite . from various studies it is known , however , that the intermediate momentum range of @xmath3 is no less important from the point of view of confinement physics , as seen for the gluon propagator @xcite and the quark propagator @xcite in landau gauge on the lattice . from the frg approach it is known that for the onset of confinement at finite temperature the mid - momentum region of the propagators is important @xcite . this is the region where violation of positivity @xcite invalidates a conventional , particle - like interpretation of the gluon propagator .
specific non - perturbative configurations ( center vortices ) have been found to be essential @xcite to understand the behavior of the gluon and the quark propagator in the intermediate momentum range . in recent years , also the large - momentum behavior of the lattice gluon and ghost propagators has attracted growing interest , in particular with respect to the coupling constant of the ghost - gluon vertex @xcite , which has the potential to provide an independent precision measurement of @xmath4 from these propagators @xcite . first estimates of the zero- and two - flavor values of @xmath5 @xcite and a possible dimension - two condensate @xcite are available already and look promising . therefore , a detailed knowledge of the propagators lattice perturbative part would much foster these efforts . when our work begun in 2007 @xcite , the intention was to clarify the perturbative background , among other facts the type of convergence of the summed - up few - loop perturbative contributions to the propagators in various momentum ranges . in standard lattice perturbation theory ( lpt ) such calculations are very difficult beyond two - loop order . to overcome this obstacle , we have used the method of numerical stochastic perturbation theory ( nspt ) @xcite , which provides a stochastic , automatized framework for gauge - invariant and gauge - non - invariant calculations . stochastic gauge fixing is built in , and a high - precision procedure has been devised to fix the gauge to landau , at any given order . thus , the propagators ( for each lattice momentum ) can be calculated in subsequent orders of perturbation theory . a limit is set only by storage limitations and machine precision . there are no truncation errors . for direct comparison with monte carlo results , we present the low - loop results summed up with inverse powers of the bare inverse coupling @xmath6 . in this paper we also try , for the first time applied to the gluon propagator , to improve the convergence by applying boosted perturbation theory . the effectiveness of nspt relies on the fact that the parametrization of the ( leading and non - leading ) logarithmic terms can follow largely in accordance with standard perturbation theory . the essential difficulty left to nspt is the computation of the constant contributions , which are in general very difficult to achieve in diagrammatic lpt . only the one - loop constant term was known since long @xcite for the ghost and gluon propagator . reproducing these results , which are obtained in the continuum and infinite - volume limits , was the first feasibility test for nspt @xcite . in general , at any order , nspt results are obtained at finite lattice spacing and finite volume . a fitting procedure is needed to get the continuum ( @xmath1 ) and infinite - volume ( @xmath2 ) limits . while the extraction of the first ( continuum ) limit relies on hypercubic - invariant taylor series @xcite , a careful extraction of the second ( infinite - volume ) limit requires the accounting of @xmath7 contributions ( @xmath8 being the momentum scale relevant to the computation and @xmath9 the finite extent of the lattice ) . in the first paper ( abbreviated as i ) , we gave a quite comprehensive description of all this technology while applying it to the ghost propagator . in the present , second paper we are going to apply the method to an analysis of the gluon propagator . the paper is organized as follows . 
sect . [ sec : gluonprop - formulation ] recalls the lattice definition of the gluon propagator , together with specific features that the calculation in the framework of nspt contains . sect . [ sec : standard - lpt ] contains the nomenclature of standard lattice perturbation theory where our results have to fit in . in sect . [ sec : implementation ] we only briefly describe the implementation of nspt . the interested reader will find more information of this kind in part i of this series of papers . however , we document the statistics for different lattice volumes and different orders of perturbation theory that have been collected by the leipzig and parma parts of the collaboration . here we also present the raw data before and after the extrapolation to the langevin time - step @xmath10 limit . in sect . [ sec : comparison ] we compare the results of nspt ( up to four loops ) with monte carlo results and try to improve the convergence by boosted perturbation theory . sect . [ sec : fitting ] presents the fitting procedure and the final results for the leading and non - leading loop corrections . in sect . [ sec : summary ] we draw our conclusions and summarize our results . the lattice gluon propagator @xmath11 is the fourier transform of the gluon two - point function , _ i.e. _ the expectation value @xmath12 , which is required to be color - diagonal and symmetric in the lorentz indices @xmath13 . for the definition of the lattice momenta @xmath14 , @xmath15 and @xmath16 to be used later we refer to ( i-17)-(i-19 ) . assuming reality of the color components of the vector potential and rotational invariance of the two - point function , the continuum gluon propagator has the following general tensor structure ( the momentum denotes here directly the continuum euclidean four - momentum ) : @xmath17 , with @xmath18 and @xmath19 being the transverse and longitudinal propagator , respectively . the longitudinal propagator @xmath19 vanishes in the landau gauge . the lattice gluon propagator @xmath20 depends on the lattice four - momentum @xmath21 . due to the lower symmetry of the hypercubic group its general tensor structure can be expected to be more complicated than ( [ eq : decomposition ] ) , which holds in the continuum . inspired by the continuum form ( [ eq : decomposition ] ) we consider as one strategy only the extraction of the following lattice scalars @xmath22 that should survive the continuum limit . note , however , that additional lattice scalars could be measured as well . the first scalar vanishes exactly in lattice landau gauge . in this gauge the second scalar function , corresponding to the transverse part of the gluon propagator in the continuum limit , is denoted by @xmath23 . on the lattice , this function is influenced by the lower symmetry of the hypercubic group . in nspt the different loop orders @xmath24 ( even orders in @xmath25 ) at finite langevin step size @xmath26 are constructed directly from the fourier transformed perturbative gauge fields @xmath27 with ( see ( i-6 ) and ( i-7 ) ) @xmath28 , which leads to @xmath29 ( [ eq : dn ] ) . note that already the tree - level contribution to the gluon propagator , @xmath30 , arises from quantum fluctuations of the gauge fields with @xmath31 . in addition , terms with non - integer @xmath32 in the previous equation ( [ eq : dn ] ) , which do not correspond to loop contributions , should vanish numerically after averaging over configurations .
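the construction of the loop - order contributions from the fourier modes of the perturbatively expanded gauge field , as described above , can be sketched schematically as follows . this is not the authors' production code ; the array layout , the normalization and the exact pairing of orders are assumptions made only to illustrate how the configuration average of products of perturbative modes yields the order - by - order propagator .

```python
# Schematic sketch (not the authors' code): assemble loop-order contributions D^(n)(p)
# to the gluon propagator from Fourier modes of the perturbatively expanded gauge
# field, A = sum_k g^k A^(k), as used in NSPT. Normalization, color treatment and
# the exact order counting are assumptions for illustration only.
import numpy as np

def propagator_orders(A_modes, max_loops):
    """
    A_modes: list of arrays A^(k)(p) with shape (n_conf, n_momenta, n_color),
             one entry per perturbative order k = 1, 2, ...
    Returns D[n] for n = 0 (tree level) .. max_loops, averaged over configurations.
    """
    D = {}
    for n in range(max_loops + 1):
        order = 2 * (n + 1)               # assumed pairing: D^(n) collects terms with i + j = 2(n+1)
        acc = 0.0
        for i in range(1, order):
            j = order - i
            if j < 1 or i > len(A_modes) or j > len(A_modes):
                continue
            # <A^(i)(p) A^(j)(-p)>: color-summed product, averaged over configurations
            acc = acc + np.mean(np.sum(A_modes[i - 1] * np.conj(A_modes[j - 1]), axis=-1).real, axis=0)
        D[n] = acc
    return D

# tiny synthetic usage: 3 perturbative orders, 10 configurations, 5 momenta, 8 color components
rng = np.random.default_rng(0)
A_modes = [rng.normal(size=(10, 5, 8)) + 1j * rng.normal(size=(10, 5, 8)) for _ in range(3)]
D = propagator_orders(A_modes, max_loops=1)
print(D[0].shape, D[1].shape)
```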
similar to the ghost propagator in paper i , we present the various orders of the gluon dressing function ( or `` form factor '' ) in two forms : @xmath33 . contrary to the ghost propagator , the gluon dressing function can be calculated at the same time for all possible four - momenta , given by the integer four - momentum tuples @xmath34 . this makes a calculation of the gluon propagator significantly cheaper . the tree - level result for the dressing function , @xmath35 , in the limit @xmath36 for all sets of tuples is non - trivial and is obtained as the result of averaging . as discussed in paper i we relate infrared singularities encountered in our finite volume nspt calculation to powers of logarithms of the external momentum obtained in the infinite volume limit . therefore , we need the anomalous dimension of the gluon field @xmath37 , the @xmath38-function and the relation between lattice bare and renormalized coupling . the procedure is outlined in detail in section 4 of i. here , we only repeat the essential equations and quote the final numbers . to avoid a possible mismatch of equations we add an index @xmath39 for the gluon propagator case . in the ri-mom scheme , the renormalized gluon dressing function @xmath40 is defined as @xmath41 with the standard condition @xmath42 . the gluon dressing function @xmath43 is the gluon wave function renormalization constant @xmath44 at @xmath45 . the expansions of @xmath46 , @xmath47 and @xmath40 in terms of the renormalized coupling @xmath48 are completely analogous to ( i-39)-(i-41 ) . the gluon wave function renormalization as an expansion in the lattice coupling @xmath49 is represented as @xmath50 . this is the expansion we can measure in nspt . again we restrict ourselves to three - loop expressions for the landau gauge in the quenched approximation . the coefficients in front of the logarithms are partly known from calculations of the gluon wave functions and the beta function in the continuum and are given as follows ( compare _ e.g. _ @xcite ) : @xmath51 $\,n_c^3 - \frac{1279}{108}\, z^{g,\rm ri'}_{1,0}\, n_c^2 + \frac{31}{3}\, z^{g,\rm ri'}_{2,0}\, n_c$ . the finite one - loop constant @xmath52 is known from the gluon self energy in standard infinite volume lpt @xcite with the value @xmath53 . again , the constants @xmath54 and @xmath55 were not known so far . the form of the coefficients @xmath56 of @xmath57 in ( [ zabare ] ) up to three loops can be read off directly from equations ( i-51)-(i-53 ) , replacing there all @xmath58 by @xmath56 . as a result , we present the gluon dressing function as a function of the inverse lattice coupling @xmath38 to that order : @xmath59 , with the one - loop coefficients @xmath60 , the two - loop coefficients @xmath61 and the three - loop coefficients @xmath62 . let us repeat it once more : the leading logarithmic coefficients for a given order can be exclusively taken from continuum perturbative calculations . the non - leading log coefficients are influenced , however , by the finite lattice constants from corresponding lower loop orders . to obtain infinite volume perturbative loop results at vanishing lattice spacing , we have to study again the limit @xmath36 and different lattice sizes @xmath63 . we have used @xmath64 and @xmath65 and studied the maximal loop order for the propagator @xmath66 and @xmath67 , respectively . the accumulated statistics for the different @xmath26 s and lattice sizes are collected in tables [ tab : statistics ] and [ tab : statistics2 ] .
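to illustrate how the truncated expansion of the dressing function in inverse powers of the bare inverse coupling can be summed up , and how a boosted coupling could be substituted , consider the hedged sketch below . the coefficient values are placeholders and not the constants of this paper ; moreover , only the coupling is rescaled in the boosted variant , whereas a complete boosted - scheme treatment would also transform the expansion coefficients .

```python
# Hedged sketch: sum the truncated dressing-function series in inverse powers of the
# bare inverse coupling beta, J(p) = 1 + sum_n j_n(L) / beta^n with L = log(a^2 p^2).
# The coefficient table is a placeholder, NOT the fitted constants of the paper.
# The boosted variant beta_b = beta * P (P = average plaquette) is one common
# convention, used here as an assumption; a full treatment would also convert the
# coefficients to the boosted scheme.
import math

# j_coeffs[n] = [c_{n,n}, ..., c_{n,1}, c_{n,0}] multiplying L^n ... L^0 (placeholders)
j_coeffs = {
    1: [0.1, 2.0],
    2: [0.01, 0.3, 3.0],
    3: [0.001, 0.05, 0.6, 4.0],
}

def dressing_function(a2p2, beta, plaquette=None, n_loops=3):
    """Sum J up to n_loops at lattice momentum a^2 p^2; optionally rescale the coupling."""
    beta_eff = beta * plaquette if plaquette is not None else beta
    L = math.log(a2p2)
    J = 1.0
    for n in range(1, n_loops + 1):
        poly = sum(c * L ** k for k, c in zip(range(n, -1, -1), j_coeffs[n]))
        J += poly / beta_eff ** n
    return J

print(dressing_function(a2p2=1.5, beta=6.0))                   # bare series
print(dressing_function(a2p2=1.5, beta=6.0, plaquette=0.59))   # boosted series (assumed convention)
```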
number of gluon propagator measurements up to four loops ( @xmath68 ) and up to one loop ( @xmath69 ) using the leipzig nspt code ( caption of table [ tab : statistics ] ; the table body is not reproduced here ) . the errors are estimated by equally weighting the mean deviations squared of both the individual fits and the sum of the `` best '' ten @xmath70 values . the results from the different criteria coincide within errors . we have obtained a very good agreement at the level of 0.5 percent with the expected exact one - loop result @xmath71 given in ( [ jg1loop ] ) . comparing the accuracy reached here with that of the ghost propagator , let us mention again that in the gluon propagator case already the tree level is calculated from quantum fluctuations of the gauge fields . so the one - loop accuracy in the gluon dressing function case should be fairly compared to the two - loop accuracy of the ghost dressing function . to estimate the influence of the missing larger volumes for the three - loop constant , let us compare the two - loop constants obtained in the lattice volume sets ( @xmath72 ) ( one- and two - loop ) and ( @xmath73 ) ( three - loop ) . the results are collected in table [ tab : fitcomp ] :

@xmath74    @xmath75 for volume set ( @xmath72 )    @xmath75 for volume set ( @xmath73 )
1           7.939(123)                              7.919(79)
2           7.927(117)                              7.876(106)
3           7.897(112)                              7.888(115)

from the numbers given there we conclude that the missing data sets in the three - loop fit of @xmath76 do not entail a significant change . there is a small tendency to somewhat larger numbers using larger volumes . this has to be taken into account as a systematic effect in our estimate of @xmath77 . finally we decided to take the selection criterion @xmath78 as the most suitable one and present our numerical results for the unknown non - logarithmic constants in the gluon dressing function of infinite volume lattice perturbation theory in landau gauge : @xmath79 . collecting all results we can write ( [ zgluonbeta3loop ] ) in a numerical form ( restricting to at most five digits after the decimal point ) : @xmath80 . a transformation to the @xmath81 scheme can be performed using the relations given in section [ sec : standard - lpt ] . in the present work we have applied nspt to calculate the landau gauge gluon propagator in lattice perturbation theory up to four loops . the summed gluon dressing function is compared to recent monte carlo measurements of the berlin humboldt university group . both ( nspt ) perturbative and non - perturbative results are in terms of one and the same definition of the gauge fields , both in landau gauge fixing and in measurements of the propagator . to improve the comparison , we have also summed our results in a boosted scheme showing better convergence properties . the key goal of the lattice study of propagators is to reveal their genuinely non - perturbative content , which asks for disentangling perturbative and non - perturbative contributions . the commonly used procedure goes through the fit of the high momentum tail by continuum - like formulae ( anomalous dimensions and logarithms can be taken from continuum computation ) . while this lets us gain intuition , it opens the way to further ambiguities , since irrelevant effects give substantial contributions to the perturbative tail . at large lattice momenta our calculations indicate that the perturbative dressing function constructed by means of nspt with more than four loops will match the monte carlo measurements , thus enabling a fair accounting of the perturbative tail .
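the two extrapolations described above ( removal of the finite langevin time step and the infinite - volume limit ) can be illustrated with a simple least - squares sketch . the polynomial ansatz in the time step , the finite - size correction modeled here as a term proportional to 1/(pl)^2 , and all data points are illustrative assumptions , not the fits or data of this paper .

```python
# Hedged sketch of the two extrapolations: (i) removing the finite Langevin time step
# by a low-order polynomial fit in epsilon, and (ii) estimating the infinite-volume
# constant with an assumed c / (pL)^2 finite-size correction. Functional forms, data
# points and values are illustrative assumptions only.
import numpy as np

# (i) epsilon -> 0 extrapolation of a measured loop coefficient
eps = np.array([0.07, 0.05, 0.03, 0.02])
j_eps = np.array([7.61, 7.72, 7.83, 7.88])          # hypothetical measurements
coeffs = np.polyfit(eps, j_eps, deg=2)              # quadratic ansatz in epsilon (assumed)
j_eps0 = np.polyval(coeffs, 0.0)
print(f"epsilon -> 0 value: {j_eps0:.3f}")

# (ii) infinite-volume extrapolation with an assumed c / (pL)^2 correction
pL = np.array([4.0, 6.0, 8.0, 12.0])                # momentum scale times lattice extent
j_vol = np.array([7.55, 7.77, 7.86, 7.91])          # hypothetical epsilon -> 0 values per volume
A = np.vstack([np.ones_like(pL), 1.0 / pL**2]).T    # model: j = j_inf + c / (pL)^2
j_inf, c = np.linalg.lstsq(A, j_vol, rcond=None)[0]
print(f"infinite-volume estimate: {j_inf:.3f} (correction coefficient {c:.2f})")
```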
the strong difference which is left over in the intermediate and moreover the infrared momentum region should then be attributed to non - perturbative effects . power corrections @xcite and contributions from non - perturbative excitations @xcite are serious candidates for the description of these ( better disentangled ) effects . the one - loop result for the perturbative gluon propagator of lattice @xmath0 in covariant gauges ( and in particular landau ) has been known for a long time . using our strategy for a careful analysis of finite volume and finite lattice size effects we find good agreement with this result . in ( [ j3gloopnum ] ) we have summarized our ( original ) two- and three - loop results . this work is supported by dfg under contract schi 422/8 - 1 , dfg sfb / tr 55 , by i.n.f.n . under the research project mi11 and by the research executive agency ( rea ) of the european union under grant agreement number pitn - ga-2009 - 238353 ( itn strongnet ) . f. di renzo , e .- m . ilgenfritz , h. perlt , a. schiller and c. torrero , nucl . * b 831 * ( 2010 ) 262 [ http://arxiv.org/abs/0912.4152[arxiv:0912.4152 [ hep - lat ] ] ] . r. alkofer , c. s. fischer , m. q. huber , f. j. llanes - estrada and k. schwenzer , pos * confinement8 * ( 2008 ) 019 [ http://arxiv.org/abs/0812.2896[arxiv:0812.2896[hep-ph ] ] ] . l. von smekal , d. mehta , a. sternbeck , and a. g. williams , pos * lat2007 * ( 2007 ) 382 [ http://arxiv.org/abs/0710.2410[arxiv:0710.2410[hep-lat ] ] ] . r. alkofer and l. von smekal , phys . * 353 * ( 2001 ) 281 [ http://arxiv.org/abs/hep-ph/0007355[arxiv:hep-ph/0007355 ] ] . j. berges , n. tetradis , and ch . wetterich , phys . * 363 * ( 2002 ) 223 [ http://arxiv.org/abs/hep-ph/0005122[arxiv:hep-ph/0005122 ] ] . j. m. pawlowski , annals phys . * 322 * ( 2007 ) 2831 [ http://arxiv.org/abs/hep-th/0512261[arxiv:hep-th/0512261 ] ] . d. zwanziger , phys . * d 69 * ( 2004 ) 016002 [ http://arxiv.org/abs/hep-ph/0303028[arxiv:hep-ph/0303028 ] ] . l. von smekal , r. alkofer , and a. hauck , phys . * 79 * ( 1997 ) 3591 [ http://arxiv.org/abs/hep-ph/9705242[arxiv:hep-ph/9705242 ] ] . l. von smekal , a. hauck , and r. alkofer , annals phys . * 267 * ( 1998 ) 1 [ http://arxiv.org/abs/hep-ph/9707327[arxiv:hep-ph/9707327 ] ] . lerche and l. von smekal , phys . * d 65 * ( 2002 ) 125006 [ http://arxiv.org/abs/hep-ph/0202194[arxiv:hep-ph/0202194 ] ] . d. zwanziger , phys . * d 65 * ( 2002 ) 094039 [ http://arxiv.org/abs/hep-th/0109224[arxiv:hep-th/0109224 ] ] . d. zwanziger , nucl . * b 412 * ( 1994 ) 657 . m. q. huber , r. alkofer , and s. p. sorella , phys . * d 81 * ( 2010 ) 065003 [ http://arxiv.org/abs/0910.5604[arxiv:0910.5604[hep-th ] ] ] . a. c. aguilar , d. binosi , and j. papavassiliou , phys . * d 78 * ( 2008 ) 025010 [ http://arxiv.org/abs/0802.1870[arxiv:0802.1870[hep-ph ] ] ] . fischer , a. maas , and j. m. pawlowski , . annals phys . * 324 * ( 2009 ) 2408 [ http://arxiv.org/abs/0810.1987[arxiv:0810.1987[hep-ph ] ] ] . a. sternbeck , e .- ilgenfritz , m. mller - preussker and a. schiller , phys . * d 72 * ( 2005 ) 014507 [ http://arxiv.org/abs/hep-lat/0506007[arxiv:hep-lat/0506007 ] ] . k. langfeld , h. reinhardt , and j. gattnar , nucl . * b 621 * ( 2002 ) 131 [ http://arxiv.org/abs/hep-ph/0107141[arxiv:hep-ph/0107141 ] ] . o. bowman , et al . * d 78 * ( 2008 ) 054509 [ http://arxiv.org/abs/0806.4219[arxiv:0806.4219[hep-lat ] ] ] . j. braun , h. gies , and j. m. pawlowski , phys . * b 684 * ( 2010 ) 262 [ http://arxiv.org/abs/0708.2413[arxiv:0708.2413[hep-th ] ] ] . j. 
braun , a. eichhorn , h. gies , and j. m. pawlowski , [ http://arxiv.org/abs/1007.2619[arxiv:1007.2619[hep-ph ] ] ] . a. sternbeck , e .- ilgenfritz , m. mller - preussker , a. schiller , and i. l. bogolubsky , pos * lat2006 * ( 2006 ) 076 [ http://arxiv.org/abs/hep-lat/0610053[arxiv:hep-lat/0610053 ] ] . o. bowman , et al . * d 76 * ( 2007 ) 094505 [ http://arxiv.org/abs/hep-lat/0703022[arxiv:hep-lat/0703022 ] ] . a. sternbeck , et al . , pos * lat2007 * ( 2007 ) 256 [ http://arxiv.org/abs/0710.2965[arxiv:0710.2965[hep-lat ] ] ] . ph . boucaud , et al . * d 79 * ( 2009 ) 014508 [ http://arxiv.org/abs/0811.2059[arxiv:0811.2059[hep-ph ] ] ] . a. sternbeck , et al . , pos * lat2009 * ( 2009 ) 210 [ http://arxiv.org/abs/1003.1585[arxiv:1003.1585[hep-lat ] ] ] . b. blossier , et al . , ( etm collaboration ) , [ http://arxiv.org/abs/1005.5290[arxiv:1005.5290[hep-lat ] ] ] . l. von smekal , k. maltman , and a. sternbeck , phys . * b 681 * ( 2009 ) 336 [ http://arxiv.org/abs/0903.1696[arxiv:0903.1696[hep-ph ] ] ] . ilgenfritz , h. perlt and a. schiller , pos * lat2007 * ( 2007 ) 250 [ http://arxiv.org/abs/0710.0560[arxiv:0710.0560 [ hep - lat ] ] ] . f. di renzo , l. scorzato and c. torrero , pos * lat2007 * ( 2007 ) 240 [ http://arxiv.org/abs/0710.0552[arxiv:0710.0552 [ hep - lat ] ] ] . f. di renzo , e. onofri , g. marchesini and p. marenzoni , nucl . * b 426 * ( 1994 ) 675 [ http://arxiv.org/abs/hep-lat/9405019[arxiv:hep-lat/9405019 ] ] ; for a review see f. di renzo and l. scorzato , jhep * 0410 * , 073 ( 2004 ) [ http://arxiv.org/abs/hep-lat/0410010[arxiv:hep-lat/0410010 ] ] . h. kawai , r. nakayama and k. seo , nucl . * b 189 * ( 1981 ) 40 . f. di renzo , v. miccio , l. scorzato and c. torrero , eur . j. * c 51 * ( 2007 ) 645 [ http://arxiv.org/abs/hep-lat/0611013[arxiv:hep-lat/0611013 ] ] . j. a. gracey , nucl . * b 662 * ( 2003 ) 247 [ http://arxiv.org/abs/hep-ph/0304113[arxiv:hep-ph/0304113 ] ] . g. p. lepage and p. b. mackenzie , phys . d 48 * ( 1993 ) 2250 [ http://arxiv.org/abs/hep-lat/9209022[arxiv:hep-lat/9209022 ] ] . ilgenfritz , y. nakamura , h. perlt , p. e. l. rakow , g. schierholz and a. schiller , pos * lat2009 * ( 2009 ) 236 [ http://arxiv.org/abs/0910.2795[arxiv:0910.2795 [ hep - lat ] ] ] . c. menz , diploma thesis , humboldt - universitt zu berlin ( 2009 ) ; we acknowledge receiving those data prior to publication .
this is the second of two papers devoted to the perturbative computation of the ghost and gluon propagators in @xmath0 lattice gauge theory . such a computation should enable a comparison with results from lattice simulations in order to reveal the genuinely non - perturbative content of the latter . the gluon propagator is computed by means of numerical stochastic perturbation theory : results range from two up to four loops , depending on the different lattice sizes . the non - logarithmic constants for one , two and three loops are extrapolated to the lattice spacing @xmath1 continuum and infinite volume @xmath2 limits .
SECTION 1. SHORT TITLE. This Act may be cited as the ``National Resilience Development Act of 2003''. SEC. 2. FINDINGS. The Congress finds as follows: (1) According to the New England Journal of Medicine, after September 11, 2001, Americans across the country, including children, had substantial symptoms of stress. Even clinicians who practice in regions that are far from the sites of the attacks should be prepared to assist people with trauma-related symptoms of stress. (2) According to Military Medicine, experiences from the 1995 chemical weapons attack by terrorists in the Tokyo subway system suggest that psychological casualties from a chemical attack will outnumber physical casualties by approximately 4 to 1. (3) According to Military Medicine, victims from the 1995 Tokyo attack continued to suffer from psychological symptoms 5 years later. (4) According to the Journal of the American Medical Association, the lessons learned from the 2001 anthrax attacks should motivate local health departments, health care organizations, and clinicians to engage in collaborative programs to enhance their communications and local preparedness and response capabilities. (5) According to the Institute of Medicine of the National Academy of Sciences, the Department of Health and Human Services and the Department of Homeland Security should analyze terrorism preparedness to ensure that the public health infrastructure is prepared to respond to the psychological consequences of terrorism, and Federal, State, and local disaster planers should address these psychological consequences in their planning and preparedness for terrorist attacks. (6) According to a national study by leading health care foundations, in this time of growing threats of terrorism, many doctors and other primary care providers are increasingly being confronted with patients who complain of aches and pains, or more serious symptoms, which mask serious anxiety or depression. (7) Substantial effort and funding are still needed to adequately understand and prepare for the psychological consequences associated with bioterrorism. (8) The integration of mental health into public health efforts, including integration and cooperation across Federal agencies and State public health and mental health authorities, is critical in addressing the psychological needs of the Nation with regard to terrorism. SEC. 3. GOALS. The goals of this Act are as follows: (1) To coordinate the efforts of different government agencies in researching, developing, and implementing programs and protocols designed to increase the psychological resilience and mitigate distress reactions and maladaptive behaviors of the American public as they relate to terrorism. (2) To facilitate the work of the Department of Homeland Security by incorporating programs and protocols designed to increase the psychological resilience, and mitigate distress reactions and maladaptive behaviors, of the American public into the Department's efforts in reducing the vulnerability of the United States to terrorism. (3) To identify effective interventions to the harmful psychosocial consequences of disasters and to integrate these interventions into the United States' plans to mitigate, plan for, respond to, and recover from potential and actual terrorist attacks. (4) To enable the States and localities to effectively respond to the psychosocial consequences of terrorism. (5) To integrate mental health and public health emergency preparedness and response efforts in the United States. SEC. 4. 
INTERAGENCY TASK FORCE ON NATIONAL RESILIENCE. Title III of the Public Health Service Act (42 U.S.C. 241 et seq.) is amended by inserting after section 319K the following: ``SEC. 319L. INTERAGENCY TASK FORCE ON NATIONAL RESILIENCE. ``(a) Establishment.--The Secretary shall convene and lead an interagency task force for the purpose of increasing the psychological resilience and mitigating distress reactions and maladaptive behaviors of the American public in preparation for, and in response to, a conventional, biological, chemical, or radiological attack on the United States. ``(b) Members.--The task force convened under this section shall include the Director of the Centers for Disease Control and Prevention, the Director of the National Institute of Mental Health, the Administrator of the Substance Abuse and Mental Health Services Administration, the Administrator of the Health Resources and Services Administration, the Director of the Office of Public Health Emergency Preparedness, the Surgeon General of the Public Health Service, and such other members as the Secretary deems appropriate. ``(c) Duties.--The duties of the task force convened under this section shall include the following: ``(1) Coordinating and facilitating the efforts of the Centers for Disease Control and Prevention, the National Institute of Mental Health, the Substance Abuse and Mental Health Services Administration, the Health Resources and Services Administration, the Office of Public Health Emergency Preparedness, and the Office of the Surgeon General of the Public Health Service in their endeavors to develop programs and protocols designed to increase the psychological resilience and mitigate distress reactions and maladaptive behaviors of the American public in preparation for, and in response to, a conventional, biological, chemical, or radiological attack on the United States. ``(2) Consulting with, and providing guidance to, the Department of Homeland Security in its efforts to integrate into its efforts in reducing the vulnerability of the United States to terrorism, programs and protocols designed to increase the psychological resilience and mitigate distress reactions and maladaptive behaviors of the American public in preparation for, and in response to, a conventional, biological, chemical, or radiological attack on the United States. ``(3) Consulting with the Department of Defense, the Department of Veterans Affairs, the American Red Cross, national organizations of health care and health care providers, and such other organizations and agencies as the task force deems appropriate. ``(4) Consulting with and providing guidance to the States for the purpose of enabling them to effectively respond to the psychosocial consequences of terrorism. ``(5) Developing strategies for encouraging State public health and mental health agencies to closely collaborate in the development of integrated, science-based programs and protocols designed to increase the psychological resilience and mitigate distress reactions and maladaptive behaviors of the public in preparation for, and in response to, a conventional, biological, chemical, or radiological attack on the United States. 
``(6) Preparing and presenting to the Secretary of Health and Human Services and the Secretary of Homeland Security specific recommendations on how their respective departments, agencies, and offices can strengthen existing and planned terrorism preparedness, response, recovery, and mitigation initiatives by integrating programs and protocols designed to increase the psychological resilience and mitigate distress reactions and maladaptive behaviors of the American public. ``(d) Meetings.--The task force convened under this section shall meet not less than 4 times each year. ``(e) Staff.--The Secretary shall staff the task force as necessary to ensure it meets the goals set forth in section 3 of the National Resilience Development Act of 2003.''. SEC. 5. MENTAL HEALTH ACTIVITIES OF STATES, DISTRICT OF COLUMBIA, AND TERRITORIES REGARDING NATIONAL RESILIENCE. (a) Public Health Service Act.-- (1) Authorization.--Subsection (d) of section 319C-1 of the Public Health Service Act (42 U.S.C. 247d-3a) is amended by inserting after paragraph (18) the following: ``(19) To enable State mental health authorities, in close collaboration with the respective State public health authorities and the interagency task force convened under section 319L, to better understand and manage human emotional, behavioral, and cognitive responses to disasters, including by increasing the psychological resilience of the public and mitigating distress reactions and maladaptive behaviors that could occur in response to a conventional, biological, chemical, or radiological attack on the United States.''. (2) Funding.--Subparagraph (B) of section 319C-1(j)(1) of the Public Health Service Act (42 U.S.C. 247d-3a(j)(1)) is amended by adding at the end the following: ``Not less than 1 percent of the amounts appropriated pursuant to this subparagraph shall be used for the purpose of carrying out subsection (d)(19).''. (b) USA Patriot Act.-- (1) Authorization.--Subsection (b) of section 1014 of the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT ACT) Act of 2001 (42 U.S.C. 3714) is amended-- (A) by striking ``may be used to purchase'' and inserting ``may be used for the following: ``(1) To purchase''; (B) by striking ``In addition, grants under this section may be used to construct'' and inserting the following: ``(2) To construct''; and (C) by inserting at the end the following: ``(3) To enable State mental health authorities, in close collaboration with the respective State public health authorities and the interagency task force convened under section 319L of the Public Health Service Act, to better understand and manage human emotional, behavioral, and cognitive responses to disasters, including by increasing the psychological resilience of the public and mitigating distress reactions and maladaptive behaviors that could occur in response to a conventional, biological, chemical, or radiological attack on the United States.''. (2) Funding.--Subsection (c) of section 1014 of the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT ACT) Act of 2001 (42 U.S.C. 3714) is amended by adding at the end the following: ``(4) Mental health preparedness.--Not less than 1 percent of the amounts appropriated pursuant to this subsection shall be used for the purpose of carrying out subsection (b)(3).''. SEC. 6. EFFORTS BY FEMA REGARDING NATIONAL RESILIENCE. 
Paragraph (2) of section 507(a) of the Homeland Security Act of 2002 (6 U.S.C. 317(a)) is amended-- (1) in subparagraph (D), by striking ``; and'' at the end and inserting a semicolon; (2) in subparagraph (E), by striking the period at the end and inserting ``; and''; and (3) by adding at the end the following: ``(F) of integrating into each of the Federal Emergency Management Agency's functions of mitigation, planning, response, and recovery, efforts to increase communities' psychological resilience and decrease distress reactions and maladaptive behaviors in individuals, and of coordinating such efforts with efforts by the interagency task force convened under section 319L of the Public Health Service Act and other efforts by the Department of Homeland Security.''. SEC. 7. ANNUAL REPORT BY SECRETARIES OF HHS AND HOMELAND SECURITY. Not less than 1 year after the date of the enactment of this Act and annually thereafter, the Secretary of Health and Human Services and the Secretary of Homeland Security, acting jointly, shall submit a report to the Congress that includes the following: (1) The recommendations of the interagency task force convened under section 319L of the Public Health Service Act (as amended by section 4 of this Act) that are relevant to the Department of Health and Human Services or the Department of Homeland Security. (2) A description of the steps that have or have not been taken by each Federal department to implement the recommendations described in paragraph (1). (3) Thorough explanations for rejection of any recommendations made by the interagency task force convened under section 319L. (4) Other steps undertaken to meet the goals of this Act.
National Resilience Development Act of 2003 - Amends the Public Health Service Act to direct the Secretary of Health and Human Services to convene and lead an interagency task force for the purposes of mitigating distress reactions and maladaptive behaviors in Americans and increasing their psychological resilience in preparation for, and in response to, a conventional, biological, chemical, or radiological attack on the United States. Directs the task force to coordinate and facilitate the efforts of various public bodies to develop programs and protocols to achieve such purposes. Amends the Act and the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (USA Patriot Act) of 2001 to permit certain grants to go to activities aimed at enabling State mental health authorities, in coordination with State public health authorities and the interagency task force, to better understand and manage human emotional, behavioral, and cognitive responses to disasters. States that such efforts shall include increasing the psychological resilience of the public and mitigating distress reactions and maladaptive behaviors that could occur in response to an attack. Amends the Homeland Security Act of 2002 to direct the Federal Emergency Management Agency to integrate into each of its functions of mitigation, planning, response, and recovery, efforts to increase communities' psychological resilience and decrease distress reactions and maladaptive behaviors in individuals. Directs that FEMA take such measures in coordination with the interagency task force and other efforts by the Department of Homeland Security.
the experimental observation of hadrons correlated back - to - back with a hard or semi - hard trigger hadron in au - au collisions at 200 agev has revealed a splitting of the away side correlation peak in a semi - hard momentum regime between 1 and 2.5 gev @xcite wich is absent in p - p collisions where two back - to - back peaks appear . this means that the main strength of the away side correlation in au - au collisions in this momentum region is not found in the direction of the away side parton but at a large angle with respect to it . this angle is found to remain constant if the trigger momentum is changed and also for a variety of associate hadron momenta in the semi - hard regime . this observation can be contrasted with back - to - back correlations at hard trigger and associate hadron momenta well above 4 gev @xcite which show a reappearance of back - to - back correlations as seen in p - p collisions , albeit suppressed . this pattern has given rise to the idea that while energy loss of a back - to - back parton pair is responsible for the suppression observed at high @xmath0 , the measurements at intermediate associate hadron @xmath0 show how this energy is redistributed into the medium and may in fact show the recoil of the medium in the form of a hydrodynamical shockwave @xcite . phenomenological comparisons of this scenario with the data using the same monte - carlo ( mc ) simulation for energy loss and energy redistribution in shockwaves found agreement with both the high @xmath0 correlation pattern @xcite and the low @xmath0 peak splitting @xcite . a comparison with the measured 3-particle correlations @xcite has also been made in the same framework @xcite , however remains somewhat inconclusive as to prove or disprove the existence of shockwaves as the chief mechanism for energy redistribution . however , in @xcite an important difference between sonic shockwaves and other conical emission mechanisms has been pointed out , i.e. the longitudinal elongation of the shock cone due to longitudinal flow which should result in a large extension of the correlation signal in rapidity for a hydrodynamical excitation of the medium . this elongation is obscured in single hadron triggered correlation measurements due to the fact that the rapidity of the away side parton is not determined by the rapidity of the trigger hadron and all possible rapidities of the away side parton have to be averaged . however , if the trigger is a sufficiently hard back - to - back hadron pair , then the rapidity position of the away side parton is very constrained and the elongation should be observable . unfortunately , requiring a hard trigger hadron on the away side introduces a bias towards small energy deposition into the medium . in addition , an away side parton emerging from the medium does not only produce the leading away side hadron ( which is part of the trigger ) but also subleading hadrons building up correlation strength along the jet axis also at intermediate @xmath0 , thus obscuring any large - angle signal of a shockwave by filling in the dip between the shockwave wings with a back - to - back peak . in this publication , we aim at a discussion of these effects . we simulate hard back - to - back hadron production in a monte carlo ( mc ) model . there are three important building blocks to this computation : 1 ) the primary hard parton production , 2 ) the propagation of the partons through the medium and 3 ) the hadronization of the partons . 
only step 2 ) probes properties of the medium , and hence it is here that we must specify details of the evolution of the medium and of the parton - medium interaction . the model is described in great detail in @xcite ; here we will just provide a short overview . the production of two hard partons @xmath1 in leading order ( lo ) perturbative quantum chromodynamics ( pqcd ) is described by @xmath2 where @xmath3 and @xmath4 stand for the colliding objects ( protons or nuclei ) and @xmath5 is the rapidity of parton @xmath6 . the distribution function of a parton type @xmath7 in @xmath3 at a momentum fraction @xmath8 and a factorization scale @xmath9 is @xmath10 . the distribution functions are different for the free protons @xcite and nucleons in nuclei @xcite . the fractional momenta of the colliding partons @xmath7 , @xmath11 are given by $x_{1,2} = \frac{p_t}{\sqrt{s}} \left( \exp[\pm y_1] + \exp[\pm y_2] \right)$ . expressions for the pqcd subprocesses @xmath13 as a function of the parton mandelstam variables @xmath14 and @xmath15 can be found e.g. in @xcite . by selecting pairs of @xmath1 while summing over all allowed combinations of @xmath16 , i.e. @xmath17 where @xmath18 stands for any of the quark flavours @xmath19 , we find the relative strength of different combinations of outgoing partons as a function of @xmath20 . for the present investigation , we consider a dihadron trigger at midrapidity @xmath21 . by mc sampling eq . ( [ e-2parton ] ) we generate a back - to - back parton pair with given parton types and flavours at transverse momentum @xmath20 . to account for various effects , including higher order pqcd radiation , transverse motion of partons in the nucleon ( nuclear ) wave function and effectively also the fact that hadronization is not a collinear process , we fold into the distribution an intrinsic transverse momentum @xmath22 with a gaussian distribution , thus creating a momentum imbalance between the two partons as @xmath23 . the probability density @xmath24 for finding a hard vertex at the transverse position @xmath25 and impact parameter @xmath26 is given by the product of the nuclear profile functions as @xmath27 , where the thickness function is given in terms of the woods - saxon nuclear density @xmath28 as @xmath29 . rotating the coordinate system such that the near side parton propagates into the ( @xmath30 ) direction , the path of a given parton through the medium @xmath31 is determined by its primary vertex @xmath32 , and we can compute the energy loss probability @xmath33 for this path . we do this in a radiative energy loss picture @xcite by evaluating the line integrals @xmath34 along the path , where we assume the relation @xmath35 between the local transport coefficient @xmath36 ( specifying the quenching power of the medium ) , the energy density @xmath37 and the local flow rapidity @xmath38 , with angle @xmath39 between flow and parton trajectory @xcite . @xmath37 and @xmath38 are taken from medium evolution models @xcite as discussed in @xcite . @xmath40 is the characteristic gluon frequency , setting the scale of the energy loss probability distribution , and @xmath41 is a measure of the path - length weighted by the local quenching power . we view the parameter @xmath42 as a tool to account for the uncertainty in the selection of @xmath43 and possible non - perturbative effects increasing the quenching power of the medium ( see discussion in @xcite ) and adjust it such that pionic @xmath44 for central au - au collisions is described .
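a schematic version of the geometric part of this setup , sampling hard vertices from the product of nuclear thickness functions and evaluating a line integral of the local quenching power along the parton path , is sketched below . the woods - saxon parameters for au , the straight - line path and the bdmps - like weighting of the characteristic gluon frequency are standard choices used here as assumptions ; the toy quenching profile is not the hydrodynamical medium of the actual calculation .

```python
# Hedged sketch (not the authors' code): sample hard-vertex positions in the transverse
# plane from the product of nuclear thickness functions built from a Woods-Saxon density,
# and evaluate a schematic line integral of a local quenching power along a straight
# parton path. Woods-Saxon parameters for Au and the omega_c ~ int dxi xi qhat(xi)
# weighting are standard choices, used here as assumptions.
import numpy as np

R_AU, D_AU, RHO0 = 6.38, 0.535, 0.17   # fm, fm, fm^-3 (assumed Woods-Saxon parameters)

def rho_ws(r):
    return RHO0 / (1.0 + np.exp((r - R_AU) / D_AU))

def thickness(x, y, zmax=15.0, nz=200):
    z = np.linspace(-zmax, zmax, nz)
    return np.trapz(rho_ws(np.sqrt(x**2 + y**2 + z**2)), z)

def sample_vertex(b, rng, box=8.0):
    """Rejection-sample (x, y) from T_A(x + b/2, y) * T_A(x - b/2, y)."""
    w_max = thickness(b / 2.0, 0.0) * thickness(-b / 2.0, 0.0)   # value at the center, assumed maximal
    while True:
        x, y = rng.uniform(-box, box, size=2)
        w = thickness(x + b / 2.0, y) * thickness(x - b / 2.0, y)
        if rng.uniform(0.0, w_max) < w:
            return x, y

def omega_c(x0, y0, phi, qhat_of_point, length=12.0, n=300):
    """Schematic omega_c = int dxi xi * qhat along a straight path starting at (x0, y0)."""
    xi = np.linspace(0.0, length, n)
    q = np.array([qhat_of_point(x0 + s * np.cos(phi), y0 + s * np.sin(phi)) for s in xi])
    return np.trapz(xi * q, xi)

rng = np.random.default_rng(0)
x0, y0 = sample_vertex(b=2.0, rng=rng)
wc = omega_c(x0, y0, phi=np.pi, qhat_of_point=lambda x, y: thickness(x, y))  # toy quenching proxy
print(x0, y0, wc)
```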
using the numerical results of @xcite , we obtain @xmath45 for @xmath40 and @xmath46 for given jet production vertex and angle @xmath47 . in the mc simulation , we first sample eq . ( [ e - profile ] ) to determine the vertex of origin . for a given choice of @xmath47 , we then propagate both partons through the medium evaluating eqs . ( [ e - omega ] ) and use the output to determine @xmath45 which we sample to determine the actual energy loss of both partons in the event . finally , we convert the simulated partons into hadrons , provided that a back - to - back pair emerges from the medium after energy loss . more precisely , in order to determine if there is a trigger hadron above a given threshold , given a parton @xmath48 with momentum @xmath20 , we need to sample @xmath49 , i.e. the probability distribution to find a hadron @xmath50 from the parton @xmath48 where @xmath50 is the most energetic hadron of the shower and carries the momentum @xmath51 . in previous works @xcite we have approximated this by the normalized fragmentation function @xmath52 , sampled with a lower cutoff @xmath53 which is adjusted to the reference d - au data . this procedure can be justified by noting that only one hadron with @xmath54 can be produced in a shower , thus above @xmath55 the @xmath52 and @xmath49 are ( up to the scale evolution ) identical , and only in the region of low @xmath56 where the fragmentation function describes the production of multiple hadrons do they differ significantly . we improve on these results by extracting @xmath57 from the shower evolution code herwig @xcite . the procedure is described in detail in @xcite . sampling @xmath57 for any parton which emerged with sufficient energy from the medium provides the energy of the two most energetic hadrons on both sides of the event . the harder of these two defines the near side . the hadron opposite to it is then the leading away side hadron . for the present investigation , we require both to be in given momentum windows to count a dihadron triggered event . we average the energy loss on near and away side parton over many such events to determine the average energy deposition into the medium . in order to compute the correlation strength associated with subleading fragmentation of a parton emerging from the medium we evaluate @xmath58 ( also extracted from herwig ) , the conditional probability to find the second most energetic hadron at momentum fraction @xmath59 _ given that the most energetic hadron was found with fraction @xmath60_. this contribution to the strength of the away side correlation is competing with the shockwave signal . our way of modelling hadronization corresponds to an expansion of the shower development in terms of a tower of conditional probability denities @xmath61 with the probability to produce @xmath62 hadrons with momentum fractions @xmath63 from a parton with momentum @xmath20 being @xmath64 . taking the first two terms of this expansion is justified as long as we are interested in sufficiently hard correlations . however , in the following we also consider situations in which the near side trigger momentum is rather hard @xmath65 gev , the away side trigger momentum is likewise hard @xmath66 gev , but with a substantial gap between near and away side to allow for energy deposition in the medium , but observe fragmentation yield associated with this trigger in a regime where hydrodynamics is valid , i.e. @xmath67 gev . 
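the two leading terms of the hadronization expansion can be illustrated by sampling the leading fraction from a tabulated distribution and the subleading fraction from a conditional distribution restricted by momentum conservation , as in the sketch below . the toy densities stand in for the herwig - extracted tables and are assumptions , as are the trigger threshold and parton energy used in the example .

```python
# Hedged sketch: draw the leading-hadron fraction z1 from a tabulated A1(z) and the
# subleading fraction z2 from a conditional A2(z2 | z1) restricted to z2 < z1 and
# z1 + z2 <= 1, then apply a trigger window. The toy densities below stand in for the
# HERWIG-extracted tables described in the text; they are assumptions, not those tables.
import numpy as np

rng = np.random.default_rng(1)
z_grid = np.linspace(0.01, 0.99, 99)
A1 = (1.0 - z_grid) ** 4 / z_grid          # toy leading-hadron density (assumption)

def sample_from_density(density, grid, rng):
    p = density / density.sum()
    return rng.choice(grid, p=p)

def sample_event(E_parton, trig_min, rng):
    """Return (p_lead, p_sub) for one away-side parton, or None if the trigger fails."""
    z1 = sample_from_density(A1, z_grid, rng)
    p_lead = z1 * E_parton
    if p_lead < trig_min:                   # dihadron trigger condition on this side
        return None
    allowed = (z_grid < z1) & (z_grid <= 1.0 - z1)     # ordering and energy conservation
    A2_cond = np.where(allowed, (1.0 - z_grid) ** 3, 0.0)   # toy conditional density
    if A2_cond.sum() == 0.0:
        return p_lead, 0.0
    z2 = sample_from_density(A2_cond, z_grid, rng)
    return p_lead, z2 * E_parton

print(sample_event(E_parton=15.0, trig_min=4.0, rng=rng))
```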
since the dihadron trigger forces the parton to high momenta , multi - hadron production at the low associate scale is likely . consequently , we have to include the next terms in the expansion . a detailed numerical treatment is very complicated , however we estimate the next two terms as @xmath68 and @xmath69 this procedure explicitly guarantees energy - momentum conservation and preserves the correct ordering in hadron momenta inside the jet . for the results quoted in the following , we have verified that the results converge and that @xmath70 is only a correction , and that hence the inclusion of further terms does not alter the result substantially . in fig . [ f - edep ] we show the away side energy deposition into the medium created in central au - au collisions at 200 agev as a function of the trigger momenta on near and away side for two different medium evolution models , a hydrdynamical code @xcite and a parametrized evolution model @xcite . this is the energy available to excite a shockwave . note that according to the phenomenological analysis @xcite a large fraction @xmath71 ( but not all ) of the available energy actually excites a shockwave . the energy deposition is always largest when the gap between near side and away side trigger momentum is maximal . there is some dependence on what model for the medium evolution is assumed to be valid , however some general trends remain robust : the energy deposition is roughly a third of the highest ( near side ) trigger energy . the additional variation with the away side @xmath0 is about 50% . on the other hand , if _ no _ away side trigger is required , typically all of the energy of the away side parton is lost to the medium @xcite . since the parton energy is on average roughly a factor two more than the energy of the leading hadron , requiring a dihadron trigger reduces the signal strength of the shockwave by about a factor six as compared to a single hadron triggered event . let us now compare the strength of the shockwave correlation signal with next - to - leading and higer order fragmentation of the away side parton . for this comparison , we consider the associate momentum range of 1 - 2.5 gev where the phenix collaboration has first seen indications for a shockwave @xcite . as explained in detail in @xcite , we can not reliably compute the precise magnitude of the shockwave per - trigger yield in a given momentum window , especially as long as the trigger is in a semi - hard regime below 6 gev , as the yield is not only dependent on assumptions about flow in the medium , but also recombination / coalescence processes @xcite need to be addressed below this scale . however , let us boldly assume that the per - trigger yield in single - hadron triggered shockwave events scales with the average trigger momentum and based on this assumption extrapolate from the phenix data with a trigger of 2.5 - 4 gev to the two fragmentation - dominated trigger ranges of 6 - 8 gev and 10 - 12 gev considered in this publication ( note that there is good evidence from star data @xcite that the rise of the yield is in fact substantially slower with trigger @xmath0 ) . with this maximal assumption , the per trigger yield given the phenix acceptance in the 1 - 2.5 gev associate momentum window for a 6 - 8 gev trigger would be @xmath72 and for a 10 - 12 gev trigger @xmath73 , and , again on the level of a rough approximation , reduced down to @xmath74 and @xmath75 in dihadron triggered events due to the bias on energy loss . 
on the other hand , the per - trigger yield into the 1 - 2.5 gev associate momentum window due to subleading fragmentation of the away side parton can be computed in our hadronization scheme using the approximations for @xmath76 and @xmath70 described above . the results are summarized in fig . [ f - frag ] . as seen from the figure , the yield is chiefly determined by the highest momentum scale ( i.e. the near side trigger momentum ) which is natural , given that this sets the overall energy available for hadron production in the jet . as the away side momentum scale is increased , the associated yield decreases . this is not unexpected , as requiring a larger fraction of the parton momentum to end up in the leading hadron , less momentum is available for subleading hadrons . however , the most striking result is that the expected per - trigger yields are of order @xmath67 , i.e. they are in fact by about a factor two larger than the upper limit for the per - trigger yields caused by the medium recoil due to the shockwave . this means that if dihadron triggers are used to study shockwave production , the dominant signal at midrapidity where the away side trigger hadron is observed is not the shock cone , but rather hadrons produced in nl fragmentation processes of the trigger parton . the shockwave must then be observed as a correction to this signal . most importantly , a splitting of the peak with a dip at zero degrees and strength at large angles is not expected under these conditions . we have discussed the expected changes in the correlation pattern seen in a hydrodynamical momentum regime when one goes from single hadron triggered events to dihadron triggered events . the main advantage of a dihadron trigger is that the rapidity of the away side parton is tightly constrained , thus a study of the medium recoil on the away side as a function of rapidity becomes meaningful . however , there are two effects which complicate the observation of the medium recoil substantially . first , by requiring a hard away side hadron , there is a significant bias towards events in which little or no energy was deposited into the medium . this reduces the energy available to excite a shockwave , and hence the strength of the correlation by at least a factor six . furthermore , once a hard away side hadron is detected , it is almost unavoidable that subleading , softer hadrons are created within the shower . this contribution is rather strong at low momenta and competes with the bulk recoil of the medium . we estimated here that it is at the position of the away side parton about a factor two stronger than the medium recoil . however , it is possible to eliminate the latter contribution due to its different shape in rapidity : while any shockwave signal is expected to be elongated in rapidity due to longitudinal flow , the jet cone due to fragmentation in vacuum would not be elongated at all . thus , by observing associate hadron production displaced in rapdidity from a hard dihadron trigger , a ( weak ) shockwave signal should become visible without any contamination from soft hadron production in the jet .
the experimental observation of hadrons correlated back - to - back with a ( semi-)hard trigger in heavy ion collisions has revealed a splitting of the away side correlation structure in a low to intermediate transverse momentum ( @xmath0 ) regime . this is consistent with the assumption that energy deposited by the away side parton into the bulk medium produced in the collision excites a sonic shockwave ( a mach cone ) which leads to away side correlation strength at large angles . a prediction following from assuming such a hydrodynamical origin of the correlation structure is that there is a sizeable elongation of the shockwave in rapidity due to the longitudinal expansion of the bulk medium . using a single hadron trigger , this can not be observed due to the unconstrained rapidity of the away side parton . using a dihadron trigger , the rapidity of the away side parton can be substantially constrained and the longitudinal structure of the away side correlation becomes accessible . however , in such events several effects occur which change the correlation structure substantially : there is not only a sizeable contribution due to the fragmentation of the emerging away side parton , but also a systematic bias towards small energy deposition into the medium and hence a weak shockwave . in this paper , both effects are addressed .
five subjects who were diagnosed with csc were recruited for this study at the medical college of wisconsin ( milwaukee , wi , usa ) . informed consent was obtained from all subjects after explanation of the nature and possible consequences of the study . the study protocol was approved by the institutional review board at the medical college of wisconsin and was conducted in accordance with the tenets of the declaration of helsinki . patients were diagnosed by clinical examination , fluorescein angiography , autofluorescence imaging , and sd - oct . axial length measurements were obtained on all subjects ( zeiss iol master ; carl zeiss meditec , dublin , ca , usa ) to determine the scale of retinal images . images were obtained using a bioptigen sd - oct ( bioptigen , research triangle park , nc , usa ) . horizontal and vertical line scan sets were acquired ( 640 a - scans / b - scan ; 120 repeated b - scans ) through the foveal center with a nominal scan length of 7 mm . scans were registered and averaged as described previously to increase the signal - to - noise ratio . volume scans ( 640 a - scans / b - scan ; 400 b - scans / volume ) were acquired over a nominal 3 3 mm area with horizontal and vertical b - scans . each b - scan within the volume scan was aligned and manually segmented to create en face views as described previously . en face oct images were generated for the onl , elm , and ez , each using a thin contour to avoid contamination from other layers of the retina and create the highest - contrast image . active csc was defined as the presence of subretinal fluid on oct , while resolved csc was defined as the resolution of fluid and reattachment of the retina . a custom aoslo system was used for this study , which was modified to simultaneously acquire confocal and split - detector images . imaging sequences consisted of 150 frames , which were processed to remove distortions from the sinusoidal scanning motion and then registered using a strip registration method described previously . up to 40 registered frames with the highest normalized cross - correlation then were averaged to increase the signal - to - noise ratio . because the confocal and split - detector images were collected in synchrony , the same registration transforms were applied to both , resulting in perfect spatial registration . these images then were montaged using adobe photoshop ( adobe systems , san jose , ca , usa ) . at each imaging session , the main region of interest was captured with a set of at least nine images , each with a 1.0 field of view . temporal and superior strips also were acquired , which extended 5 to 10 peripherally from the region of interest using images with a 1.5 or 2.0 field of view . cluster location was determined after manual coregistration of the aoslo montage and en face oct images using adobe photoshop . the greatest linear dimension of each hyperreflective cluster was measured manually on the aoslo image . clusters located in the outer retina were included for analysis if they were in focus on confocal aoslo and corresponded to the location of hyperreflective foci in the onl on en face oct . in the inner retina , clusters were included for analysis if they were in focus in the plane of the retinal capillaries . the first two sessions took place during active csc , and the final two sessions took place during resolved csc . the first imaging session took place 1 month after diagnosis , and the final imaging session took place 13 months after diagnosis . this subject also underwent inner retinal imaging during all sessions and split - detector aoslo imaging during the final session .
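the frame - selection step of the aoslo processing described above ( ranking registered frames by normalized cross - correlation and averaging up to 40 of them ) can be sketched as follows . the choice of the first frame as reference and the synthetic image stack are assumptions for illustration ; the actual registration and distortion - correction pipeline of the custom system is not reproduced here .

```python
# Hedged sketch of the frame-selection step: rank registered AOSLO frames by normalized
# cross-correlation (NCC) with a reference frame and average the highest-ranked frames.
# The cutoff of 40 frames follows the description in the text; the reference choice and
# the synthetic data are illustrative assumptions.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def average_best_frames(frames, n_keep=40):
    """frames: registered frames of shape (n_frames, H, W). Returns the averaged image."""
    reference = frames[0]                             # assumed reference: first registered frame
    scores = np.array([ncc(f, reference) for f in frames])
    best = np.argsort(scores)[::-1][:n_keep]          # indices of the highest-NCC frames
    return frames[best].mean(axis=0)

# usage sketch with synthetic data standing in for a 150-frame imaging sequence
rng = np.random.default_rng(2)
stack = rng.normal(size=(150, 64, 64)) + np.linspace(0, 1, 64)[None, None, :]
averaged = average_best_frames(stack, n_keep=40)
print(averaged.shape)
```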
the first imaging session took place 9 months after diagnosis during active but improving disease , and the final imaging session took place 14 months after diagnosis during resolved csc . subjects dw_1175 and dw_1178 each underwent one imaging session with active and resolved csc , respectively . clusters found in active csc over areas of subretinal fluid were located primarily in the onl , but also were seen within the elm and ez ( fig . these clusters were surrounded by dark areas or defects in the photoreceptor mosaic , which corresponded with hyporeflectivity of the ez on oct ( fig . the mean size of the clusters ( n = 170 ) within the detached retina , as measured by greatest linear dimension , was 24.7 ± 7.6 µm . multimodal imaging of active csc in subjects jk_1190 ( top row ) , dw_1227 ( middle row ) , and dw_1175 ( bottom row ) . optical coherence tomography line scans show hyperreflective foci ( arrows ) primarily in the onl . ( b1 - 3 ) en face oct images of the onl show numerous hyperreflective foci within that layer . ( c1 - 3 ) enlarged en face oct images corresponding to the location of the rectangles in ( b1 - 3 ) . ( d1 - 3 ) confocal aoslo at the location of ( c1 - 3 ) shows type-1 hyperreflective clusters with associated dark areas in the photoreceptor mosaic . the clusters seen with aoslo correspond to the location of the hyperreflective foci on en face oct . scale bars : 150 µm ( a1 - 3 ) and 50 µm ( c1 - 3 , d1 - 3 ) . colored boxes are added to compare given regions between different imaging modalities and retinal layers . ( a ) confocal aoslo montage overlaid onto an infrared reflectance slo image shows type-1 hyperreflective clusters with associated dark areas in the photoreceptor mosaic . ( b ) optical coherence tomography line scan at the location of the horizontal line in ( a ) with manual segmentation of the onl ( yellow ) , elm ( magenta ) , and ez ( cyan ) . ( c - e ) en face oct images of the onl ( c ) , elm ( d ) , and ez ( e ) corresponding to the location of the colored bands in ( b ) . the horizontal lines represent the location of the oct line scan in ( b ) . many of the hyperreflective clusters in ( a ) are seen as hyperreflective foci in the onl ( c ) , as well as the elm ( d ) and ez ( e ) . the dark areas associated with the clusters in ( a ) are seen as areas of ez disruption ( e ) . clusters within areas of the retina that had reattached also were located primarily in the onl ( fig . 3 ) . many of these clusters remained stable in location throughout an 8-month follow - up period ( fig . the mean size of these clusters , as measured by greatest linear dimension , was 19.9 ± 6.0 µm ( n = 28 ) . these clusters also were associated with dark areas in the photoreceptor mosaic that correspond to areas of ez disruption ; however , the dark areas were smaller than those associated with clusters within detached retina . split - detector images were collected during the final imaging session with subject jk_1190 during resolved csc . this imaging modality showed that the dark areas seen with confocal aoslo contain photoreceptor inner segments ( fig . multimodal imaging of resolved csc in subjects jk_1190 ( top row ) and dw_1227 ( bottom row ) . ( b1 - 2 ) en face oct images of the onl show hyperreflective foci within that layer . ( c1 - 2 ) enlarged en face oct images corresponding to the location of the rectangles in ( b1 - 2 ) .
( d1 - 2 ) confocal aoslo at the location of ( c1 - 2 ) shows type-2 hyperreflective clusters ( d1 ) or the corresponding dark areas in the photoreceptor mosaic ( d2 ) , depending on the level of focus . scale bars : 150 µm ( a1 - 2 ) and 50 µm ( c1 - 2 , d1 - 2 ) . longitudinal imaging of subject jk_1190 showing oct line scans ( top row ) with vertical lines representing the boundaries of the en face oct images of the onl ( bottom row ) . imaging took place during active csc ( a1 - 2 ) and during resolved csc 4 ( b1 - 2 ) and 12 ( c1 - 2 ) months later . many of the hyperreflective foci are stable in location between time points ( b ) and ( c ) . ( a ) confocal aoslo montage of subject jk_1190 during resolved csc overlaid onto an infrared reflectance slo image with rectangles ( 1 ) and ( 2 ) corresponding to the location of insets ( b1 , c1 ) and ( b2 , c2 ) , respectively . ( b1 - 2 ) confocal aoslo shows dark areas in the photoreceptor mosaic , which correspond to the location of type-2 hyperreflective clusters ( not shown ) . ( c1 - 2 ) split - detector aoslo at the location of ( b1 - 2 ) shows cone inner segments within the areas that appear dark on confocal aoslo ( dashed rectangles ) . these clusters were smaller than the other two types of clusters with the mean greatest linear dimension being 15.3 ± 4.1 µm ( n = 28 ) . they also differed markedly from clusters in the outer retina in appearance with split - detector imaging , with the former having clearly demarcated borders ( fig . the clusters along the retinal capillaries seemed more prevalent during the imaging session of active csc , compared to the two imaging sessions of resolved disease . ( a ) confocal aoslo image during active csc in subject jk_1190 shows type-3 clusters along the retinal capillaries ( arrowheads ) . larger type-1 clusters in the outer retina also are visible ( arrows ) , but are out of focus . the same area 2 ( b ) and 10 ( c ) months later , both during resolved disease , shows persistent but less prevalent clusters . ( d ) perivascular clusters that are faintly seen in ( c ) are clearly demarcated with split - detector aoslo ( arrowheads ) . we observed multiple intraretinal hyperreflective clusters in subjects with csc using confocal and split - detector aoslo . by comparing different imaging modalities , we classified these clusters into three distinct types , which we termed type-1 , type-2 , and type-3 ( table ) . types of intraretinal hyperreflective clusters and characteristics . clusters in the areas of detached retina were labeled type-1 and were located primarily in the onl as demonstrated using en face oct ( fig . 1 ) . some of the hyperreflective clusters seen in the aoslo images did not have a corresponding hyperreflective focus on en face oct . these represented structures that are located outside of the contour used to generate the en face oct of the onl . moreover , the aoslo has a large depth of focus ( 30 µm ) , but can not necessarily capture the entire width of the onl when the level of focus is set on the photoreceptor mosaic . this explains why some structures seen on en face oct of the onl were not visible on aoslo . it also should be noted that small distortions are present in the aoslo and en face oct images due to the scanning nature of the imaging systems and the fact that the eye moves during image acquisition . nevertheless , the overall concordance between the two imaging modalities was excellent , as demonstrated in figures 1 and 3 .
adaptive optics slo showed that type-1 hyperreflective clusters are associated with dark areas or defects of the photoreceptor mosaic that surrounded the individual clusters . dark areas in the photoreceptor mosaic in csc have been described previously by ooto et al . using aoslo and were attributed to lost or damaged cones . our study demonstrated that the dark areas tend to correspond to the areas of hyporeflectivity within the ez and also often are associated with hyperreflective clusters in the onl ( fig . 1 ) . the elm between the ez and the hyperreflective clusters did not show signs of disruption on en face oct ( fig . one possibility for this finding is that the clusters cause displacement of the photoreceptor cell bodies , which is transmitted to the photoreceptor os , thereby altering the os alignment . light reflected off the retina is directionally sensitive as described by the optical stiles - crawford effect , and this effect is thought to be due to the waveguiding properties and angular tuning of individual cones . split - detector aoslo detects multiple - scattered light from photoreceptor inner segments , and enables visualization of these structures regardless of the wave - guiding status of the os . with split - detector , we have shown that some of the dark areas on confocal imaging contain cone photoreceptors ( fig . the explanation for the darker areas on the split - detector images themselves is not well - established , but may be due to the subtraction of the left from right halves of the recorded nonconfocal signal in areas of small undulations of the photoreceptor inner segments . these clusters also were associated with dark areas in the photoreceptor mosaic , though they were smaller than the dark areas associated with type-1 clusters . the presence of hyperreflective clusters / foci in resolved csc contrasts with several studies , which described these structures on oct as being confined to areas of serous retinal detachment . it is likely that these studies were describing the more numerous and slightly larger type-1 clusters . unlike type-1 clusters , which seem to be transient , type-2 clusters remained stable in location . the duration of their stability will require further follow - up , but in subject jk_1190 , these structures remained in the same location for 8 months ( fig . 4 ) . given this stability , these structures are less likely to be migratory cells , but could represent residual cellular debris . type-3 clusters are those that are located along the retinal capillaries in the inner retina ( fig . 6 ) . these clusters are not seen clearly on oct and to our knowledge have not been described previously in csc , likely due to their small size . shen et al . described increased macrophages that lined the retinal vessels and wrapped around the smaller vessels in ischemic retinas . other studies also have determined that macrophages , as well as microglia , have a role in vascular growth and regression , but with a larger contribution from macrophages . the identity and significance of the intraretinal hyperreflective clusters are currently unknown , though it is clear that type-1 and possibly type-2 hyperreflective clusters represent the intraretinal hyperreflective foci previously described on oct . others have hypothesized that intraretinal hyperreflective foci are an accumulation of proteins or lipids , activated microglia , or macrophages with phagocytized os . 
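The split-detector signal referred to above is commonly formed as a normalized difference of the two nonconfocal detector halves. The sketch below is a minimal illustration under that assumption and is not the authors' acquisition or processing pipeline.

```python
import numpy as np

def split_detector_image(left, right, eps=1e-9):
    """Normalized difference of the two nonconfocal detector halves.

    left, right: (H, W) arrays recorded by the two halves of the detector.
    Returns values in roughly [-1, 1]; inner-segment structure shows up as
    intensity gradients even where the confocal (waveguided) signal is dark.
    """
    left = np.asarray(left, dtype=np.float64)
    right = np.asarray(right, dtype=np.float64)
    return (left - right) / (left + right + eps)
```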
during retinal detachment , the microglial cell response also was highly localized to the detached areas of the retina . the aoslo features support the hypothesis that the hyperreflective foci are cellular in nature , rather than deposits of material given their discrete , granular , and demarcated appearance and consistent size . we observed multiple hyperreflective clusters in the inner and outer retina in eyes with active and resolved csc using aoslo . confocal and split - detector aoslo provides information about these structures that could not be obtained with oct alone , such as their granular appearance , size , association with retinal vessels , and relationship with the photoreceptor mosaic . this allowed us to categorize the clusters into three distinct types and provided evidence that the clusters may be cellular in nature . the combined use of en face oct with aoslo allows for precise axial and lateral localization . larger studies with longitudinal follow - up using aoslo are needed to determine the clinical significance of these clusters and their relationship to the pathogenesis and resolution of csc .
purpose : to improve our understanding of central serous chorioretinopathy ( csc ) , we performed an analysis of noninvasive , high - resolution retinal imaging in patients with active and resolved csc . methods : adaptive optics scanning light ophthalmoscopy ( aoslo ) and spectral - domain optical coherence tomography ( sd - oct ) were performed on five subjects with csc . a custom aoslo system was used to simultaneously collect confocal and split - detector images . spectral domain oct volume scans were used to create en face views of various retinal layers , which then were compared to montaged aoslo images after coregistration . results : three distinct types of intraretinal hyperreflective clusters were seen with aoslo . these clusters had a well - demarcated , round , and granular appearance . clusters in active csc over areas of serous retinal detachment were termed type-1 . they were found primarily in the outer nuclear layer ( onl ) and were associated with large defects in the photoreceptor mosaic and ellipsoid zone . clusters in areas where the retina had reattached were termed type-2 . they also were located primarily in the onl but showed stability in location over a period of at least 8 months . smaller clusters in the inner retina along retinal capillaries were termed type-3 . conclusions : retinal imaging in csc using en face oct and aoslo allows precise localization of intraretinal structures and detection of features that can not be seen with sd - oct alone . these findings may provide greater insight into the pathophysiology of the active and resolved phases of the disease , and support the hypothesis that intraretinal hyperreflective foci on oct in csc are cellular in nature .
Facebook was a dominant platform in the 2008 and 2010 elections, a Pew director says. Study: Facebook users more politically engaged Facebook users are more politically engaged compared with other Internet users, a new study from Pew finds. Someone who visits Facebook multiple times per day is 2½ times more likely to attend a political rally or meeting, 57 percent more likely to influence someone else's vote and 43 percent more likely to have said he or she would vote, the survey found. While the study did not conclude why Facebook users are so politically motivated, Lee Rainie, director of the Pew Research Center's Internet and American Life Project and co-author of the report, suggests one explanation: "Facebook became a favorite platform of young people that were involved in politics in 2008 and 2010." The survey polled 2,255 American adults during the fall of 2010 in an effort to explore how people's use of social networking services relates to trust, tolerance, social support and community and political engagement. Pew "gets asked a lot about whether the movement of people into technology spaces makes them more partisan," Rainie told POLITICO. "What we see here is the opposite, particularly in the case of MySpace." According to the study, MySpace users are more likely to be open to opposing points of view. The research measured respondents' ability to consider multiple points of view. When it comes to voting, LinkedIn users took the crown. Out of LinkedIn users surveyed, 79 percent said they voted or intended to vote, compared with 65 percent of Facebook users, 62 percent of Twitter users and 57 percent of MySpace users. Among other interesting findings, the research found that — controlling for demographic factors — Facebook users are more trusting, have more close relationships and receive more emotional support and companionship than other Internet users. "When you have trust, and are willing to engage other people, we see benefits in what kind of support you can get," Keith Hampton, co-author of the study, told POLITICO. Barack Obama's 2008 campaign was widely described as mastering the use of social media to organize supporters and communicate the candidate's message, and the president's reelection campaign believes technology will play an even more important role in 2012. "The idea is to use new technology to pursue the old-style grass-roots campaigning," Obama campaign chief David Axelrod told POLITICO in April. To see the rest of the Pew report, go here. This article first appeared on POLITICO Pro at 5:47 a.m. on June 16, 2011. ||||| A new study by the Pew Internet & American Life Project reveals some interesting details about social networking users, debunking the myth that people who hang on Facebook a lot tend to have less real-life friends and contacts. Someone who uses Facebook several times per day, the study found, has on average "9% more close, core ties in their overall social network compared with other internet users." Furthermore, Facebook users tend to get more emotional support, companionship as well as instrumental aid (meaning they're more likely to get help when sick, etc). Finally, Facebook users tend to friend other users with whom they've actually met in real life; the average Facebook user has never met only 7% of his/hers Facebook friends. Since the study was conducted during the November 2010 elections, it revealed that Facebook users also tend to be more politically active than other internet users.
A Facebook user that interacts with the site multiple times per day was two and half times more likely to attend a political rally, 43% more likely to have said they would vote, and 57% more likely to persuade someone on their vote. The study also shows that Facebook is, by far, the most engaging social platform out there, as 52% of Facebook users engage with the site daily. For comparison, 33% of Twitter users engage with the service every day, while only 7% of MySpace and 6% of LinkedIn users do the same. The report is based on data from telephone interviews conducted by Princeton Survey Research Associates International from October 20 to November 28, 2010 on a sample of 2,255 adults, age 18 and older. Read the full report here.
– Facebook friends aren’t real buddies, right? Wrong, according to a new study. People who use the social network multiple times a day have an average of “9% more close, core ties in their overall social network compared with other internet users,” the Pew study says. And yes, they have actually met almost all their Facebook friends: Only 7% of their friend list consists of people they’ve never spoken to in real life, Mashable reports. Other newly emerged social networking stats: Those on Facebook get more support, companionship, and “instrumental aid”—when they’re sick, for example, friends lend a hand. Facebookers are also more politically involved than other people on the Internet: Those on the site several times daily are 2.5 times more likely to go to a meeting or rally, 57% more likely to affect another’s vote, and 43% more likely to say they’ll vote, notes Politico. Today’s average American Internet user spends almost 16% of his or her online time on social networking sites, compared to 8% in July 2007; social networking has jumped 25% over the past year.
The original Most Interesting Man in the World is back in a cocktail bar surrounded by two fawning women. But this time he's sipping tequila, not a Mexican beer. Jonathan Goldsmith, who played the iconic Dos Equis character from 2006 until last year, is starring in a new video for Astral Tequila that borrows creative elements from the classic beer campaign. "I told you, I don't always drink beer," he says in the video, referencing the Dos Equis line. Then he simply says, "Astral tequila." The scene is similar to the setting that ended many of the Dos Equis spots, in which he typically delivered his classic line -- "I don't always drink beer, but when I do, I prefer Dos Equis" -- from a cocktail table surrounded by women. In the tequila video, Latin acoustic guitar music plays in the background, just like the old beer ads, like this one: The ad marks the beginning of a new partnership with the tequila brand, which is owned by Davos Brands, whose brands include Tyku Sake and Aviation American Gin. A spokeswoman described the video being released today as a "teaser" that will be posted on Astral's social channels and website. It marks Goldsmith's first alcohol endorsement since Dos Equis owner Heineken USA parted ways with him last year before swapping a younger actor into the role. Advertisers own the characters they create. So in the case of Dos Equis, Heineken USA owns the Most Interesting Man. If an actor performs in character, then that would be infringing the rights of the advertiser. But actors are free to reference their old roles. So it's a fine line. "Astral Tequila has obliged by all trademark legal requirements," the brand's spokeswoman stated when asked about the similarities with the Dos Equis campaign. A Heineken USA spokeswoman stated: "We thank Jonathan Goldsmith for his long-time contributions to the brand, and wish him the best in his next endeavors." Ad Age showed the Astral video to Douglas Wood, senior partner at Reed Smith and general counsel to the Association of National Advertisers, for an opinion. He noted the similarities in the humor and setting to the Dos Equis spot. But he added: "That alone, however, is not likely to be actionable. While they arguably take some of the expression and trade off on some goodwill associated with the character in the context of the Dos Equis campaign, it's hard to see any damages or a taking that goes beyond fair use. While humor alone is not a defense, it's unlikely the lighthearted references to his old character rise to an actionable cause. That said, they are on a slippery slope and should be cautious of going too far." In a press release touting the new Astral video, the brand stated: "We've always wondered what pop culture icon Jonathan Goldsmith really drinks. The answer is Astral Tequila." But that was not always the case. In an interview with Ad Age in 2012, Goldsmith said when not drinking beer he prefers a martini or Scotch. Asked if he's had a change of heart, the Astral spokeswoman stated in an email that "he will have to answer that for you when he is available for interviews again. Although we can say that many scotch aficionados are swapping their scotch for high-end tequilas like Astral." Goldsmith was not available for an interview on Tuesday. The actor, who was 77-years-old when he appeared in his last Dos Equis ad last year, has been making the most of his Dos Equis fame. 
He has a new memoir out called "Stay Interesting: I Don't Always Tell Stories About My Life, But When I Do, They're True and Amazing," that includes accounts of his romantic flings, including one with Tina Louise, who played Ginger on "Gilligan's Island," according to an account of the book in the New York Post. Politico Magazine recently published an excerpt from the book that describes how Goldsmith was invited to Barack Obama's 50th birthday party. Last year he started endorsing Wi-Fi network brand Luma in an ad in which he played himself. That campaign is expected to continue.
– "The Most Interesting Man in the World" is giving up beer for tequila. Actor Jonathan Goldsmith appeared as the sophisticated, eccentric, and worldly pitchman for Dos Equis beer for nearly a decade. He's now promoting Astral Tequila, reports the AP. In a new ad (watch it here), the 78-year-old Goldsmith nods at his Dos Equis days by raising a glass of tequila and saying, "I told you I don't always drink beer" (a reference to his famed Dos Equis line: "I don't always drink beer, but when I do, I prefer Dos Equis"). AdAge digs in to the legality of the Dos Equis-referencing spot, with one lawyer saying Astral is on a "slippery slope" but not likely to find itself in court over any sort of infringement. AdAge also offers up this nugget: Tequila is not Goldsmith's professed drink of choice, or at least it wasn't in 2012 when he told the magazine he favored martinis and Scotch when not consuming beer. (Here's how Goldsmith befriended a president.)
as a recent development , the possibility of unambiguous discrimination between unknown quantum states can be potentially useful for many applications in quantum communication and quantum computing . a universal device that can unambiguously discriminate between two unknown states has been constructed by bergou and hillery @xcite . it has three registers , labeled a , b and c , and each register can store a qubit that is in some arbitrary state . in their work , it is assumed that register a is prepared in the state @xmath2 , register c is prepared in the state @xmath3 , and register b is guaranteed to be prepared in either @xmath2 or @xmath3 . here , @xmath2 and @xmath3 are the states to be distinguished , which are both unknown . @xmath4 , @xmath5 , @xmath6 and @xmath7 are arbitrary unknown complex variables satisfying the normalization equations @xmath8 and @xmath9 . furthermore , it is assumed that register b is prepared in the state @xmath2 with probability @xmath10 and in the state @xmath3 with probability @xmath11 , such that @xmath12 , which guarantees that the state in register b is always one of these two states . the states viewed as a program , which are sent into registers a and c ( called program registers ) , are called program states , while the unknown state to be confirmed , which is sent into the register b ( called data register ) , is called the data state . the device constructed here can measure the total input states \[\begin{aligned} |\Psi_1\rangle & = & |\psi_1\rangle_a|\psi_1\rangle_b|\psi_2\rangle_c ,\nonumber\\ |\Psi_2\rangle & = & |\psi_1\rangle_a|\psi_2\rangle_b|\psi_2\rangle_c ,\end{aligned}\] which are prepared with apriori probabilities @xmath10 and @xmath11 . with the symmetry properties of the input states , this device will then , with some probabilities of success , tell us whether the unknown state in the data register b matches the state stored in program register a or c. in later works , a series of new devices have been introduced and widely discussed by other authors @xcite . in these schemes we may have @xmath14 and @xmath15 copies of the states in the program registers a and c , respectively , and @xmath16 copies of the states in the data register b ; then our task is to distinguish the two states @xmath17 with the minimum error or with the minimum inconclusive probability , if the minimum error strategy or optimum unambiguous strategy is applied , respectively @xcite . in both previous papers @xcite and @xcite , it has been proved that the optimal success probability of discrimination between two unknown qubits is an increasing function of @xmath16 , the number of copies in data register b. in this paper , we study the qudit states in n - dimensional hilbert space and demonstrate that the optimal success probability of discrimination between two unknown states is independent of the dimension @xmath0 . unlike the discrimination between two known states , we can not consider the subspace spanned by the two states only , but we should consider the full @xmath0-dimensional space , as the two states are completely unknown to us . we adopt the equivalence between the discrimination of unknown pure states and that of known mixed states , and then reduce the problem to the unambiguous discrimination between two pure states with the jordan - basis method . finally , we get the optimal success probability of the unambiguous discrimination between the two mixed states and give the detection operators . the detection operators are also applicable to the discrimination between the pure states without averaging over them . the organization of the paper is as follows .
in sec . [ sec2 ] , we give a brief description of programmable state discrimination , and adopt the equivalence between the discrimination of unknown states and that of known mixed states . in sec . [ sec3 ] , we introduce the jordan basis for the mean input states . in sec . [ sec4 ] , we find the optimal unambiguous discrimination and introduce its implementation . finally , a brief summary is given in the final section . in previous works , states such as @xmath18 have been considered @xcite . these states lie in a two - dimensional hilbert space . here we introduce a more generalized state @xmath19 where @xmath20 is an arbitrary unknown complex variable . all these states span an n - dimensional hilbert space @xmath21 , for which @xmath22 form a mutually orthogonal basis . the two states we want to distinguish are denoted by @xmath2 and @xmath3 . as we mentioned before , a copy of each of the two unknown states is provided for the program registers a and c , denoted as @xmath23 and @xmath24 , respectively . the state to be confirmed is provided for the data register b as input . we assume that the state in the data register b is guaranteed to be prepared in @xmath2 with probability @xmath10 and in @xmath3 with probability @xmath25 , respectively . thus we have two possible inputs \[\begin{aligned} |\Psi_1\rangle & = & |\psi_1\rangle_a|\psi_1\rangle_b|\psi_2\rangle_c ,\nonumber\\ |\Psi_2\rangle & = & |\psi_1\rangle_a|\psi_2\rangle_b|\psi_2\rangle_c .\end{aligned}\] here , @xmath2 and @xmath3 are completely arbitrary states , @xmath27 and @xmath28 are unknown , and the states @xmath2 and @xmath3 making up these two inputs can change from preparation to preparation ; it is only the pattern , the middle state ( b ) matching the first ( a ) or the last ( c ) state , that is preserved from one preparation to the next . therefore , we can introduce the corresponding density operators @xmath29 where the average is taken over the entire parameter space of states @xmath2 and @xmath3 . here , we should notice that the optimal strategy for discrimination between the two states is the strategy that is optimal on average . that is to say , we can unambiguously discriminate between @xmath2 and @xmath3 as soon as we can unambiguously discriminate between @xmath30 and @xmath31 . to this end , we now define the space and operators that we will need . let @xmath21 be the n - dimensional space for the unknown states and then the full space is @xmath32 . the space of symmetric states in @xmath33 is denoted by @xmath34 , which is an @xmath35-dimensional subspace . @xmath36 is an element of @xmath37 and @xmath28 is an element of @xmath38 . therefore , @xmath39 , the intersection of @xmath40 and @xmath41 , is the space of symmetric states in the full space @xmath32 . @xmath42 is a subspace of dimension @xmath43 , while @xmath40 and @xmath41 both have dimension @xmath44 . let @xmath45 be the subspace of @xmath32 generated by @xmath40 and @xmath41 , and the dimension of @xmath45 is @xmath46 . let @xmath47 be the orthogonal complement of @xmath42 in @xmath40 , let @xmath48 be the orthogonal complement of @xmath42 in @xmath41 , and let @xmath49 be the orthogonal complement of @xmath42 in @xmath45 . clearly , the average in @xmath30 uniformly fills the symmetric subspace of a and b and the entire subspace of c , whereas the average in @xmath31 uniformly fills the symmetric subspace of b and c and the entire subspace of a. therefore , the corresponding density operators , averaged over the unknown states , can be expressed as @xmath50 where @xmath51 is the projection onto @xmath34 , and @xmath52 onto @xmath21 .
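As a quick numerical check of the two product inputs written above, the sketch below builds them as Kronecker products of two randomly drawn qudit states and verifies that the overlap of the three-register inputs reduces to the overlap of the data-register states. The dimension and the random-state construction are arbitrary choices made for illustration only.

```python
import numpy as np

def random_state(n, rng):
    """Random pure state in an n-dimensional Hilbert space."""
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
n = 4  # illustrative qudit dimension
psi1, psi2 = random_state(n, rng), random_state(n, rng)

# |Psi_1> = |psi_1>_a |psi_1>_b |psi_2>_c and |Psi_2> = |psi_1>_a |psi_2>_b |psi_2>_c
Psi1 = np.kron(np.kron(psi1, psi1), psi2)
Psi2 = np.kron(np.kron(psi1, psi2), psi2)

# <Psi_1|Psi_2> collapses to the data-register overlap <psi_1|psi_2>
print(np.allclose(np.vdot(Psi1, Psi2), np.vdot(psi1, psi2)))  # True
```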
@xmath51 can be expressed as @xmath53 where @xmath54 are the unique unit vectors in the symmetric subspace @xmath34 , @xmath55$]-dimensional spaces @xmath40 and @xmath41 in @xmath45 , which is equivalent to discriminating between subspaces @xmath47 and @xmath48 in the @xmath56$]-dimensional space @xmath49 . it has been shown in @xcite that two nonorthogonal subspaces @xmath57 and @xmath58 of a hilbert space can be unambiguously discriminated if we can find their canonical or jordan bases @xcite . the definition of the jordan bases is as follows . the set of orthogonal and normalized basis vectors @xmath59 in @xmath57 and @xmath60 in @xmath58 form jordan bases if and only if @xmath61 where @xmath62 are the jordan angles @xmath63 . in the jordan form , the full hilbert space is decomposed into @xmath0 orthogonal subspaces . here , we denote the @xmath64th subspace spanned by @xmath65 and @xmath66 by @xmath67 . clearly , if the dimension of @xmath67 is 1 , we can get @xmath68 , and no further discrimination is possible in @xmath67 . on the other hand , if @xmath69 , @xmath65 and @xmath66 are linearly independent , and they can be distinguished as two known pure states , which is familiar to us . thus our task is reduced to some separate discriminations in each subspace , between two known pure states . now , we will choose some bases to construct jordan bases for the density operators @xmath30 and @xmath31 . first , we should notice that the fully symmetric subspace of the three states @xmath42 must be common to both inputs . here , we denote the basis for the subspace @xmath42 by @xmath70 , where @xmath71 is satisfied , and there are @xmath43 these unique unit vectors , which can be expressed as follows , @xmath72 then , the structure of the two density operators in eq . ( [ eq1 ] ) , in particular the decomposition on the right - hand side , suggests that we consider @xmath73 and @xmath74 , where @xmath75 and @xmath76 . these vectors form orthogonal bases for @xmath40 and @xmath41 , respectively . thus the unique unit vector @xmath70 in @xmath42 can be expressed in terms of either @xmath40 or @xmath41 , since it is in both spaces . here , we neglect the vectors @xmath70 that satisfy @xmath77 , and a direct calculation shows that @xmath78 and @xmath79 we can now introduce the vectors @xmath80 for @xmath81 , @xmath82 for @xmath83 , @xmath84 and @xmath85 for @xmath86 . it is easy to see that @xmath87 s form orthogonal bases for subspace @xmath47 , while @xmath88 s form orthogonal bases for subspace @xmath48 . and there are @xmath89 @xmath87 s and @xmath89 @xmath88 s . thus we can rearrange the footnotes , and get @xmath65 and @xmath66 , where @xmath64 can change from @xmath90 to @xmath89 . from the explicit expressions of @xmath65 and @xmath66 , we can easily get @xmath91 and @xmath92 and @xmath93 form jordan bases for @xmath47 and @xmath48 . the two density operators that we want to distinguish can now be expressed as @xmath94 where @xmath95 . @xmath96 is the projection onto the subspace @xmath42 , and @xmath97 now , let @xmath67 be the two - dimensional space spanned by the nonorthogonal but linearly independent vectors @xmath65 and @xmath66 . the @xmath67 form a decomposition of subspace @xmath49 into @xmath98 mutually perpendicular two - dimensional subspaces . our next problem is how to distinguish the two pure sates in every subspace @xmath67 . we now want to unambiguously discriminate between the subspaces @xmath47 and @xmath48 in @xmath49 . 
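The Jordan angles used above can be computed numerically for any pair of subspaces from matrices whose columns span them; the standard construction takes the singular values of the product of the two orthonormal bases. The sketch below is generic linear algebra, not code tied to the particular subspaces of this analysis.

```python
import numpy as np

def jordan_angles(A, B):
    """Jordan (principal) angles between the column spaces of A and B.

    A: (n, p) and B: (n, q) complex matrices whose columns span the subspaces.
    Returns the angles in radians; the corresponding left and right singular
    vectors rotate the orthonormal bases into the Jordan bases themselves.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Cosines of the Jordan angles are the singular values of Qa^dagger Qb
    s = np.linalg.svd(Qa.conj().T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, 0.0, 1.0))
```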
first , we distinguish the jordan basis states @xmath65 and @xmath66 within their subspace @xmath67 . in subspace @xmath67 , the apriori probabilities of @xmath65 and @xmath66 are @xmath10 and @xmath11 respectively . here we will use the method mentioned in @xcite . let us define @xmath99 and @xmath100 the reciprocal states of @xmath65 and @xmath66 respectively , where @xmath101 and then @xmath102 thus , @xmath65 and @xmath66 can be rewritten as latexmath:[\[\begin{aligned } \label{eq2 } 2}|g_i^\bot\rangle- \frac{\displaystyle 1}{\displaystyle 2}|h_i\rangle,\nonumber\\ we can give a physical implementation based on neumark s theorem @xcite . following the proposals given in @xcite : ( a ) any pure state can be realized by a single - photon state and ( b ) following reck s theorem @xcite , any unitary transformation can also be realized by an optical network consisting of beam - splitters , phase - shifter , etc . @xcite , we can construct an optical device , which is presented in fig . 1a , to unambiguously discriminate between @xmath65 and @xmath66 . an additional port is initially prepared in the vaccum state @xmath104 . the operation of such a device is described by a unitary matrix @xmath105 which gives the probability amplitudes for a single photon entering via inputs @xmath99 and @xmath66 to leave the device by outputs @xmath106 , @xmath107 and @xmath108 . here , the four - port optical interferometer , which is presented in fig . 1b , is used and it is capable of realizing any @xmath109 unitary transformation @xmath110 @xmath111 where we have tacitly assumed that the action of all the @xmath109 beam splitter upon the states of the photon is described by a unitary matrix of real coefficient @xmath112 the parameter @xmath113 describes the transmittance ( @xmath114 ) and the reflectivity ( @xmath115 ) of the beam splitter . as shown in fig . 1a , the action of this device , denoted by @xmath105 gives , @xmath116 from eq . ( [ eq2 ] ) , @xmath105 will transform @xmath65 and @xmath66 into @xmath117 as shown in fig . 1a , both @xmath65 and @xmath66 have their own detectors @xmath118 and @xmath119 , and these two detectors will tell us which the input is , while @xmath120 corresponds to failure . this suggests that no photon can be detected by detector @xmath119 @xmath121 when the input is @xmath65 @xmath122 . thus , from eq . ( [ eq3 ] ) we get @xmath123 eq . ( [ eq3 ] ) and eq . ( [ eq4 ] ) are reduced to @xmath124 in other words , we can choose @xmath125 with @xmath126 to be our detection operators . here , also from fig . 1a , we can get @xmath127 where eq . ( [ eq5 ] ) has been used . by projecting these detection operators back onto the space @xmath67 , we can get @xmath128 which form the povm detection operators for the unambiguous discrimination between @xmath65 and @xmath66 . 
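The four-port interferometer described above is parameterized by a single beam-splitter angle, with transmittance cos θ and reflectivity sin θ. The sketch below writes one common real-valued convention for that 2 × 2 matrix and checks that it is unitary; the sign placement is an assumption, since conventions differ.

```python
import numpy as np

def beam_splitter(theta):
    """Real 2x2 beam-splitter matrix: transmittance cos(theta), reflectivity sin(theta).

    One common sign convention among several; any choice with orthonormal
    rows realizes a lossless two-port transformation.
    """
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s],
                     [-s, c]])

B = beam_splitter(np.pi / 7)
print(np.allclose(B @ B.T, np.eye(2)))  # True: the transformation is unitary
```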
so we find @xmath129 and the probability of successfully identifying the two states is @xmath130 by letting @xmath131 , @xmath132 can be rewritten as @xmath133 this function , @xmath134 , has the property that @xmath135 happens at @xmath136 the optimal value of @xmath134 denoted by @xmath137 in the domain @xmath138 , can be gotten in following three cases : + ( a ) if @xmath139 , the requirement @xmath138 is satisfied , thus we have @xmath140 ( b ) if @xmath141 , we should choose @xmath142 and ( c ) if @xmath143 , @xmath144 our finding can be summarized as follows @xmath145 now , the discrimination between the two density operators @xmath30 and @xmath31 is to achieve @xmath89 discriminations described above simultaneously . but , here the probability for the occurrence of the input state in @xmath67 is @xmath146 . we can also give its implementation in fig . 2 . we leave @xmath147 alone , since they are in both subspaces @xmath40 and @xmath41 , and ca nt be distinguished . the povm which unambiguously distinguishes @xmath40 and @xmath41 has the form @xmath148 where @xmath149 . thus , the probability of successfully identifying the two density operators @xmath30 and @xmath31 is @xmath150 then , the optimal success probability @xmath151 can be expressed as @xmath152 now , we come back to the original problem that discrimination between two pure states @xmath2 ( @xmath36 ) and @xmath3 ( @xmath28 ) . as mentioned before , the povm formed by @xmath153 , @xmath154 and @xmath155 with @xmath156 can be used . its implementation is the same as discrimination between @xmath30 and @xmath31 in fig . the success probability is @xmath157 where the relationship @xmath158 has been used . finally , we can also get the optimal success probability as follows @xmath159 which is apparently independent of @xmath0 , the dimension of space @xmath21 , as we mentioned at the beginning . how to prepare the states with single photon is another important question in the optical realization of the discriminator . we shall give a brief discussion about this . an optical setting shown in fig . 3 can prepare the state @xmath160 . it is constructed by a series of @xmath161 unitary transformations which can be realized by the four - port optical interferometer in fig . 1b . if the parameters of each four - port interferometer are fixed properly , this setting achieves a single - photon state in n - dimensional space @xmath21 . we denote the operation of this device by @xmath162 , and @xmath163 then , by using @xmath164 , @xmath165 and @xmath166 in succession , we can get @xmath167 where @xmath168 is either @xmath2 or @xmath3 and @xmath169 denote the only input for the device . and @xmath36 or @xmath28 is to be prepared depending on @xmath165 being @xmath164 or @xmath166 . in conclusion , we have reconsidered the problem of the universal programmable quantum state discriminator originally introduced in @xcite . and we solved a more generalized problem that the unknown states are qudit states in the @xmath0-dimensional ( @xmath1 ) hilbert space @xmath21 . we adopted the equivalence between the discrimination of unknown pure quantum states and that of known mixed states . with the jordan - basis method , we simplified the problem and finally achieved the optimal unambiguous discrimination between the two unknown states , and the corresponding detection operators have already been given . 
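The three-case optimum summarized above has the same structure as the standard result for unambiguously discriminating two known pure states with unequal priors. The sketch below implements that standard formula in my own notation (priors p1, p2 and overlap s), so the symbols are not the @xmath placeholders of the text; it is included only to make the case analysis concrete.

```python
import numpy as np

def optimal_unambiguous_success(p1, p2, s):
    """Optimal success probability for unambiguous discrimination of two pure
    states with priors p1 + p2 = 1 and overlap s = |<psi_1|psi_2>| in [0, 1).

    A POVM is optimal when both conditional failure probabilities stay <= 1;
    otherwise a projective measurement that never identifies the rarer state.
    """
    assert 0.0 < p1 < 1.0 and abs(p1 + p2 - 1.0) < 1e-12 and 0.0 <= s < 1.0
    if s == 0.0:
        return 1.0  # orthogonal states are always distinguishable
    ratio = p1 / p2
    if s**2 <= ratio <= 1.0 / s**2:
        # POVM regime: optimal failure probability is 2*sqrt(p1*p2)*s
        return 1.0 - 2.0 * np.sqrt(p1 * p2) * s
    if ratio > 1.0 / s**2:
        # state 2 too unlikely: project onto the vector orthogonal to |psi_2>,
        # so only |psi_1> is ever identified
        return p1 * (1.0 - s**2)
    # state 1 too unlikely: only |psi_2> is ever identified
    return p2 * (1.0 - s**2)
```

For equal priors this reduces to the familiar 1 - s bound on the success probability.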
significantly , we arrived at an important conclusion that the optimal success probability of the discrimination between two unknown pure states is independent of @xmath0 , the dimension of the space @xmath21 . furthermore , we also give the implementation of the optimal povm based on the neumark s theorem . the povm can be implemented on a larger hilbert space , where the additional degrees of freedom called ancilla are needed . 99 bergou , j.a . , hillery , m. : phys . lett . * 94 * , 160501 ( 2005 ) . bergou , j.a . , buek , v. , feldman , e. , herzog , u. , hillery , m. : phys . a * 73 * , 062334 ( 2006 ) . he , b. , bergou , j.a . a * 359 * , 103 ( 2006 ) . dariano , g. m. , sacchi , m.f . , kahn , j. : phys . a * 72 * , 032310 ( 2005 ) . zhang , c. , ying , m.s . , qiao , b. : phys . rev . a * 74 * , 042308 ( 2006 ) . sens , g. , bagan , e. , calsamiglia , j. , muoz - tapia , r. , phys . a * 82 * , 042312 ( 2010 ) . he , b. , bergou , j.a . a * 356 * , 306 ( 2006 ) . and th he , b. , bergou , j. a. , phys . a * 75 * , 032316 ( 2007 ) . duek , m. , buek , v. : phys . a * 66 * , 022112 ( 2002 ) . bergou , j.a . , feldman , e. , hillery , m. : phys . a * 73 * , 032107 ( 2006 ) . gallagher , p.x . , proulx , r.j . : in _ contributions to algebra _ , edited by bass , h. , cassidy , p. , kovacic j. ( academic press , new york , 1977 ) , pp , 157 - 164 . neumark , m.a . nauk sssr , ser , mat . * 4 * , 277 ( 1940 ) . bergou , j.a . , hillery , m. , sun , y. : j. mod . opt . * 47 * , 487 ( 2000 ) . sun , y. , hillery , m. , bergou , j.a . : phys . rev . a * 64 * , 022311 ( 2001 ) . mohseni , m. , steinberg , a.m. , bergou , j.a . 93 * , 200403 ( 2004 ) . reck , m. , zeilinger , a. , bernstein , h.j , bertani , p. : phys . lett . * 73 * , 58 ( 1994 ) . wu , x. , gong , y. : phys . a * 78 * , 042315 ( 2008 ) . wu , x. , yu , s. , zhou , t. : phys . a * 79 * , 052302 ( 2008 ) .
we consider the unambiguous discrimination between two unknown qudit states in @xmath0-dimensional ( @xmath1 ) hilbert space . by equivalence of unknown pure states to known mixed states and with the jordan - basis method , we demonstrate that the optimal success probability of the discrimination between two unknown states is independent of the dimension @xmath0 . we also give a scheme for a physical implementation of the programmable state discriminator that can unambiguously discriminate between two unknown states with optimal probability of success .
Rescuers try to reach a trapped infant inside a piece of sewage pipe, in this still image taken from video, in Jinhua city, Zhejiang province May 25, 2013. BEIJING (Reuters) - Firefighters in eastern China have rescued an abandoned newborn baby boy lodged in a sewage pipe directly beneath a toilet commode, state television reported, in a case which has sparked anger on social media sites. There are frequent reports in Chinese media of babies being abandoned, often shortly after birth, a problem attributed variously to young mothers unaware they were pregnant, the birth of an unwanted girl in a society which puts greater value on boys or China's strict family planning rules. In the latest case the infant was found in the sewage pipe in a residential building in Jinhua in the wealthy coastal province of Zhejiang on Saturday afternoon after residents reported the sound of a baby crying, state television said late on Monday. Firefighters had to remove the pipe and take it to a nearby hospital, where doctors carefully cut around it to rescue the baby boy inside, the report said. The child is in a stable condition and the police are looking for his parents, state television added. The case has been widely discussed on China's Twitter-like service Sina Weibo due to the graphic nature of the footage, with calls for the parents to be severely punished. "The parents who did this have hearts even filthier than that sewage pipe," wrote one user. (Reporting by Ben Blanchard and Sally Huang; Editing by Michael Perry) ||||| Chinese firefighters have rescued a newborn boy from a sewer pipe below a squat toilet, sawing out an L-shaped section and then delicately dismantling it to free the cocooned baby, who greeted the rescuers with cries. A tenant heard the baby's sounds in the public restroom of a residential building in Zhejiang province in eastern China on Saturday and notified authorities, according to the state-run news site Zhejiang News. A video of the two-hour rescue that followed was broadcast widely on Chinese news programs and websites late Monday and Tuesday. The child, named Baby No. 59 from the number of his hospital incubator, was reported safe in a nearby hospital, and news of the rescue prompted an outpouring from strangers who came to the hospital with diapers, baby clothes, powdered milk and offers to adopt the child. Police are treating the case as an attempted homicide, and are looking for the mother and anyone else involved in the incident. The landlord of the building in Pujiang county told Zhejiang News that it was unlikely the birth took place in the toilet room because there was no evidence of blood and she was not aware of any recent pregnancies among her tenants. The baby was stuck in the L-joint of pipe with a diameter of about 10 centimeters (3 inches). The video shows rescuers sawing out a section of the pipe along a ceiling that apparently was just below the restroom. The rescuers then rushed that section of pipe to a hospital, where firefighters and medics alternately used pliers and saws to rip apart the L-joint and free the baby. Despite the offers to adopt Baby No. 59, a doctor at the hospital said the boy would be handed over to social services if his parents do not claim him, Zhejiang News said. Online: http://www.youtube.com/watch?v=rHW_fn2W9HQ
– A baby boy, likely just days old, is safe after Chinese rescuers pulled him from a four-inch sewage pipe. It appears the baby was flushed down the toilet; apartment building residents heard him crying, the BBC reports. When rescuers arrived on the scene, they were unable to pull the baby out, so they sawed off a portion of the pipe and brought him to the hospital. There, firefighters and doctors cut the baby out of the pipe. The case has made waves in China, with users of the Weibo microblogging service hammering the parents: "The parents who did this have hearts even filthier than that sewage pipe," noted one user, per Reuters. Meanwhile, locals headed to the hospital to provide the baby with clothes, diapers, and milk, the AP reports. The baby has been dubbed Baby No. 59 after his incubator number. Police are seeking the parents in an attempted homicide case.
as all - ceramic crowns have become one of the best aesthetic restorative materials , the need for good practice and skills to perform such restorations within patient expectations and the recommended guidelines for tooth preparations becomes mandatory . all - ceramic resin bonded crowns appear to have a number of advantages compared with conventional metal - ceramic crowns . first , their better aesthetic properties may be due to the fact that the composite resin luting material is more translucent than conventional cements used with porcelain fused to metal crowns , which improves the transmission of light through the restored unit , and because of a good peripheral blend at the gingival margin without a black - line margin due to the metal substructure . second , the gingival response may be better , given that the periodontal response to porcelain known to be relatively excellent . furthermore , given the insoluble nature of the resin luting material , the periodontal response associated with dentin - bonded all - ceramic crowns may be superior to that associated with conventional crowns in which the luting agent at the margins may dissolve , resulting in possible plaque accumulation as well as a risk of caries lesion formation . third , laboratory studies have shown that the fracture resistance of dentin - bonded all - ceramic crowns was good , even though minimal preparations were used . when comparing porcelain - fused - to - metal to all - ceramic crowns , patient selection and technique sensitivity may be more critical with all - ceramic than with metal - ceramic restoration . furthermore , the coping design and luting system may be critical to maximize long - term success . tooth preparation is one of the important aspects of restorative dentistry because it establishes the foundation for whatever restoration is being placed . unfortunately , training in dental schools relative to tooth preparation is too often oriented to the dimensions of rotary instruments rather than tooth morphology . understanding of tooth morphology is essential for developing preparations that will permit the restorations placed upon them to be functionally durable , provide optimal esthetics , and be biologically compatible with the periodontal tissues . in general , preparation principles applied in all - ceramic systems . the margin design should be either modified shoulder and rounded internal angles or chamfer . the prepared tooth should have a taper of 6 to 10. all contours should be smoothened and rounded off to reduce the risk of stress concentration areas in the ceramic , facilitate impression making , die pouring , fabrication of the restoration and cementation . undercuts should be blocked out using a glass ionomer material or dentin - bonded composite . the occlusal clearance should be a minimum of 1 millimeter in centric relation and lateral excursions . overall , the preparation should be as conservative as possible with retention of some enamel if possible but in case of sever discoloration , minimum reduction maybe insufficient to provide adequate porcelain depth to cover the discoloration . the marginal preparations should produce an optimal peripheral seal from restoration to tooth and should be supragingival as possible , because achieving isolation for the bonding and luting procedure may be difficult in subgingival areas . furthermore , margins ideally should be on enamel , where marginal microleakage may be reduced compared with dentinal margins . 
therefore , the margins should be well adapted , not deformed during function and be accessible to the dentist for finishing and for the patient for cleaning . variations in tooth preparation for rbcs are well seen among general dental practitioners ( gdps ) around the world . sutton and mccord ( 2001 ) showed that 29% of the preparations on the buccal aspect had subgingival margins and the majority of the margins ( 84% buccally and 79% lingually ) of the dies examined exhibited appropriate shoulder or chamfer finishes . ( 2004 ) found that the average values of all preparation parameters of all - ceramic crowns investigated were within the borders as defined in the preparation guidelines of the manufacturer . however , on an individual tooth level , nearly all preparations showed to have one or more locations with imperfections . although several studies have discussed the importance of proper tooth preparation techniques that provide optimal integrity and increase longevity of the existing restoration , there were few studies that discussed the dental practitioner 's clinical performance following these guidelines in their private practice , therefore , this study aims to find out the variations in preparing anterior teeth for all - ceramic crowns to show if the preparation techniques follow the recommended guidelines . this study is based on analysis of samples of dies , which were prepared to receive resin - bonded all - ceramic crowns ( rbcs ) for anterior teeth obtained from dental laboratories in jordan . the results deducted from this research will show the most common clinical errors in preparations of anterior teeth for all - ceramic crowns between general dental practitioners in jordan and will focus on the most accepted recommendations needed when preparing anterior teeth for all - ceramic crowns , which will ultimately lead to increased life time expectancy of the prosthesis , enhanced clinical performance , increased procedural efficiency , and elevated prosthesis quality . one hundred ( n=100 ) of laboratory models featuring tooth preparations for rbcs for anterior teeth from different general dental practitioners were chosen from private dental laboratories in 2 major cities in jordan ( amman and irbid ) . all dies have been examined visually and have been found to be sound without defects or cracks . all samples included master casts supplied directly from dental laboratories containing sound anterior teeth of maxillary or mandibular jaw from canine to canine ( investigation area ) before die preparation . the ceramic crown systems available in dental laboratories in jordan investigated included : in - ceram ( vita zahnfabrik h , rauter gmbh & co. , bad sckingen , germany ) and ips empress ( ivoclar vivadent inc . , schaan , liechtenstein ) . the master casts were first used for measurement of the tooth margin positions in relation to the gingival margin positions on the buccal and lingual aspects before die trimming and then all master cast were trimmed to carry on the rest of the measurements as mentioned below . a specially designed wax cylinder ( 23 mm length and 20 mm width ) the positions of tooth preparation margin in relation to the gingival margin on the buccal and lingual aspects . this was measured before die trimming using williams periodontal probe ( ash ) according to the following criteria : > 2 mm supragingival margin 2 mm supragingival margin level with gingival margin 2 . the total amount of tooth reduction in the buccolingual and mesiodistal planes . 
the measurements were carried out using digital vernier caliper and calculated by deducting the width of the prepared teeth from the unprepared contralateral tooth width in the two planes . it was assessed according to the following criteria : > 3 mm ( overpreparation ) 3 mm>2 mm ( recommended ) 2 mm>1 mm ( recommended ) 1 mm and 0 mm ( underpreparation ) 3 . this was measured using the digital caliper by comparison to contralateral crown height according to the following groups : 4 . the buccal and lingual margin design of tooth preparations ( shoulder , chamfer , feathered or no clear margin ) . the prepared die was held in a vertical position over a graded rotary table , and viewed under the microscope . the graded rotary table that hold the die was then adjusted and turned around until the line overlaps the opposing axial wall . the angle formed between the two positions of the line represents the convergence angle the variables were assessed as if : less than 6 axial convergence between 6 and 10 axial convergence more than 10 axial convergence 6 . the die was held vertically over the graded table and viewed under the microscope . the vertical line of the microscope lens was adjusted vertically to across the internal line angle of the finish line . while keeping the line at the same position and direction , the table moves laterally until the line become tangent to the external surface of the prepared tooth ; the distance which the table moves calculated as finish line depth . the depth of the finish line was assessed as if : less than 0.5 mm depth between 0.5 and 1.5 mm depth 7 . depth continuity of the finish line was measured using the toolmaker microscope in the 4 aspects of the prepared die . the die was held horizontally over the graded table and viewed under the microscope . the m - d axis of the prepared tooth was held perpendicular to the graded table . the external x - y axis of the microscopic lens was adjusted to across the labial surface of the die . if the x - axis kept in close contact from the cervical third to the incisal third of the prepared die this indicated non - anatomical ( 1 plane ) labial tooth reduction and if the x - axis kept in close contact till middle third only and then deviated from the labial surface , this indicated anatomical labial ( 2 planes ) . frequency tables were used to describe criteria of aspects of preparations examined on dies and numbers of preparations which followed the identified criteria . to investigate the inter - examiner reproducibility of the scoring systems , a random subsample of dies ( n=20 ) was selected and re - scored after 7 days and the results compared . of the total 100 casts examined in this study , 62 casts containing 141 dies ( 67% ) were preparations for ips empress , while 38 casts containing 67 dies were preparation for in - ceram ( 33% of the total dies examined ) . a supragingival finish line was noticed in 12 dies ( 6% ) , while 53% ( 110 dies ) demonstrated equi - gingival margins , 36% ( 76 dies ) had subgingival margins and 5% ( 10 dies ) demonstrated no clear margin ( table 1 ) . tooth preparation margin positions in relation to the gingival margin position on the buccal / labial and lingual / palatal aspect * percentage ( % ) within each group . 
of the total 100 casts examined in this study , 62 casts containing 141 dies ( 67% ) were preparations for ips empress , while 38 casts containing 67 dies ( 33% of the total dies examined ) were preparations for in - ceram . a supragingival finish line was noticed in 12 dies ( 6% ) , while 53% ( 110 dies ) demonstrated equi - gingival margins , 36% ( 76 dies ) had subgingival margins and 5% ( 10 dies ) demonstrated no clear margin ( table 1 ) .
table 1 . tooth preparation margin positions in relation to the gingival margin position on the buccal / labial and lingual / palatal aspects . * percentage ( % ) within each group . rbc = resin - bonded all - ceramic crowns .
it was possible to measure the total reduction in the buccolingual and mesiodistal planes and the incisal reduction in 55 dies ( 39% ) of the 141 ips empress preparations , and in 39 of the 67 in - ceram dies ( 58% ) . of the 94 suitable dies , 54% demonstrated overpreparation ( > 3 mm ) , 33% exhibited the recommended depth of preparation ( between 2 mm and 3 mm ) and 13% showed underpreparation .
table 2 . total amount of tooth tissue reduction in the buccolingual and mesiodistal planes of the rbc tooth preparations . rbc = resin - bonded all - ceramic crowns .
twenty percent ( 42 dies ) of the 208 dies demonstrated a shoulder finish line , while a chamfer margin design was noticed in 39% . twenty - nine percent and 12% of samples had either a feathered or no clear margin design , respectively ( table 3 ) .
table 3 . buccal / labial and lingual / palatal margin designs for the resin - bonded all - ceramic crown ( rbc ) tooth preparations .
table 4 shows that of the 94 dies , 18% demonstrated underpreparation incisally ( < 1 mm ) . only 17% of all rbc preparations were found to follow the recommended anatomical labial preparations .
table 4 . amount of incisal reduction of the tooth preparations . rbc = resin - bonded all - ceramic crowns .
table 5 shows the degree of axial convergence angle between opposing walls of the rbc tooth preparations . seventy - one percent ( 148 dies ) exceeded the recommended angle , in a range between 21 and 28 degrees .
table 5 . axial convergence angle between opposing walls of the prepared dies . rbc = resin - bonded all - ceramic crowns .
forty - three percent of tooth preparations were found to have the recommended depth of the finish line and 30% were found to have an underprepared finish line depth ( table 6 ) . the results also showed that 83% of all rbc preparations had non - anatomical labial preparations , while only 17% had the recommended anatomical labial preparations ( table 6 ) .
the kappa statistics quantifying the inter - examiner variability for the various measurements performed showed 130 variables out of 160 having the same scores between the first and second measurement readings , while 30 variables showed dissimilar scores between the first and second readings .
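the inter - examiner check above is reported as raw agreement ( 130 of 160 paired scores identical ) . a chance - corrected statistic such as cohen 's kappa could be computed from the paired scores as sketched below ; the example scores are hypothetical , since the per - category cross - tabulation is not reported here .

    # A sketch (with hypothetical paired scores) of how Cohen's kappa could be
    # computed for the inter-examiner re-scoring described above. The study
    # reports only raw agreement (130/160); the category labels below are
    # illustrative assumptions.
    from collections import Counter

    def cohens_kappa(scores_a, scores_b):
        assert len(scores_a) == len(scores_b)
        n = len(scores_a)
        observed = sum(a == b for a, b in zip(scores_a, scores_b)) / n
        freq_a = Counter(scores_a)
        freq_b = Counter(scores_b)
        # Chance agreement: product of marginal proportions, summed over categories
        expected = sum(freq_a[c] / n * freq_b[c] / n
                       for c in set(scores_a) | set(scores_b))
        return (observed - expected) / (1 - expected)

    # Hypothetical first and second scorings of the same dies (margin design)
    first  = ["shoulder", "chamfer", "chamfer", "feathered", "shoulder", "chamfer"]
    second = ["shoulder", "chamfer", "shoulder", "feathered", "shoulder", "chamfer"]
    print(round(cohens_kappa(first, second), 2))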
the use of rbc crowns has increased and there appears to be a wide variety of clinical indications , particularly in situations in which a minimal preparation is indicated or in which there is already tooth substance loss13 . currently , crowns such as in - ceram ( vivadent ) , ips empress ( ivoclar ) and others , bonded with resin cement , can provide acceptable service when they are carried out correctly . the higher percentage of ips empress preparations compared with in - ceram found in this study may be due to the fact that dental technicians tend to work with ips empress more than with in - ceram crowns , as the fabrication of ips empress is less time consuming and it gives the same aesthetic result at the same cost of fabrication , as pointed out verbally by dental technicians . ideally , the finish line position should be placed supragingivally on sound tooth tissue , but in reality this is often not possible . sometimes aesthetics dictates that a margin be placed subgingivally , and in these situations it should extend by 0.5 - 1 mm , but certainly not more than half the depth of the gingival sulcus , to ensure the epithelial attachment is not compromised . the placement of rbc margins subgingivally is critical because of the possibility of microleakage if the margins are placed either on dentin or cementum . it has been shown that the bonding of the luting material will be compromised if moisture control is inadequate , which is the case in subgingival preparations . however , subgingival finish lines frequently are required in cases of inadequate occluso - cervical dimension needed for retention and resistance form , to extend beyond dental caries , fractures , or erosion / abrasion , to produce a cervical crown ferrule on endodontically treated teeth , and to improve the aesthetics of discolored teeth and certain restorations . in the present study , 36% of the rbc tooth preparations had a subgingival margin on the labial aspect , which is not recommended for resin - bonded crowns . several studies have also shown that subgingival finish lines for rbcs were produced in general dental practice . sufficient axial reduction is important to provide structural durability for the restoration , while overreduction should be avoided . the use of depth orientation groove burs would be a useful method to ensure adequate axial tooth reduction . in addition , overpreparation of the teeth negates the advantages that rbcs demonstrate and may lead to loss of pulpal vitality and periradicular pathology . however , underpreparation will result in inappropriate labial and palatal contours , leading to compromised aesthetics . overbulking of the rbcs at the gingival margin may be necessary to allow for adequate material strength , which results in a poor emergence profile ; an improper emergence profile has been considered a significant etiologic factor in the marginal inflammation associated with crowns . insufficient labial reduction , particularly near the finish line , may also result in distortion of the ceramic during fabrication and clinical service , which leads to poor marginal adaptation , debonding , and long - term cement failure , all of which have been cited as major factors in the failure of ceramic crowns . in the present investigation , the method used to measure the total amount of tooth reduction merely took into account the total amount of tooth reduction in one plane and not the individual axial wall preparation depth ; therefore , one aspect of the tooth may have been appropriately prepared whilst the other may have been incorrect . however , the measurements still served as a guide for axial wall tooth reduction . ninety - four ( 45% ) samples were suitable for analysis of the total reduction in the buccolingual and mesiodistal planes ; the remainder were not used due to the lack of an unrestored contralateral tooth or a missing contralateral tooth . as much as 54% of the rbcs showed overpreparation , with a tendency to overprepare the teeth in the mesiodistal plane rather than the buccolingual plane . the results of this study also indicated that 12% of samples showed underpreparation of the axial walls , which may result in a bulbous restoration with plaque retention leading to periodontal problems and/or an unsightly emergence profile . several designs have been advocated to optimize aesthetics , minimize marginal openings , and reduce stress concentration at the marginal aspect .
it has been reported that strong correlations exist between finish line designs and all - ceramic crown strength . crowns with a chamfer finish line were significantly weaker than those with a shoulder finish line . the flat shoulder margin provides the required aesthetics and the marginal stability necessary during porcelain firing , and it is the most suitable labial finish line for anterior all - ceramic crowns . however , when resin cement was used with internally etched all - ceramic crowns , there was no significant strength reduction in a laboratory study or in a longitudinal retrospective clinical evaluation of all - ceramic crowns compared with non - etched all - ceramic crowns . therefore , a shoulder or definitive chamfer finish line is recommended for all - ceramic crowns that are not etched and bonded to the teeth . beveled or feathered margins can lead to a higher chance of ceramic fracture during seating or at some point after cementation . the technician , in an attempt to strengthen the margin , may overbuild the rbc , which may result in a bulbous margin with plaque retention leading to periodontal problems and/or an unsightly emergence profile . with regard to the marginal design , 59% of the samples demonstrated a shoulder or chamfer margin design on the buccal / labial and lingual / palatal aspects , while 29% and 12% had either a feathered or no clear margin design , respectively . in a similar study , 84% of the buccal and 79% of the lingual margins had shoulder or chamfer preparations , while 16% on the buccal and 21% on the lingual aspects demonstrated a feathered margin design or no detectable margin . proper incisal reduction is important , as it will improve subsequent preparation access and helps to ensure correct proportioning of the axial reduction planes . it has been proposed that incisal / occlusal surface reduction should be 2 mm , because this depth permits the development of normal morphology and has been identified as a safe and reasonable amount to remove from the tooth . it is also important to provide an adequate bulk of porcelain in areas exposed to heavy loading . the number of dies suitable for measurement of incisal reduction was 55 for empress and 39 for in - ceram ; the other dies were unsuitable for this test due to the absence of the contralateral tooth or the presence of a restoration on the contralateral tooth . the results showed that 18% of the empress and 18% of the in - ceram tooth preparation dies demonstrated underpreparation occlusally . sutton and mccord ( 2001 ) found that tooth preparations for low fusing , chameleon fortress and empress rbcs demonstrated underpreparation occlusally . poor contour of the restoration may in addition result in an unaesthetic restoration , since the eye 's perception of tooth form is of a higher order than that of tooth shade . non - anatomical preparations may also result in an overcontoured or " bulky " restoration and may also result in periodontal problems unless oral hygiene standards are exceptionally high . regarding the buccal / labial planes of the rbc tooth preparations , 83% of all rbc preparations had non - anatomical preparations , while only 17% had the recommended anatomical labial preparations . a similar study found that the majority of teeth ( 56% ) were prepared with respect to tooth morphology , while 42% were prepared with only one plane of preparation on the buccal / labial aspect .
it is helpful to know that many tapered burs have a 5 - 6° convergence angle , which can be used to survey the preparation taper by holding the handpiece in the same plane for all axial surfaces . resin - bonded crowns are an important exception to the rule of minimizing taper , especially rbcs , which may benefit from having tapers of about 20° to avoid generating high hydrostatic seating pressures during luting , which can result in crown fracture . therefore , it has been proposed that total occlusal convergence ideally should range between 10 and 20 degrees . the results of the axial convergence angle between opposing walls for rbcs showed 29% of the rbc tooth preparations having the recommended axial convergence angle ( 5° to 10° ) , while 71% exceeded the recommended angle , in a range between 21 and 28 degrees . as a general rule when using porcelain crowns , adequate clearance is required to achieve good aesthetics . this is achieved with a shoulder or heavy chamfer of 0.8 - 1 mm width for rbcs . however , shoulders of these depths may compromise tooth strength and pulp health , especially for small teeth such as mandibular incisors . a similar problem occurs on teeth with long clinical crowns because of the narrowing of their diameter in the cervical region . therefore , the recommended finish line depths for all - ceramic crowns have ranged from 0.5 to 1.0 mm . regarding the finish line depth of the rbc preparations , the largest proportion of rbc tooth preparations ( 43% ) had the recommended depth ( 0.5 mm to 1.5 mm ) of the finish line . this result shows that gdps in jordan are aware of the proper finish line depth required for all - ceramic crowns . thirty percent had an underprepared finish line depth , which may be related to the finding that 29% of the preparations in the present investigation had a feathered margin design of the finish line . twenty - seven percent had an overprepared finish line depth , which , as mentioned above , will lead to poor aesthetic outcomes and may compromise tooth strength and pulp health . an incomplete and/or non - uniform shoulder causes the porcelain in the cervical areas to vary significantly in thickness , with a potential for premature fracture during fabrication , in the process of seating , or after cementation . seventy - seven percent of rbc tooth preparations had a continuous finish line depth , while 23% had a non - continuous finish line depth . finally , from the results obtained in this study , it has been shown that there are wide variations in the preparations of rbc crowns for anterior teeth in general dental practice in jordan . the results also showed the most common clinical errors in the preparation of anterior teeth for all - ceramic crowns among gdps in jordan and focused on the most accepted recommendations needed when preparing anterior teeth for all - ceramic crowns , which will lead to increased life expectancy of the prosthesis , enhanced clinical performance , and increased procedural efficiency . long - term studies in general dental practice , where the majority of the rbc restorations are placed , are still needed . graduate education for gdps who were not trained to use rbcs as undergraduates is probably necessary to improve knowledge of the required preparation designs . this study showed that the rbc preparations in the work of the jordanian clinicians investigated varied widely .
therefore , under the tested conditions , the following conclusions may be drawn :
1 . gdps followed the guidelines for rbc preparation finish lines by preparing either a shoulder or a chamfer finish line ; however , there were a number of cases with either no detectable margin or feathered margins .
2 . gdps were found to locate the finish line margins subgingivally , with no clear margins in some cases .
3 . the majority of cases showed overpreparation that did not take tooth morphology into consideration , so that most preparations were made in one plane of reduction .
4 . the majority of dies examined exceeded the recommended axial convergence angle , in a range of 21 to 28 degrees .
5 . the majority of dies examined had the recommended finish line depth and showed continuity of the finish line depth .
relevant guidelines for the preparations of rbcs are not being entirely adhered to in private practice in jordan .
objectives : to investigate if general dental practitioners ( gdps ) in private practice in jordan follow universal guidelines for preparation of anterior teeth for resin - bonded all - ceramic crowns ( rbcs ) .
material and methods : a sample ( n=100 ) of laboratory models containing 208 tooth preparations for ips empress and in - ceram , featuring work from different gdps , was obtained from 8 commercial dental laboratories . aspects of preparations were quantified and compared with accepted criteria defined following a review of the literature and the recommendations of the manufacturers ' guidelines .
results : subgingival margins on the buccal aspect were noticed in 36% of the preparations ; 54% demonstrated overpreparation , with a tendency to overprepare the teeth on the mesiodistal plane more than the buccolingual plane . twenty percent of samples presented a shoulder finish line while a chamfer margin design was noticed in 39% . twenty - nine percent and 12% of samples had either a feathered or no clear margin design , respectively . incisal underpreparation was observed in 18% of dies of each type . only 17% of all preparations were found to follow the recommended anatomical labial preparations , while 29% of the rbc preparations were found to have the recommended axial convergence angle . in total , 43% of preparations were found to have the recommended depth of the finish line .
conclusions : it was found that relevant guidelines for rbc preparations were not being fully adhered to in private practice in jordan .
FORT PIERCE, Fla. -- A federal jury has cleared a deputy of using excessive force in the 2014 shooting death of Gregory Hill Jr. and awarded $4 to Hill's family, a family lawyer said. The jury, which was weighing a lawsuit filed by Hill's family, ruled last week that St. Lucie County Deputy Christopher Newman did not violate Hill's civil rights, reports CBS affiliate WPEC-TV of West Palm Beach, Florida. And it placed the bulk of the blame -- 99 percent -- on Hill. It found Sheriff Ken Mascara one percent liable for the killing. Help us tell his story. Help us help his kids who the jury awarded $1 to. His daughter saw everything from across the street. Police have repeatedly change the story. https://t.co/lC51NZiJFF — John M. Phillips (@JohnPhillips) May 31, 2018 WPEC-TV reports that Newman shot and killed Hill, a 30-year-old father of three children, in January 2014 after two officers responded to a complaint that Hill was playing loud music at the family's home, court documents show. The sheriff's office said Hill, a Coca-Cola warehouse employee, was carrying a gun and was drunk when deputies got to the home. Newman testified that he fired his handgun four times through the home's garage door after seeing Hill with a gun, and after the garage door closed. A single bullet fatally struck his head. An unloaded gun was later found in Hill's back pocket. The sheriff's office said in a statement that Newman was "placed in a very difficult situation" and "made the best decision he could for the safety of his partner, himself, and the public given the circumstances he faced." "We appreciate the jury's time and understanding and wish everyone involved in this case the best as they move forward," the statement said. Attorney John Phillips, who represents Hill's family, said his clients will likely appeal, WPEC-TV reported. Editor's note: This story has been corrected to indicate the Hill family was awarded $4. ||||| For more than four years, questions swirled about the shooting death of Gregory Vaughn Hill Jr. at his home in Fort Pierce, Fla. After all, there were only three witnesses to how the entire episode unfolded: two St. Lucie County sheriff's deputies and Mr. Hill. Mr. Hill, a 30-year-old African-American, was fatally shot by a white sheriff's deputy who had responded to a noise complaint about music Mr. Hill had been playing in his garage. Toxicology reports showed Mr. Hill was drunk at the time. And after a brief encounter with the deputies, he was discovered dead inside the garage with a gun in his back pocket; the deputies said he had been holding it during their confrontation, though that claim is in dispute. Mr. Hill had been shot three times by one of the deputies, Christopher Newman. Among other things, a federal jury hearing a wrongful-death lawsuit brought by Mr. Hill's family was asked to decide whether his constitutional rights had been violated and whether his estate should be awarded damages. How much, jurors were asked, were the pain and suffering of Mr. Hill's three children worth? Last week, the jurors delivered their verdict. Deputy Newman had not used excessive force, they concluded, but the St. Lucie County sheriff, Ken Mascara, had been ever so slightly negligent given Deputy Newman's actions. The jury awarded $4 in damages: $1 for funeral expenses and $1 for each child's loss.
– The family of a Florida man who was shot and killed in his garage by a police deputy has been awarded just $4 in damages, reports CBS. Gregory Hill was playing music in his garage on an afternoon in 2014 when a neighbor called the police to complain, the New York Times reports. Deputy Christopher Newman and another officer responded to the call and confronted 30-year-old Hill, a warehouse employee, in the garage of his Fort Pierce home. The police said Hill was holding a gun during the confrontation, and toxicology reports later concluded Hill was intoxicated at the time. After a less than two-minute encounter, the garage door (which had been opened for the deputies) was closed again and Newman shot at Hill through the door four times, striking him twice in the abdomen and once in the head. Hill was found dead inside the garage with an unloaded gun in his back pocket. The family filed a wrongful death suit and requested damages for the pain and suffering of Hill’s three children. Last week, a federal jury delivered a verdict that baffled them. It found Newman had not used excessive force, but that St. Lucie County Sheriff Ken Mascara had been negligent (due to Newman's actions)—but only minimally so. Specifically, the sheriff's office was found to be just 1% at fault for Hill's death, so the damages ($1 for funeral expenses and $1 for each child’s suffering) were reduced to 4 cents from $4. The family's lawyer says, due to Hill's intoxication, the judge will further reduce the award from 4 cents to nothing. “I don’t get it,” the family's lawyer tells the Times. "I think [the jurors] were trying to insult the case. Why go there with the $1? That was the hurtful part."
Firefighters were left "seriously unimpressed" after spending an hour freeing a YouTuber whose head had been "cemented" into a microwave. Jay Swingler, 22, became stuck after filling the oven with Polyfilla and then sticking his head, which was wrapped in a plastic bag, into it. The unplugged microwave was being used as a mould in the stunt, but the mixture soon set and friends became concerned as Swingler struggled to breathe through the plastic tube he was using for air. The group had already spent an hour and a half trying to free the YouTuber when five firefighters from the West Midlands Fire Service arrived at the address in Fordhouses, Wolverhampton, on Wednesday. ||||| An internet "prankster" had to be freed by firefighters after cementing his head inside a microwave oven. West Midlands Fire Service said it took an hour to free the man after they were called to a house in Fordhouses, Wolverhampton. Friends had managed to feed an air tube into the 22-year-old's mouth to help him breathe, the service said. Watch Commander Shaun Dakin said the man "could quite easily have suffocated or have been seriously injured". Mr Dakin said: "He and a group of friends had mixed seven bags of Polyfilla which they then poured around his head, which was protected by a plastic bag inside the microwave. "The oven was being used as a mould and wasn't plugged in. The mixture quickly set hard and, by the time we were called, they'd already been trying to free him for an hour and a half." Crews from the technical rescue team helped with taking the microwave apart, he added. "It took us nearly an hour to free him," added Mr Dakin. "All of the group involved were very apologetic, but this was clearly a call-out which might have prevented us from helping someone else in genuine, accidental need."
– There's stupid, there's extremely stupid, and then there's "cementing your head inside a microwave" stupid. Firefighters in Wolverhampton, England, say they were "seriously unimpressed" after five of them had to spend an hour dealing with a case of the latter Thursday, the BBC reports. The West Midlands Fire Service says a 22-year-old man it describes as a "YouTube prankster" and his friends poured several bags of a fast-hardening product into an unplugged microwave oven they were using as a mold around the man's head, which was protected by a plastic bag. Firefighters say that by the time they were called, the man's friends had been trying to free him for 90 minutes and had given him an air tube to help him breathe. Firefighters had to call a technical rescue team for help freeing the man, which involved taking apart the microwave and very carefully removing the cement, using a screwdriver. "As funny as this sounds, this young man could quite easily have suffocated or have been seriously injured," says Watch Commander Shaun Dakin, per the Telegraph. "All of the group involved were very apologetic, but this was clearly a call-out which might have prevented us from helping someone else in genuine, accidental need."
non - zero cosmological constant ( @xmath2 ) models have found increased popularity ( e.g. , ostriker & steinhardt 1995 ) owing to the problem of the age discrepancy implied by the latest hubble space telescope ( hst ) measurements of the hubble constant : the ages of globular clusters are apparently larger than the age of the universe predicted by the standard @xmath5 model which is favored by standard inflationary theory ( e.g. , freedman et al . 1994 ; for a brief overview of these arguments , see rees 1996 ) . in order to measure @xmath2 , it has been suggested that strong gravitational lenses might be used , i.e. isolated galaxies or clusters of galaxies for which the gravitational potential results in multiple imaging of a background object ( paczynski & gorski 1981 ; alock & anderson 1986 ; gott , park , & lee 1989 ) . following these suggestions , the use of the lens number counts ( or the optical depth ) was advocated since this is very sensitive to @xmath2 ( turner 1990 ; fukugita , futamase , & kasai 1990 ; fukugita et al . maoz & rix ( 1993 ) and kochanek ( 1995 ) have applied this method , obtaining upper limits of @xmath8 . kochanek ( 1992 ) has also suggested the lens redshift method , which , compared with the method based on lens counts , requires less presumptions about the properties of lenses and sources , properties which might bias the lens counts considerably ( helbig & kayser 1995 ; kochanek 1992 ; fukugita & peebles 1995 ) . taking into account the selection effects which were neglected in his early study ( kochanek 1992 : for a discussion on this selection effect , see helbig & kayser 1995 ) , kochanek ( 1995 ) finds @xmath9 at @xmath10 with a peak at @xmath11 . however , the estimated value of @xmath2 in kochanek ( 1995 ) is sensitive to the detection threshold which is not well understood , and thus it can not be considered very seriously at the present stage . it has been recognized that the mean splitting of the lensed images alone is useful for studies of the dynamical properties of lens galaxies , but not for the measurement of @xmath2 ( turner , ostriker , & gott , hereafter tog84 ; fukugita et al . however , when the mean separation is used together with other information such as the lens redshift , the lens magnitude , and the velocity dispersion of the lens galaxy , then the mean separation does become sensitive to @xmath2 ( paczynski & gorski 1981 ; gott et al . 1989 ; kochanek 1992 ; miralda - escude 1991 ) . in this _ letter _ , we will try to measure @xmath2 using a method which we call the `` lens parameter method '' , which is basically similar to those discussed in the above references . the commonly observed parameters for gravitational lenses are the lens redshift ( @xmath12 ) , the source redshift ( @xmath13 ) , the mean deflection of the lensed object ( or similarly the critical radius ( @xmath14 ) ) , the lens magnitude ( @xmath15 ) , and the source magnitude ( @xmath16 ) . for some systems , one or two of these observational parameters may be missing . how sensitive is @xmath14 to @xmath0 and @xmath2 for a given set of @xmath12 , @xmath13 , and @xmath15 ? to calculate @xmath14 , we will adopt the singular isothermal sphere ( sis ) model for the lens , along with the filled - beam approximation ( see section 4 ) , and the faber - jackson relation ( faber & jackson 1976 ) . the faber - jackson relation relates the velocity dispersion ( @xmath17 ) of e / s0 galaxies to their luminosities ( l ) : @xmath18 . 
here , we adopt @xmath19 , @xmath20 and @xmath21 , taken from kochanek ( 1992,1995 ) . then @xmath22 can be expressed as ; @xmath23 where @xmath24 is the angular size distance between the redshifts @xmath25 and @xmath26 in mpc , m is the total apparent magnitude of the lens galaxy , k(z ) and e(z ) are the k - correction and the evolutionary correction , respectively , for the lens galaxy . the ( e+k ) correction is important , and to calculate it we will use the 1 gyr burst model of bruzual & charlot ( 1993 ) at the formation redshift @xmath27 . this ( e+k ) correction is consistent with the results from the hst medium deep survey ( mds ) on the evolution of the luminosity function of elliptical galaxies ( i m et al . 1996 ) , which shows a brightening in luminosity by about 1 magnitude looking back to @xmath28 . 1 shows the @xmath29 relation for the strong gravitational lens systems hst 12531 - 2914 and hst14176 + 5226 , taking parameters from ratnatunga et al . the two curves show the model predictions under the adoption of different cosmological parameters , and the horizontal line shows the observed value ( the source redshift is unknown ) . the value of @xmath30 is quite sensitive to @xmath2 when values are known for @xmath12 , @xmath13 , and @xmath15 . however , the uncertainty in the prediction is about a factor of @xmath31 , which arises mainly from the uncertainties in the faber - jackson relation and in the apparent lens magnitude . hence , a single lens system such as hst 12531 - 2914 can not be used alone to measure @xmath2 . in order to set a useful limit on @xmath2 with this method , a sample of at least five lenses is required ( e.g. , see kochanek 1992 ) . in order to combine the information on cosmological parameters from all available lenses , we therefore construct a likelihood function which is the product of the probability of each lens having the observed value of @xmath30 for the given values of @xmath13 , @xmath12 , @xmath15 and the cosmological parameters . this probability @xmath32 is defined as , @xmath33 where @xmath34 is the dispersion in the predicted @xmath35 due to the uncertainty arising from the faber - jackson relation , together with other minor uncertainties , @xmath36 is the uncertainly in @xmath12 , and @xmath37 is the uncertainty in @xmath15 . @xmath38 is the gaussian function with the mean of @xmath39 and the dispersion of dx . we adopt @xmath40 ( or in terms of magnitude , @xmath41 ) which is a combination of the uncertainties arising from the faber jackson relation ( @xmath42 : de zeeuw & franx 1991 ) , @xmath43 ( @xmath44 : marzke et al . 1994 ; loveday et al . 1992 ) , and the e+k correction ( @xmath45 : i m et al . 1996 ) . when @xmath13 is not available ( hst12531 - 2914 ) , we also integrate eq.(2 ) over @xmath13 , assuming a uniform distribution in redshift space . finally , the likelihood function can be written @xmath46 where @xmath47 is the normalized probability of eq.(2 ) . we did not adopt the @xmath48 factor ( hereafter tog factor ) which was suggested by tog84 in order to account for the possible difference between the velocity dispersion of the underlying dark matter and the luminous material . recent studies show that this factor is not necessary ( kochanek 1993,1994 ; breimer & sanders 1993 ; franx 1993 ) . independently , we also checked the necessity of the tog factor by considering the mean image splittings ( see section 4 ) . 
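as an illustration of the calculation behind the lens parameter method , the sketch below evaluates the standard singular - isothermal - sphere einstein ( critical ) radius , with the velocity dispersion taken from a faber - jackson scaling , using filled - beam angular size distances in a spatially flat universe . the numerical coefficients actually adopted in the paper appear above only as placeholders ( @xmath19 , @xmath20 , @xmath21 ) , so the parameter values used here ( a fiducial dispersion of 225 km/s , a faber - jackson exponent of 4 , h0 = 70 km/s/mpc ) are assumptions of mine for illustration only , not the values used in the analysis .

    # Illustrative sketch of the "lens parameter method" calculation, NOT the
    # authors' code: predicted SIS critical (Einstein) radius for a given lens
    # redshift, source redshift and velocity dispersion, using filled-beam
    # angular-diameter distances in a spatially flat universe. Parameter values
    # marked "assumed" are my own placeholders, not values from the paper.
    import math

    C_KMS = 299792.458          # speed of light [km/s]

    def E(z, omega_m, omega_lambda):
        return math.sqrt(omega_m * (1 + z)**3 + omega_lambda)

    def comoving_distance(z1, z2, omega_m, omega_lambda, h0=70.0, steps=2000):
        """Line-of-sight comoving distance between z1 and z2 [Mpc] (flat universe)."""
        dz = (z2 - z1) / steps
        integral = sum(dz / E(z1 + (i + 0.5) * dz, omega_m, omega_lambda)
                       for i in range(steps))
        return (C_KMS / h0) * integral

    def angular_diameter_distance(z1, z2, omega_m, omega_lambda):
        """D_A(z1, z2) [Mpc]; valid for a spatially flat universe only."""
        return comoving_distance(z1, z2, omega_m, omega_lambda) / (1.0 + z2)

    def sis_einstein_radius_arcsec(sigma_kms, z_lens, z_source, omega_m, omega_lambda):
        """theta_E = 4*pi*(sigma/c)^2 * D_ls/D_s for a singular isothermal sphere."""
        d_ls = angular_diameter_distance(z_lens, z_source, omega_m, omega_lambda)
        d_s = angular_diameter_distance(0.0, z_source, omega_m, omega_lambda)
        theta_rad = 4.0 * math.pi * (sigma_kms / C_KMS)**2 * d_ls / d_s
        return math.degrees(theta_rad) * 3600.0

    def faber_jackson_sigma(luminosity_ratio, sigma_star=225.0, gamma=4.0):
        """sigma from L/L* via a Faber-Jackson law; sigma_star and gamma are assumed."""
        return sigma_star * luminosity_ratio**(1.0 / gamma)

    # Example: an L* lens at z=0.5 with a source at z=2, for two flat cosmologies
    sigma = faber_jackson_sigma(1.0)
    for om, ol in [(1.0, 0.0), (0.2, 0.8)]:
        theta = sis_einstein_radius_arcsec(sigma, 0.5, 2.0, om, ol)
        print(f"Omega_M={om}, Omega_Lambda={ol}: theta_E = {theta:.2f} arcsec")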
the advantage of this method over the previous methods is the explicit use of the lens magnitude and the e+k correction , of which the latter has been observationally constrained only recently ( i m et al . 1996 ; pahre , djorgovski , & de carvalho 1996 ; bender , ziegler , & bruzual 1996 ; barrientos , schade , & lopez - cruz 1996 ) . these measurements enable us to make a reasonably good estimate of the dynamical properties of each lens galaxy . the probability of each individual lens having its unique configuration can then be calculated based on these individual properties , so that we do not have to use statistical measurements ( e.g. , the luminosity function ) which may decrease the dependence on the cosmological parameters when they are averaged over large numbers of objects . although our method is , in principle , not as sensitive to the value of @xmath2 as is the lens number count method ( e.g. , fukugita et al . 1992 ) , the latter method is possibly subject to greater uncertainties ( see section 4 ) . our method is slightly more susceptible to a small change in one of the input parameters , but , in common with the lens redshift method , we have a smaller number of parameters than the lens count method . in this respect , our method has an edge over the latter . in particular , the properties of lens galaxies at high redshift ( @xmath49 ) are highly uncertain . they could be dusty enough that the result from the lens counts might be biased against the non - zero @xmath2 model ( fukugita & peebles 1995 ) . in contrast , the lens parameter method uses lensing galaxies which lie at @xmath50 ( see section 3 ) , and the method is thus less affected by the unknown properties of high redshift galaxies . gravitational lenses are selected using the following criteria . \1 ) the strong lensing must be caused by a single galaxy lens . for example , we do not include 2016 + 112 in our sample since there are two lensing galaxies in this system . also , we have excluded lens systems which are clearly influenced by strong perturbations due to cluster potentials ( e.g. , 0957 + 561 , b1422 + 231 ) . \2 ) it must be known that the lens galaxy is likely to be elliptical . for example , we do not include b0218 + 357 in our study since there is good evidence that the lensing galaxy is a spiral or a late - type galaxy ( patnaik et al . 1993 ) . \3 ) the apparent magnitude and the redshift of the lens galaxy must be known or estimated to reasonable accuracy . accurate values for @xmath15 and @xmath12 are important for estimates of the dynamical properties of the lens galaxy . \4 ) for lens candidates that do not have a measured value for @xmath13 , we select only those which show distinctive features such as rings or crosses . we find that there are seven strong gravitational lenses that meet these selection criteria in the published literature , including objects found in our hst surveys ( table 1 ) . b1422 + 231 is excluded from this list because of the possible cluster perturbation as well as the ambiguity in the lens redshift ( @xmath51 from hammer et al . 1995 vs. @xmath52 from impey et al . 1996 ) . for mg0414 + 0534 , there have been speculations that the source redshift of the system is @xmath53 ( burke 1990 ; kochanek 1992 ) , but it was later established to be @xmath54 ( lawrence et al . 1995 ) , suggesting that the @xmath55 measurement pertains to the lens galaxy ( surdej & soucail 1994 ) . 
we have analyzed the archived hst observations of this system and find a preliminary result of @xmath56 for the lens galaxy ( ratnatunga et al . 1996 ) , suggesting that @xmath57 , consistent with the previous estimates of @xmath58 , and hence we will adopt @xmath57 for this system . finally , we have subtracted a few tenths of a magnitude from some of the quoted lens magnitudes in the literature , in order to correct for the total apparent magnitude . when the uncertainty in the lens magnitude is not quoted in the relevant reference , errors of about 0.3 magnitude are assigned to these lens galaxies . 2 , we present the relative likelihood of our measurement against @xmath0 for two cases which are of cosmological interest : i ) @xmath4 and ii ) @xmath59 . both likelihood functions are normalized with the maximum likelihood of case ( i ) , and direct comparison of cases ( i ) and ( ii ) is possible using fig.2 . when a flat universe is assumed ( case ( i ) ) , we find that @xmath60 , and we exclude the @xmath5 model with 97 % confidence . also , a universe with @xmath61 is excluded at the 95 % confidence level . if @xmath59 is assumed ( case ( ii ) ) , then @xmath62 is favored . the difference in the likelihood function between a flat universe with @xmath63 and an open universe with @xmath64 is about 0.5 1 . hence , a flat universe with non - zero @xmath2 is favored over an open universe at 68 % 82 % confidence . our result is only marginally consistent with the previous estimate of @xmath65 based on the lens counts which strongly favored the zero @xmath2 flat universe ( kochanek 1995 ; maoz & rix 1993 ) . to see what might have caused the disagreement between our result and the previous results , we have investigated the possible systematic errors in our analysis and these are listed below : \1 ) the filled - beam approximation vs. the empty beam approximation to relate redshift to distance , the filled beam approximation assumes that light rays propagate through smoothly averaged spacetime . in reality , spacetime is inhomogeneous , and therefore the filled beam approximation may not be correct ( e.g. , fukugita et al . 1992 ) . to see how our result could be affected by the filled beam approximation , the analysis was repeated adopting another extreme assumption , namely the empty beam approximation . we find that the latter approximation does not change our result significantly , but strengthens our finding slightly in favor of the non - zero @xmath2 model . \2 ) singular isothermal sphere vs. softened isothermal sphere . we can also assume different mass models for the lens , rather than the sis model . recent studies show that the sis model may be too simple to adequately describe the mass of e / s0s ( lauer 1988 ; krauss & white 1992 ) , although the size of the core radius may be small enough to be negligible ( wallington & narayan 1993 ) . if the softened isothermal sphere is used , the predicted @xmath14 will be a bit smaller than the predicted @xmath30 with the sis model . in fig.1 , this means that the predicted lines need to be shifted down along the @xmath14 axis , making the @xmath5 flat model more inconsistent with the prediction . the adoption of the softened isothermal sphere model will thus strengthen our result . \3 ) morphological misclassification in our analysis , we have assumed that each lens galaxy is an e / s0 . 
this assumption may be wrong , and to estimate the bias introduced by treating a spiral galaxy lens as an elliptical galaxy lens , we repeated our analysis with the inclusion of one known spiral lens system ( b0218 + 357 ) treated as an e / s0 lens . this caused the result to be strongly biased in favor of the @xmath59 model , because of the small predicted @xmath30 of the spiral lens system , a result similar to the issue discussed in ( 2 ) . thus , if one of the seven lenses we used was actually a spiral galaxy rather than an elliptical , then the correction of it would only strengthen our result . \4 ) wrong lens magnitude because the lens galaxy is much fainter than the lensed object in some cases , there is a possibility that the lens magnitudes are not well determined . systematic overestimate of the lens magnitudes by more than 0.6 magnitude can bias the result against the @xmath66 model . recent hst observations provide clues as to the accuracy of the ground - based estimates of lens magnitudes . the preliminary result by falco ( 1995 ) from the hst observation of @xmath67 gives an aperture magnitude of @xmath68 for the lens galaxy , while the original measurement by surdej et al . ( 1987 ) is r=19 . for mg0414 + 0534 , our preliminary analysis of the hst observation shows @xmath69 for the lens galaxy ( ratnatunga et al . 1996 ) , agreeing with the previous estimate of @xmath70 from the ground ( schechter & moore 1993 ) . on the other hand , impey et al . ( 1996 ) find @xmath71 in v for the lens galaxy of b1422 + 231 . the ground based estimate is @xmath72 for this object ( yee & ellingson 1994 ; yee 1995 ) . at @xmath73 , @xmath74 for e / s0 galaxies , thus the observed ground - based lens magnitude for this system disagrees with the hst observation by about 1 magnitude . these three examples may indicate that the early ground - based measurements are not very accurate . errors seem to go both ways , and hence there may be no systematic overestimate of the lens magnitudes . but there are only seven lenses in our sample , and it may be premature to say that there are no systematic errors in the lens magnitudes . \5 ) the tog factor with the tog factor included , we find that the peak of the likelihood function shifts to the region where both @xmath0 and @xmath2 are very small , where our test becomes quite insensitive to the cosmological parameters . however , many studies have shown in different ways that the tog factor is not needed ( kochanek 1995 and refs therein ) . in order to confirm earlier findings , we analyzed the mean image deflections of the known lens systems with the known source redshifts . using criteria ( 1 ) and ( 2 ) described in section 3 , we find that there are 11 lens systems available for this analysis ( see table 1 in keeton & kochanek 1995 ) . for these systems , we calculated the ratio @xmath75 . since @xmath76 is fairly independent of the cosmological parameters ( tog84 ; fukugita et al . 1992 ) , the average of 11 @xmath77 values will be about 1 if the tog factor is not necessary and about 1.5 if the tog factor is appropriate . we find an average value of @xmath78 , confirming that the tog factor is not necessary . in order to test the tog factor independently , the study of strong lenses at low redshift ( @xmath79 ) might be fruitful , since their lens parameters are then insensitive to the cosmological parameters . an optical survey which covers a large fraction of sky ( e.g. 
, sdss ) should be able to find a statistically significant number ( @xmath80 100 ) of such lenses . \6 ) cluster perturbation since elliptical galaxies preferentially live in a cluster environment , the gravitational potential of the lens may include a cluster component . the strong cluster perturbation generally increases the mean image deflection , and hence we tried to exclude such lenses from our study ( see section 3-(1 ) ) . nevertheless , we can not completely exclude the possibility that some of the lenses in our sample include a considerable amount of cluster perturbation . if that has happened , our result could be biased against the @xmath5 universe . to understand the possible contribution to the image splitting from the cluster potential , detailed modeling of the lens systems is desired using high resolution images from the hst , or else radio observations . \7 ) source redshift for hst14176 + 5226 crampton et al . ( 1996 ) have recently published a tentative source redshift for the lens system hst14176 + 5226 . a strong emission line is found at 5324 @xmath81 , along with a possible weak emission feature at 6822 @xmath81 . the strong emission line is very likely to be ly @xmath82 at @xmath83 if the weak emission feature at 6822 @xmath81 is real , and the latter can then be identified as civ 1549 . if the 6822 @xmath81 feature is not real , then the source object could be located at a redshift lower than @xmath84 . if @xmath85 for the hst14176 + 5226 , then the predicted @xmath30 will be reduced . this would bring the peak of the likelihood function toward the large @xmath2 value , strengthening the result in favor of the non - zero @xmath2 model ( see fig . \8 ) e+k correction the adopted e+k correction assumes a formation redshift of @xmath27 with a 1 gyr burst of star formation . we find that the e+k correction is most sensitive to the value of @xmath86 , and insensitive to the other parameters . if we adopt @xmath87 , then the result changes insignificantly towards the zero @xmath2 model . when @xmath88 , the result changes in favor of the non - zero @xmath2 model , and the change is significant when @xmath89 . if @xmath90 , @xmath2 could be as large as @xmath91 . if our result is an overestimate in the value of @xmath2 , then there must have been a large systematic overestimate of the lens magnitudes and/or there are strong cluster perturbations . on the other hand , if the lens counts have led to an underestimate in the value of @xmath2 , then that could have been caused by : i ) the dusty nature of high redshift elliptical galaxies ( see section 2 for more discussion ) , ii ) a decrease in the number density of ellipticals as a function of look - back time , as expected if most elliptical galaxies were created via major merging events ( i m et al . 1996,1997 ; baugh , cole , & frenk 1996 ; kauffmann , charlot , & white 1996 ) , and/or iii ) other uncertainties in the properties of lens galaxies , such as the lf and the dynamical properties of the low mass ellipticals . future hst observations of faint galaxies , as well as the accumulating redshift data from ground based telescopes , will hopefully put stringent constraints on elliptical galaxy evolution at @xmath92 . these data will possibly give us indications as to why the results from the lens counts have strongly favored the zero @xmath2 model while our result strongly rejects the flat universe with @xmath59 . it is noteworthy that neither method strongly rejects the low @xmath93 universe . 
we have described and applied the lens parameter method to measure cosmological parameters using strong gravitational lenses . using seven strong lenses each with an identified lens galaxy , we find that a model universe with @xmath94 and low @xmath93 is favored and that the flat model with @xmath59 is excluded at greater than 95 % confidence . a universe with low @xmath93 and @xmath59 can be marginally excluded with respect to the flat universe with a non - zero @xmath2 at 68 % 82 % confidence . our result is not biased in favor of a non - zero @xmath2 model due to any conceivable systematic errors , except for possible strong perturbations from cluster potentials , and systematic overestimate of the lens magnitudes . future hst observations should uncover new lens systems with measurable lens properties suitable for this kind of study , and they should also provide a better understanding of the known lens systems . we should therefore be able to get a stronger constraint on @xmath2 in the near future . the hst medium deep survey is funded by stsci grants go2684 _ et seqq._. we would like to thank the other members of the medium deep survey team at jhu , especially eric j. ostrander for his efforts on retrieving and reducing the archival hst data . we are grateful to emilio falco for providing the lens magnitudes of 0142 - 100 system . we also thank chris kochanek , howard k. c. yee , joel primack , and stefano casertano for useful discussions and communications , and mark subbarao and the anonymous referee for helpful comments on the manuscript . alock , c. , & anderson , n. 1986 , , 302 , 43 barrientos , l. f. , schade , d. , & lopez - cruz , o. 1996 , , 460 , l89 baugh , c. m. , cole , s. , & frenk , c. s. 1996 , , submitted bender , r. , ziegler , b. , & bruzual , g. , , 463 , l51 breimer , t. g. , & sanders , r. h. 1993 , , 274 , 96 bruzual , g. a. , & charlot , s. 1993 , , 405 , 538 burke , b. f. 1990 , in lecture notes in physics , 360 , gravitational lensing , ed . y. meillier , b. fort , & g. soucail , ( berlin : springer ) , 127 crampton , d. , le fevre , o. , hammer , f. , & lilly , s. j. 1996 , , in press faber , s. , & jackson , r. 1976 , , 204 , 668 falco , e. 1995 , private communication fassnacht , c. d. , et al . 1996 , , in press franx , m. 1993 , in galactic bulges , ed . h. dejonghe & h. j. habing ( dordrecht : kluwer ) 243 freedman w. l. et al . 1994 , nature , 371 , 757 fukugita , m. & peebles , p. j. e. 1995 , princeton preprint fukugita , m. , futamase , t. , kasai , m. , & turner , e. l. 1992 , , 393 , 3 fukugita , m. , futamase , t. , & kasai , m. 1990 , , 246 , 24p gott , j. e. iii . , park , m - g . , & lee , h. m. 1989 , , 338 , 1 helbig , p. , & kayser , r. 1996 , , in press hammer et al . 1995 , , 298 , 737 hewitt et al . 1992 , , 104 , 968 i m , m. , griffiths , r. e. , ratnatunga , k. u. , & sarajedini , v. l. 1996 , , 461 , l79 i m , m. , griffiths , r. e. , & ratnatunga , k. u. 1997 , , submitted impey , c. d. et al . 1996 , , in press kauffmann , g. , charlot , s. , & white , s. d. m. 1996 , , submitted keeton ii , c. r. , & kochanek 1995 , in iau symposium 173 : astrophysical applications of gravitational lensing , ed . kochanek & j.n . hewett , ( dordrecht : kluwer ) 419 kochanek , c. s. 1995 , , submitted kochanek , c. s. 1994 , , 436 , 56 kochanek , c. s. 1993 , , 419 , 12 kochanek , c. s. 1992 , , 384 , 1 krauss , l. m. , & white , m. 1992 , , 394 , 385 kristian , j. et al . 1993 , , 106 , 1330 langston et al . 1989 , , 97 , 1283 lauer , t. 
1988 , , 325 , 49 lawrence , c. r. , cohen , j. g. , & oke , j. b. 1995 , , 110 , 2583 loveday , j. , peterson , b. a. , efstathious , g. , & maddox , s. j. 1992 , , 390 , 338 maoz , d. , & rix , h .- w . 1993 , , 416 , 425 marzke et al . 1994 , , 108 , 437 miralda - escude , j. 1991 , , 370 , 1 myers , s. t. et al . 1995 , , 447 , l5 ostriker , j. p. , & steinhardt , p. j. 1995 , preprint paczynski , b. , & gorski 1981 , , 248 , l101 pahre , m. a. , djorgovski , s. g. , & de carvalho , r. r. 1996 , , 456 , l79 patnaik , a. r. , et al . 1993 , , 261 , 435 ratnatunga , k. u. , ostrander , e. j. , griffiths , r. e. , & i m , m. 1995 , , 453 , l5 ratnatunga , k. u. , et al . 1996 , in preparation rees , m. 1996 , in the cosmological constant and the evolution of the universe , proc . first rescue symposium , ed . m. fukugita ( universal academy press , tokyo ) 1 schechter , p. l. , & moore , c. b. 1992 , , 105 , 1 surdej , j. & soucail , g. 1994 , in gravitational lenses in the universe , ed . j. surdej , d. fraipont - caro , e. gosset , s. refsdal , & m. remy ( liege , universite de liege ) 153 surdej , j. et al . 1987 , nature , 329 , 695 turner , e. l. 1990 , , 365 , l43 turner , e. l. , ostriker , j. p. , & gott , j. r. iii . 1984 , , 284 , 1 wallington , s. , & narayan , r. 1993 , , 403 , 517 weymann , r. j. et al . 1980 , nature , 285 , 641 yee , h. k. c. 1995 , private communication yee , h. k. c. , & ellingson 1994 , , 107 , 28 de zeeuw , t. , & franx , m. 1991 , , 29 , 239
we have identified seven ( field ) elliptical galaxies acting as strong gravitational lenses and have used them to measure cosmological parameters . to find the most likely value for @xmath0 ( @xmath1 ) and @xmath2 , we have used the combined probabilities of these lens systems having the observed critical radii ( or image deflection ) for the measured or estimated values of lens redshifts , source redshifts , and lens magnitudes . our measurement gives @xmath3 if @xmath4 , and the @xmath5 model is excluded at the 97 % confidence level . we also find , at the 68 % ( @xmath6 ) 82 % ( @xmath7 ) confidence level , that an open universe is less likely than a flat universe with non - zero @xmath2 . except for the possibility of strong perturbations due to cluster potentials and the systematic overestimate of the lens magnitudes , other possible systematic errors do not seem to influence our results strongly : correction of possible systematic errors seems to increase the significance of the result in favor of a non - zero @xmath2 model . _ submitted to ap.j feb . 29 1996 , accepted aug . 20 1996 _
the incidence of intertrochanteric fractures has been increasing significantly due to the rising age of modern human populations . generally , intramedullary fixation and extramedullary fixation are the 2 primary options for treatment of such fractures . the dynamic hip screw ( dhs ) , commonly used in extramedullary fixation , has become a standard implant in the treatment of these fractures . the proximal femoral nail ( pfn ) and the gamma nail are 2 commonly used devices in intramedullary fixation . previous studies showed that the gamma nail did not perform as well as dhs because it led to a relatively higher incidence of post - operative femoral shaft fracture . pfn , introduced by the ao / asif group in 1997 , has become prevalent in the treatment of intertrochanteric fractures in recent years because it was improved by the addition of an antirotation hip screw proximal to the main lag screw . however , both benefits and technical failures of pfn have been reported [ 7 - 9 ] . although the effects of pfn and dhs in the treatment of intertrochanteric fractures have been reported , the results and conclusions are not consistent [ 10 - 15 ] . therefore , we conducted this meta - analysis to investigate whether there is a significant difference between pfn and dhs fixation in the treatment of intertrochanteric fractures . our aim was to evaluate clinical results comparing pfn with dhs , including comparison of operative time , intraoperative blood loss , length of incision , postoperative infection rate , lag screw cut - out rate , and reoperation rate . we hypothesized that pfn would be a superior treatment for intertrochanteric fractures compared with dhs . we searched for randomized or quasi - randomized controlled studies comparing the effects of pfn and dhs according to the search strategy of the cochrane collaboration . this included searching of the cochrane musculoskeletal injuries group trials register , computer searching of medline , embase , and current contents , and hand searching of orthopedic journals . the inclusion and exclusion criteria used in selecting eligible studies were : ( 1 ) target population : individuals with intertrochanteric fractures , excluding subtrochanteric and pathological fractures ; ( 2 ) intervention : dhs fixation compared with pfn fixation ; ( 3 ) methodological criteria : prospective , randomized , or quasi - randomized controlled trials ; ( 4 ) duplicate or multiple publications of the same study were not included . data were collected by 2 independent researchers who screened titles , abstracts , and keywords both electronically and by hand ; differences were resolved by discussion . full texts of citations that could possibly be included in the present meta - analysis were retrieved for further analysis . the assessment method from the cochrane handbook for systematic reviews of interventions was used to evaluate the studies in terms of blinding , allocation concealment , follow - up coverage , and quality level . the study quality was assessed according to whether allocation concealment was : adequate ( a ) , unclear ( b ) , inadequate ( c ) , or not used ( d ) . operative time ( min ) , intraoperative blood loss ( ml ) , length of incision , post - operative infection , lag screw cut - out rate , and reoperation rate were the main measures in the studies included , which the present meta - analysis evaluated to compare the effects of pfn and dhs .
we did not undertake a subgroup analysis for different fracture types because not all of the studies included described the fracture types . in each eligible study , the relative risk ( rr ) was calculated for dichotomous outcomes and the weighted mean difference ( wmd ) for continuous outcomes using the software review manager 5.0 , with a 95% confidence interval ( ci ) adopted in both . heterogeneity was tested using both the chi - square test and the i - square test . a significance level of less than 0.10 for the chi - square test was interpreted as evidence of heterogeneity . when there was no statistical evidence of heterogeneity , a fixed - effect model was adopted ; otherwise , a random - effect model was chosen . we did not assess publication bias , due to the small number of studies included . a total of 48 articles comparing pfn and dhs that had been published from 1969 to august 2012 were retrieved : 37 were from medline , 6 from the cochrane library , and 5 from the embase library . among them , 13 trials met the inclusion criteria . after excluding non - randomized controlled trials and retrospective articles , 6 randomized and quasi - randomized controlled trials [ 10 - 15 ] were included ( figure 1 ) . the number of fractures included in a single study ranged from 64 to 206 . three research papers targeted asian patients , and the other 3 targeted caucasians . all studies except 1 had more female than male patients ; 308 fractures were managed with pfn and 361 were managed with dhs . the quality of the 6 studies included was level b because the allocation concealment was unclear according to the evaluation criteria mentioned above ( tables 1 and 2 ) . four studies provided data on operative time . the random - effects model was used because of the statistical heterogeneity ( i - square = 97% ) . the meta - analysis indicated that the operative time for the pfn group was significantly shorter than for the dhs group ( wmd : -21.15 , 95% ci : -34.91 to -7.39 , p=0.003 ) ( figure 2 ) . four studies provided data on intraoperative blood loss . the random - effects model was used because of the statistical heterogeneity ( i - square = 94% ) . the meta - analysis indicated that the intraoperative blood loss for the pfn group was significantly less than for the dhs group ( wmd : -139.81 , 95% ci : -210.39 to -69.22 , p=0.0001 ) ( figure 3 ) . two studies provided data on length of incision . the random - effects model was used because of the statistical heterogeneity ( i - square = 91% ) . the meta - analysis indicated that the length of incision in the pfn group was significantly shorter than in the dhs group ( wmd : -6.97 , 95% ci : -9.19 to -4.74 , p<0.00001 ) ( figure 4 ) . five studies [ 10 - 14 ] provided data on postoperative infection rate . postoperative infection was observed in 6 of the 254 fractures managed with pfn , and in 7 of the 273 fractures managed with dhs . data were pooled by a fixed - effects model , and the meta - analysis indicated an insignificantly higher rate of postoperative infection in the dhs group ( rr : 0.96 , 95% ci : 0.33 to 2.8 , p=0.94 ) ( figure 5 ) . four studies provided data on lag screw cut - out rate . lag screw cut - out was observed in 5 of the 205 fractures managed with pfn and in 7 of the 253 fractures managed with dhs . data were pooled by a fixed - effects model , and the meta - analysis indicated an insignificantly higher rate of lag screw cut - out in the dhs group ( rr : 0.95 , 95% ci : 0.30 to 2.97 , p=0.92 ) ( figure 6 ) .
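as an illustration of the fixed - effects pooling reported above , the sketch below applies a mantel - haenszel relative - risk estimator in python . the per - study 2x2 tables are not reproduced in this text , so the example simply feeds the aggregate infection counts ( 6/254 with pfn vs 7/273 with dhs ) as a single table ; the resulting crude rr and its log - normal confidence interval will therefore not exactly match the published pooled estimate .

    # Sketch of Mantel-Haenszel fixed-effect pooling of relative risks, the kind
    # of calculation Review Manager performs for the dichotomous outcomes above.
    # The per-study 2x2 tables are not reproduced in the text, so the example
    # below feeds the aggregate infection counts (6/254 PFN vs 7/273 DHS) as a
    # single table; the result is a crude RR, not the published pooled estimate.
    import math

    def mantel_haenszel_rr(tables):
        """tables: list of (events_a, total_a, events_b, total_b) per study."""
        num = den = 0.0
        for ea, na, eb, nb in tables:
            n = na + nb
            num += ea * nb / n
            den += eb * na / n
        return num / den

    def rr_ci_single_table(ea, na, eb, nb, z=1.96):
        """Approximate 95% CI for the RR of one 2x2 table (log-normal approximation)."""
        rr = (ea / na) / (eb / nb)
        se_log = math.sqrt(1/ea - 1/na + 1/eb - 1/nb)
        lo = math.exp(math.log(rr) - z * se_log)
        hi = math.exp(math.log(rr) + z * se_log)
        return rr, lo, hi

    rr = mantel_haenszel_rr([(6, 254, 7, 273)])
    print(f"crude RR = {rr:.2f}")
    print("95% CI = ({:.2f}, {:.2f})".format(*rr_ci_single_table(6, 254, 7, 273)[1:]))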
the varus collapse of the head and neck caused by lag screw cut - out or lateral protrusion is one of the most common post - operative complications that lead to surgical failure in treatment of intertrochanteric fractures . the cut - out ( including z - effect ) rates were about 3 - 10% in pfn and dhs . most studies reported that lag screw position might be associated with the rate of cut - out in dhs fixation . cut - out was thought to be caused either by improper lag screw placement in the anterior superior quadrant of the head or by not placing the screw close enough to the subchondral region of the head . another explanation for cut - out is that because the screw is rotationally unstable within the bone when a single lag screw is used , flexion - extension of the limb results in loosening of the bone - screw interface , leading to the secondary cut - out of the screw . the pfn , introduced in 1997 , was designed to overcome implant - related complications and facilitate the surgical treatment of unstable intertrochanteric fractures . biomechanical analysis of pfn showed a significant reduction of distal stress and an increase in overall stability compared with the gamma nail .
despite the mechanical advantages of pfn , lag screw cut - out remains a significant problem , especially in the more unstable fractures . this meta - analysis also found a higher rate of lag screw cut - out in the dhs group , though the difference was not statistically significant . this suggests that the anti - rotation screw of the pfn may not be beneficial enough on its own . however , herman et al . showed that the mechanical failure rate increased from 4.8% to 34.4% when the centre of the lag screw was not in the second quarter of the head - neck interface line ( the so - called safe zone ) ( p=0.001 ) , and that lag screw insertions lower or higher than the head apex line by 11 mm were associated with failure rates of 5.5% and 18.6% , respectively ( p=0.004 ) . they suggested that placing the lag screw within the safe zone could significantly reduce the mechanical failure rate when pfn is used to treat intertrochanteric fractures . pfn , inserted by means of a minimally invasive procedure , allows surgeons to minimize soft - tissue dissection , thereby reducing surgical trauma and blood loss . the results of this meta - analysis also demonstrate that operative time , intraoperative blood loss , and length of incision in the pfn group are significantly less than in the dhs group . therefore , because of its minimal invasiveness , we recommend pfn as a better choice than dhs in the treatment of elderly patients with intertrochanteric fractures . this meta - analysis has several limitations . firstly , the number of studies included and the sample size of patients were quite limited . in addition , the 6 studies were of relatively low quality ( level b ) , which might weaken the strength of the findings . secondly , we did not undertake a subgroup analysis of different fracture types because not all the studies included described the fracture types . furthermore , not all the studies included had long enough follow - up periods , which also reduces the power of our research . in summary , the currently available data indicate that pfn may be a better choice than dhs in the treatment of intertrochanteric fractures .
background : the aim of this meta - analysis was to compare the outcomes of proximal femoral nail ( pfn ) and dynamic hip screw ( dhs ) in treatment of intertrochanteric fractures . material / methods : relevant randomized or quasi - randomized controlled studies comparing the effects of pfn and dhs were searched for following the requirements of the cochrane library handbook . six eligible studies involving 669 fractures were included . their methodological quality was assessed and data were extracted independently for meta - analysis . results : the results showed that the pfn group had significantly shorter operative time ( wmd : -21.15 , 95% ci : -34.91 to -7.39 , p=0.003 ) , less intraoperative blood loss ( wmd : -139.81 , 95% ci : -210.39 to -69.22 , p=0.0001 ) , and a shorter incision ( wmd : -6.97 , 95% ci : -9.19 to -4.74 , p<0.00001 ) than the dhs group . no significant differences were found between the 2 groups regarding postoperative infection rate , lag screw cut - out rate , or reoperation rate . conclusions : the current evidence indicates that pfn may be a better choice than dhs in the treatment of intertrochanteric fractures .
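as a rough cross - check of the pooled reoperation estimate reported above ( rr : 2.03 , 95% ci : 0.79 to 5.23 ) , the sketch below computes a crude relative risk and its 95% confidence interval directly from the summed event counts ( 13 of 172 pfn fractures vs. 7 of 182 dhs fractures ) . this is our own arithmetic , not part of the published analysis ; it ignores the per - study weighting , so the two estimates agree only approximately .

import math

# summed reoperation counts from the text: events / total in each arm
a, n1 = 13, 172   # PFN: reoperations / fractures
c, n2 = 7, 182    # DHS: reoperations / fractures

rr = (a / n1) / (c / n2)                          # crude relative risk
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)    # SE of ln(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"crude RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# prints roughly: crude RR = 1.97, 95% CI 0.80 to 4.81 -- close to the
# study-weighted pooled estimate of 2.03 (0.79 to 5.23) reported above.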
SECTION 1. SHORT TITLE. This Act may be cited as the ``Detroit Growth and Stability Act of 2012''. SEC. 2. FINDINGS. The Congress makes the following findings: (1) The City of Detroit is an essential part of the Nation's economy and, in particular, the Nation's manufacturing sector. (2) Absent decisive action from the Federal Government, the City of Detroit risks bankruptcy and loan default. (3) A bankruptcy or default of the City of Detroit would have broad negative economic consequences on the State of Michigan and the Nation. SEC. 3. DEFINITIONS. In this Act: (1) The term ``city'' means the city of Detroit, Michigan. (2) The term ``State'' means the State of Michigan. (3) The term ``financing agent'' means any agency duly authorized by State law, and approved by the city, to act on behalf or in the interest of the city with respect to the city's financial affairs. (4) The term ``Secretary'' means the Secretary of the Treasury. SEC. 4. LOANS. (a) In General.--Upon written request of a financing agent, the Secretary may make loans to such agent subject to the provisions of this Act and the city and such agent shall be jointly and severally liable thereon. (b) Maturity.--Each such loan shall mature not later than 30 years after the last day of the city's fiscal year in which it was made, and shall bear interest at an annual rate equal to the current average market yield on outstanding marketable obligations of the United States with remaining periods to maturity comparable to the maturities of such loan, as determined by the Secretary at the time of the loan. (c) Prepayment.--The Secretary may not charge any prepayment penalties with respect to any loan made under this Act. SEC. 5. SECURITY FOR LOANS. In connection with any loan made under this Act, the Secretary may require the city and a financing agent and, where the Secretary deems necessary, the State, to provide such security as the Secretary deems appropriate. The Secretary may take such steps as such Secretary deems necessary to realize upon any collateral in which the United States has a security interest pursuant to this section to enforce any claim the United States may have against the city or any financing agent pursuant to this Act. Notwithstanding any other provision of law, Acts making appropriations may provide for the withholding of any payments from the United States to the city, either directly or through the State, which may be or may become due pursuant to any law and offset the amount of such withheld payments against any claim the Secretary may have against the city or any financing agent pursuant to this Act. With respect to debts incurred pursuant to this Act, for the purposes of section 3466 of the Revised statutes (31 U.S.C. 181) the term ``person'' includes any financing agent. SEC. 6. LIMITATIONS. At no time shall the amount of loans outstanding under this Act exceed in the aggregate $500,000,000. SEC. 7. REMEDIES. The remedies of the Secretary prescribed in this Act shall be cumulative and not in limitation of or substitution for any other remedies available to the Secretary or the United States. SEC. 8. FUNDING. (a) Establishment of Fund.--There is hereby established in the Treasury a fund to be known as the ``City of Detroit Growth and Stability Fund'', which shall be administered by the Secretary. The fund shall be used for the purpose of making loans pursuant to this Act. There is authorized to be appropriated to such fund the sum of $500,000,000. 
(b) Administrative Costs.--There are authorized to be appropriated such sums as may be necessary to pay the expenses of administration of this Act. SEC. 9. INSPECTION OF DOCUMENTS. At any time a request for a loan is pending or a loan is outstanding under this Act, the Secretary is authorized to inspect and copy all accounts, books, records, memorandums, correspondence, and other documents of the city or any financing agent relating to its financial affairs. SEC. 10. AUDITS. No loan may be made under this Act for the benefit of any State or city unless the General Accounting Office is authorized to make such audits as may be deemed appropriate by either the Secretary or the General Accounting Office of all accounts, books, records, and transactions of the State, the political subdivision, if any, involved, and any agency or instrumentality of such State or political subdivision. The General Accounting Office shall report the results of any such audit to the Secretary and to the Congress. SEC. 11. TERMINATION. The authority of the Secretary to make any loan under this Act terminates on January 1, 2016. Such termination does not affect the carrying out of any transaction entered into pursuant to this Act prior to that date, or the taking of any action necessary to preserve or protect the interests of the United States arising out of any loan under this Act.
Detroit Growth and Stability Act of 2012 - Authorizes the Secretary of the Treasury to: (1) make loans to a financing agent authorized by the state of Michigan to act on behalf of the city of Detroit, Michigan, for which the financing agent and the city shall be jointly and severally liable; and (2) require the city and the financing agent to provide security for such loans. Limits the aggregate amount of loans outstanding to $500 million. Terminates the authority of the Secretary to make such loans on January 1, 2016. Establishes in the Treasury the City of Detroit Growth and Stability Fund for purposes of making such loans.
for example , let us consider the 1d quantization of the 2d rashba hamiltonian @xcite,@xcite @xmath5),\ ] ] where @xmath6 is the 2d electron momentum , and @xmath7 is the normal to the plane of the system ( axis @xmath8 ) . the size quantization in @xmath9 direction leads to the reduction of the hamiltonian ( [ rh ] ) for the lowest subband : @xmath10 where @xmath11 is the 1d momentum . in this case the vector @xmath12 . the similar hamiltonian arises in cubic crystals with no inversion symmetry from the spin - orbit term in the bulk hamiltonian of dresselhaus @xcite @xmath13 where @xmath14 is 3d electron momentum . the symmetric in all indexes tensor @xmath15 characterizes the anisotropy of the crystal . in the principal crystal axes @xmath15 has the only non - zero component @xmath16 , in the general case @xmath17 , where @xmath18 is the rotation matrix from the frame of reference of crystallographic axes to the laboratory system . the confinement along two directions , say @xmath9 and @xmath8 , converts @xmath19 into the 1d hamiltonian , linear in the momentum @xmath20 . the form of this hamiltonian depends on the orientation of the wire relative to the principal crystal axes . as the result of quantization we obtain @xmath21 where the overline means the averaging with the wave function of the ground state in quantum well . in the considered cases the vector @xmath22 is constant . more general is the situation of curved wire , in which the vector @xmath23 becomes variable in accordance with the change of local direction of the axis @xmath4 . the adiabatic hamiltonian takes form of ( [ 1])-([1 ] ) , where @xmath4 is the coordinate along the wire , @xmath20 is the conjugate momentum ; the symmetrization reestablishes the hermicity . the other factors of appearance of so interaction in the form ( [ 1 ] ) are curvature - induced and torsion - induced so interactions @xcite . in the particular case of a curved wire with axially symmetrical cross - section we have @xmath24 where @xmath25 denotes the binormal to the wire , @xmath26 is the curvature , @xmath27 is the effective so coupling constant of bulk crystal @xcite , @xmath28 is the matrix element on the transversal wave function of the lowest subband of the wire . the quantity @xmath29 has order of the energy of quantization in the wire . the so hamiltonian ( [ 1 ] ) is the most general local expression which has the first order in the so constant and linear in @xmath20 . the other form of the hamiltonian ( [ 1 ] ) is @xmath30 where the velocity operator is @xmath31 . we shall demonstrate , that the hamiltonian ( [ 5 ] ) can be unitary transformed to the form with no pauli matrices . let us consider an equation @xmath32 for an operator @xmath33 which explicitly depends on the coordinate @xmath4 . the solution of ( [ 2 ] ) is @xmath34 the expression ( [ 3 ] ) can be rewritten as x - ordered exponent ( similar to t - ordering with difference that the ordering should be done in @xmath4-space ) : @xmath35 the operation @xmath36 means that all operators should be placed in the decreasing order of @xmath37 . the inverse operator @xmath38 is determined by the ordering in the inverse order @xmath39 : @xmath40 the operator @xmath33 is unitary : @xmath41 ; one can treat @xmath33 as a spacial evolution operator . it can be expanded on the @xmath42 matrix basis : @xmath43 , where the real vector @xmath44 satisfies an equation @xmath45=0.\end{aligned}\ ] ] by means of the operator @xmath46 the wave function transforms as @xmath47 . 
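because the @xmath placeholders hide the original symbols , the following display ( our own reconstruction in conventional notation , not a verbatim copy of the paper's equations ) records the generic form of the 1d spin - orbit hamiltonian and of the spatial - evolution operator that the surrounding derivation manipulates :

\[
H \;=\; \frac{p^{2}}{2m} + V(x) + \frac{1}{2}\bigl\{\boldsymbol{\sigma}\!\cdot\!\mathbf{f}(x),\,p\bigr\},
\qquad
U(x) \;=\; \mathrm{P}_{x}\exp\!\Bigl(-\frac{im}{\hbar}\int^{x}\boldsymbol{\sigma}\!\cdot\!\mathbf{f}(x')\,dx'\Bigr),
\]

with \( p=-i\hbar\,\partial_{x} \), \( \boldsymbol{\sigma} \) the pauli matrices , and \( \mathbf{f}(x) \) an arbitrary vector function of the coordinate \( x \) along the wire ( the anticommutator restores hermiticity ) . for the simplest case of a constant field \( \mathbf{f}=\alpha\hat{\mathbf{n}} \), completing the square gives

\[
H \;=\; \frac{\bigl(p + m\alpha\,\hat{\mathbf{n}}\!\cdot\!\boldsymbol{\sigma}\bigr)^{2}}{2m} + V(x) - \frac{m\alpha^{2}}{2},
\]

so a position - dependent spin rotation of the form written above removes the spin from the equation at the cost of a spin - independent energy shift .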
the identities @xmath48 and @xmath49 are valid , that yields the transformation rules @xmath50 and @xmath51 . the transformed spin operator @xmath52 obeys the equation @xmath53 $ ] and has the explicit form @xmath54 . using these rules we find @xmath55 thus , the transformation excludes the spin from the schrdinger equation . the hamiltonian ( [ 12 ] ) immediately yields the spin degeneracy of electron states , unless the boundary conditions depend on spin explicitly . in particular , if the simple - connected wire is infinite in both direction and the states are localized , the boundary conditions @xmath56 yield @xmath57 . this means double spin degeneracy ( kramers degeneracy ) . the delocalized states remain double - degenerate also . the unitary transformation of the hamiltonian to the form ( [ 12 ] ) has strong impact on different response functions . for example , consider linear responses of electric current @xmath58 , spin polarization @xmath59 and spin current @xmath60 . the electric field ( tangent component ) is assumed to be constant along the wire . these linear responses are expressed by the kubo formula via the velocity or velocity - spin correlators @xmath61 where in the case of conductivity @xmath62 stands for the velocity operator @xmath63 , for the spin orientation and spin current @xmath64 and @xmath65 , respectively , @xmath66 is the fermi function , @xmath67 is the length of the system . more general expressions for responses in arbitrary order on the electric field are determined by the velocity correlators @xmath68 or spin - velocity correlators @xmath69 instead of the spin operator one can write the spin current operator @xmath70 . let us unitary transform operators inside the trace operation using transformation @xmath71 . after the transformation the expression under @xmath72 in ( [ 109 ] ) becomes unit in the spin space . the expression ( [ 109 ] ) reduces to @xmath73 and ( [ 110 ] ) goes to @xmath74 as a result of ( [ 109 ] ) , the conductivity of the system with so interaction converts to that of the system with no so interaction . the eq.([110 ] ) follows from the identity @xmath75 , where @xmath76 denotes the trace in the spin space . it proves that both coefficients of spin polarization @xmath77 and spin current @xmath78 vanish . similar conclusions can be done with respect to electrical responses of higher orders ( _ e.g. _ , the photogalvanic effect ) which are not subjected to so interaction and spin responses on the electric field ( e.g. , stationary spin orientation by alternating electric field ) which vanish . note , that for proof of ( [ 110 ] ) it is essential the presence of _ the only _ spin operator under the trace ; the similar correlators , containing two or more spin operators do not vanish . note also , that the proof can be reformulated in the terms of the wave function . in fact , the wave function can be decomposed to the product of spinor function @xmath79 , obeying the equation @xmath80 and scalar function @xmath81 obeying the schrdinger equation with the hamiltonian ( [ 5 ] ) . the separation of variables can be done for the green functions : they decay on a product of the coordinate green function @xmath82 of the hamiltonian ( [ 5 ] ) and spin functions @xmath83 . in this section we consider possible generalizations of the hamiltonian ( [ 5 ] ) which conserve the main conclusions . first , we can include the electric field into the potential @xmath84 , hence all conclusions remain valid in presence of it in any order of magnitude . 
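a quick numerical illustration ( ours , not the paper's ) of the double degeneracy claimed here can be given for the simplest case of a constant spin - orbit field along a fixed axis . the finite - difference sketch below diagonalizes the two spin blocks p^2/2m + v(x) +/- alpha p of a hard - wall wire and checks ( i ) that their spectra coincide and ( ii ) that they approach the spectrum of the spin - free hamiltonian shifted by -m alpha^2/2 , as the gauge argument above predicts .

import numpy as np

# hard-wall 1D wire, hbar = m = 1; constant spin-orbit field along z
N, L, alpha = 800, 60.0, 0.7
x = np.linspace(0.0, L, N + 2)[1:-1]          # interior grid points
a = x[1] - x[0]
V = 0.2 * np.sin(2 * np.pi * x / L)           # any spin-independent potential

main = np.full(N, -2.0); off = np.ones(N - 1)
lap = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / a**2
d1  = (np.diag(off, 1) - np.diag(off, -1)) / (2 * a)

K  = -0.5 * lap                               # kinetic energy p^2/2m
p  = -1j * d1                                 # momentum operator (Hermitian)
H0 = K + np.diag(V)                           # spin-free Hamiltonian

H_up = H0 + alpha * p                         # sigma_z = +1 block
H_dn = H0 - alpha * p                         # sigma_z = -1 block

e_up = np.linalg.eigvalsh(H_up)
e_dn = np.linalg.eigvalsh(H_dn)
e0   = np.linalg.eigvalsh(H0) - 0.5 * alpha**2   # gauge-transformed prediction

n = 10                                        # compare the lowest levels
print("max |E_up - E_dn|              :", np.max(np.abs(e_up[:n] - e_dn[:n])))
print("max |E_up - (E_0 - m*alpha^2/2)|:", np.max(np.abs(e_up[:n] - e0[:n])))
# the first difference is zero to machine precision (the Kramers-like double
# degeneracy); the second shrinks toward zero as the grid is refined, as the
# continuum gauge argument predicts.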
second , we can consider the potential as periodic ( or containing periodic part together with random one ) . such potential without the so interaction forms the energy bands @xmath85 , where @xmath20 is now quasimomentum . the operator @xmath86 in so part goes to @xmath87 . hence the resulting new hamiltonian can be also converted to the form with no spin operators . third , the spin can be treated as a quantum number , counting any pair - degenerate levels . for example , they can be subbands , originated from two equivalent valleys of bulk semiconductor . the hamiltonian ( [ 1 ] ) in that case refers to the system with valley degeneracy without spin . according to the found transform , the valley degeneracy will not be lifted . fourth , we can include spin - independent e - e interaction . as such hamiltonian does not touch the spin , the transformation can be done also . from said above one can conclude that there is no spin - orbit interaction in 1d system . in fact , this is not the case . the spin does not commute with the hamiltonian ( [ 5 ] ) . hence , an electron with a preset spin , once injected into the wire , will change the spin during propagation along the wire . in particular , this manifests itself in the systems with magnetic spin injectors / spin - selective drains @xcite , where the boundary conditions break the form of the hamiltonian ( [ 5 ] ) . ( in the magnetic injector one should supplement the hamiltonian with the exchange term like @xmath88 , where @xmath89 is the mean spin density in the contact , @xmath90 is the exchange constant ) . conductance of a finite wire with spin - selective source and drain should be sensitive to the spin evolution caused by the so interaction . thus , the total system does not obey the conditions of the proof . the same is valid for cyclic systems , _ e.g. _ a ring . the periodic boundary condition in the ring of length @xmath67 , @xmath91 converts into the equation @xmath92 , containing the spin via the operator @xmath46 . hence , the spin operator , being eliminated from the schrdinger equation , appears in the boundary conditions that produces the spin splitting of levels . we have neglected the zeeman term in the hamiltonian , direct interaction of spin with the magnetic field . this term actually leads to the spin - flip transitions caused by the alternating magnetic field and other effects . due to relativistic smallness they are weak . an example of such effect is examined below . we consider here a spiral quantum wire with circular cross - section . epsf = 8 cm in this system the alternating electromagnetic field can cause the steady electric current @xcite . we have previously studied the system neglecting the so interaction . with taking into account so interaction the possibility of resonant current caused by spin - flip processes arises . in accordance with said above , the _ electric _ component of field can not induce such current . hence the direct interaction of spin with magnetic field ( epr - resonance ) should be taken into account . the equation of central line of helical wire is @xmath93 where @xmath94 is the radius of the helix , @xmath95 is the coordinate ( length ) along the helix , the pitch of the helix is @xmath96 . the sign of @xmath97 determines the helix direction @xmath98 . the spiral symmetry of the wire with respect to translations along the wire ( @xmath99 ) helps to find exact electron states . 
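since the helix equation and its parameters are hidden behind @xmath placeholders , the following display records the textbook arc - length parametrization of a circular helix together with its curvature and torsion ; the symbols are our own choice and this is offered as a reconstruction of the intended geometry ( radius , pitch , handedness ) rather than as the paper's own formula :

\[
\mathbf{r}(l) \;=\; \Bigl( R\cos\tfrac{l}{c},\;\pm R\sin\tfrac{l}{c},\;\tfrac{d\,l}{c} \Bigr),
\qquad c=\sqrt{R^{2}+d^{2}},
\]

where \( l \) is the arc length along the wire , \( R \) is the radius of the helix , \( 2\pi d \) is its pitch , and the sign fixes the handedness . the frenet curvature and torsion are then

\[
\kappa \;=\; \frac{R}{R^{2}+d^{2}}, \qquad \tau \;=\; \pm\frac{d}{R^{2}+d^{2}} .
\]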
the adiabatic 1d hamiltonian reads @xcite @xmath100 where @xmath101 , @xmath102 is the tangent ort to the wire , @xmath103 is the binormal , @xmath104 is the vector - potential of electromagnetic wave polarized in @xmath105 plane ; the last ( zeemann ) term describes interaction of spin with alternating magnetic field @xmath106 without the zeemann term the spin can be excluded , as mentioned above and the problem is reduced to the spinless one @xcite . the zeemann term results in the photogalvanic effect caused by transitions between spin - splitted subbands . let us consider the magnetic field polarized in the plane @xmath107 . the wire symmetry imposes the current phenomenology of the form @xmath108_z $ ] . the contribution to the stationary current due to interaction of electron spin with magnetic field is given by the quadratic kubo - type formula : @xmath109\bigg\ } , \end{aligned}\ ] ] where @xmath110 is the velocity - spin - spin correlator , @xmath111 is the hamiltonian ( 20 ) in the absence of external field ( @xmath112 ) . we shall neglect complications caused by the localization of electron states in 1d system and emulate the impurity scattering by the switching - on field : the rate of the field @xmath113 replaces the reciprocal relaxation time @xmath114 . the resulting current is@xmath115_z \left [ f(\frac{2m\omega - c^2}{2c})-f(\frac{2m\omega+c^2}{2c})\right],\ ] ] where@xmath116 the current exists in a narrow window of frequencies corresponding to the permitted spin - flip transitions . when so interaction is switched off the width of window ( but not the current magnitude ) shrinks . thus , the direct interaction of the spin with the magnetic field of the wave results in the spin - guided translational effect . the epr - induced photogalvanic effect should be compared with the photogalvanic effect caused by the action of electrical field on the translational motion of electron @xcite ; the latter exists in the absence of so interaction . for a running electromagnetic wave both effects add together , for a standing wave ( e.g. , in resonator ) they can be observed separately if to place the wire in loop or node of corresponding fields . besides , they have different frequency dependencies . in conclusion , we have found that in 1d systems different response function , which does not include the spin degree of freedom are not influenced by spin - orbit interaction . the responses connecting the spin and translational degrees of freedom are nonexistent unless the direct magnetic - field spin - flip processes are taken into account . on the contrary , the inclusion of such interaction leads to the magnetic - field - induced resonant steady current . in contrast to 2d systems , where so interaction plays determinative role for phenomena involving charge transfer and spin , in 1d systems the influence of so interaction is suppressed . the transition from 2d to 1d due to lateral quantization results in the sequential decrease of so - induced effects . the authors are grateful to a.v.chaplik and e.g. batyev for useful discussions . the work was supported by grants of rfbr no s 00 - 02 - 16377 and 02 - 02 - 16398 , program for support of scientific schools of russian federation no 593.2003.2 and intas no 03 - 51 - 6453 . ganichev , e.l . ganichev , e.l . nature * 417 * , 153,2002 . ganichev , s.n . lett . , * 88*,057401,2002 . edelstein , solid state comm.,*73*,233,1990 . aronov , yu.b . lyanda - geller , g.e . pikus jetp , * 73 * , 573,1991 . magarill and m.v . 
entin , jetp lett.*72*,134,2000 . chaplik , m.v . entin and l.i . magarill , physica e,*13*,744,2002 . j. schliemann and d. loss , phys . v.m . edelstein , phys.rev.lett.,*50*,5766,1998 . yu.a . bychkov and e.i . rashba , jetp lett.,*39*,78,1984 . e.i . rashba and v.i . sheka,_in book : _ landau level spectroscopy , ed . by g.landwehr and e.i.rashba , elsevier , g. dresselhaus , phys.rev.,*100*,580,1955 . l.i . magarill and m.v . entin , jetp , * 96*,7662003 . m.v . entin and l.i . magarill , phys.rev.b,*66 * , 205308 , 2002 . g. schmidt , l.w . et al . _ , phys.rev.b , * 62 * , r4790,2000 . o.v . kibis and d.a . romanov , phys.sol.state , * 37*,69,1995 . l.i . magarill and m.v . entin , jetp lett.*78*,213,2003 .
we report the absence of spin effects such as spin - galvanic effect , spin polarization and spin current under static electric field and inter - spin - subband absorption in 1d system with spin - orbit interaction of arbitrary form . it was also shown that the accounting for the direct interaction of electron spin with magnetic field violates this statement . pacs : 71.70.ej ; 73.63.-b ; 73.63.nm + + submitted to europhysics letters the spin - orbit ( so ) interaction in a 2d system underlies various spin control methods owing to the coupling between translational and spin degrees of freedom . such effects have been studied as spin - galvanic effect @xcite-@xcite , spin polarization @xcite-@xcite and spin current @xcite under static electric field , spin polarization under action of electromagnetic wave @xcite . the one dimensional system seems to be more suitable for this purpose due to more strong correlation between the spin and the wire direction . this stimulates to examine the similar problems in 1d systems . we consider the 1d hamiltonian @xmath0 with the most general form of so interaction @xmath1 where @xmath2 are the pauli matrices , the figure brackets denote the symmetrization procedure , vector @xmath3 is an arbitrary function of coordinate @xmath4 along the wire . the hamiltonian ( [ 1 ] ) originates from different approaches related with so interaction in 1d systems . in general , it does not conserve the spin and hence one can expect the above mentioned effects in the frameworks of this hamiltonian . however , we have found that in a strictly 1d system with the so hamiltonian ( [ 1 ] ) these effects vanish .
herein we report an umpolung strategy for the bioconjugation of selenocysteine in unprotected peptides . this mild and operationally simple approach takes advantage of the electrophilic character of an oxidized selenocysteine ( se - s bond ) to react with a nucleophilic arylboronic acid to provide the arylated selenocysteine within hours . this reaction is amenable to a wide range of boronic acids with different biorelevant functional groups and is unique to selenocysteine . experimental evidence indicates that under oxidative conditions the arylated derivatives are more stable than the corresponding alkylated selenocysteine .
we provide kinetic evidence that the metastable , magic - size ( cdse)34 nanocluster is near the critical - nucleus size for cdse and supports the growth of wurtzite cdse nanocrystal platelets at room temperature ( 2025 c ) . typical conditions for the synthesis of cdse nanocrystals are temperatures above 200 c . to our knowledge , the growth of cdse quantum platelets ( qps ) reported here , via the intermediacy of ( cdse)34 nanoclusters , constitutes the lowest temperature at which crystalline cdse has been produced . we ascribe the low - temperature crystal growth to facile nucleation resulting from ( cdse)34 being near to the critical size , such that the nucleation barrier has largely been surmounted in the formation of this magic - size nanocluster . a crystal nucleation and growth process that is driven by a chemical reaction requires a proper ordering of nucleation , growth , and reaction barriers ( activation energies , scheme 1 ) . according to the classic crystal - growth model , nucleation barriers are higher than the activation energies for growth steps , such that conditions resulting in nucleation will also support crystal growth . we argue that the barrier for a monomer - generating chemical reaction must be higher than the nucleation barrier to support nanocrystal growth ( scheme 1 , curve a ) . here , monomers are defined as small ( cdse)n molecules or clusters . if the nucleation barrier exceeds the reaction barrier ( scheme 1 , curve b ) , then monomer is produced by the reaction under conditions that preclude crystal formation , and thus , amorphous aggregates and precipitates are formed instead . ( a ) the reaction barrier is higher than the nucleation barrier , and nanocrystal growth occurs on the black free - energy curve . ( b ) the reaction barrier is lower than the nucleation barrier , precluding nucleation and crystal growth . instead we suspect that the high temperatures typically employed in semiconductor - nanocrystal syntheses reflect high nucleation barriers for assembling the critical - size nucleus , such that high - barrier chemical reactions are also required . however , if a critical - size nucleus could be assembled under milder conditions , then in principle semiconductor - nanocrystal nucleation and growth could be achieved at lower temperatures . we propose that ( cdse)34 is near the critical size , such that its binary combination exceeds the critical size . if so , then the critical - nucleus size for cdse under our conditions is in the range of ( cdse)34(cdse)68 . we previously reported syntheses of cdse quantum belts ( nanoribbons ) in lamellar , n - octylamine - bilayer templates at the comparatively mild temperatures of 7080 c . we determined that the magic - size nanocluster ( cdse)13 was an intermediate in the formation of the quantum belts and later isolated and characterized a series of [ ( cdse)13(primary amine)13 ] adducts . we now report that crystalline , wurtzite , cdse quantum platelets ( qps ; also known as nanoplatelets or quantum disks ) are formed at room temperature by employing di - n - alkylamine cosolvents or by varying the primary - amine solvent . reaction monitoring and mechanistic analyses indicate that ( cdse)34 is the magic - size nanocluster intermediate under these conditions , which converts to cdse qps at room temperature by first - order kinetics with no detectable induction period . alternatively , at 0 c , ( cdse)34 converts to ( cdse)13 , which then requires temperatures above 40 c to form cdse qps . 
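the barrier picture sketched in scheme 1 can be made concrete with the standard classical - nucleation - theory expressions below ; these are textbook formulas quoted for orientation ( gamma is a surface energy per unit area and delta g_v the magnitude of the bulk free - energy gain per unit volume ) , not expressions derived in this paper :

\[
\Delta G(r) \;=\; 4\pi r^{2}\gamma \;-\; \tfrac{4}{3}\pi r^{3}\,|\Delta G_{v}| ,
\qquad
r^{*} \;=\; \frac{2\gamma}{|\Delta G_{v}|},
\qquad
\Delta G^{*} \;=\; \frac{16\pi\gamma^{3}}{3\,|\Delta G_{v}|^{2}} .
\]

a cluster at or just past the critical radius r* has already paid most of the nucleation penalty , which is the sense in which a magic - size cluster near the critical size can behave as a storable nucleus .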
our interpretation of these results is that ( cdse)34 is nearer to the cdse critical - nucleus size . a ligated derivative of ( cdse)34 is obtained as a slushy solid that is stable indefinitely at 0 c . we suggest that this derivative , [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] , may effectively function as cdse crystal nuclei that may be stored in a bottle . its use in templates of varying geometries may afford low - temperature routes to cdse nanocrystals having other controlled morphologies . di - n - octylamine ( + 98% ) , di - n - pentylamine ( 99% ) , di - n - propylamine ( 99% ) , diethylamine ( > 99.5% ) , phenethylamine ( 99% ) , n - dodecylamine ( 99% ) , n - octylamine ( + 99% ) , n - pentylamine ( + 99% ) , n - propylamine ( + 98% ) , cd(oac)22h2o ( > 98% ) , trin - octylphosphine ( top ) ( 97% ) , and oleylamine ( or cis-9-octadecenylamine , technical grade , 70% ) were obtained from sigma - aldrich . toluene was obtained from sigma - aldrich ( chromasolv for hplc , 99.9% ) . transmission electron microscopy ( tem ) sample grids ( cu with holey carbon film ) were obtained from ted pella , inc . all synthetic procedures were conducted under dry n2 , except the final washing steps , which were conducted in the ambient atmosphere . the synthetic products were generally stored as reaction mixtures , after addition of top ( see below ) . in a typical procedure , cd(oac)22h2o ( 65 mg , 0.24 mmol ) was dissolved in di - n - octylamine ( 5.7 g , 24 mmol ) in a septum - capped schlenk tube and placed in a benchtop sonicating bath ( 10 min ) to achieve dissolution . in a glovebox , selenourea ( 50 mg , 0.41 mmol ) was dissolved in n - octylamine ( 1.2 g , 9.3 mmol ) in a septum - capped amber vial . the vial was removed from the glovebox and placed in a benchtop sonicating bath ( 10 min ) to achieve dissolution . the selenourea solution was injected into the [ cd(oac)22h2o ] solution at room temperature ( 2025 c ) . the colorless reaction mixture became cloudy within 10 s , viscous and light green within 5 min , cloudy and yellow green within 60 min , and cloudy and light yellow at longer reaction times . after 2 h , the mixture was nearly clear and colorless with a light - yellow precipitate . after 2 days , the yellow precipitate remained in the presence of a light - red supernatant , the color of which was due to a se side product from ( cdse)34 formation . top ( 0.250.50 ml ) was injected to scavenge the se side product through the formation of colorless ( n - octyl)3p = se . the light - yellow precipitate of bundled cdse qps was then stored at room temperature in the reaction mixture under n2 for further analyses . the procedure was conducted in the same manner as that for the 1.8 nm thick cdse qps ( see above ) , except for the amount of n - octylamine used ( 2.4 g , 18 mmol ) and the reaction temperature , which was raised to 70 c . the color changes occurred more rapidly , from colorless ( 0 min ) to viscous and yellow ( 10 s ) , cloudy and orange ( 1 min ) , and cloudy and orange - red ( > 120 min ) . after the reaction mixture stood for 2 days at 70 c , the cdse qps were deposited as an orange - red precipitate in the presence of a red supernatant . top ( 0.250.50 ml ) was injected to scavenge the se side product from the ( cdse)34 formation responsible for the red coloration of the supernatant , which became colorless . the dispersion of bundled cdse qps was then stored at room temperature in the reaction mixture under n2 for further analyses . 
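the reagent quantities quoted in the procedure above can be cross - checked from standard molar masses ; the sketch below is our own arithmetic ( the molar masses are standard values , not taken from the paper ) and reproduces the stated millimole amounts .

# standard molar masses in g/mol
MW = {"Cd(OAc)2*2H2O": 266.53, "selenourea": 123.02,
      "n-octylamine": 129.24, "di-n-octylamine": 241.46}

# masses in grams as quoted in the procedures above
masses_g = {"Cd(OAc)2*2H2O": 0.065, "selenourea": 0.050,
            "n-octylamine": 1.2, "di-n-octylamine": 5.7}

mmol = {name: 1000.0 * m / MW[name] for name, m in masses_g.items()}
for name, n in mmol.items():
    print(f"{name:16s} {n:6.2f} mmol")
print("Cd : Se feed ratio = 1 : %.2f"
      % (mmol["selenourea"] / mmol["Cd(OAc)2*2H2O"]))
# reproduces the quoted 0.24 mmol Cd, 0.41 mmol Se, 9.3 mmol n-octylamine,
# and 24 mmol di-n-octylamine (doubling the amine mass to 2.4 g gives the
# 18 mmol quoted for the 70 C procedure), with a Cd:Se feed ratio near 1:1.7.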
in a typical procedure , cd(oac)22h2o ( 65 mg , 0.24 mmol ) was dissolved in phenethylamine ( 5.74 g , 47 mmol ) in a septum - capped schlenk tube and heated in a 70 c oil bath ( 1 h ) to dissolve the cadmium precursor . in a glovebox , selenourea ( 50 mg , 0.41 mmol ) was dissolved in phenethylamine ( 1.2 g , 9.9 mmol ) in a septum - capped amber vial . the vial was removed from the glovebox and placed in a benchtop sonicating bath ( 10 min ) to achieve dissolution of the selenourea . the clear , colorless reaction mixture became clear and light yellow within 1 h , viscous and white ( 6090 min ) , and then cloudy and white - yellow ( > 90 min ) . after 2 h , the solution became nearly clear and colorless upon formation of a white - yellow precipitate . after 2 days , cdse qps were deposited as a white - yellow precipitate in the presence of a light - red supernatant . top ( 0.250.50 ml ) was injected to scavenge the se side product from ( cdse)34 formation responsible for the red coloration of the supernatant , which became colorless . the white - yellow precipitate of bundled cdse qps was then stored at room temperature in the reaction mixture under n2 for further analyses . in a typical procedure , cd(oac)22h2o ( 65 mg , 0.24 mmol ) was dissolved in di - n - pentylamine ( 5.74 g , 36 mmol ) in a septum - capped schlenk tube and then was stored in an ice bath ( 0 c ) placed inside a refrigerator . in a glovebox , selenourea ( 50 mg , 0.41 mmol ) was added to n - octylamine ( 1.2 g , 9.3 mmol ) in a septum - capped amber vial . the vial was removed from the glovebox and placed in a benchtop sonicating bath ( 10 min ) to achieve dissolution of the selenourea . the clear , colorless reaction mixture became viscous and light yellow within 6 h , cloudy and yellow within 8 h , and cloudy and green - yellow at longer times ( 0 after 18 h at 0 c , ( cdse)34 was formed as a green - yellow precipitate mixed with colorless supernatant . the greenish - yellow precipitate was separated using a benchtop centrifuge ( 700 g , 30 s ) at room temperature , and the colorless supernatant was discarded . this purification process was repeated , for a total of two such cycles , yielding [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] as a slushy , greenish - yellow solid after drying in vacuo for 12 h ( 0.061 g , 95.7% ) . visible ( toluene ) max , nm : 360 , 390 , 418 ( figure 9a ) . ms m / z ( relative area , assignment ) : 6508.2572 ( 100% , ( cdse)34 ) , 6319.5733 ( 42.6% , ( cdse)33 ) , 3651.5768 ( 17.0% , ( cdse)19 ) , 2502.4747 ( 52.4% , ( cdse)13 ) ( figure 11 ) . calcd for [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] : c , 20.00 ; h , 3.94 ; n , 2.84 . found , c , 19.94 ; h , 3.95 ; n , 2.93 . all values are given as percentages . [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] was generally used immediately for analyses or further reactions . the compound was stable at room temperature for at least 24 h under n2 and for longer than one month at 0 c under n2 . other di - n - alkylamine derivatives of ( cdse)34 were prepared under the same general conditions , except for the reaction cosolvents employed . di - n - propylamine or diethylamine were used to replace di - n - pentylamine for the cd(oac)22h2o solution , while n - octylamine was used for the selenourea solution . the reaction mixture was then removed from the ice bath and stored at room temperature ( 2025 c ) for an additional 12 h. 
the reaction mixture was periodically monitored by uv visible spectroscopy to determine the extent of the conversion , which was found to be complete after 12 h. cdse qps were deposited as a light - yellow precipitate in the presence of a light - red supernatant . top ( 0.5 - 1.0 ml ) was injected into the reaction mixture to scavenge the red selenium side product from ( cdse)34 formation , resulting in a colorless supernatant . the conversion of ( cdse)34 to cdse qps was accelerated by adding additional di - n - pentylamine to the reaction mixture at room temperature after the formation of ( cdse)34 . a ( cdse)34 sample was prepared as described above . an aliquot ( 26 mg ) taken from the reaction mixture was diluted into a di - n - pentylamine ( 2.5 g ) and n - octylamine ( 0.07 g ) mixture in a quartz cuvette at room temperature ( 20 - 25 c ) . uv visible spectra were collected in the wavelength range of 400 - 500 nm at 1 h intervals . during data collection , the 418 nm absorption of ( cdse)34 and the 423 and 448 nm absorptions of cdse qps were extracted from the spectra by nonlinear least - squares fitting using origin software ( http://originlab.com/ ) . the initial ( t = 0 h ) spectrum was fit by a single lorentzian function , yielding the center position of the 418 nm absorption . the final ( t = 12 h ) spectrum was fit with three lorentzian functions , the first centered at 418 nm , and a background - scattering function ( a/λ , where a was an adjustable parameter ) , yielding the center positions of the 423 and 448 nm qp absorptions . all of the intermediate spectra were fit with three lorentzian functions and the one background - scattering function , with the lorentzians initially centered at 418 , 423 , and 448 nm . the peak areas determined from the nonlinear least - squares fits were used for the kinetic analyses . all three absorptions gave first - order plots of absorption peak area vs time over three half - lives . the error in the slope of the plots was determined by conducting three kinetic trials and observing the range in the integrated peak areas in the final ( t = 12 h ) spectra . the range of slopes that accommodated these final values was assigned as the error in the slopes . the kinetic parameters kobs and t1/2 were extracted from the slopes , and their errors were determined by propagation in the normal manner . these values are reported in the results section . the preparation of ( cdse)34 was conducted as described above . the reaction temperature equilibrated near 0 c in the refrigerator , even after the ice in the bath melted . when di - n - pentylamine was used as the cosolvent , the complete conversion required longer than 1 month , during which the greenish - yellow precipitate gradually changed to white with formation of a small amount of black precipitate , which was a selenium side product . top ( 0.5 - 1.0 ml ) was injected into the reaction mixture , whereupon the black solid disappeared , leaving ( cdse)13 as a white precipitate . when diethylamine or di - n - propylamine was used as the cosolvent , the conversion of ( cdse)34 to ( cdse)13 was more rapid and was completed in 2 - 3 weeks . the conversion was also accelerated by adding additional n - octylamine to the reaction mixture after the formation of ( cdse)34 .
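the origin - based curve fitting described above can be sketched in python ; the code below is an illustration under stated assumptions ( synthetic hourly spectra , fixed lorentzian centers and widths at the 418 , 423 , and 448 nm positions named in the text , and a simple a/lambda scattering background standing in for the adjustable background term ) , showing how integrated peak areas would be extracted and converted into a first - order rate constant .

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(lam, area, center, width):
    # area-normalized lorentzian line shape
    return (area / np.pi) * (width / 2) / ((lam - center)**2 + (width / 2)**2)

def model(lam, a418, a423, a448, bg):
    # three lorentzians (418 nm for (CdSe)34, 423 and 448 nm for the QPs;
    # centers and widths held fixed for simplicity) plus a 1/lambda background
    return (lorentzian(lam, a418, 418.0, 6.0) +
            lorentzian(lam, a423, 423.0, 6.0) +
            lorentzian(lam, a448, 448.0, 6.0) + bg / lam)

# synthetic hourly spectra generated with a known first-order rate constant
lam = np.linspace(400.0, 500.0, 400)
k_true = 5.0e-5                                  # s^-1 (t1/2 about 230 min)
times = np.arange(0.0, 12 * 3600.0 + 1, 3600.0)  # 0-12 h, one spectrum per hour
rng = np.random.default_rng(0)
spectra = [model(lam, np.exp(-k_true * t),
                 0.4 * (1 - np.exp(-k_true * t)),
                 0.6 * (1 - np.exp(-k_true * t)), 5.0)
           + rng.normal(0.0, 1e-4, lam.size) for t in times]

# fit each spectrum and keep the 418 nm (CdSe)34 peak area
areas_418 = []
for y in spectra:
    popt, _ = curve_fit(model, lam, y, p0=[0.5, 0.2, 0.2, 5.0])
    areas_418.append(popt[0])

# first-order disappearance: ln(area) vs t is linear with slope -k_obs
slope, intercept = np.polyfit(times, np.log(areas_418), 1)
k_obs = -slope
print(f"recovered k_obs = {k_obs:.2e} s^-1, t1/2 = {np.log(2)/k_obs/60:.0f} min")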
we previously reported that reaction of cd(oac)22h2o and selenourea in n - octylamine solvent at room temperature selectively produced magic - size ( cdse)13 nanoclusters entrained within a spontaneously formed , double - lamellar , n - octylamine - bilayer template ( eq 1 , scheme 2 ) . these intratemplate ( cdse)13 nanoclusters were subsequently converted to crystalline cdse quantum belts ( qbs ) at relatively mild temperatures ( 70 - 80 c , eq 1 ) . the lengths , widths , and thicknesses of the quantum belts were determined by the dimensions within the spontaneously formed , double - lamellar templates ( scheme 2 ) . we sought to purposefully vary these dimensions by varying the nature of the amine solvent , and those efforts led to experiments using di - n - alkylamine cosolvents . ( scheme 2 : ( a ) cd(oac)2 and the primary - amine solvent forms a lamellar , amine - bilayer mesophase ( blue and purple ) . ( b ) magic - size ( cdse)34 clusters are initially formed within the template when primary - and secondary - amine cosolvents are employed ( yellow dots , blue and purple ) . ( c ) ( cdse)34 clusters are converted to bundled qps at room temperature in the cosolvent mixtures ( orange and purple ) . ( d ) addition of a long - chain primary amine results in the spontaneous exfoliation of the qps by ligand exchange at room temperature ( orange and green ) . ) reaction of cd(oac)22h2o and selenourea in an n - octylamine / di - n - octylamine cosolvent mixture at room temperature gave a yellow precipitate , which contrasted with the white ( colorless ) [ ( cdse)13(n - octylamine)13 ] isolated from the eq 1 reaction . the uv visible spectrum of the yellow precipitate dispersed in toluene ( figure 1 ) closely matched those previously obtained for cdse qbs and could not be assigned to ( cdse)13 or other magic - size nanoclusters . tem images ( figure 2 ) revealed the formation of pseudorectangular cdse qps having mean widths and lengths of 7 and 50 nm , respectively . because the qps have the electronic properties of quantum wells , their spectrum depended only on thickness and was effectively indistinguishable from those previously obtained for 1.8 nm thick qbs . the sharp pl spectrum ( figure 1 ) also matched those of the corresponding cdse qbs . like the qbs , the qps gave high pl quantum efficiencies ( pl qe = 25% ) . we surmised that , in the cosolvent mixture , the reaction proceeded through magic - size nanocluster intermediates ( see below ) to cdse qp nanocrystals at room temperature ( eq 2 ) . ( figure 1 : a uv visible extinction spectrum in a toluene dispersion ( black curve ) and a photoluminescence spectrum in an oleylamine - toluene solution ( 12% w / w , red curve ) of 1.8 nm thick cdse qps . figure 2 : tem images of cdse qps synthesized in n - octylamine and various di - n - alkylamine cosolvents : ( a , b ) di - n - octylamine , ( c , d ) di - n - pentylamine , ( e , f ) di - n - propylamine , and ( g , h ) diethylamine . ) we next sought to establish the crystallinity of the qps grown by the room - temperature synthesis .
an x - ray diffraction ( xrd ) pattern of the as - prepared material ( figure 3 ) matched those previously obtained for wurtzite cdse qbs . like those qbs , the qps exhibited a lattice contraction associated with the surface tension of the thin nanocrystals . the lattice parameters extracted from the xrd data ( a = 4.07 0.02 , c = 6.82 0.03 ) were smaller than the bulk values ( a = 4.30 , c = 7.02 ) by nearly the same amounts as those of the qbs . an xrd pattern of 1.8 nm thick cdse qps . the black sticks are the peak positions for bulk cdse in the wurtzite structure , and the red sticks are the peak positions for bulk cdse in the zinc - blende structure . the indexed reflections for the wurzite qps are shifted to a higher angle than in the bulk pattern because of the lattice contraction ( see the text and ref ( 22 ) ) . figure 4a views a stack of bundled qps parallel to the qp edges ( individual qps are identified by arrows ) . the 0002 lattice spacings appearing as parallel fringes were clearly evident and provided another measure of the lattice parameter c = 6.86 0.04 . a fourier transform of the hrtem image of the face of a qp was consistent with the ( 1120 ) plane of wurtzite ( figure 4b inset ) , as with the previously reported qbs . the lattice parameter a = 4.04 0.08 was extracted from the fringe pattern in the image of the face ( figure 4b ) . white arrows in ( a ) indicate the length dimension of the bundled qps . although amorphous nanoparticles may be crystallized under the electron beam in the tem , we did not observe such a process ; the qps were crystalline from the outset of tem observations . thus , the sharp extinction and pl spectra ( figure 1 ) , the comparatively sharp xrd pattern that clearly indexed to wurtzite ( figure 3 ) , and the high - resolution tem data ( figure 4 ) all indicated that the cdse qps obtained from the room - temperature synthesis were crystalline as formed . the qp synthesis was repeated using other combinations of primary and secondary amines . experiments were conducted using n - octylamine and various di - n - alkylamine cosolvents . as summarized in table 1 , the secondary amine influenced the mean lengths of the qps , without strongly affecting widths or thicknesses . interestingly , the mean qp lengths were inversely proportional to the lengths of the alkyl groups on the di - n - alkylamine cosolvent ( figure 5 ) . we do not understand the origin of this effect . a plot of the qp mean length vs the inverse of the carbon number of the di - n - alkylamine cosolvent alkyl chain . another set of experiments was conducted in which the primary amine was varied and the secondary amine was held constant . in contrast to the above , systematic dependences of the qp sizes or morphologies on the primary amine were not observed . however , the results established , as described below , that the top and bottom qp facets were predominantly passivated by the primary amine . the qps were produced in bundled stacks that were derived from the lamellar , amine - bilayer templates in which they grew ( see scheme 2 and figures 2 and 4a ) . consequently , the inter - qp spacing ( d spacing ) provided a measure of the amine - bilayer thickness . low - angle xrd patterns of qps obtained from various primary - amine and diethylamine cosolvents are given in figure s1 ( supporting information ) . the d spacing was dependent on the primary amine and consistent with the lengths of the alkyl chains . 
the experiments described above in n - octylamine and di - n - alkylamine cosolvents gave d spacings consistent with n - octylamine , with no influence by the di - n - alkylamine ( figure s2 , supporting information ) . these results required that the primary amine was responsible for lamellar , amine - bilayer template formation , and consequently , the large qp facets inherited primary - amine passivation from the growth template . many combinations of primary and secondary amines were investigated as cosolvents ( see table s1 , supporting information ) . the best results were achieved when the length of the alkyl chain on the primary amine ( ch3(ch2)nnh2 ) was equal to or longer than the length of the alkyl chains on the secondary amine [ ch3(ch2)m]2nh ( n >= m ) . when this empirical rule was violated , the uv visible spectra of the resulting qps were broadened and in some cases contained absorptions for platelets of other discrete thicknesses ( see below ; figure s3 , supporting information ) . each of the syntheses conducted at room temperature and as described above gave qps with a discrete thickness of 1.8 nm . the 2.2 nm thick qps were obtained with an n - octylamine / di - n - octylamine cosolvent mixture when the synthesis was conducted at 70 c . low - resolution tem images of the qps ( figure 6a , b ) showed widths of 10 - 20 nm and a mean length of 50 nm . a discrete qp thickness of 2.2 nm was established by high - resolution tem ( figure 6c ) . as expected , the three characteristic qp absorptions were red - shifted from those of the 1.8 nm thick qps ( figure 7 ) . ( figure 6 : ( a , b ) tem image and ( c ) hrtem image of the qps viewed from the edge . figure 7 : uv visible extinction spectra in toluene dispersions of bundled 1.4 nm thick cdse qps ( blue curve ) , 1.8 nm thick cdse qps ( black curve ) , and 2.2 nm thick cdse qps ( red curve ) . ) 1.4 nm thick qps were obtained in the solvent 2-phenethylamine at 40 c with no secondary - amine cosolvent . low - resolution tem images of the qps ( figure 8a , b ) showed widths of 24 nm and a mean length of 700 nm . we note that these lengths are closer to qbs along the qp - qb length spectrum than the cases discussed above . a discrete qp thickness of 1.4 nm was established by high - resolution tem ( figure 8c ) . in this case , the three characteristic qp absorptions were blue - shifted from those of the 1.8 nm thick qps ( figure 7 ) . thus , we have prepared cdse qps of three discrete thicknesses ( 1.4 , 1.8 , and 2.2 nm ) ; a comparison with the zinc - blende qps of dubertret and co - workers is given in the discussion below . ( figure 8 : tem and hrtem images of bundles of 1.4 nm thick cdse qps ( 1.4 nm thickness ) . ) our prior study established magic - size cdse nanoclusters to be intermediates in the formation of cdse qbs . however , the eq 1 reaction in primary - amine solvents at room temperature gave ( cdse)13 , whereas the eq 2 reaction in primary - amine / secondary - amine cosolvents at room temperature gave cdse qps . we therefore monitored the reaction by uv visible spectroscopy to determine if magic - size nanocluster intermediates participated in the reaction . the eq 2 reaction was conducted in an n - octylamine / di - n - octylamine cosolvent mixture at room temperature as described above . an aliquot removed from the reaction mixture after 1 h gave the spectrum in figure s4a , supporting information , which has been previously assigned to the magic - size nanocluster ( cdse)34 .
in our prior study , we mistakenly assigned one of these spectroscopic features to ( cdse)66 , but the results reported here ( see below ) demonstrate that the spectrum does indeed correspond to ( cdse)34 . a second aliquot was removed from the reaction mixture after 2 days , which gave the spectrum in figure s4b , supporting information , clearly assignable to cdse qps . thus , spectroscopic monitoring suggested that ( cdse)34 was an intermediate in the formation of the qps ; other magic - size nanoclusters were not observed . we then combined the eq 2 reactants in an n - octylamine / di - n - pentylamine cosolvent mixture at the lower temperature of 0 c , to determine if other nanocluster intermediates would be detected . ( the reaction was conducted in a different cosolvent mixture because n - octylamine / di - n - octylamine mixtures are solid at 0 c . ) the spectrum of an aliquot taken after 12 h at 0 c corresponded exclusively to ( cdse)34 ( figure 9a ) . the spectrum after 14 days corresponded to a mixture of ( cdse)34 and ( cdse)13 ( figure 9b ) . after 1 month , the ( cdse)34 was completely converted to ( cdse)13 ( figure 9c ) . for a similar reaction conducted at 0 c in a n - propylamine / di - n - ethylamine cosolvent mixture , the conversion of ( cdse)34 to ( cdse)13 was complete in about 1 week ( see figure s5 , supporting information ) . the results established that , under these conditions , ( cdse)13 was more thermodynamically stable than ( cdse)34 , a conclusion supported by another observation ( see below ) . spectral evolution upon transformation of ( cdse)34 to ( cdse)13 in an n - octylamine / di - n - pentylamine cosolvent at 0 c . visible extinction spectra of ( a ) ( cdse)34 after 12 h ( black curve ) , ( b ) a mixture of ( cdse)34 and ( cdse)13 after 14 days , and ( c ) ( cdse)13 after 1 month . we next sought to determine if the secondary - amine cosolvent was merely an inert diluent of the primary - amine component ( an inert cosolvent ) or was an active participant in the initial , selective formation of ( cdse)34 . consequently , the room - temperature synthesis described above was conducted using the inert cosolvent 1-octadecene in place of the secondary - amine cosolvent . reaction monitoring after 5 min revealed the ( unselective ) formation of a mixture of ( cdse)13 and ( cdse)34 , from which the ( cdse)34 was gradually converted to a mixture of ( cdse)13 and cdse qps . the results indicated that ( cdse)34 is a kinetic product , and its conversion to the thermodynamically more stable ( cdse)13 is actively hindered in the presence of a secondary amine . the conversion kinetics of ( cdse)34 to cdse qps were determined by uv visible spectroscopy . figure s6 , supporting information , shows the spectrum of ( cdse)34 prepared in an n - octylamine / di - n - pentylamine cosolvent mixture at 0 c , as described above , having a prominent absorption feature at 418 nm ( black curve ) . over the course of several hours at room temperature , a sharp absorption feature emerged at 448 nm corresponding to the lowest - energy transition in the spectrum of cdse qps ( red and blue curves ) . a second qp feature grew in at 423 nm , only slightly shifted from the 418 nm absorption of ( cdse)34 . the blue curve in figure s6 , supporting information , corresponds to the fully transformed sample . 
the kinetics of the appearance of cdse qps and the disappearance of ( cdse)34 were monitored by curve fitting of the 418 , 423 , and 448 nm absorptions ( figure s7 , supporting information ) . for kinetic analysis , ( cdse)34 was diluted into a cosolvent mixture having a lower n - octylamine / di - n - pentylamine ratio ( which increased the conversion rate ) . the appearance of cdse qps at room temperature was followed by the integrated area of the qp absorption at 448 nm derived from the curve fitting . as shown in figure 10 , the log of the integrated absorption vs time was linear over three half - lives ( kobs = ( 4.94 ± 0.47 ) x 10^-5 s^-1 ; t1/2 = 233 ± 22.2 min ) , establishing a first - order process . the inverse of the integrated absorption vs time was nonlinear , ruling out second - order kinetics ( figure 10 ) . significantly , no induction ( nucleation ) period was observed ; first - order qp growth began immediately upon warming the ( cdse)34 solution to room temperature . ( figure 10 : kinetic data for the conversion of ( cdse)34 to cdse qps at room temperature . the black squares in the first - order plot ( left axis ) were obtained from the integrated area of the 448 nm qp absorption ( see the text ) . the data are also plotted for second - order kinetics ( red points ) , which are nonlinear . ) the kinetics were also analyzed by the disappearance of the fitted 418 nm ( cdse)34 feature and the appearance of the fitted 423 nm cdse qp feature . these data also gave linear first - order plots over three half - lives ( figures s8 and s9 , supporting information ) . the kinetic parameters for the disappearance of ( cdse)34 were determined to be kobs = ( 4.69 ± 0.81 ) x 10^-5 s^-1 ; t1/2 = 247 ± 42.6 min . this rate constant is , within experimental error , equal to that for the appearance of cdse qps ( see above ) , establishing that the conversion of ( cdse)34 to cdse qps occurs without the accumulation of an intermediate . the appearance of cdse qps analyzed using the 423 nm feature gave kobs = ( 5.14 ± 1.03 ) x 10^-5 s^-1 ; t1/2 = 225 ± 50.5 min , in good agreement with the more - precise 448 nm data . thus , the conversion was demonstrated to be a first - order process , with no induction period . the room - temperature conversion rates of ( cdse)34 were influenced by the alkyl - chain lengths on the primary and secondary amine cosolvents and on the primary / secondary amine ratio . shorter chain lengths on both the primary and secondary amines increased the rates of room - temperature conversion of ( cdse)34 to cdse qps , presumably by enhancing diffusion . lower primary / secondary amine ratios increased the room - temperature conversion rates of ( cdse)34 to cdse qps , perhaps because of the increased lability of secondary - amine ligands on ( cdse)34 . higher primary / secondary amine ratios decreased the room - temperature conversion rates of ( cdse)34 to cdse qps , by facilitating the conversion of ( cdse)34 to ( cdse)13 . the cluster ( cdse)13 seems particularly stabilized by primary - amine ligation . a ligated derivative of ( cdse)34 was obtained as a slushy , greenish - yellow solid from preparations conducted in n - octylamine and di - n - pentylamine cosolvents . the uv visible spectrum ( figure s10 , supporting information ) matched those of ( cdse)34 in figures 9a and s6 , supporting information .
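as a consistency check on the rate constants and half - lives quoted above , the sketch below ( our own arithmetic , not part of the published analysis ) converts each k_obs to t1/2 = ln 2 / k_obs and propagates the quoted uncertainty .

import math

# (k_obs in units of 1e-5 s^-1, its uncertainty, quoted t1/2 in min)
data = {
    "appearance, 448 nm feature":    (4.94, 0.47, 233.0),
    "disappearance, 418 nm feature": (4.69, 0.81, 247.0),
    "appearance, 423 nm feature":    (5.14, 1.03, 225.0),
}

for label, (k, dk, t_half_quoted) in data.items():
    k_si = k * 1e-5                             # s^-1
    t_half = math.log(2) / k_si / 60.0          # min
    dt_half = t_half * dk / k                   # simple relative-error propagation
    print(f"{label:30s} t1/2 = {t_half:5.0f} +/- {dt_half:4.0f} min "
          f"(quoted {t_half_quoted:.0f} min)")
# reproduces about 234, 246, and 225 min, in agreement with the quoted
# half-lives and with rate constants of order 1e-5 s^-1.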
although no features assignable to other magic - size nanoclusters were detected , the presence of ( cdse)13 , ( cdse)19 , or ( cdse)33 in small amounts was not ruled out , because their absorptions appear at shorter wavelengths and may have been obscured by the absorptions of ( cdse)34 . the isolated ( cdse)34 specimen was further characterized by laser - desorption - ionization ( ldi ) mass spectrometry ( see figure 11 ) . the spectrum contained a prominent ion centered at m / z 6508 corresponding to the bare ( cdse)34 nanocluster , indicating that ligand desorption had occurred during the experiment . peaks were also present for each fragment nanocluster ( cdse)x , over the range of x = 33 down to 13 . the ( cdse)19 ( m / z 3652 ) and ( cdse)33 ( m / z 6320 ) ions were slightly more abundant , and the ( cdse)13 ( m / z 2502 ) ion was significantly more abundant , than the other fragment ions . significantly , the figure 11 ldi mass spectrum differed markedly from the one that we previously reported for isolated [ ( cdse)13(n - octylamine)13 ] and was consistent with a simple fragmentation process from the ( cdse)34 parent . however , the data did not confirm that ( cdse)34 had been isolated in a pure form , because of the presence of the fragment ions corresponding to the other magic sizes ( cdse)13 , ( cdse)19 , and ( cdse)33 . the results did establish that the sample was at least highly enriched in ( cdse)34 . an ldi mass spectrum of [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] . the composition of the ligand shell was determined by calibrated mass spectrometry ( see the supporting information ) . analysis of the isolated ( cdse)34 derivative gave an n - octylamine / di - n - pentylamine ligand ratio of 8.1 ± 0.5 . that ratio was used to fit the c , h , and n analyses , providing an excellent fit to the formula [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] . on the basis of that formula , nanoclusters of ( cdse)34 dispersed in a di - n - alkylamine solvent were stable for over one month at room temperature . isolated [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] was stable at room temperature for over one week and stable for longer periods when stored at 0 c . a color change to reddish orange was observed when the greenish - yellow [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] was subjected to a vacuum ( 0.1 torr ) for longer than 12 h . however , after redispersion of such samples in toluene , analysis by uv visible spectroscopy showed that the ( cdse)34 nanocluster remained intact , with no evidence for other species . syntheses of flat cdse nanocrystals may be categorized into two general types . in the first type , long - chain cadmium carboxylate precursors and high reaction temperatures ( 170 c ) are employed , yielding qps having zinc - blende structures . the second type , used here , employs simple cadmium salts and amine solvents at comparatively low temperatures ( 25 to 100 c ) , producing cdse qps and qbs having wurtzite structures . the spectroscopic properties of the two types of flat cdse nanocrystals are closely related and produce comparable quantum - well absorption and emission spectra . the preparation of cdse qps having three discrete thicknesses , 1.4 , 1.8 , and 2.2 nm , is described here . these discrete thicknesses correspond to integer numbers of cdse monolayers . because the wurtzite qps exhibit a [ 1120 ] orientation , the monolayer thickness is a/2 = 0.20 nm , which is half of the basal unit - cell face diagonal .
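the fragment assignments quoted above can be sanity - checked against nominal cluster masses . the sketch below computes x × ( m_cd + m_se ) from standard average atomic masses and compares the result with the m / z values quoted in the text ; it is only a back - of - the - envelope check that ignores isotope distributions , residual ligands , and peak - centroid effects , so agreement at roughly the percent level is all that should be expected for broad ldi peaks .

```cpp
// nominal ( cdse)x cluster masses from average atomic masses ( rough check only ) .
#include <cstdio>

int main() {
    const double m_cd = 112.41;   // average atomic mass of cadmium , u
    const double m_se = 78.97;    // average atomic mass of selenium , u
    const int    sizes[]    = {13, 19, 33, 34};
    const double reported[] = {2502.0, 3652.0, 6320.0, 6508.0}; // m / z values quoted in the text

    for (int i = 0; i < 4; ++i) {
        double nominal = sizes[i] * (m_cd + m_se);
        std::printf("(CdSe)%-2d  nominal %.0f u   reported m/z %.0f   diff %+.0f\n",
                    sizes[i], nominal, reported[i], reported[i] - nominal);
    }
    return 0;
}
```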
note that both a and c are compressed in the qps relative to the bulk values , so that a/2 here is smaller than the bulk value ( see the results ) . therefore , the three discrete qp thicknesses we have obtained correspond to 7 , 9 , and 11 monolayers . in comparison , dubertret and co - workers have prepared cdse qps having 4 , 5 , 6 , and 7 monolayers , corresponding to discrete thicknesses of 1.2 , 1.5 , 1.8 , and 2.1 nm . because of the zinc - blende structure and orientation of the qps of dubertret and co - workers , the monolayer thickness is a/2 = 0.30 nm , explaining the apparent discrepancy between the monolayer and actual thicknesses of the two sets of nanocrystals . we also report here the isolation of a ligated form of ( cdse)34 having the empirical formula [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] . our results may be compared to the room - temperature synthesis and isolation of n - octylamine - ligated ( cdse)34 recently described by sardar and co - workers . their nanocluster specimen is a bright yellow solid having a strong , narrow lowest - energy absorption feature at 418 nm , the position at which we observed it for [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] ( figure 9a ) . the sardar absorption spectrum also contains the two higher - energy features we observed for our amine - ligated ( cdse)34 ( figure 9a ) . moreover , sardar and co - workers reported ldi mass spectra in which the ( cdse)34 parent ion was the base peak ( the most intense peak ) . by recording ldi mass spectra at varying laser powers , they demonstrated that the lower - mass peaks present in the spectra were fragment ions of ( cdse)34 . the evidence strongly supported the isolation of a purified , ligated form of ( cdse)34 , which compares very closely to the [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] isolated in this study . prominent , reasonably narrow absorption features having λmax values in the range of 410 to 420 nm have been frequently observed in studies of small cdse nanocrystals or nanoclusters . in some cases the assignment of these absorptions has varied ; more recently , such features have been assigned to ( cdse)33 , ( cdse)34 mixtures , or exclusively to ( cdse)34 , as in the present study . cossairt and owen isolated a nanocluster having a cd35se28 stoichiometry ( with additional charge - balancing ligation ) , which also gave a prominent absorption at 418 nm . whether a variety of small cdse nanoclusters have absorption features in this range or whether they all equilibrate to the same absorbing species is presently unknown . the ligand - to - cluster stoichiometry of [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] is not readily reconciled with the theoretically proposed structures of the bare ( cdse)34 nanocluster . both cage and core - cage structures have been proposed for ( cdse)34 , the latter of which has a ( cdse)6@(cdse)28 structure , in which 6 formula units of cdse reside in the core of an outer ( cdse)28 cage . if one presumes amine binding to each surface cd atom , the expected ligand / cluster ratio would be 34 or 28 , respectively . that we have measured a ligand / cluster ratio of 18 suggests either that not all surface cd atoms are ligated or that the cluster structure has only 18 surface cd atoms . in our previously reported synthesis of ( cdse)13 at room temperature in primary - amine solvents , an initial mixture of ( cdse)13 , ( cdse)19 , ( cdse)33 , and ( cdse)34 was observed to equilibrate exclusively to ( cdse)13 .
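the monolayer counting above is simple arithmetic and can be verified directly . the snippet below multiplies the quoted monolayer thicknesses ( a/2 = 0.20 nm for the wurtzite qps in this work , a/2 = 0.30 nm for the zinc - blende platelets of dubertret and co - workers ) by the stated monolayer counts and prints the resulting platelet thicknesses for comparison with the values quoted in the text .

```cpp
// platelet thickness = ( number of monolayers ) x ( monolayer thickness a/2 ) .
#include <cstdio>

int main() {
    const double half_a_wz = 0.20;   // nm , wurtzite qps in this work ( compressed a )
    const double half_a_zb = 0.30;   // nm , zinc - blende qps of dubertret et al .
    const int    n_wz[] = {7, 9, 11};
    const int    n_zb[] = {4, 5, 6, 7};

    for (int n : n_wz)
        std::printf("wurtzite    : %2d monolayers -> %.1f nm\n", n, n * half_a_wz);
    for (int n : n_zb)
        std::printf("zinc-blende : %2d monolayers -> %.1f nm\n", n, n * half_a_zb);
    return 0;
}
```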
the results here show that the same synthesis conducted in a primary - amine / secondary - amine cosolvent mixture initially produces ( cdse)34 as the only detectable nanocluster product , at both room temperature and 0 c ( scheme 3 ) . at the lower temperature , ( cdse)34 eventually converts to ( cdse)13 , establishing that ( cdse)13 is more thermodynamically stable under these conditions and that ( cdse)34 is a kinetic product ( scheme 3 ) . the secondary - amine cosolvent slows the conversion of ( cdse)34 to ( cdse)13 . the ( cdse)13 generated from ( cdse)34 in this manner requires temperatures above 40 c for conversion to cdse qps , whereas this conversion occurs readily at room temperature from ( cdse)34 ( scheme 3 ) . thus , ( cdse)34 is a more potent nanocrystal precursor than ( cdse)13 , because , we propose , ( cdse)34 is much closer to the critical crystal - nucleus size . the nanocluster ( cdse)34 ( yellow - green dot ) is the kinetic product at 0 c , which slowly converts to the thermodynamic product ( cdse)13 ( gray dot ) at 0 c or to crystalline , wurtzite cdse qps ( orange platelets ) at 25 c . temperatures of > 40 c are required to convert the ( cdse)13 generated by the scheme to cdse qps . the room - temperature conversion of ( cdse)34 to cdse qps occurs by first - order kinetics , with no induction period . the first - order nature of the conversion suggests that an activated , partially ligated form of ( cdse)34 is generated by ligand dissociation in the rate - determining step ; this activated species either functions as a critical - size nucleus or coalesces with a fully ligated ( cdse)34 nanocluster in a subsequent fast bimolecular collision to exceed the critical - nucleus size . if correct , then the critical - nucleus size ( cdse)x is in the range of x = 34 to 68 . other experimental determinations of the cdse critical - nucleus size are in the diameter range of 1.2 to 1.6 nm . for comparison , ( cdse)34 has a theoretical diameter of 1.45 nm , and thus , the critical - size range we elucidate here is consistent with the prior measurements . the very low temperature ( 25 c ) at which crystalline cdse is produced here is surprising . the early syntheses of cdse colloids conducted at room temperature within the water pools of inverse micelles gave materials of low crystallinity . the crystalline coherence lengths in such colloids were shown to be much smaller than the particle sizes . consequently , most syntheses of cdse nanocrystals are conducted at temperatures well above 200 c . for example , the now - classical cdse quantum - dot synthesis in topo solvent reported by murray , norris , and bawendi employed nanocrystal - growth temperatures of 230 to 260 c . crystalline cdse nanosheets and quantum belts have been grown at the low temperatures of 100 c and 45 to 80 c , respectively , using n - octylamine as the solvent and via magic - size nanocluster intermediates . crystalline cdse quantum dots have been obtained under aqueous conditions at 55 c . to our knowledge , the synthesis of cdse quantum platelets reported here , via the intermediacy of ( cdse)34 nanoclusters , proceeds at the lowest temperature at which crystalline cdse has been obtained . in the introduction , we argue that the monomer - generating reaction and crystal nucleation must be the two highest - barrier processes participating in semiconductor - nanocrystal growth .
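as a rough consistency check on the critical - size range quoted above , the sketch below estimates the sphere - equivalent diameter of a ( cdse)x cluster from an assumed bulk wurtzite cdse density of about 5.81 g cm^-3 ( a handbook - style value , not taken from this work ) ; real sub - 2 nm clusters are neither spherical nor bulk - like , so the numbers are only indicative .

```cpp
// sphere - equivalent diameter of ( cdse)x assuming bulk wurtzite cdse density .
#include <cmath>
#include <cstdio>

int main() {
    const double pi     = 3.141592653589793;
    const double NA     = 6.022e23;   // mol^-1
    const double M_CdSe = 191.37;     // g / mol per CdSe formula unit
    const double rho    = 5.81;       // g / cm^3 , assumed bulk wurtzite CdSe density

    const int sizes[] = {34, 68};
    for (int x : sizes) {
        double mass_g  = x * M_CdSe / NA;                        // grams per cluster
        double vol_cm3 = mass_g / rho;                           // bulk - density volume
        double d_nm    = std::cbrt(6.0 * vol_cm3 / pi) * 1.0e7;  // cm -> nm
        std::printf("(CdSe)%d : sphere-equivalent diameter = %.2f nm\n", x, d_nm);
    }
    return 0;
}
```

the bulk - density estimate ( roughly 1.5 nm for x = 34 and 1.9 nm for x = 68 ) sits near the 1.45 nm theoretical diameter quoted above , which is about all such a crude estimate can be expected to show .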
therefore , we surmise that the high temperatures typically employed in nanocrystal synthesis reflect either high reaction barriers or high nucleation barriers with the use of conventional precursors and conditions . the very mild conditions for cdse qp growth found here suggest that the nucleation barrier has nearly been surmounted in ( cdse)34 . magic - size nanoclusters should be ideal precursors to support semiconductor - nanocrystal growth and have been observed as reaction intermediates in nanocrystal synthesis since the early observations of henglein and co - workers . the critical sizes and stoichiometries of crystal nuclei are likely precursor and condition dependent . to our knowledge , a stoichiometry for the critical - size crystal nucleus has not been previously determined for cdse but has been reported to be ( zno)254 for zno and ( znse)181109 for znse by gamelin and co - workers . the magic - size nanocluster ( cdse)34 has been shown to be a potent , room - temperature precursor for crystalline cdse qps . the first - order conversion kinetics suggest that the critical nucleus is achieved in a deligated form of ( cdse)34 or in its combination with a second ( cdse)34 , which supports room - temperature crystal growth . the nanocluster is obtained in isolable form as [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] , which functions as critical crystal nuclei that may be stored in a bottle . the results suggest a strategy for making low - temperature nanocrystal synthesis more generally achievable . magic - size nanoclusters like ( cdse)34 of other compositions should be near to the critical size and function as potent nucleating agents . incorporating these into other mesophase - template geometries may provide low - temperature routes to well - passivated nanocrystals having a range of compositions and morphologies .
reaction of cd(oac)22h2o and selenourea in primary - amine / secondary - amine cosolvent mixtures affords crystalline cdse quantum platelets at room temperature . their crystallinity is established by x - ray diffraction analysis ( xrd ) , high - resolution transmission electron microscopy ( tem ) , and their sharp extinction and photoluminescence spectra . reaction monitoring establishes the magic - size nanocluster ( cdse)34 to be a key intermediate in the growth process , which converts to cdse quantum platelets by first - order kinetics with no induction period . the results are interpreted to indicate that the critical crystal - nucleus size for cdse under these conditions is in the range of ( cdse)34 to ( cdse)68 . the nanocluster is obtained in isolated form as [ ( cdse)34(n - octylamine)16(di - n - pentylamine)2 ] , which is proposed to function as crystal nuclei that may be stored in a bottle .
SECTION 1. SHORT TITLE. This Act may be cited as the ``California Coastal National Monument Expansion Act''. SEC. 2. PURPOSES. (a) Findings.--Congress finds the following: (1) Presidential Proclamation Number 7264, dated January 11, 2000 (65 Fed. Reg. 2821), designated over 20,000 islands, rocks, and pinnacles along the approximtely 1,100-mile California coastline as the California Coastal National Monument to protect the biological treasures situated offshore on thousands of unappropriated or unreserved areas of land owned or controlled by the Federal Government within 12 nautical miles of the shoreline. (2) Presidential Proclamation Number 9089, dated March 11, 2014 (79 Fed. Reg. 14603), expanded the boundary of the Monument to include 1,665 acres of Federal land administered by the Bureau of Land Management along the Northern California coastline in Mendocino County, commonly known as the ``Point Arena-Stornetta Unit''. (3) The Point Arena-Stornetta Unit is the first onshore expansion of the Monument. (4) Numerous governmental entities, community organizations, businesses, and individuals have made significant contributions to maintain the unique character, management, and preservation of the individual parcels of Federal land along the California coast. (b) Purposes.--The purposes of this Act are-- (1) to protect, conserve, and enhance for the benefit and enjoyment of present and future generations the nationally significant historical, natural, cultural, scientific, educational, and scenic values of the Federal land along and adjacent to the shoreline of the State of California, and for the purposes for which the Monument was designated; and (2) to support the land management partnerships of the Bureau of Land Management with the State of California, local governments, communities, and stakeholders, and to enhance the relationships those entities have with the Bureau of Land Management and Federal land, as appropriate. SEC. 3. DEFINITIONS. In this Act: (1) Federal land.--The term ``Federal land'' means-- (A) the Federal land comprising approximately 13 acres in Humboldt County, California, identified as ``Trinidad Head'' on the map; (B) the Federal land comprising approximately 5,780 acres in Santa Cruz County, California, identified as ``Cotoni-Coast Dairies Public Land'' on the map; (C) the Federal land comprising approximately 20 acres in San Luis Obispo County, California, identified as ``Piedras Blancas Light Station Outstanding Natural Area'' on the map; and (D) the Federal land comprising approximately 8 acres in Humboldt County, California, identified as ``Lighthouse Ranch'' on the map. (2) Map.--The term ``map'' means the Bureau of Land Management map entitled ``California Coastal National Monument Addition'' and dated July 24, 2015. (3) Monument.--The term ``Monument'' means the California Coastal National Monument established by Presidential Proclamation 7264. (4) Presidential proclamation 7264.--The term ``Presidential Proclamation 7264'' means Presidential Proclamation Number 7264, dated January 11, 2000 (65 Fed. Reg. 2821), creating the Monument. (5) Presidential proclamation 9089.--The term ``Presidential Proclamation 9089'' means Presidential Proclamation Number 9089, dated March 11, 2014 (79 Fed. Reg. 14603), expanding the Monument. (6) Secretary.--The term ``Secretary'' means the Secretary of the Interior. SEC. 4. EXPANSION OF CALIFORNIA COASTAL NATIONAL MONUMENT. (a) In General.--The boundary of the Monument is expanded to include the Federal land. 
(b) Map and Legal Description.-- (1) In general.--As soon as practicable after the date of enactment of this Act, the Secretary shall develop a map and boundary description of the Federal land added to the Monument by this Act. (2) Force and effect.--The map and boundary description developed under paragraph (1) shall have the same force and effect as if included in this Act, except that the Secretary may correct any minor errors in the map and boundary descriptions. (3) Availability of map and boundary description.--The map and boundary description developed under paragraph (1) shall be on file and available for public inspection in appropriate offices of the Bureau of Land Management. SEC. 5. ADMINISTRATION. (a) In General.--Subject to valid existing rights and deed restrictions in place as of the date of enactment of this Act, the Secretary shall manage the Federal land added to the Monument by this Act-- (1) as part of the Monument; and (2) in accordance with Presidential Proclamations 7264 and 9089. (b) Management Plan.-- (1) In general.--As soon as practicable after the date of enactment of this Act, the Secretary shall finalize an amendment, or multiple amendments as applicable for the individual Federal land areas, to the Monument management plan for the long-term protection and management of the Federal land added to the Monument by this Act. (2) Requirements.--Any amendment under paragraph (1) shall-- (A) be developed in consultation with, at a minimum-- (i) affected State, tribal, and local governments; (ii) the public; and (iii) interested Federal agencies; (B) describe the appropriate uses and management of the Federal land, consistent with this Act; (C) contain individual plans and considerations specific to each individual Federal land area; (D) take into consideration existing uses of the Federal land; (E) include components regarding stewardship, visitor services, facilities management and maintenance, public access, traffic, public safety, emergency services, and law enforcement; (F) include a component regarding potential education and interpretation activities, with recognition of the specific character and history of each Federal land area; and (G) include a component regarding Native American cultural resources management, with emphasis on the preservation of resources within the individual Federal land areas. (3) Interim management.--Until the completion of the management plan, the Secretary shall manage the Federal land in accordance with the purposes described in section 2(b). (c) Motorized and Mechanized Transport.--Except as needed for emergency or authorized administrative purposes, in the Monument-- (1) motorized vehicle use shall be permitted only on designated roads; and (2) mechanized vehicle use shall be permitted only on roads and trails designated for the use of those vehicles. (d) Incorporation of Land and Interests.-- (1) Authority.--Except as provided in paragraph (3), the Secretary may acquire non-Federal land or interests in land within or adjacent to the Federal land added to the Monument by this Act only through exchange, donation, or purchase from a willing seller. (2) Management.--Any land or interests in land within or adjacent to the Federal land added to the Monument by this Act acquired by the United States after the date of the enactment of this Act shall be-- (A) added to and administered as part of the Monument; and (B) with respect to inclusion in the management plan, taken into consideration through an appropriate amendment to that plan. 
(3) Exception.--An addition to the Cotoni-Coast Dairies unit of Federal land referred to in section 3(1)(C) shall be limited to the acreage contained within the boundary of the Monument, as established by this Act. (e) Existing Cooperative Management Agreements.--Any cooperative management agreement in existence on the date of enactment of this Act between the Federal land areas and other land management entities shall not be affected due to the enactment of this Act. (f) Cooperative Agreements With Local Governments and Entities.--To better implement the management plan and to continue the successful partnerships with local communities and land administered by the State of California and other partners, the Secretary may enter into cooperative agreements with the appropriate Federal, State, and local agencies and organizations pursuant to section 307(b) of the Federal Land Policy and Management Act of 1976 (43 U.S.C. 1737(b)). (g) Withdrawals.--Subject to valid existing rights, all Federal land within the Monument and all land and interests in land acquired for the Monument by the United States after the date of the enactment of this Act are withdrawn from-- (1) all forms of entry, appropriation, or disposal under the public land laws; (2) location, entry, and patent under the mining laws; and (3) operation of the mineral leasing, mineral materials, and geothermal leasing laws. (h) Native American Uses and Interests.-- (1) In general.--The Secretary shall, to the maximum extent permitted by law and in consultation with affected Indian tribes, ensure the protection of Indian sacred sites and traditional cultural properties in the Monument and provide access by members of Indian tribes for traditional cultural and customary uses, consistent with Public Law 95-341 (commonly known as the ``American Indian Religious Freedom Act''; 42 U.S.C. 1996) and Executive Order 13007 (42 U.S.C. 1996 note; relating to Indian sacred sites). (2) Relationship to other rights.--Notwithstanding paragraph (1), nothing in this Act enlarges, diminishes, or modifies the rights of any Indian tribe or Indian religious community. (i) Buffer Zones.-- (1) In general.--The expansion of the Monument by this Act is not intended to lead to the establishment of protective perimeters or buffer zones around the Federal land included in the Monument by this Act. (2) Activities outside monument.--The fact that activities outside the Monument can be seen or heard within the Federal land added to the Monument by this Act shall not, of itself, preclude those activities or uses up to the boundary of the Monument. (j) Grazing.--Nothing in this Act affects the grazing of livestock within the Federal land described in section 3(1)(C). (k) National Landscape Conservation System.--The Secretary shall manage the Monument as part of the National Landscape Conservation System. SEC. 6. ADVISORY COUNCILS. (a) Establishment.--Not less than 180 days after the date of the enactment of this Act, the Secretary shall establish an advisory council for each unit of Federal land described in subparagraphs (A) through (D) of section 3(1) within the Monument. (b) Duties.--The advisory councils shall advise the Secretary with respect to the preparation and implementation of the management plan under section 5(b) (or amendments to an existing applicable management plan) for each relevant unit of Federal land. (c) Applicable Law.--The advisory councils shall be subject to-- (1) the Federal Advisory Committee Act (5 U.S.C. 
App.); (2) the Federal Land Policy and Management Act of 1976 (43 U.S.C. 1701 et seq.); and (3) all other applicable laws (including regulations). (d) Members.--Each advisory council shall include 7 members, to be appointed by the Secretary, of whom, to the maximum extent practicable-- (1) 1 shall be appointed after taking into consideration the recommendations of the local county board of supervisors of the applicable unit of Federal land; and (2) 6 shall-- (A) reside within a reasonable proximity to the applicable unit of Federal land; and (B) demonstrate experience that reflects-- (i) the purposes for which the Monument was established; and (ii) the interest of the stakeholders that are affected by the planning and management of the unit of Federal land, which may include stakeholders representing private land- ownership, Native American interests, environmental, recreational, economic, or other non-Federal land interests. (e) Representation.--The Secretary shall ensure that the memberships of the advisory councils are fairly balanced with respect to the points of view represented, and the functions to be performed, by each advisory council. (f) Quorum.-- (1) In general.--Four members of an advisory council shall constitute a quorum. (2) Unappointed members.--The operation of an advisory committee shall not be affected if-- (A) a member has not yet been appointed to the advisory committee; but (B) a quorum has been attained. (g) Chairperson and Procedures.--Each advisory council shall-- (1) elect a chairperson from among the members of the advisory council; and (2) establish such rules and procedures as the advisory council determines to be necessary or appropriate. (h) Service Without Compensation.--The members of each advisory council shall serve without pay. (i) Termination.--The advisory councils shall terminate-- (1) on the date that is 2 years after the date on which the management plan (or amendment to an existing management plan) is officially adopted by the Secretary; or (2) on such later date as the Secretary considers to be appropriate. (j) Existing Advisory Bodies.--The Secretary may elect not to establish an advisory council for a unit of Federal land if a regularly scheduled, organized public forum or entity exists-- (1) of which the Bureau of Land Management is an active or leading participant; and (2) that fulfills the duties described in subsection (b). SEC. 7. ROCKS AND SMALL ISLANDS ALONG COAST OF ORANGE COUNTY, CALIFORNIA. (a) California Coastal National Monument.--The Act of February 18, 1931 (46 Stat. 1172, chapter 226), is amended by striking ``be, and the same are hereby, temporarily reserved'' and all that follows through ``United States'' and inserting ``are part of the California Coastal National Monument and shall be administered as part of the Monument''. (b) Repeal of Reservation.--Section 31 of the Act of May 28, 1935 (49 Stat. 309, chapter 155), is repealed.
California Coastal National Monument Expansion Act This bill expands the boundary of the California Coastal National Monument to include specified federal lands in Humboldt, Santa Cruz, and San Luis Obispo Counties in California. The Department of the Interior shall amend the Monument management plan for the long-term protection and management of the federal land so added. Interior may acquire nonfederal land or interests within or adjacent to the added federal land only through exchange, donation, or purchase from a willing seller. Interior must ensure the protection of Indian sacred sites and traditional cultural properties in the Monument and provide access by members of Indian Tribes for traditional cultural and customary uses. The Monument shall be managed as part of the National Landscape Conservation System. Interior shall establish an advisory council for each unit of the added federal land to advise on the implementation of the management plan. Certain rocks, pinnacles, reefs, and islands in the Pacific Ocean within a mile of the coast of Orange County, California, are made part of the California Coastal National Monument, and their current temporary reservation is repealed. Likewise repealed is the lighthouse reservation with respect to the San Juan and San Mateo Rocks and the two rocks in the vicinity of Laguna Beach, off the coast of Orange County.
while males constitute the majority , female adolescent offenders are a sizeable minority of the overall delinquent population . further , those females who become involved in delinquent activities appear to be doing so at a younger age , and they are involved in a wide range of criminal activities , including violent offenses . the goal of this article is to consolidate an empirical base for our current knowledge about female juvenile offenders trauma - related mental health and rehabilitation issues . we searched for studies using pilots , psyclit , psycinfo , and ebscohost electronic databases . accordingly , we present a review of findings from 33 recent studies showing consistently high rates of trauma exposure , ptsd , and common comorbidities among female adolescent offenders . we also examined recent literature on risk and protective factors for female delinquency , as well as treatments for offenders , and found that there was some early representation of trauma and ptsd as important variables to be considered in etiology and treatment . future plans for addressing the mental health needs of female offenders should be better informed by these recent findings about widespread trauma exposure and related psychological consequences . in the past 10 years an impressive number ( 33 ) of new studies on female offenders trauma exposure and related mental health issues have emerged . further , nearly one - fourth of these new studies are from non - us sources . key findings from our review of these studies indicate that severe exposure from multiple types of trauma is most often found among these young women , and that ptsd rates generally exceed 30% . high prevalence rates for other comorbidities , such as depression , substance abuse , anxiety , and suicidality are also reported . given the extensiveness of these findings , it seems clear that severe trauma exposure and serious mental health sequelae are to be expected in high proportions of incarcerated female offenders , both in international and us forensic settings . while there is modest representation of trauma exposure and related psychological disorders among studies of etiology for female delinquency , present treatments for offenders do not specifically target trauma effects . we hope that the results of this review will encourage the inclusion of trauma as a prominent consideration in future treatment planning for female offenders . there is no conflict of interest in the present study for any of the authors .
backgroundwhile males constitute the majority , female adolescent offenders are a sizeable minority of the overall delinquent population . further , those females who become involved in delinquent activities appear to be doing so at a younger age , and they are involved in a wide range of criminal activities , including violent offenses.objectivethe goal of this article is to consolidate an empirical base for our current knowledge about female juvenile offenders trauma - related mental health and rehabilitation issues.methodwe searched for studies using pilots , psyclit , psycinfo , and ebscohost electronic databases.resultsaccordingly , we present a review of findings from 33 recent studies showing consistently high rates of trauma exposure , ptsd , and common comorbidities among female adolescent offenders . we also examined recent literature on risk and protective factors for female delinquency , as well as treatments for offenders , and found that there was some early representation of trauma and ptsd as important variables to be considered in etiology and treatment.conclusionfuture plans for addressing the mental health needs of female offenders should be better informed by these recent findings about widespread trauma exposure and related psychological consequences .
hadrons are the complex systems consisting of quarks and gluons , which makes a long and continuous way to precisely understand the hadron structure . thanks to the collinear factorization theorem@xcite in quantum chromodynamics ( qcd ) , the calculation of high energy hadron collision becomes much straightforward . the calculation is the product of the calculable hard process and the incalculable soft part which is absorbed into the parton distribution functions ( pdfs ) . although incalculable so far , parton distribution functions are universal coefficients which can be determined by the experiments conducted worldwide . moreover there are some models@xcite and lattice qcd ( lqcd ) calculations@xcite which try to predict / match the pdfs of proton . pdfs in wide kinematic ranges of @xmath2 and @xmath4 is an important tool to give some theoretical predictions of high energy hadron collisions and simulations of expected interesting physics in modern colliders or jlab experiments of high luminosity . determination of pdfs of proton attracts a lot of interests on both theoretical and experimental sides . to date , the most reliable and precise pdfs data comes from the global qcd analysis of experimental data . there have been a lot of efforts and progresses achieved on this issue@xcite . in the global analysis , firstly , the initial parton distributions at low scale @xmath5 1 gev@xmath1 , commonly called the nonperturbative input , is parameterized using complicated functions with many parameters . given the nonperturbative input , the pdfs at high @xmath2 are predicted by using dglap equations from qcd theory . secondly , the nonperturbative input is determined by comparing the theoretical predictions to the experimental data measured at high scale . this procedure is usually chosen to be the least square regression method . finally , pdfs in a wide kinematic range is given with the obtained optimized nonperturbative input . although a lot of progresses have been made , the gluon distribution at small @xmath4 is still poorly estimated , which has large uncertainties@xcite . even worse , the gluon distributions from different collaborations exhibit large differences . gluon distribution needs to be more quantitative in terms with a number of physics issues relating to the behavior of it @xcite . pdfs at low resolution scale is always confusing since it is in the nonperturbative qcd region . however it is related to the nucleon structure information measured at high resolution scale . therefore the nonperturbative input gives some valuable information of the nucleon . besides the powerful predictions of the qcd theory , other fundamental rules of hadron physics should also be reflected in the nonperturbative input . how does the pdfs relate to the simple picture of the proton made up of three quarks ? in the dynamical parton model@xcite , the input contains only valence quarks , valence - like light seas and valence - like gluon , which is consistent with the dressed constitute quark model . all sea quarks and gluons at small @xmath4 are dynamically produced . in the dynamical parton approach , the gluon and sea quark distributions are excellently constrained by the experimental data , since there are no parametrizations for input dynamical parton distributions . parton radiation is the dynamical origin of sea quarks and gluons inside the proton . it is also worthwhile to point out that the valence - like input and pdfs generated from it are positive . 
in some analysis , the negative gluon density distributions@xcite are allowed for the nonperturbative input in order to fit the small-@xmath4 behavior observed at high scale . the dynamical parton model is developed and extended to even low scale around @xmath6 gev@xmath1 in our previous works@xcite . the naive nonperturbative input@xcite with merely three valence quarks are realized , which is the simplest input for the nucleon . in the later research@xcite , we composed a nonperturbative input which consists of three valence quarks and flavor - asymmetric sea components , and extracted the flavor - asymmetric sea components from various experimental data measured at high @xmath2 . the flavor - asymmetric sea components here refer to the sea quark distributions generated not from the qcd evolution but from the complicated nonperturbative qcd mechanisms . in terms of the interpretation of the nonperturbative input , the extended dynamical parton model gives the clearest physics picture . this work is mainly based on our previous works@xcite . the extended dynamical parton model is taken in the analysis . dglap equations@xcite based on parton model and perturbative qcd theory successfully and quantitatively interpret the @xmath2-dependence of pdfs . it is so successful that most of the pdfs are extracted by using the dglap equations up to now . and the common way of improving the accuracy of the determined pdfs is to apply the higher order calculations of dglap equations . however there are many qcd - based evolution equations and corrections to dglap equations@xcite being worked out . it is worthwhile to apply new evolution equations in the global analysis . there are some pioneering works@xcite trying to reach this aim . in this work , dglap equations with glr - mq - zrs corrections are taken to do the global analysis . the main purpose of this study is to give purely dynamical gluon distributions ( @xmath7 ) , which is expected to be more reliable at small @xmath4 . the second purpose is to connect the quark model picture of proton to the qcd description at high energy scale . the aim is to resolve the origin of sea quarks and gluons at high resolution scale . the third purpose is to understand the qcd dynamics of parton radiation and parton recombination . we want to quantify the strength of glr - mq - zrs corrections by determining the value of parton correlation length @xmath8 . the organization of this paper is as follows . section [ secii ] lists the experimental data we used in the analysis . section [ seciii ] discusses the qcd evolution equations , which is the most important tool to evaluate the pdfs . the nonperturbative input inspired by quark model and other nonperturbative effects are discussed in sec . [ seciv ] . the other details of the qcd analysis are explained in sec . section [ secvi ] shows the results of the global fits and the comparisons of the obtained pdfs to experimental measurements and other widely used pdf data sets . section [ secvii ] introduces the imparton package which gives the interface of the obtained pdfs . finally , a simple discussion and summary is given in sec . [ secviii ] . the deeply inelastic scattering ( dis ) of charged leptons on nucleon has been the powerful tool to study nucleon structure for a long time . the quark structure of matter is clearly acquired by decades of measurements starting from the late 1960s with the lepton probes interacting mainly through the electromagnetic force . 
the dis data of leptons is so important that we include only the dis data in this work . the structure function @xmath9 data used in this analysis are taken from slac@xcite , bcdms@xcite , nmc@xcite , e665@xcite and hera ( h1 and zeus)@xcite collaborations . in order to make sure the data is in the deep inelastic region , and to eliminate the contributions of nucleon resonances , two kinematic requirements shown in eq . ( [ q2w2_cuts ] ) are performed to select the experimental data . @xmath10 for the neutral - current dis , the contribution of the z - boson exchange can not be neglected at high @xmath2 . therefore we compose another kinematic cut to reduce the influence of the z - boson exchange contribution , which is shown in eq . ( [ q2_cut2 ] ) . the z - boson exchange contribution is of the order @xmath11 1% at @xmath12 . @xmath13 with these kinematic requirements , we get 469 , 353 , 258 , 53 and 763 data points from slac , bcdms , nmc , e665 and hera experiments respectively . slac was the first to perform the fixed - target dis experiments . the slac data we used is from the reanalysis of a series of eight electron inclusive scattering experiments conducted between 1970 and 1985 . the reanalysis procedure implement some improved treatments of radiative correction and the value of @xmath14 . the minus four - momentum transfer squared @xmath2 of slac experiments are not big ( @xmath15 gev@xmath1 ) , and the @xmath4 is mainly at large @xmath4 ( @xmath16 ) because of the relative low beam energy . the target mass correction ( tmc ) should not be ignored for the slac data , because of the low @xmath2 and large @xmath4 . in this work , the formula of tmc@xcite is taken as , @xmath17 with @xmath18 , and @xmath19 the nachtmann variable defined as @xmath20 . compared to the later experiments , the uncertainties of the structure functions and the absolute normalization of slac data are big . the precise measurements of the structure function @xmath21 was followed by the experiments at cern , fermilab , and hera at desy . both bcdms and nmc data are collected from the muon - proton dis with cern sps muon beam but with radically different detectors . the bcdms data are taken at beam energy of 100 , 120 , 200 and 280 gev , and the nmc data are taken at beam energy of 90 , 120 , 200 and 280 gev . the absolute normalization for the nmc data was based on an empirical data model motivated basically by leading order qcd calculations . therefore we should fit the nmc normalization factors for each incident beam energy . the h1 and zeus data at hera span a wide kinematic region of both @xmath2 and @xmath4 . the small @xmath4 information of the structure function primarily comes from the hera data . the hera data we used is the combined analysis of h1 and zeus experiments . the normalization uncertainty in this data is 0.5% . a complementary set of the inclusive hera data was obtained by the h1 collaboration in the run with a reduced collision energy . these data are particularly sensitive to the structure function @xmath22 and thereby to the small-@xmath4 shape of the gluon distribution . finally , the kinematic coverage of the charged lepton - proton dis data is shown in fig . [ expkine ] . the kinematic of all the data covers 3 magnitudes in both @xmath4 and @xmath2 . since the slac and the nmc data distribute from relatively low @xmath2 , the target mass corrections are applied when comparing theoretical calculations to these data . 
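for readers implementing the target mass correction , the nachtmann variable mentioned above has the standard definition ξ = 2x / ( 1 + sqrt( 1 + 4 m^2 x^2 / q^2 ) ) , with m the nucleon mass . the snippet below evaluates ξ and w^2 and applies illustrative dis cuts of the kind described ; the specific cut values used here ( q^2 > 2 gev^2 , w^2 > 4 gev^2 ) are placeholders , since the values actually used in the analysis are those given in eq . ( q2w2_cuts ) of the paper .

```cpp
// nachtmann variable and simple dis kinematic cuts ( cut values are placeholders ) .
#include <cmath>
#include <cstdio>

const double M_N = 0.938272;   // proton mass , gev

double nachtmann_xi(double x, double Q2) {
    return 2.0 * x / (1.0 + std::sqrt(1.0 + 4.0 * M_N * M_N * x * x / Q2));
}

double W2(double x, double Q2) {               // invariant mass squared of the hadronic final state
    return M_N * M_N + Q2 * (1.0 - x) / x;
}

bool passes_dis_cuts(double x, double Q2) {    // illustrative thresholds , not the paper's values
    return Q2 > 2.0 && W2(x, Q2) > 4.0;
}

int main() {
    double x = 0.5, Q2 = 4.0;                  // example kinematic point
    std::printf("x = %.2f  Q2 = %.1f GeV^2  xi = %.3f  W2 = %.2f GeV^2  pass = %d\n",
                x, Q2, nachtmann_xi(x, Q2), W2(x, Q2), passes_dis_cuts(x, Q2));
    return 0;
}
```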
all the normalization factors of the experimental data are fitted in the analysis except for the combined data of h1 and zeus , as the normalization uncertainties are not small for other data . dglap equations@xcite is the important and widely used tool to describe the @xmath2 dependence of quark and gluon densities . the equations are derived from the perturbative qcd theory using the quark - parton model instead of the rigorous renormalization group equations , which offers a illuminating interpretation of the scaling violation and the picture of parton evolution with the @xmath2 . the dglap equations are written as , @xmath23 in which @xmath24 , @xmath25 , @xmath26 and @xmath27 are the parton splitting functions@xcite . the prominent characteristic of the solution of the equations is the rising sea quark and gluon densities toward small @xmath4 . the qcd radiatively generated parton distributions at small @xmath4 and at high @xmath2 are tested extensively by the measurements of hard processes at modern accelerators . the most important correction to dglap evolution is the parton recombination effect . the theoretical prediction of this effect is initiated by gribov , levin and ryskin ( glr)@xcite , and followed by mueller , qiu ( mq)@xcite , zhu , ruan and shen ( zrs)@xcite with concrete and different methods . the number densities of partons increase rapidly at small @xmath4 . at some small @xmath4 , the number density become so large that the quanta of partons overlap spatially . one simple criterion to estimate this saturation region is @xmath28 , with @xmath29 the proton radius . therefore the parton - parton interaction effect becomes essential at small @xmath4 , and it expected to stop the increase of the cross sections near their unitarity limit . in zrs s work , the time - ordered perturbative theory ( topt ) is used instead of the agk cutting rules@xcite . the corrections to dglap equations are calculated in the leading logarithmic ( @xmath2 ) approximation , and extended to the whole @xmath4 region , which satisfy the momentum conservation rule@xcite . in this analysis , dglap equations with glr - mq - zrs corrections are used to evaluate the pdfs of proton . the glr - mq - zrs corrections are very important to slow down the parton splitting at low @xmath30 gev@xmath1 . up to date , zrs have derived all the recombination functions for gluon - gluon , quark - gluon and quark - quark processes@xcite . our previous work finds that the gluon - gluon recombination effect is dominant@xcite , since the gluon density is significantly larger than the quark density at small @xmath4 . therefore , we use the simplified form of dglap equations with glr - mq - zrs corrections , which is written as , @xmath31 for the flavor non - singlet quark distributions , @xmath32\\ -\frac{\alpha_s^2(q^2)}{4\pi r^2q^2}\int_x^{1/2 } \frac{dy}{y}xp_{gg\to \bar{q}}(x , y)[yf_g(y , q^2)]^2\\ + \frac{\alpha_s^2(q^2)}{4\pi r^2q^2}\int_{x/2}^{x}\frac{dy}{y}xp_{gg\to \bar{q}}(x , y)[yf_g(y , q^2)]^2 , \end{aligned } \label{zrs - s}\ ] ] for the dynamical sea quark distributions , and @xmath33\\ -\frac{\alpha_s^2(q^2)}{4\pi r^2q^2}\int_x^{1/2 } \frac{dy}{y}xp_{gg\to g}(x , y)[yf_g(y , q^2)]^2\\ + \frac{\alpha_s^2(q^2)}{4\pi r^2q^2}\int_{x/2}^{x}\frac{dy}{y}xp_{gg\to g}(x , y)[yf_g(y , q^2)]^2 , \end{aligned } \label{zrs - g}\ ] ] for the gluon distribution , in which the factor @xmath34 is from the normalization of the two - parton densities , and @xmath8 is the correlation length of the two interacting partons . 
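to make the structure of the corrected evolution equations above concrete , the sketch below evolves a toy gluon distribution xg(x , q^2 ) in ln q^2 using ( i ) a crude small - x gain term standing in for the full dglap convolution and ( ii ) a negative quadratic term of the glr - mq - zrs shadowing form , with the recombination kernel replaced by a constant and the antishadowing piece omitted for brevity . the coupling , correlation length , grid , and input shape are all schematic , so the code only illustrates how the 1/(4πr^2q^2 ) quadratic term enters the bookkeeping , not the quantitative behavior of the real kernels used in the analysis .

```cpp
// toy illustration : linear splitting growth plus a quadratic recombination term .
// everything here ( kernel , coefficients , input ) is schematic .
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double pi     = 3.141592653589793;
    const int    N      = 60;                        // points on a logarithmic x grid
    const double xmin   = 1e-4;
    const double alphas = 0.4;                       // frozen coupling , purely illustrative
    const double recomb = 1.0 / (4.0 * pi * 12.0);   // stands in for 1/(4 pi R^2) , R^2 ~ 12 GeV^-2 ( schematic )
    double       Q2     = 1.0;                       // starting scale , GeV^2
    const double dlnQ2  = 0.05;

    std::vector<double> x(N), xg(N);
    for (int i = 0; i < N; ++i) {
        x[i]  = xmin * std::pow(1.0 / xmin, double(i) / (N - 1));   // xmin .. 1
        xg[i] = 3.0 * x[i] * std::pow(1.0 - x[i], 5.0);             // toy valence - like gluon input
    }

    for (int step = 0; step < 60; ++step) {
        std::vector<double> dxg(N, 0.0);
        for (int i = 0; i < N; ++i) {
            double gain = 0.0, quad = 0.0;
            for (int j = i; j < N - 1; ++j) {          // crude integration over y >= x
                double dlny = std::log(x[j + 1] / x[j]);
                gain += xg[j] * dlny;                  // stand - in for the P_gg convolution at small x
                quad += xg[j] * xg[j] * dlny;          // [ y g(y) ]^2 recombination integrand , kernel = 1
            }
            dxg[i] = (3.0 * alphas / pi) * gain
                   - (alphas * alphas) * recomb / Q2 * quad;   // negative quadratic shadowing term
        }
        for (int i = 0; i < N; ++i) xg[i] += dxg[i] * dlnQ2;   // euler step in ln Q^2
        Q2 *= std::exp(dlnQ2);
    }

    for (int i = 0; i < N; i += 15)
        std::printf("x = %.1e   xg = %.3f\n", x[i], xg[i]);
    std::printf("final Q^2 = %.1f GeV^2\n", Q2);
    return 0;
}
```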
in most cases , @xmath8 is supposed to be smaller than the hadron radius@xcite . note that the integral terms as @xmath35 in above equations should removed when @xmath4 is larger than @xmath36 . @xmath37 in eq . ( [ zrs - g ] ) is defined as @xmath38 $ ] . the splitting functions of the linear terms are given by dglap equations , and the recombination functions of the nonlinear terms are written as@xcite , @xmath39 quark model achieved a remarkable success in explaining the hadron spectropy and some dynamical behaviors of high energy reactions with hadrons involved . quark model uncovers the internal symmetry of hadrons . moreover , it implies that the hadrons are composite particles containing two or three quarks . according to the quark model assumption , the sea quarks and gluons of proton at high @xmath2 are radiatively produced from three valence quarks . there are some model calculations of the initial valence quark distributions at some low @xmath40 from mit bag model@xcite , nambu - jona - lasinio model@xcite and maximum entropy@xcite estimation . inspired by the quark model , an ideal assumption is that proton consists of only three colored quarks at some low scale @xmath41 . this assumption results in the naive non - perturbative input three valence quarks input . at the input scale , the sea quark and gluon distributions are all zero . this thought is widely studied soon after the advent of qcd theory@xcite . the initial scale of the naive nonperturbative input is lower than 1 gev@xmath1 , since gluons already take comparable part of the proton energy at @xmath42 gev@xmath1 . to properly evolve the naive nonperturbative input should be considered at such low @xmath2 . partons overlap more often at low @xmath2 because of the big size at low resolution scale . in our analysis , the recombination corrections are implemented . in the dynamical pdf model , all sea quarks and gluons at small @xmath4 are generated by the qcd evolution processes . global qcd analysis based on the dynamical pdf model@xcite reproduced the experimental data at high @xmath2 with high precision using the input of three dominated valence quarks and valence - like components which are of small quantities . partons produced by the qcd evolution are called the dynamical partons . the input scale for the valence - like input is aroud 0.3 gev@xmath1@xcite and the evolution of the valence - like input is performed with dglap equations . in our works , the dynamical pdf model is developed and extended to even low @xmath2@xcite . the naive nonperturbative input is realized in our approach . the input of valence quarks with flavor - asymmetric sea components is also investigated and found to be a rather better nonperturbative input . the flavor - asymmetric sea components here refer to the intrinsic sea quarks in the light front theory@xcite or the connected sea quarks in lqcd@xcite , or the cloud sea in the @xmath43 cloud model@xcite . although there are different theories for the flavor - asymmetric sea components , the flavor - asymmetric sea components are generated by the nonperturbative mechanisms . these types of sea quarks are completely different from the dynamical sea quarks . in this analysis , the evolutions of the flavor - asymmetric sea components obey the equation for the non - singlet quark distributions . in this work , we try to use two different inputs . one is the naive nonperturbative input and the other is the three valence quarks adding a few flavor - asymmetric sea components . 
for convenience , three valence quarks input is called input a , and the one with flavor - asymmetric sea components is called input b in this paper . accordingly , pdfs from inputs a and b are called data set a and data set b respectively . the simplest function form to approximate valence quark distribution is the time - honored canonical parametrization @xmath44 , which is found to well depict the valence distribution at large @xmath4 . therefore the parameterization of the naive input is written as , @xmath45 with zero sea quark distributions and zero gluon distribution . one proton has two up valence quarks and one down valence quark . therefore we have the valence sum rules for the nonperturbtive inputs , @xmath46 for the naive input , the valence quarks take all the momentum of proton . we have the momentum sum rule for valence quarks in the naive input , @xmath47dx=1 . \label{momentumsum-1}\ ] ] with above constraints , there are only three free parameters left for the parametrizations of the naive input . the naive input ( eq . ( [ naive - para ] ) ) is the simplest nonperturbative input for proton , which simplifies the nucleon structure greatly . for input b , the parametrizations of valence quarks and the valence sum rules are the same . for simplicity , the parameterizations of the flavor - asymmetric sea components in input b are given by , @xmath48 this parameterizations easily predict the @xmath49 difference . the dynamical sea quark and gluon distributions are all zero for input b. with the flavor - asymmetric sea components , the momentum sum rule for input b is modified as follows , @xmath50dx\\ = \int_0 ^ 1x [ u^{v}(x , q_0 ^ 2)+2\bar{u}^{as}(x , q_0 ^ 2)\\ + d^{v}(x , q_0 ^ 2)+2\bar{d}^{as}(x , q_0 ^ 2 ) ] dx=1 . \end{aligned } \label{momentumsum-2}\ ] ] in order to determine the quantity of the flavor - asymmetric sea components with accuracy , the following constraint eq . ( [ asynum ] ) from e866 experiment@xcite is taken in this analysis . @xmath51 dx=0.118 . \end{aligned } \label{asynum}\ ] ] therefore , there are only 7 free parameters left for the parametrization of input b. for better discussion on the quantity of flavor - asymmetric sea , we define @xmath52 the momentum fraction of the flavor - asymmetric sea components , @xmath53dx . \end{aligned } \label{asyfrac}\ ] ] one last thing about the nonperturbative input is the input scale @xmath41 . according to the naive nonperturbative input , the momentum fraction taken by valence quarks is one . by using qcd evolution for the second moments ( momentum ) of the valence quark distributions@xcite and the measured moments of the valence quark distributions at a higher @xmath2@xcite , we get the specific starting scale @xmath54 gev for lo evolution ( with @xmath55 gev for @xmath56 flavors ) . this energy scale is very close to the starting scale for bag model pdfs which is @xmath57 gev @xcite . in all , the initial scale @xmath41 depends on the running coupling constant and the experimental measurements at high @xmath2 . we are sure that the initial scale @xmath41 for the naive input is close to the pole ( @xmath58 ) of coupling constant . in this analysis , the initial scale @xmath41 is viewed as a free parameter which can be determined by experimental data . the running coupling constant @xmath59 and the quark masses are the fundamental parameters of perturbative qcd . in fact these parameters can be determined by the dis data at high @xmath2 . 
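the canonical parametrization and the sum rules above translate directly into code . the sketch below takes toy valence shapes of the form a x^a ( 1 - x )^b , fixes the normalizations analytically through the euler beta function so that the number sum rules ( 2 up valence quarks , 1 down valence quark ) hold , and then reports the momentum carried by the valence quarks at the input scale . the exponents are invented for illustration and are not the fitted parameters of eq . ( naive - para ) .

```cpp
// enforce valence - number sum rules for x^a (1-x)^b shapes and check the momentum sum .
#include <cmath>
#include <cstdio>

// euler beta function B(p,q) = Gamma(p) Gamma(q) / Gamma(p+q)
double beta_fn(double p, double q) {
    return std::exp(std::lgamma(p) + std::lgamma(q) - std::lgamma(p + q));
}

int main() {
    // toy exponents for u_v(x) = A_u x^{a_u} (1-x)^{b_u} and d_v(x) = A_d x^{a_d} (1-x)^{b_d}
    double a_u = 0.6, b_u = 2.5;
    double a_d = 0.7, b_d = 3.5;

    // number sum rules : int u_v dx = 2 , int d_v dx = 1
    double A_u = 2.0 / beta_fn(a_u + 1.0, b_u + 1.0);
    double A_d = 1.0 / beta_fn(a_d + 1.0, b_d + 1.0);

    // second moments : momentum carried by each valence flavor at the input scale
    double mom_u = A_u * beta_fn(a_u + 2.0, b_u + 1.0);
    double mom_d = A_d * beta_fn(a_d + 2.0, b_d + 1.0);

    std::printf("A_u = %.3f  A_d = %.3f\n", A_u, A_d);
    std::printf("<x>_uv = %.3f  <x>_dv = %.3f  total = %.3f\n", mom_u, mom_d, mom_u + mom_d);
    return 0;
}
```

with these invented exponents the valence quarks carry roughly 0.9 of the proton momentum ; for the naive input the shape parameters would have to be adjusted so that the total equals 1 , while for input b the remainder is assigned to the flavor - asymmetric sea components .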
however these fundamental parameters are already determined by a lot of experiments . hence there is no need to let these parameters to be free . the running coupling constant we choose is @xmath60 in which @xmath61 and @xmath62 mev@xcite . for the @xmath59 matchings , we take @xmath63 gev , @xmath64 gev , @xmath65 gev . the fixed flavor number scheme ( ffns ) is used to deal with heavy quarks in this work . in this approach , the heavy quarks ( @xmath66 , @xmath67 and @xmath68 ) will not be considered as massless partons within the nucleon . the number of active flavors @xmath69 in the dglap evolution and the corresponding wilson coefficients is fixed at @xmath70 ( only @xmath71 , @xmath72 and @xmath73 light quarks ) . the heavy quark flavors are entirely produced perturbatively from the initial light quarks and gluons . the ffns predictions agree with the dis data with excellence@xcite . in this analysis , only charm quark distribution is given , since bottom and up distributions are trivial . the charm quark distribution comes mainly from the gluon distribution through the photon - gluon fusion subprocesses as @xmath74 , @xmath75 and @xmath76@xcite . the lo contribution of charm quarks to the structure function@xcite is calculated in this analysis . the flavor - dependence of sea quarks is an interesting finding in the nucleon structure study@xcite . as discussed in sec . [ seciii ] , the flavor - asymmetric sea components @xmath77 and @xmath78 result in the @xmath49 difference naturally . as found in experiments@xcite and predicted by the lqcd@xcite , the strange quark distribution is lower than the up or down quark distribution . in order to reflect the suppression of strange quark distribution , the suppression ratio is applied as @xmath79 with @xmath80@xcite . @xmath81 here denotes the dynamical sea quarks . in this approach , the strange quarks are all dynamical sea quarks without any intrinsic components . the least square method is used to determine the optimal parameterized nonperturbative input . using dglap evolution with recombination corrections , the @xmath82 function is calculated by the formula , @xmath83 where @xmath84 is the number of data points in experiment @xmath85 , @xmath86 is a data in a experiment , @xmath87 is the predicted value from qcd evolution , and @xmath88 is the total uncertainty combing both statistical and systematic errors . two separate fits are performed for input a , which consists only three valence quarks . one of them is the fit to all @xmath4 range ( fit 1 ) and the other is to fit the data excluding the region of @xmath89 ( fit 2 ) . the results of the fits are listed in table [ table_chi2 ] . the obtained input valence quark distributions from fit 1 and fit 2 are expressed as @xmath90 the initial scale @xmath41 and the parton correlation length @xmath8 for parton recombination are shown in table [ table_para ] . the obtained @xmath8 values are smaller than the proton radius , which are consistent with the previous studies@xcite . in order to justify the importance of parton - parton recombination corrections , we also performed a global fit using dglap equations without glr - mq - zrs corrections to the experimental data in the range of @xmath91 or @xmath92 , as a baseline . the obtained @xmath93 is @xmath94 , and the input scale is @xmath95 mev . 
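the least - squares comparison described above , including fitted normalization factors for individual data sets , can be sketched as follows . the data structures and numbers are illustrative only , and the way the normalization factor multiplies the data and its error is one common convention rather than necessarily the exact treatment used in the analysis .

```cpp
// chi^2 over several experiments , each with a fitted overall normalization factor .
#include <cstdio>
#include <vector>

struct DataPoint  { double value, error, theory; };   // measured f2 , total error , qcd prediction
struct Experiment { const char* name; double norm; std::vector<DataPoint> points; };

double chi2(const std::vector<Experiment>& exps) {
    double sum = 0.0;
    for (const auto& e : exps)
        for (const auto& p : e.points) {
            double r = (e.norm * p.value - p.theory) / (e.norm * p.error);
            sum += r * r;
        }
    return sum;
}

int main() {
    // toy numbers standing in for structure - function data and theory predictions
    std::vector<Experiment> exps = {
        {"slac",  1.007, {{0.32, 0.02, 0.33}, {0.28, 0.02, 0.27}}},
        {"nmc90", 1.07,  {{0.36, 0.03, 0.38}, {0.30, 0.03, 0.31}}},
    };
    std::printf("chi^2 = %.3f for %zu experiments\n", chi2(exps), exps.size());
    return 0;
}
```

in a full fit this function would be minimized over both the nonperturbative - input parameters ( which enter through the theory predictions via the qcd evolution ) and the normalization factors.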
the quality of the fit is bad if we use dglap equations without parton - parton recombination corrections , because parton splitting process only generates very steep and high parton distributions at small @xmath4@xcite . parton - parton recombination corrections can not be neglected if the evolution of pdfs starts from very low resolution scale . . the obtained @xmath82 of fit 1 , 2 and 3 . [ cols="^,^,^,^",options="header " , ] [ table_para ] the obtained @xmath93 is big for input a , especially in the case of fit 1 . basically , the predicted @xmath96 structure function gives the similar shape as that measured in experiments , which are shown in fig . [ q2_22_f2](a ) . however it fails in depicting the experimental data in details around @xmath97 . the experimental data are obviously higher than fit 2 in the intermediate @xmath4 region , which is demonstrated clearly in fig . [ q2_22_f2](b ) . it is interesting to find that the pdfs generated from three valence quarks input miss a peak - like component in the transition region from valence - domain to sea - domain . three valence quarks input needs to be modified and developed . this discrepancy is expected to be removed by the intrinsic light quarks or cloud sea quarks or connected quarks . in order to get reliable valence quark distributions , the experimental data in the region of @xmath89 should be excluded in the global fit , since the discrepancy around @xmath97 distorts the optimal three valence quarks input from the analysis . this is the reason why we performed fit 2 to input a. fit 2 is in excellence agreement with the experimental data at both large @xmath4 and small @xmath4 , which are shown in fig . [ q2_22_f2 ] . quarks at small @xmath4 ( @xmath98 ) are mainly the dynamical sea quarks . generally , our obtained valence quark distributions and the dynamical sea quark distributions are consistent with the experimental observables . @xmath99 for input b , we performed a fit to the data in all @xmath4 range ( fit 3 ) . the quality of the fit improves greatly compared to input a , which is shown in table [ table_chi2 ] . the additional flavor - asymmetric sea components are important to remove the discrepancy around @xmath97 . the obtained input is shown in eq . ( [ input_b_fit ] ) . so far , we have introduced the simplest parametrization for flavor - asymmetric sea components . we argue that more complex parametrization will further improve the result . the total momentum @xmath52 carried by flavor - asymmetric sea components at the input scale is obtained to be 0.1 . the obtained parameters @xmath41 and @xmath8 are shown in table [ table_para ] , which is close to that of fit 1 and fit 2 . the determined input scales are close to the simple theoretical estimation 0.253 gev as discussed in sec . [ seciv ] . the obtained normalization factor of slac data is 1.007 . the obtained normalization factors of nmc data at beam energy of 90 , 120 , 200 , 280 gev are 1.07 , 1.08 , 1.07 , and 1.04 respectively . the normalization factors of nmc data by abm11 global analysis@xcite are also large than one . the obtained normalization factor of e665 data is 1.09 . the obtained normalization factor of h1 data is 1.02 . the obtained normalization factors of bcdms data at beam energy of 100 , 120 , 200 , 280 gev are 1.02 , 1.01 , 1.007 , and 1.01 respectively . the predictions of @xmath4-dependence of structure functions at different @xmath2 are shown in figs . [ slac_f2 ] , [ hera_f2 ] and [ h1_f2 ] with the experimental data . 
our obtained pdfs agree well with the experimental measurements in a wide kinematical range at high resolution scales . the evolution of the @xmath96 structure function with @xmath2 and the comparison with the experimental data are shown in fig . [ q2_dependence_f2 ] . parton distribution functions generated from the valence quarks and the flavor - asymmetric sea components in the nonperturbative region are consistent with the experimental measurements at high @xmath0 gev@xmath1 in the whole @xmath4 region with the application of glr - mq - zrs corrections to the standard dglap evolution . the experimental data favor some intrinsic components in the nonperturbative input besides the three valence quarks . figs . [ valence_highq2 ] and [ seaglu_highq2 ] show the valence quark , sea quark and gluon distributions at high @xmath2 compared to other widely used parton distribution functions . the valence quark distributions exhibit some differences between our result and other recent global analyses . this discrepancy suggests that we need a more complicated parametrization for the valence quark distributions beyond the simple beta function form . the sea quark distributions are consistent with each other . our gluon distribution is close to that of grv98 and mstw08 , but it is higher than that of ct10 . one thing we need to point out is that our gluon distributions are purely dynamically produced in the qcd evolution . we argue that this gluon distribution is more reliable since no arbitrary parametrization of the input gluon distribution is involved . our predicted difference between @xmath100 and @xmath101 is shown in fig . [ e866_dbar_ubar ] . since the up and down dynamical sea quarks are produced from gluon splitting , their distributions are the same . the flavor asymmetry between the up and down sea quarks comes merely from the flavor - asymmetric sea components in this approach . the parametrization of the flavor - asymmetric sea components in this work can basically reproduce the @xmath100-@xmath101 difference observed in the drell - yan process . note that the e866 data are not included in the global analysis . our predicted strange quark distribution at @xmath102 gev@xmath1 is shown in fig . [ strange ] together with the recent reanalysis data by the hermes collaboration and other widely used parton distribution functions . the predicted strange quark distribution describes the experimental data well , and is consistent with the other pdfs . our strange quark distribution is purely dynamically generated , since there is no strange quark component in the parameterized nonperturbative input . compared to the up and down dynamical sea quark distributions , the dynamical strange quark distribution is suppressed in our approach . the suppression of the strange sea quark distribution is not hard to understand , because the current mass of the strange quark is much larger than that of the up or down quark . this kind of suppression is supported by lqcd calculations . fig . [ f2c ] shows the comparison of the charm quark distributions to the measurements by the h1 and zeus collaborations . the charm quark distributions are based on the lo calculation of photon - gluon fusion . this method of dealing with the charm quark distribution is also used in the global analyses of grv95 and grv98 . although it is a simple calculation under the ffns , the calculation of the photon - gluon fusion subprocesses basically reproduces the experimental measurements of the charm quark contribution to the @xmath96 structure function .
in our approach , parton distribution functions at very low @xmath2 are also given . we extend the input scale from @xmath103 gev@xmath1 down to @xmath104 gev@xmath1 . our valence quark distributions at low @xmath2 are shown in fig . [ valence_at_lowq2 ] . the valence quark distributions are notably large at large @xmath4 . fig . [ gluon_at_lowq2 ] shows the gluon distributions at low @xmath2 . the gluon distributions are regge - like and positive even at extremely low scales . on the issue of the gluon distribution , the prominent advantage of the extended dynamical parton model is that there is no negative gluon density at any resolution scale , no matter how small @xmath2 is . we provide a c++ package named imparton to access the obtained pdfs in a wide kinematic range , so that users can avoid the complicated qcd evolution with glr - mq - zrs corrections and apply the pdfs more easily in practice . the package is now available from us via email , from the www@xcite , or by download with the git command@xcite . two data sets of the global analysis results , called data set a ( the fit 2 result ) and data set b ( the fit 3 result ) , are provided by the package . data set a is from the three valence quarks nonperturbative input , and data set b is from the nonperturbative input of three valence quarks plus flavor - asymmetric sea quark components , as discussed in sec . [ seciv ] . the package consists of a c++ class named imparton which gives the interface to the pdfs . imparton has a method imparton::setdataset(int setoption ) , which lets the user choose data set a or data set b via setdataset(1 ) or setdataset(2 ) respectively . the most important method of imparton is imparton::getpdf(int iparton , double x , double q2 ) , which is called by users to get the pdf values . setting iparton to -4 , -3 , -2 , -1 , 0 , 1 , 2 , 3 , 4 returns the @xmath105 , @xmath106 , @xmath100 , @xmath101 , gluon , @xmath71 , @xmath72 , @xmath73 and @xmath66 quark / gluon distribution functions respectively . the returned pdf values come from quadratic interpolation of the table grid data calculated by the dglap equations with glr - mq - zrs corrections . the table grids are generated in the kinematic range of @xmath107 and @xmath108 gev@xmath1 . the pdf values outside the grid range are estimated using extrapolation methods . the relative uncertainty of the interpolation is less than 1% in the kinematical range of @xmath109 . we constructed some naive nonperturbative inputs inspired by the quark model and some other nonperturbative qcd models at very low @xmath2 . by using dglap equations with glr - mq - zrs corrections , the pdfs generated from these nonperturbative inputs are consistent with various experiments . the obtained gluon distribution is purely dynamically produced , without even a valence - like gluon component . the dynamical parton distributions generated in this approach are expected to have small bias as a result of the strict theoretical constraints of the method . a c++ package named imparton is introduced to interface with the obtained pdfs . two pdf data sets are provided . one is from the three valence quarks input , and the other is from three valence quarks with a few flavor - asymmetric sea components . the obtained pdfs can be tested and updated with further investigations of many other hard processes , such as the drell - yan process , inclusive jet production and vector meson production .
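as an illustration of the imparton interface described above , the following minimal program sketches how a user might query the pdfs . only setdataset and getpdf are documented here ; the header name , the default construction of the imparton object and the printed output are assumptions made for this example .

```cpp
// Hypothetical usage sketch of the imparton interface; only setdataset() and
// getpdf() are documented in the text. The header name and default
// construction are assumptions.
#include <cstdio>
#include "imparton.h"

int main() {
    imparton pdf;            // assumed: default construction loads the grid tables
    pdf.setdataset(2);       // data set B: three valence quarks plus flavor-asymmetric sea

    const double x  = 1e-3;
    const double q2 = 10.0;  // GeV^2

    // iparton = 0 selects the gluon; positive and negative values select the
    // quark and antiquark flavors in the order listed in the text.
    const double gluon = pdf.getpdf(0, x, q2);
    const double up    = pdf.getpdf(1, x, q2);

    std::printf("x = %g, Q2 = %g GeV^2: gluon = %g, up = %g\n", x, q2, gluon, up);
    return 0;
}
```

switching to setdataset(1 ) would return the pure three valence quarks result ( data set a ) instead .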
from the global analysis , we find that the quark model description of the proton structure gives some interesting and encouraging results . the three valence quarks can be viewed as the origin of the pdfs observed at high @xmath2 . our analysis also shows that nonperturbative qcd effects beyond the quark model are needed to reproduce the experimental data in detail . by adding the flavor - asymmetric sea components , the quality of the global qcd fit improves significantly . this is clear evidence for other nonperturbative parton components of the proton beyond the quark model@xcite . it is interesting that the sea quarks and gluons come mainly from the parton radiation of the three valence quarks predicted by the quark model . however , there are more degrees of freedom inside the proton , which need to be interpreted within qcd theory in the future . the nonlinear effects of parton - parton recombination are important at low @xmath2 and small @xmath4 . without the recombination processes , the splitting processes generate very steep and large parton densities because of the long evolution distance from the extremely low starting scale . at low @xmath2 , the strength of the recombination processes is comparable to that of the parton splitting processes . thus the recombinations enormously slow down the fast splitting of partons at very small @xmath4 . the preliminary results show that the parton distribution measurements at high @xmath2 are directly connected to the nonperturbative models at low scale through the application of dglap equations with glr - mq - zrs corrections . the dglap equations with nonlinear terms are a simple tool to bridge the physics between the nonperturbative region and the perturbative region . last but not least , we conclude that partons still exist at extremely low @xmath2 , although the definition and meaning of the parton distribution at low scale are not yet clear . the physics of partons at low @xmath2 is affected by parton - hadron duality , which still needs much investigation on both the experimental and theoretical sides . based on this work , the valence quarks are the dominant partons at low @xmath2 and decrease quickly at the beginning of the qcd evolution ( fig . [ valence_at_lowq2 ] ) . the dynamical sea quark and dynamical gluon distributions at low @xmath2 and small @xmath4 are regge - like , with flat forms over @xmath4 ( fig . [ gluon_at_lowq2 ] ) . the dynamical partons grow quickly at small @xmath2 in the evolution . the dynamical gluon distribution grows linearly with the increase of @xmath2 instead of @xmath110 at low @xmath111 gev@xmath1 ( fig . [ gluon_at_lowq2 ] ) . we thank wei zhu , fan wang , pengming zhang and jianhong ruan for helpful and fruitful discussions . one of us ( r. wang ) thanks hongkai dai and qiang fu for some interesting discussions . we are also very grateful to baiyang zhang for preparing the imparton package . this work was supported by the national basic research program ( 973 program grant no . 2014cb845406 ) , and the century program of the chinese academy of sciences ( y101020br0 ) .
a determination of the proton parton distribution functions is presented under the dynamical parton model assumption , by applying dglap equations with glr - mq - zrs corrections . we provide two data sets , referred to as imparton16 , which come from two different nonperturbative inputs . one is the naive input of three valence quarks and the other is the input of three valence quarks with flavor - asymmetric sea components . basically , both data sets are compatible with the experimental measurements at high scale ( @xmath0 gev@xmath1 ) . furthermore , our analysis shows that the input with flavor - asymmetric sea components better reproduces the structure functions at high @xmath2 . generally , the obtained parton distribution functions , especially the gluon distribution function , are good options as inputs for simulations of high energy scattering processes . the analysis is performed under the fixed - flavor number scheme for @xmath3 of 3 , 4 and 5 . both data sets start from very low scales around 0.07 gev@xmath1 , where the nonperturbative input is directly connected to the simple picture of the quark model . these results may shed some light on the origin of the parton distributions observed at high @xmath2 .
MOSCOW (AP) — A Russian reconnaissance aircraft was brought down by a Syrian missile over the Mediterranean early Tuesday, killing all 15 people on board, the Russian defense ministry said. It blamed Israel for the crash, saying the plane was caught in the crossfire as four Israeli fighters attacked targets in northwestern Syria. The Russian military said the Il-20 reconnaissance aircraft was hit 35 kilometers (22 miles) off the coast as it was returning to its home base nearby. "The Israeli pilots were using the Russian aircraft as a shield and pushed it into the line of fire of the Syrian defense," Ministry spokesman Maj. Gen. Igor Konashenkov said in a statement. Russia said it would make an "appropriate response" to Israel. The military said Israel did not warn it of its operation over Latakia province until one minute before the strike, which did not give the Russian plane enough time to escape. A recovery operation in the Mediterranean Sea is underway, Konashenkov said. For several years, Israel and Russia have maintained a special hotline to prevent their air forces from clashing in the skies over Syria. Israeli military officials have previously praised its effectiveness. Russia has been a key backer of Syrian President Bashar Assad and it has two military bases in the country, including one close to the Mediterranean coast. The Israeli military said earlier on Tuesday that it had no reaction, saying it does not comment on "foreign reports." ||||| Syrian air defenses inadvertently shot down a Russian surveillance plane over the eastern Mediterranean Sea, Moscow has said, blaming what it called reckless actions by Israel that led to the deaths of 15 Russian servicemen. The Russian Defense Ministry said in a statement on September 18 that Israeli pilots conducting attacks on targets in Syria "used the Russian plane as a cover, exposing it to fire from Syrian air defenses." "The blame for the downed Russian plane and the death of the crew lies fully with the Israeli side," Shoigu told his Israeli counterpart, Avigdor Lieberman, in a phone call, the ministry said in a statement. "The Russian Defense Ministry, through various channels of coordination, has repeatedly called on the Israeli side to refrain from strikes on Syrian territory that create a threat to the safety of Russian servicemen," Shoigu added, according to the statement. The Israeli military rejected the accusation, saying Syrian President Bashar al-Assad's forces were to blame. Russian President Vladimir Putin, meanwhile, refrained from directly accusing Israel of responsibility for the incident while speaking alongside Hungarian Prime Minister Viktor Orban following their talks in Moscow. He said the plane was downed following a "chain of tragic accidental circumstances." Russian Defense Ministry spokesman Igor Konashenkov said earlier in the day that Moscow could take "commensurate measures in response" to "the irresponsible actions of the Israeli military." Putin said Russia's response would be focused on the security of Russian personnel in Syria. "These will be steps that everyone notices," he said, calling the deaths of the servicemen "a tragedy for everyone, for the country, for the loved ones of our fallen comrades." Israeli Prime Minister Benjamin Netanyahu expressed sorrow for the death of the Russian soldiers, his office said in a statement on September 18. Netanyahu told Putin during a phone call that Syria bears responsibility for the downing of the jet. 
He also noted that Israel is determined to block Iran from establishing a military presence in Syria and transferring weapons to its proxy Hizballah militia for use against Israel, the statement said. State-run Russian news agency RIA Novosti said that the Israeli ambassador to Moscow had been summoned to the Russian Foreign Ministry on September 18. The accusation against Israel came hours after the Russian Defense Ministry said that the Ilyushin Il-20 aircraft went off radar 35 kilometers from the Syrian coast at about 11 p.m. local time the previous day. The Ilyushin disappeared from radar at around the same time that Israeli F-16 fighters attacked Syrian facilities in Latakia Province, the ministry said.​ It said the plane was returning to Hmeimim air base in the northwestern Syrian province of Latakia, where the bulk of Russia's armed forces in the country are stationed. Hmeimim is Russia's main base for air strikes on rebel groups in Syria. Fragments of the plane were found 27 kilometers west of the city of Banias, Russian authorities said, adding that some remains of the crew had been recovered as well. In Washington, U.S. President Donald Trump said that the Russian warplane was likely shot down by the Syrian regime. "It sounds to me and it seems to me based on a review of the facts that Syria shot down a Russian plane. And I understand about 14 people were killed and that’s a very sad thing but that’s what happens," Trump said at a joint press conference with Polish President Andrzej Duda. Earlier, U.S. Secretary of State Mike Pompeo said it was an "unfortunate" incident and a reminder that a political resolution to the conflict was needed. "Yesterday's unfortunate incident reminds us of the need to find permanent, peaceful, and political resolutions to the many overlapping conflicts in the region and the danger of tragic miscalculation in Syria's crowded theater of operations," Pompeo said in a statement on September 18. Russia has given Syrian President Bashar al-Assad crucial support throughout the Syrian conflict, which began with a government crackdown on protesters in March 2011. Several countries are conducting military operations in Syria, in some cases supporting opposite sides in the conflict. Communication lines have been set up between countries to mitigate the risk of unintended military confrontation in Syria. Russia and Israel have largely maintained friendly relations in recent years, and Israeli Prime Minister Benjamin Netanyahu has visited Putin in Moscow several times to discuss the Syria conflict. Israel is not backing a specific side in the Syria conflict, though it has admitted to conducting air strikes in the country targeting Iran and its allies, including Hizballah. "These weapons were meant to attack Israel, and posed an intolerable threat against it," it said. Before announcing its plane was shot down by Syrian forces, Russia had said rocket launches were detected from the French frigate Auvergne nearby around that time, though the French military denied any involvement. Israel claimed that its jets were already inside Israeli air space when Syrian forces launched the missiles that struck the Russian aircraft. Konashenkov said the plane was downed by Syria using an S-200 air-defense system that Russia had provided. Earlier, the official Syrian news agency SANA reported that missiles were fired at several locations in Latakia Province late on September 17. State media said the explosions were suspected to have been caused by Israeli strikes. 
With reporting by Interfax, Reuters, AP, RIA Novosti, TASS, AFP, and the BBC
– A Russian reconnaissance aircraft was brought down by a Syrian missile over the Mediterranean early Tuesday, killing all 15 people on board, the Russian defense ministry says. The ministry blames Israel for the crash, saying the plane was caught in the crossfire as four Israeli fighters attacked targets in northwestern Syria, the AP reports. The Russian military said the Il-20 reconnaissance aircraft was hit 22 miles off the coast as it was returning to its home base nearby. "The Israeli pilots were using the Russian aircraft as a shield and pushed it into the line of fire of the Syrian defense," ministry spokesman Maj. Gen. Igor Konashenkov said in a statement. Russia said it would make an "appropriate response" to Israel. "As a result of the irresponsible actions of the Israeli military, 15 Russian service personnel perished," Konashenkov said, per Radio Free Europe. "This absolutely does not correspond to the spirit of Russian-Israeli partnership." The military said Israel did not warn it of its operation over Latakia province until one minute before the strike, which did not give the Russian plane enough time to escape. A recovery operation in the Mediterranean Sea is underway, Konashenkov said. For several years, Israel and Russia have maintained a special hotline to prevent their air forces from clashing in the skies over Syria.
one of the most fascinating aspects of the chinese language presents one of the largest barriers to learning it , an irony not lost on generations of students . chinese characters enchant the learner like little else , but the origins of that enchantment , their elegant , structured complexity and seemingly infinite variety , make learning them a formidable task @xcite . mastery is a hard - won thing and is rarely achieved until an advanced stage of study @xcite . becoming functionally literate in chinese requires memorization of several thousand distinct characters , and the effort involved has profound consequences for the learning process @xcite . an early focus on learning characters can delay the acquisition of productive language skills , while learning them late can inhibit productive learning techniques , such as extensive reading , and also obscure the logic of the language . either way , the consequences for students are a steep learning curve , high rates of attrition , and a certain preoccupation with methods for learning and remembering characters @xcite . the task of learning thousands of distinct symbols is not , however , as difficult as it first appears . there are regularities in the structures of chinese characters that relate them to their pronunciations and meanings , and also to one another . around 90% of characters are semantic - phonetic compounds , in which one part of the character indicates the meaning and the other part the pronunciation . these cues are not always obvious , as meanings and pronunciations have evolved over time , but they remain useful . in a study of 2570 characters taught in chinese elementary schools , shu et al . @xcite found that 88% of compound characters had a semantic component that was clearly related to the meaning and 62% had a phonetic component that provided a useful cue to pronunciation . the level of phonetic regularity is greater than is often appreciated @xcite . compound characters are frequently used as phonetic or semantic components in other characters and so , collectively , chinese characters form a hierarchal network . at the foundation of this network are primitive symbols , which typically originate from pictographs . some of these primitives form characters in their own right , while others are used only as components . the structure of the character ( zhào , to illuminate ) is shown in fig . [ fig1 ] . the decomposition illustrates the typical ways in which phonetic relationships are distorted and how semantic relationships are sometimes rather general or oblique . ( caption of fig . [ fig1 ] : ... is the subtlex - ch usage frequency rank of the character . pronunciations are given in pinyin romanization . note that each character is only assigned a single meaning even though most actually possess a range of broadly related meanings . ) the semantic - phonetic structure of most chinese characters makes the learning process somewhat different for native chinese speakers and second language learners . when chinese children learn to read and write they already know the spoken language and so the phonetic information can be very useful for making connections between written and spoken forms @xcite . for second language learners , who are often learning characters at a time when they know little of the language , this information is more difficult to use and the learning process is correspondingly more difficult . but just as learning characters can be more challenging for second language learners , it can also be particularly useful .
the chinese language abounds in homophones , syllables that have identical pronunciations but different meanings ( it has many fewer distinct syllables compared to english , around 6 times fewer if one accounts for tones and around 20 times fewer if one neglects them ) . this gives a potential for ambiguity in the spoken language and acts to obscure some of the logic behind word formation . however , neither of these issues translate into written chinese because homophones are often represented by different characters . for example , the character of fig . [ fig1 ] is pronounced identically to the unrelated characters ( sign or portent ) , ( cover ) and ( oar ) . knowing characters can thus help the learner distinguish between homophones and assign distinct mental identities to the different meanings . this , in turn , can help with understanding and remembering words . for example , the verbs ( zhoyng , to coordinate ) and ( zhoyng , to shine ) are pronounced identically , but have differences in meaning that are suggested by the final characters ( respond ) and ( reflect ) . there is substantial debate in the literature on how characters should be taught and on the level of knowledge that is required at different educational stages @xcite . this debate , as well as the importance of the problem , is reflected in the wide variety of learning methodologies found in different courses , books and apps . here we are largely agnostic regarding the best overall approach . rather , we consider a general question that is relevant to most of them and suggest an answer that is based on broad educational principles . the question we address is the optimum order in which chinese characters should be learned . there are two orders that make intuitive sense : in order of usage frequency , from high to low , and in order of network hierarchy , starting with primitives and building up compound characters using components that have already been learned . the first of these follows directly from the goal of the learner but the second merits further discussion . in general terms , the desirability of learning characters in hierarchal order follows from a broad principle of human cognition , that mastery of a complex system rests on mastery of the relevant features of its sub - components @xcite . this applies to chinese characters if one assumes that it is productive to treat them as a complex system rather than as a set of unrelated symbols to be learned by rote . a number of experimental studies indicate that this assumption is valid . they show that orthographic awareness is of critical importance to skilled native readers and in learning to read by both chinese children and second language learners @xcite . these also show that orthographic awareness is present whether or not it is taught explicitly and , among learners , that the extent of the awareness is correlated with performance @xcite . we consider learning characters in order of hierarchy to be desirable because we infer that a learning order that explicitly reflects orthographic principles is more likely to generate accurate and productive orthographic awareness in students . there is , however , necessarily a tension between learning by usage frequency and learning by hierarchy , because frequency is only weakly correlated with character complexity . this behavior can be seen in fig . [ fig1 ] , where , for example , appears around five times more often than its component , and also in fig . [ fig2 ] . 
learning characters in order of frequency would therefore often mean learning characters before their components had been learned , whereas learning them in order of hierarchy would often mean learning rarer characters in advance of more common ones . when devising a learning order one can choose either of the extremes , of frequency or hierarchy , or attempt to find a balance between them in which some common characters are learned in advance of their components . one previous approach that searched for such a balance was a network theory - based approach by yan et al . @xcite . they demonstrated that an algorithmically - optimized , balanced order can be substantially more efficient than one that follows frequency . yan et al . also showed an improvement over pure hierarchal ordering , though somewhat less convincingly . it is unconvincing because they compared their optimized order to only one of many possible hierarchal orders , and there is no reason to believe that the one they choose is representative . indeed , it will be one of the conclusions of this work that extremely efficient hierarchal orders do exist , ones that can outperform the orders produced by their algorithm . the tension between frequency and hierarchy is a dominant consideration in determining the learning order but it is not the only one . small - scale character - to - character patterns are also known to be important , especially for encouraging orthographic awareness @xcite . patterns can be chosen to emphasize the logic of character construction , by introducing components directly before their compounds , or to emphasize the functional role of components , by presenting their compounds in sets . these patterns are often found in human - curated orders , and especially in books on learning chinese characters ( for example , those of heisig and richardson @xcite ) . they embody sound educational principles , which can be understood in terms of marton s variation theory @xcite . patterns such as these are not present in orders produced by the yan et al . their procedure generates a degree of character - to - character noise that means that components are rarely adjacent to the compounds that motivate their introduction and sometimes even follow them . this contrasts with the algorithm presented here , which produces orders with a high degree of logical transparency and strong clustering of related characters . our algorithm is built on the fundamental assumption that hierarchal orders are the pedagogically desirable way to accumulate usage frequency and we search among this subset of orders for the one that is most efficient . the algorithm is implemented using the conceptual framework of network theory , within which we conceive the network of chinese characters as a directed analytic graph @xcite . the nodes in the graph represent characters and the edges represent the structural relationships between them . we devise a measure of node centrality that relates each character s usage frequency to the effort required to learn it , and order the characters by this measure to provide a first approximation to the optimal learning order . we then sort this list into topological ( hierarchal ) order using an algorithm designed to minimally disturb the starting order . the algorithm can be applied to any scheduling task where nodes have intrinsic differences in importance and must be visited in topological order . 
following this introduction , we describe the structure of our algorithm in detail , including how we define learning efficiency and how we calculate the cost of learning characters . we discuss the robustness of the algorithm and study the characteristics of the orders it produces . in the final section we apply our algorithm to a network that is expanded to include chinese words . chinese words can be single characters but they are more frequently compounds of two or more . they are the primary units of communication in the chinese language and so characters , rather like letters in alphabetic scripts , may be considered useful only in so far as they help to build words or act as words themselves . reflecting this , words can be moved to center stage and , instead of having character usage frequencies drive the acquisition of components , word usage frequencies can be used to drive the acquisition of characters and their components . we explore the results of this more holistic approach . the network of chinese characters can be represented as a directed analytic graph . nodes represent characters , with their visual forms , pronunciations and meanings , and edges represent the structural relationships between characters and the nature of those relationships , whether semantic , phonetic or otherwise . learning chinese characters means memorizing a productive subset of this network . our aim is to derive a character learning order which maximizes learning efficiency . such an order maximizes cumulative usage frequency while minimizing the effort required to learn it . to this end , we assign a usage frequency to each character along with an estimate for the effort required to learn it , its _ learning cost_. learning costs are calculated using a model that assumes that characters are learned in hierarchal order . we incorporate usage frequency and learning cost into a measure of character centrality . this measure indicates the relative importance of the character to the learner , prioritizing usage frequency and penalizing learning cost . ordering characters by this centrality provides a first approximation to the final learning order . this order is approximate because ordering by centrality does not imply ordering by hierarchy , which must be imposed in a separate step . hierarchal ordering is imposed using an algorithm designed to topologically sort our centrality - ordered list in a way that minimally disturbs it . higher - centrality characters are learned first only when allowed topologically . the algorithm can easily account for characters that are already known to the learner ( their learning costs can be set to zero ) or characters that are partially known ( their learning costs can be suppressed ) . this capability could be useful in software applications , which could dynamically update the learning order as the student progresses . the algorithm has potential applications beyond the learning of chinese characters , and can be applied to any scheduling task where nodes have intrinsic differences in importance yet must be visited in topological order . a typical learning scenario is characterized by a fixed available effort , with which the learner seeks to acquire the maximum cumulative usage frequency as rapidly as possible . the learning process can be visualized as a _ learning curve _ in a space defined by axes of cumulative usage frequency and cumulative learning cost . this is illustrated in fig . efficient learning curves rise quickly and reach high end - points . 
( figure caption : @xmath4 and @xmath0 represent two different learning curves . for each curve , the final learning efficiency @xmath1 is the cumulative usage frequency for a specific cumulative learning cost @xmath2 , and the integral learning efficiency @xmath3 is the average cumulative usage frequency between the origin and @xmath2 . curve @xmath4 has higher @xmath1 but lower @xmath3 . illustrated values for @xmath3 are approximate . ) learning curves for different orders can be compared visually , as in the figure , but it is convenient to parameterize them . we propose a two - parameter scheme . the first parameter is the cumulative usage frequency at the end of the learning process , once the maximum cumulative learning cost @xmath2 is reached . we call this parameter the _ final learning efficiency _ @xmath1 . high @xmath1 is characteristic of efficient learning curves . note that when comparing curves , it is necessary to use the same usage frequencies for characters that appear in both curves , even though the curves may not cover identical sets of characters . to ensure this , we normalize the entire usage frequency data set to 1.0 , which becomes the maximum possible value of @xmath1 . the second parameter concerns how the maximum cumulative usage frequency is approached . consider the two curves shown in the figure . the curve @xmath4 has higher @xmath1 but , over much of the learning process , actually performs less well than curve @xmath0 . curve @xmath4 might be one that prioritizes longer - term cumulative frequency at the cost of shorter - term . the difference would be immaterial if the learning process had a short extent in time but this is not typically the case . learning may take place over many months , during which the learner would likely be exposed to other parts of the language . in this case it might make sense to have a learning curve that rises quickly , even at the cost of some longer - term cumulative usage frequency . we parameterize this using the average cumulative usage frequency , which we call the _ integral learning efficiency _ @xmath3 and calculate using @xmath5 where @xmath6 is the cumulative usage frequency and @xmath7 is the cumulative learning cost . we define the centrality @xmath8 of character @xmath9 to be @xmath10 where @xmath11 is the usage frequency and @xmath12 is the learning cost . this quantity is the ratio of the benefit and cost that each character represents to the learner . learning characters in order of @xmath13 will therefore tend to satisfy the prime concern of the learner : maximizing cumulative usage frequency while minimizing effort . these learning curves will rise steeply and have high end points , or , in the language of the previous section , be characterized by high integral and final learning efficiencies . values for @xmath14 can be extracted from corpora of written chinese . values for @xmath15 are more difficult to assign objectively and we estimate them using a learning model . in our model we use different procedures to assign costs to primitives and compound characters . the cost @xmath16 of learning a primitive @xmath9 is taken to be @xmath17 where @xmath18 is the number of strokes that make up the character . using a @xmath19 of 0.1 would , for example , mean the cost of learning is 1.3 and the cost of learning is 1.7 .
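both efficiency parameters can be computed directly from a candidate order once each character has a usage frequency and a learning cost . the sketch below is illustrative only : it treats the cumulative usage frequency as a step function of cumulative cost , counting a character's frequency once its full cost has been paid , which is one simple reading of the averaging formula above rather than the exact prescription used in the analysis .

```cpp
#include <vector>

// One entry of a learning order: a character's normalized usage frequency and
// its learning cost under the model described in the text.
struct Item {
    double frequency;
    double cost;
};

struct Efficiency {
    double finalEff;     // cumulative frequency reached at the total cost
    double integralEff;  // average cumulative frequency over the whole cost range
};

// Evaluate a learning order. The cumulative frequency is treated as a step
// function of cumulative cost: a character's frequency counts only after its
// full learning cost has been paid.
Efficiency evaluateOrder(const std::vector<Item>& order) {
    double totalCost = 0.0;
    for (const Item& it : order) totalCost += it.cost;

    double cumFreq = 0.0;
    double area = 0.0;  // integral of cumulative frequency over cumulative cost
    for (const Item& it : order) {
        area += cumFreq * it.cost;  // frequency already accumulated, held over this step
        cumFreq += it.frequency;
    }

    Efficiency e;
    e.finalEff = cumFreq;
    e.integralEff = (totalCost > 0.0) ? area / totalCost : 0.0;
    return e;
}
```

with frequencies normalized to the whole database , finaleff equals the fraction of total usage covered by the order , matching the definition of @xmath1 above .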
this stroke - based cost is a simple approximation to the true learning cost , which would depend on a variety of other things , and likely in complex ways : the learner's familiarity with the strokes that make up the character , their knowledge of similar primitives , and the visual similarity between the primitive and the thing it represents . the cost @xmath20 of learning a compound @xmath9 is taken to be @xmath21 where @xmath22 is the number of combinations used to build the character . thus , the cost of learning would be 1 , because it is a compound of two components and , and the cost of would be 2 , because it is a compound of three ( , and ) . for characters that are variants of others ( such as , which is a variant form of ) we assign a cost of 1 . we do not take special account of characters with repeating elements ( such as ) for which we likely overstate the cost . in this work we take @xmath23 . this means that the cost of learning the simplest primitive would be similar to the cost of learning the simplest compound . a typical primitive would be around twice as difficult . our final learning order depends on the specific value chosen for @xmath19 but our conclusions are the same for any reasonable value . our learning cost model assumes that characters are learned in hierarchal order . when we calculate the cost of learning a compound character we do not include the cost of learning the components themselves , which we assume have already been learned . our model implies that the total cost of learning a fixed set of characters is identical for all hierarchal orders . all final learning efficiencies will be identical , with the only differences being in the integral learning efficiencies . learning characters in order of centrality prioritizes characters that are useful and easy to learn but it does not ensure that characters are ordered according to the character hierarchy . for example , the simple and common will appear a long way in advance of its components and , which appear much less frequently as characters . we resolve this issue with a sorting algorithm that modifies the centrality - ordered list to ensure that all characters appear before those in which they act as components . the algorithm is illustrated in fig . [ fig4 ] and may be described as follows : 1 . process characters from low to high centrality ( right to left in the figure ) . this is opposite to the order in which characters are learned . 2 . decompose each character into a list of its primitives and all intermediate characters . for example , should be decomposed into , , , , and . 3 . determine the position of each component in the centrality - ordered list . if the position is to the left of the character then no action is taken . if it is to the right of the character then it should be moved to the character's left . move the character as far left as it will go , while still remaining to the right of all characters with higher centrality . this procedure ensures that characters are relocated only when necessary and always to the optimum position within the region allowed by the hierarchy . this results in a highly optimized order . however , it is not necessarily the most efficient order possible and we can find special cases , such as the network in fig . [ fig5 ] , where the algorithm does generate a sub - optimal order . we have not found such instances in the real character network but can not prove that they do not exist .
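for readers who want something concrete to experiment with , the sketch below produces a hierarchal order with a standard greedy , kahn - style topological sort that always emits the highest - centrality character whose components have all been emitted . this is not the move - left procedure described above and will not in general reproduce its output , but it also yields a valid hierarchal order that favors high - centrality characters ; names and data layout are illustrative .

```cpp
#include <queue>
#include <utility>
#include <vector>

// A greedy, Kahn-style alternative to the move-left procedure described in the
// text: repeatedly learn the highest-centrality character whose components
// have all been learned. Characters are identified by index; components[i]
// lists the direct components of character i; centrality[i] is its
// frequency/cost ratio.
std::vector<int> greedyHierarchalOrder(const std::vector<std::vector<int>>& components,
                                       const std::vector<double>& centrality) {
    const int n = static_cast<int>(components.size());
    std::vector<std::vector<int>> usedIn(n);  // edges component -> compound
    std::vector<int> missing(n, 0);           // components not yet learned
    for (int i = 0; i < n; ++i) {
        missing[i] = static_cast<int>(components[i].size());
        for (int c : components[i]) usedIn[c].push_back(i);
    }

    // Max-heap of (centrality, id) over characters whose components are all known.
    std::priority_queue<std::pair<double, int>> ready;
    for (int i = 0; i < n; ++i)
        if (missing[i] == 0) ready.push({centrality[i], i});

    std::vector<int> order;
    order.reserve(n);
    while (!ready.empty()) {
        const int id = ready.top().second;
        ready.pop();
        order.push_back(id);
        for (int compound : usedIn[id])
            if (--missing[compound] == 0) ready.push({centrality[compound], compound});
    }
    return order;  // shorter than n only if the decomposition data contain a cycle
}
```

applied to the character network with centralities @xmath13 , this produces a hierarchal order in the sense used above , and the evaluation sketch from the previous section can then be used to compare its @xmath1 and @xmath3 against other orders .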
we use two different representations of the simplified chinese character network , one compiled with an emphasis on etymological correctness and one with an emphasis on the visual relationships between characters . the former is based on a preliminary version of a forthcoming dictionary by _ outlier linguistic solutions _ @xcite and the latter is taken from the books _ remembering simplified hanzi 1 and 2 _ by heisig and richardson . we refer to these networks as the ols and hr networks , respectively . the networks have similar coverage : ols covers 3507 characters and primitives , and hr covers 3250 , with 2990 in common between them . the majority of the decompositions are identical and the majority of the differences originate from decisions regarding encoding ( a number of components do not have unicode code points and others can reasonably be represented by more than one code point ) . usage frequency data for characters and words are taken from the subtlex - ch database @xcite , which is derived from chinese film and television subtitles . we chose this database because it is comprehensive and is representative of modern colloquial chinese . in any practical application of our algorithm , frequency data should be chosen with the specific goals of the learner in mind . the database contains 5938 unique characters and 99121 unique words with frequencies calculated from a total corpus of 46.8 million characters ( algorithmically segmented into 33.5 million words ) . all usage frequencies used in this study are normalized to the whole database . normalizing in this way , both the ols and hr networks have cumulative usage frequencies of 0.992 . stroke counts were taken from the unihan database using the python package cjklib @xcite . where stroke data was unavailable , the number was set to zero . fig . [ fig6 ] shows the first 85 characters from the learning order derived using the ols network . the full learning curve for the ols network is shown in fig . [ fig7 ] , where it is compared to the yan et al . algorithm and to the fixed character order of heisig and richardson . fig . [ fig8 ] shows usage frequencies for the first 85 characters of each of these curves . learning efficiencies are presented in table [ table ] . ( figure caption : the blue curve uses the hr network with heisig and richardson's fixed character order ; learning efficiencies are presented in table [ table ] . ) ( figure caption : dark bars represent primitives and light bars represent compounds . ) the shape of the heisig and richardson curve in fig . [ fig7 ] can be understood from the structure of their book . the first half of the curve , between the origin and the large discontinuity , covers their first volume , in which they introduce the bulk of the primitives . these are grouped in chapters according to meaning and each one is followed by all the high - usage compound characters that can be made at that point . this explains the alternating pattern of sharp upward jumps and gentle slopes . the second volume introduces the lower - frequency compounds that are not included in the first . the authors present a hierarchal order which aims for a relatively high @xmath1 by the end of the first volume but with no particular regard for the shape of the curve . note that their curve in fig . [ fig7 ] was calculated using subtlex - ch frequency data , which may differ from the frequency data which they used to select their characters and order them . the curves corresponding to our algorithm and that of yan et al .
were calculated using identical character networks and usage frequencies . we also used identical learning models in order to make a properly normalized comparison . to account for the yan et al . order being non - hierarchal , we extended our learning model to include the cost of any unlearned components , making it similar to the model used in their publication . we find that our algorithm gives better @xmath1 and @xmath3 . this follows largely from the fact that our order is hierarchal and so there is no inefficiency associated with learning characters by rote and then later re - learning some of the components . but it is also dependent on the particular characteristics of the chinese character network , because there is no a priori reason why the optimal learning order need be hierarchal . for example , in the extreme , hypothetical case that a small number of complex characters accounted for the vast majority of usage frequency , it would be more efficient to ignore the components and just learn them by rote . with such a network the yan et al . algorithm might perform better because it has access to the non - hierarchal parts of the character order space . indeed , it remains possible that a non - hierarchal order is optimal for the character network as it exists . our algorithm produces orders that exhibit a high degree of logical transparency . this behavior can be seen in fig . [ fig6 ] , and it follows directly from the algorithm , which tends to cluster components directly before the compounds in which they are used . the sequence , , , , is a typical example . similar sequences appear frequently in the heisig and richardson order but rarely in the orders produced by the yan et al . algorithm . the heisig and richardson order is characterized by the introduction of compound characters in sets that have a particular primitive in common . the pedagogical advantages of this pattern were discussed in the introduction but are not fully realized because of the absence of phonetic information ( they do not give character pronunciations ) and their deliberate strategy of assigning semantic values to all components , whether or not this is etymologically correct . grouping of characters into meaningful sets does not occur in either of the algorithmically - generated orders and its absence offers an avenue for improvement . nevertheless , the two algorithms do produce orders with markedly different degrees of clustering between related characters ( of which grouping of characters into sets might be considered a limiting case ) . fig . [ fig9 ] shows all three orders compared using two parameters which measure the degree of clustering : the average distance , in number of characters , of each character to its closest preceding component and to the closest character sharing a component . in both these measures the heisig and richardson order exhibits the strongest clustering , with ours intermediate between theirs and yan et al . along with the logical transparency of our order , we take this to indicate improved pedagogical characteristics compared to yan et al . ( figure caption : averages below 250 characters are not shown because in this region the averages fluctuate wildly . ) in summary , we have shown that our algorithm can identify pure hierarchal orders of chinese characters that give high learning efficiencies . the numerical improvements over yan et al .
are modest , but they are coupled with character - to - character patterns that we suggest are pedagogically advantageous : the order is strictly hierarchal , components are typically introduced immediately before they are used , and there is stronger clustering of related characters . ( table [ table ] : learning curve parameters . the number of characters learned @xmath24 , final learning efficiency @xmath1 , and integral learning efficiency @xmath3 for reference cumulative learning costs of @xmath25 and @xmath26 . the yan et al . algorithm was optimized up to a cumulative learning cost of @xmath27 . ) we can expand our analysis to include multiple - character chinese words by making minor changes to the network and learning model . the network is expanded using multiple - character words from the subtlex - ch word database ( limited to the 10000 most common words , for computational convenience ) . this database is also used for all usage frequencies , including those of characters . previously , when using character frequencies , the two characters of ( zhīdào , to know ) had very high frequencies because the compound is common , but when we switch to word frequencies their frequencies become very small compared to the compound . the compound becomes the important thing to learn , and its component characters are only learned in advance because they support it in the hierarchy . the learning model is expanded to account for multiple - character words by assigning them learning costs equal to the number of character combinations required to build them . thus , and all have learning costs of 1 . the learning order calculated in this way is the learning order for the productive units of communication . characters and their components are introduced in hierarchal order only when needed to build multiple - character words or when the character forms a single - character word itself . this approach to learning to read and write chinese has the advantage that the things being learned - the words - can be put to immediate and productive use in reading and writing sentences , activities that help the learning process . this is not the case for characters , which often only acquire their usage frequencies via expression through words . is a categorically useful word to know , which can be assigned a clear mental identity and used immediately . its component characters are rarely used alone as words and take their character usage frequencies primarily from their presence in this and other , less common compounds . in the character learning order they appear as unrelated characters , ten characters apart , yet cannot be used by a learner until the compound word has already been learned . in the word learning order they appear together in the logical sequence , , , , . the word learning curve is shown in fig . [ fig10 ] , where it is compared to the curve for characters . the difference between the two is stark , indicating that mastery of characters is substantially easier than mastery of the words they combine to create ( something which accords with learner experience ) . the other curves in the figure show what happens to the word curve when the target set of words is a subset of the wider language . these curves represent realistic situations for the application of our algorithm , in which a student is trying to master the vocabulary required to pass a course or read a particular book . the figure shows curves for the vocabulary lists for levels 1 - 4 and 1 - 6 of the hsk chinese proficiency test .
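the word - level expansion can reuse exactly the same ordering machinery once the network is extended with word nodes . the sketch below is illustrative : it assumes a multi - character word costs one unit per character combination , so a two - character word costs 1 as in the example above and , by extension , a word of n characters costs n - 1 ( an assumption for longer words ) ; the node and function names are hypothetical .

```cpp
#include <vector>

// Hypothetical node type for the expanded network: characters, primitives and
// multi-character words all become nodes with a usage frequency, a learning
// cost and a list of component nodes.
struct Node {
    double frequency = 0.0;        // word (or single-character word) usage frequency
    double cost = 0.0;             // learning cost under the expanded model
    std::vector<int> components;   // indices of component nodes
};

// Add a multi-character word on top of an existing character network.
// Its components are its characters and its cost is the number of character
// combinations needed to build it.
int addWord(std::vector<Node>& nodes,
            const std::vector<int>& characterIds,
            double wordFrequency) {
    Node w;
    w.frequency  = wordFrequency;
    w.cost       = static_cast<double>(characterIds.size()) - 1.0;
    w.components = characterIds;
    nodes.push_back(w);
    return static_cast<int>(nodes.size()) - 1;  // index of the new word node
}
```

the resulting nodes can then be fed to the same greedy ordering sketch given earlier , with word usage frequencies driving the acquisition of characters and their components .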
the hsk is an exam administered by the chinese national office for teaching chinese as a foreign language ( notcfl ) @xcite and the lists contain 1200 and 5000 words , respectively . we also show curves corresponding to the lowest two levels of a series of chinese readers , containing 496 and 977 distinct words , respectively @xcite . the text from the readers was segmented into words using the same algorithm used to calculate the word usage frequency list ( implemented using pynlpir @xcite ) . these curves necessarily have inferior @xmath1 and @xmath3 compared with the curve for the wider language but nevertheless represent efficient approaches to the restricted goals . the algorithm would be similarly useful in managing the transition between different levels of a course or from one book to another . in these situations , the algorithm would provide an efficient way to bridge the gap in vocabulary . s1 file : input files and character orders . data.zip is a zipped file containing input data ( usage frequencies , decompositions , stroke numbers , target word lists ) and the final character and word orders . a readme within the file contains information on the origin of the input data and guidance on use of the output orders . this work was supported by the shanghai key lab for particle physics and cosmology ( sklppc ) , grant no . we acknowledge the generosity of outlier linguistic solutions in sharing part of their dictionary prior to publication . walker glr . intensive chinese curriculum : the easli model . journal of the chinese language teachers association . 1989 ; 24(2 ) : 43 - 83 . light t. controlled composition and reading . journal of the chinese language teachers association . 1975 ; 10(2 ) : 70 - 79 . kane d. the chinese language : its history and current usage . tuttle publishing ; 2006 . tse sk , cheung wm . chinese and the learning of chinese . in : marton f , tse sk , cheung wm , editors . on the learning of chinese . sense publishers ; 2010 . pp . 1 - 8 . richardson tw . chinese character memorization and literacy : theoretical and empirical perspectives on a sophisticated version of an old strategy . in : guder a , xin j , yexin w , editors . the cognition , learning and teaching of chinese characters . beijing language university press ; 2007 . ritter fe , nerb j , lehtinen e , o'shea t , editors . in order to learn : how the sequence of topics influences learning . oxford university press ; 2007 . chase wg , simon ha . perception in chess . cognitive psychology . 1973 ; 4 : 55 - 81 . ling lm , marton f. towards a science of the art of teaching : using variation theory as a guiding principle of pedagogical design . international journal for lesson and learning studies . 2012 ; 1(1 ) : 7 - 22 .
we present a novel algorithm for optimizing the order in which chinese characters are learned , one that incorporates the benefits of learning them in order of usage frequency and in order of their hierarchal structural relationships . we show that our work outperforms previously published orders and algorithms . our algorithm is applicable to any scheduling task where nodes have intrinsic differences in importance and must be visited in topological order .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Gulf Coast Restoration Act''. SEC. 2. AMENDMENTS TO THE WORKFORCE INVESTMENT ACT OF 1998. (a) In General.--Section 173(a) of the Workforce Investment Act of 1998 (29 U.S.C. 2918(a)) is amended-- (1) by striking ``and'' at the end of paragraph (3); (2) by striking the period at the end of paragraph (4) and inserting ``; and''; and (3) by adding at the end the following: ``(5) to provide assistance to the Governor of any State within the boundaries of an area that is the subject of a Presidential determination that additional resources are necessary to respond to an incident related to a spill of national significance declared under the National Contingency Plan provided for under section 105 of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (42 U.S.C. 9605) (`covered incident') by providing oil spill relief employment in the area in accordance with subsection (h).''. (b) Oil Spill Relief Employment Assistance Requirements.--Section 173 of the Workforce Investment Act of 1998 (29 U.S.C. 2918) is amended by adding at the end the following: ``(h) Oil Spill Relief Employment Assistance Requirements.-- ``(1) In general.--Funds made available under subsection (a)(5)-- ``(A) shall be used to provide oil spill relief employment on projects with respect to cleaning, restoration, renovation, repair, and reconstruction (including the construction of infrastructure to facilitate ecosystem and habitat restoration, protection, creation, enhancement and species repopulation) of lands, marshes, waters, structures, and facilities, located within an area of a covered incident, as well as offshore areas related to such incident, and projects that provide food, clothing, shelter, and other humanitarian assistance to individuals harmed by the covered incident; ``(B) shall be used to establish general cleanup standards approved by the Secretary for the selection of remedial actions for an area of a covered incident (including offshore areas related to such incident); ``(C) may be expended through public and private agencies and organizations engaged in projects described in subparagraph (A); ``(D) may be expended to provide employment and training activities; ``(E) may be expended to provide personal protective equipment to workers engaged in oil spill relief employment described in subparagraph (A); ``(F) may be used to increase the capacity of States to make available the full range of services authorized under this title and provide information (in languages appropriate to the individuals served) about, and access to, the variety of public and private services available to individuals adversely affected by the covered incident at one-stop centers described in section 134(c) and other access points (including other public facilities, mobile service delivery units, and social services offices); and ``(G) may be used to provide temporary employment by public sector entities, in addition to the oil spill relief employment described in subparagraph (A). ``(2) Priority.--An individual shall be given priority consideration for the oil spill relief employment described in subsection (a)(5) if such individual-- ``(A) is temporarily or permanently laid off as a consequence of a covered incident with respect to which such employment is being provided; ``(B) is a dislocated worker; ``(C) has been an unemployed individual for a prolonged period; or ``(D) meets such other criteria as the Secretary may establish. 
``(3) Prevailing wages.--The Secretary shall require that each State receiving support under subsection (a)(5) provide reasonable assurance that all employees and contractors employed in the performance of a project for which the support is provided will be paid wages at rates not less than those prevailing on similar work in the locality as determined by the Secretary of Labor in accordance with subchapter IV of chapter 31 of part A of subtitle II of title 40, United States Code (commonly referred to as the `Davis-Bacon Act'). ``(4) Limitations on oil spill relief employment assistance.--An individual shall be employed under subsection (a)(5) in oil spill relief employment with respect to a covered incident for a period of 6 months. Such period of employment may be subject to an extension for a period determined by the Secretary. ``(5) Reimbursement.--Each party responsible for a covered incident under the Oil Pollution Act of 1990 (33 U.S.C. 2701 et seq.) shall, upon the demand of the Secretary of the Treasury, reimburse the general fund of the Treasury for the costs incurred by the United States under subsection (a)(5) with respect to such incident, as well as the costs of the United States in administering its responsibilities under subsection (a)(5) with respect to such incident. If a responsible party fails to pay a demand of the Secretary of the Treasury pursuant to subsection (a)(5), the Secretary shall request the Attorney General to bring a civil action against the responsible party or a guarantor in an appropriate district court to recover the amount of the demand, plus all costs incurred in obtaining payment, including prejudgment interest, attorneys fees, and any other administrative and adjudicative costs involved. Such reimbursement shall be without regard to limits of liability under the section 1004 of the Oil Pollution Act of 1990 (33 U.S.C. 2704). ``(6) Use of available funds.--Funds appropriated for fiscal years 2009 and 2010 and remaining available for obligation by the Secretary to provide any assistance authorized under this section shall be available to assist workers affected by a covered incident, including workers who have relocated from areas in which a covered incident has been declared. Under such conditions as the Secretary may approve, any State may use funds that remain available for expenditure under any grants awarded to the State under this section to provide any assistance authorized under subsection (a)(5). Funds used pursuant to the authority provided under this paragraph shall be subject to the reimbursement requirements described in paragraph (5). ``(7) Requirements for grant applications.--In order to receive funds under subsection (a)(5), a State shall submit an application at such time, in such manner, and containing such information as the Secretary may require. Such application shall include a detailed description of-- ``(A) how the State will ensure the capacity of one-stop centers described in section 134(c) and other access points to-- ``(i) provide affected individuals with information, in languages appropriate to the individuals served, about the range of available services; and ``(ii) provide affected individuals with access to the range of needed services; ``(B) how the State will prioritize individuals who are temporarily or permanently laid off as a consequence of the covered incident in the assignment of temporary employment positions; and ``(C) any other supporting information the Secretary may require.''. 
(c) Effective Date.--The amendments made by this section shall take effect immediately upon the date of the enactment of this section and shall apply to all responsible parties under the Oil Pollution Act of 1990 (33 U.S.C. 2701 et seq.), including any party determined to be liable under such Act for any incident that occurred prior to the date of the enactment of the amendments made by this section. SEC. 3. GULF COAST COMMUNITY CONSERVATION CORPS. (a) Authority.--From the amounts appropriated to carry out this section, the Corporation for National and Community Service (in this section referred to as the ``Corporation''), pursuant to section 126(b) and subtitle E of title I of the National and Community Service Act of 1990 (42 U.S.C. 12576(b)), shall carry out the activities authorized under this section. (b) Establishment.-- (1) In general.--There is established a Gulf Coast Community Conservation Corps (in this section referred to as the ``Gulf Coast CCC''), to be administered by the Corporation directly, or by grant or contract, to carry out full- or part- time service national service programs that provide oil spill relief in accordance with subsection (d) in areas that are the subjects of a Presidential determination that additional resources are necessary to respond to a covered incident. (2) Existing grants or contracts.--A grant or contract awarded under paragraph (1) may be awarded to an entity with which the Corporation has an existing grant or contract. (c) Participants.-- (1) Eligibility.--To be eligible to participate in a national service program carried out by the Gulf Coast CCC, an individual-- (A) shall be participating in a national service program under the national service laws; or (B) shall be determined to be eligible in a manner that is consistent with the determination of eligibility under the national service laws. (2) Benefits.--An individual selected to participate in a national service program carried out by the Gulf Coast CCC shall be eligible for any living allowances, educational awards, and other support that are authorized for a participant under the national service laws. (3) Priority.--In selecting participants under paragraph (1), priority shall be given to unemployed individuals between the ages of 18 through 24. (4) Training.--Training for participants serving in the Gulf Coast CCC shall include an environmental education component. (d) Programs.--National service programs carried out by the Gulf Coast CCC shall-- (1) include programs-- (A) involving the cleaning, restoration, renovation, repair, and reconstruction (including the construction of infrastructure to facilitate ecosystem and habitat restoration, protection, creation, enhancement and species repopulation), of lands, marshes, waters, structures, and facilities located within the area of the covered incident, as well as offshore areas related to such incident; and (B) providing food, clothing, shelter, and other assistance to communities and individuals harmed by the covered incident; and (2) comply with the nonduplication and nondisplacement provisions of section 177 of the National and Community Service Act of 1990 (42 U.S.C. 12637). (e) Educational Assistance.--From funds appropriated to carry out this section, the Corporation may transfer funds to the National Service Trust established under section 145 of the National and Community Service Act of 1990 (42 U.S.C. 
12601) to provide in-service or post-service benefits to, or funds to otherwise support, individuals participating in a national service program carried out by the Gulf Coast CCC. (f) Reimbursement.--Each party responsible for a covered incident under the Oil Pollution Act of 1990 (33 U.S.C. 2701 et seq.) shall, upon the demand of the Secretary of the Treasury, reimburse the general fund of the Treasury for the costs incurred by the United States under this section with respect to such incident, as well as the costs of the United States in administering its responsibilities under this section with respect to such incident. If a responsible party fails to pay a demand of the Secretary of the Treasury pursuant to this section, the Secretary shall request the Attorney General to bring a civil action against the responsible party or a guarantor in an appropriate district court to recover the amount of the demand, plus all costs incurred in obtaining payment, including prejudgment interest, attorneys fees, and any other administrative and adjudicative costs involved. Such reimbursement shall be without regard to limits of liability under the section 1004 of the Oil Pollution Act of 1990 (33 U.S.C. 2704). (g) Definitions.--In this section: (1) In general.--The term ``national service laws'' has the meaning given such term in section 101 of the National and Community Service Act of 1990 (42 U.S.C. 12511). (2) Covered incident.--The term ``covered incident'' means an incident related to a spill of national significance declared under the National Contingency Plan provided for under section 105 of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (42 U.S.C. 9605). (3) Unemployed individual.--The term ``unemployed individual'' has the meaning given such term in section 101 of the Workforce Investment Act of 1998 (29 U.S.C. 2801).
Gulf Coast Restoration Act - Amends the Workforce Investment Act of 1998 to authorize the Secretary of Labor to award national emergency grants to a state to provide oil spill relief employment assistance for an area of the state that has been affected by an oil or hazardous substances spill of national significance (covered incident). Makes assistance available to: (1) provide oil spill relief employment of unemployed or dislocated workers on projects to clean, restore, or reconstruct lands, marshes, waters, and structures located within an area of a covered incident, as well as for food, clothing, shelter and other humanitarian assistance to affected individuals; (2) establish cleanup standards; (3) provide employment and training of, and protective equipment to, workers; (4) increase a state's capacity to provide information about public and private services at one-stop centers and other access points to individuals adversely affected by a covered incident; and (5) provide temporary employment by public sector entities. Requires the Secretary to require states receiving oil spill relief employment assistance to provide assurance that Davis-Bacon Act (locality pay) wages are paid to all employees and contractors who work on such projects. Limits an individual's oil spill employment to six months, subject to extension for a period determined by the Secretary. Establishes a Gulf Coast Community Conservation Corps (Gulf Coast CCC), administered by the Corporation for National and Community Service, to carry out national service programs that provide a covered incident area with oil spill relief specified in this Act. Authorizes the Corporation to transfer funds from the National Service Trust Fund to provide in-service or post-service national service educational benefits to individuals participating in a Gulf Coast CCC national service program. Requires parties responsible for a covered incident to reimburse the federal government for costs incurred in carrying out the activities authorized under this Act.
Three More Women Come Forward to Accuse Bill Cosby of Sexual Assault Three more women have publicly come forward with accusations that Bill Cosby sexually assaulted them in the 1970s and '80s. The three women, including an actress who appeared on The Cosby Show and the former wife of a vice president at the William Morris Agency, joined the more than 40 women who allege they were drugged, sexually assaulted or sexually harassed by the comedian. At a press conference Wednesday, civil rights attorney Gloria Allred, who is now representing more than 21 accusers, said the three women are speaking out now to show their support for other accusers who have been criticized by Cosby's attorneys. "There is no statute of limitations on free speech," says Allred. "A person who alleges that she or he is a victim can speak out at any time." Cosby has consistently denied the allegations of sexual assault. His lawyer, Marty Singer, previously said that the claims "about alleged decades-old events are becoming increasingly ridiculous." Singer did not immediately comment on the new allegations. At the press conference Wednesday, Colleen Hughes said she was a young stewardess with American Airlines in the early '70s when she allegedly met Cosby on a flight to Los Angeles. Hughes claimed Cosby flirted with her the entire flight and invited her to lunch in Beverly Hills. She agreed to go, but only with another stewardess. Cosby allegedly had a car and driver waiting for them at the airport but the other stewardess never showed up. Hughes claimed she drove with Cosby to the Fairmont Hotel and he allegedly watched television in her room while she got ready for lunch. When she came out of the bathroom fully dressed, she said Cosby had allegedly ordered a bottle of champagne and was drinking it out of her Gucci pump. She claimed he raised the shoe towards her, and said, "A princess should always drink champagne out of a glass slipper." Hughes said they started watching television and he allegedly tried to hold her hand. The last thing she remembered, she claimed, was waking up around 5:15 p.m. Her clothes were all over the room and she "felt semen on the small of my back and all over me," she claimed. "It was disgusting," she said. "Bill obviously did not use a condom and there was no lunch and Bill was nowhere to be seen. I was confused and ashamed and never told anyone about what happened to me." Linda Ridgeway Whitedeer told reporters that at the time of her alleged assault she was the recently divorced wife of Fred Apollo, a vice president and department head of live TV for the William Morris Agency, who worked closely with the comedian. The former actress, who starred in 1972's The Mechanic with Charles Bronson, claimed she met Cosby on a movie set around 1971. She claimed Cosby told her she was there to be interviewed and then allegedly lured her into the director's office. Once inside, Cosby exposed himself, grabbed her head and shoved his penis in her mouth, Whitedeer alleged. "His attack was fast with surgical precision and surprise on his side," she claimed. "When Cosby was done there was a horrible mess of semen all over my face, my clothes and in my hair." Whitedeer said she wanted to go straight to William Morris Agency but changed her mind because she didn't want to embarrass her ex-husband. "An actress is like a tennis player," she said at the press conference. "Her integrity and confidence are everything.
For me, Bill Cosby was a career-killer." The third alleged victim, Eden Tiri, said she was a 22-year-old actress when she was given a part playing a cop on The Cosby Show in 1989. While working on the set in front of hundreds of people, she alleged she was led off set to Cosby's dressing room twice but he wasn't there. The third time, Cosby was allegedly waiting for her. Inside the dressing room, she alleged Cosby wrapped his arms around her and whispered in her ear, "See that's all we were going to do, make love. This is making love. He turned me around, hugged me and I left without saying a word." On July 29, a California judge ordered the comedian to give a deposition in the civil suit filed by Allred's client Judy Huth. The deposition is scheduled for Oct. 9. "My hope for that deposition is that we will ask questions and he should provide answers," said Allred. "We have a great deal of latitude. We are looking forward to his answers. We are entitled to answers." Huth claimed that Cosby, 78, molested her inside the Playboy Mansion when she was just 15 years old. She is just one of the nearly 50 women who've accused Cosby of some form of sexual abuse. The deposition will be the first time Cosby has spoken about the sexual assault allegations against him since a separate case in 2005. Portions of that deposition were unsealed earlier last month; in the deposition, Cosby admitted he gave Quaaludes to a woman and then had sex with her. ||||| Three more women joined the list Wednesday of nearly 50 women who have publicly accused Bill Cosby of sexual assault. Crusading women's-rights lawyer Gloria Allred, one of Cosby's most persistent legal foes, introduced three new clients who claimed to have been abused by Cosby, at a press conference in her Los Angeles office where she has introduced Cosby accusers multiple times before. The new accusers include two actors and one flight attendant. In introducing them, Allred alluded to the recent New York Magazine cover of three dozen Cosby accusers and one empty chair, symbolizing all the accusers who have not stepped forward publicly. "It is my turn to take the empty chair," said Linda Ridgeway Whitedeer, a former actress, weeping a little in Allred's office. Whitedeer said she met Cosby through her ex-husband, who was an executive at Cosby's agency, the William Morris Agency. In 1971, she said, Cosby forced her into oral sex in a lightning-quick attack minutes after she went into an empty room with him on a movie set and sat down in a chair. She said he grabbed the back of her head by her hair. Afterward "he was mumbling that I had been blessed with his semen as if it were holy water," she said. "He gloated over my humiliation. He had planned it. I was in shock." She said she never told anyone because she promised her ex not to do anything to embarrass him, because Cosby was too powerful in the industry and because she was worried about upsetting Cosby's wife, Camille, who was pregnant at the time.
Colleen Hughes, a former American Airlines flight attendant, says she met Cosby on a flight in the early 1970s, and agreed to go to lunch with him. While she freshened up in a hotel room, he ordered Champagne, then gave her some. She woke up hours later covered in semen, she says, and Cosby was gone. She saw him again on another flight about a year later, he flirted with her again, apparently not recognizing her. She warned him to stay away from her fellow flight attendants or she would report him. "I've lived my whole life with this terrible secret about Bill and what he did to me," she said. "I never told anyone." Eden Tirl, an actress, accused Cosby of sexually harassing her — grabbing her and hugging her in an overly intimate and yet intimidating way — when she appeared on The Cosby Show during the fifth season in 1989. She was an admirer of Cosby from childhood, and praised him and the show for putting black and mixed-race Americans, like herself, on previously all-white American TV. But after her encounter with Cosby, she said, she did not report him because she feared his power to ruin her career. "It took 39 other women to come forward before I was ready to share my experience" publicly, she said. The latest tally of Cosby accusers totals 46, so these three women would bring the number to 49 women who say they were drugged and raped by Cosby in episodes going back to the mid-1960s. Since the allegations first re-emerged last fall, Cosby has denied all wrongdoing and has not been charged with a crime. Most of the allegations against him are too old to be criminally prosecuted under states' statutes of limitation, but a number of accusers are suing him in civil court. Allred represents 21 accusers so far, and the lawsuit she is pursuing for one of them, Judy Huth, is the one most likely to force Cosby to answer questions in a deposition. Huth alleges Cosby sexually molested her at the Playboy Mansion 40 years ago when she was 15, and is suing him for allegedly causing her severe emotional distress. Cosby tried to get her lawsuit dismissed, and failed, and now a California judge has ordered him to sit for a deposition on Oct. 9. Allred has promised to grill him, and will seek to make the deposition public. Allred also challenged Monique Pressley, Cosby's new lawyer hired to defend him in the media, to a public debate. She described Pressley's recent attempts to defend Cosby on TV as "pathetic" for implying that accusers have no right to speak out decades after what they say happened to them. "Instead of trying to change the subject from specific allegations to vague generalizations and truisms about the criminal justice system, I challenge you to debate me," Allred said. "Stand up like a woman and do it and please do not give some ridiculous excuse as to why you cannot accept my challenge." Pressley did not respond to the new accusers but in a statement she rejected Allred's challenge. "While I do appreciate the temptation that may exist, for some, to turn this matter into a public spectacle, lawyers representing clients resolve matters in court, not debates," she said in the emailed statement.
A deposition in another, 10-year-old civil suit against Cosby, which was supposed to be sealed but came out partly through court order and partly through a leak, has already severely damaged what was left of Cosby's reputation as a beloved TV dad and role model. In that deposition, Cosby acknowledged seeking drugs to give to women he sought for sex, and described his approach to seducing women in terms widely described as repulsive.
– Three more women are accusing Bill Cosby of sexual assault and harassment, and the details are among the most graphic yet. "It is my turn to take the empty chair," an emotional Linda Ridgeway Whitedeer said at a press conference yesterday in Gloria Allred's office, referring to the New York magazine cover in which the chair represents all of the women who could still come forward against Cosby. Per USA Today, Whitedeer, who met him through her ex-husband, a William Morris exec, says the comedian forced her into oral sex on a movie set in 1971. "His penis was out of his pants and he shoved it into my mouth," she says, according to Page Six, adding per People that "when [he] was done there was a horrible mess of semen all over my face, my clothes and in my hair." Cosby also allegedly said, per USA Today, "that I had been blessed with his semen as if it were holy water. He gloated over my humiliation." Meanwhile, Colleen Hughes says she had a "disgusting" encounter with Cosby in the '70s while working as a flight attendant, People notes. Cosby allegedly watched TV in a hotel room while she freshened up in the bathroom for a lunch date, and she says when she came out he was sipping champagne out of her Gucci shoe, saying, "A princess should always drink champagne out of a glass slipper." She apparently drank the champagne and says the next thing she remembers was waking up after 5 that afternoon, clothes scattered everywhere and with "semen on the small of my back and all over me. … I was confused and ashamed and never told anyone about what happened to me." Also to come forward yesterday: Eden Tiri, a then 22-year-old actress who says Cosby grabbed her and hugged her in his dressing room on the set of The Cosby Show in 1989. (Cosby is set to give a new deposition Oct. 9 in the case of another Allred client.)
Click here to tell the artists to #TakeAKnee during their halftime set in solidarity with Kaepernick. Rihanna has reportedly turned down the opportunity to perform in front of over 100 million people at the 2019 Super Bowl halftime show. Why? She supports Colin Kaepernick, the former San Francisco 49ers quarterback who has been exiled from the NFL because of his decision to kneel during the national anthem. Kaepernick risked his career to take a knee for equality, and the NFL punished him for it. Until the league changes their policy and support players’ constitutional right to protest, no artists should agree to work with the NFL. Join me in asking Maroon 5 to drop out of the 2019 Super Bowl halftime show. Rihanna is not the first major artist to turn down the Super Bowl halftime show. Jay-Z turned down a request to perform at the 2017 Super Bowl, and even addressed it in his 2018 song “Apesh*t.” The lyrics say, “I said no to the Super Bowl: you need me, I don't need you. Every night we in the endzone, tell the NFL we in stadiums too," while the music video shows a line of men on one knee. Comedian Amy Schumer has also weighed in, praising Rihanna and encouraging Maroon 5 to follow her lead and step down. Schumer also said that she will refuse to do any commercials that would air during the big game. “Hitting the NFL with the advertisers is the only way to really hurt them," she said. Maroon 5 has made music over the years featuring artists from all genres, including Rihanna, Cardi B and Kendrick Lamar — all of whom have publicly supported Kaepernick in his decision to protest the violent racism sweeping the United States. Maroon 5 must do the same. The band has a chance to stand on the right side of history. If they don’t, they will be remembered for choosing to side with the NFL over its players. The band’s lead singer, Adam Levine, has not shied away from politics in the past. He has been a strong supporter of same sex marriage and LGBT rights. The band even changed the location of a show because the venue supported anti-gay marriage laws. If the band can take a stand for LGBT rights, they should do the same for these players. Colin Kaepernick has sacrificed his NFL career to call out violent racism in America, and players across the country have followed his lead. Rihanna, Jay-Z, Amy Schumer and others have refused to work with the NFL. Maroon 5: Americans look to artists and celebrities as leaders, and you have huge opportunity to use your influence to take a stand. Sign to tell Maroon 5 to drop out of the Super Bowl halftime show in solidarity with Kaepernick and players who #TakeAKnee. ||||| Adam Levine and Maroon 5 are being urged to pull out of performing at the Super Bowl halftime show in support of former NFL player Colin Kaepernick. More than 44,000 people have signed a petition on change.org calling on the group to cancel their plans to perform at the sporting event in Atlanta on February 3 next year. According to the petition, “Kaepernick risked his career to take a knee for equality, and the NFL punished him for it. Until the league changes their policy and support players’ constitutional right to protest, no artists should agree to work with the NFL.” The former San Francisco 49ers quarterback, 31, was the first NFL player to kneel during the national anthem in a peaceful protest against police brutality and racial inequality in 2016. 
He has not played since that season and last year filed a grievance against the league and its owners, accusing them of colluding to keep him off the field. US Weekly exclusively revealed in September that the Grammy-winning band had accepted an offer to perform at the Super Bowl, with multiple sources later telling Us that Cardi B is "being considered" as a special guest. Maroon 5 have yet to officially confirm their involvement, but Levine, 39, did little to dispel the rumors during an appearance on The Ellen DeGeneres Show on Friday, November 16. "It's definitely a rumor. And the rumor's a rumor that everyone seems to be discussing. It's the Super Bowl. It's a great event and there's gonna be a band performing — or an artist of some kind — at halftime. And it's gonna be great, regardless of who it is," the Voice judge said. "Whoever is lucky enough to get that gig is probably gonna crush it. … Whoever does it is probably equal parts nervous and excited. This is all speculative 'cause I don't know who I'm talking about." In October, Amy Schumer urged the band to reconsider performing at the show, while Us exclusively reported that Rihanna had decided to skip performing at the Super Bowl because she "supports Colin Kaepernick." Jay Z also claimed on his song "Apes—t" that he had also turned down an offer to headline the show, and Pink, who sang the national anthem at the Super Bowl this year, also reportedly said no.
– Thousands of people are not on board with Maroon 5 headlining the Super Bowl halftime show. At least 48,541 people, to be exact. That's how many people, as of this writing, had signed a Change.org petition asking the band to drop out of the performance. But no, not because they have anything against the band. Rather, the petition claims that Rihanna turned down the opportunity to perform because she supports Colin Kaepernick, and Maroon 5 should do the same. "Kaepernick risked his career to take a knee for equality, and the NFL punished him for it. Until the league changes their policy and support players’ constitutional right to protest, no artists should agree to work with the NFL," the petition reads. Maroon 5 has yet to officially confirm the rumor that it is this season's halftime headliner, Us reports.
stenotrophomonas maltophilia is a readily available commensal of importance ( 1 ) , found in water , soil , sewage and frequently on plant or within plant rhizosphere ( 2 ) . the bacteria explore the depression of immune systems to cause infection ( 4 - 6 ) , though they have also been implicated in infection of immunocompetent subjects ( 7 - 9 ) . they are therefore important considering their infectivity and the morbidity they initiate ( 10 , 11 ) , which range from nosocomial to community acquired infections . they cause a wide range of human systemic infections ( 12 , 13 ) after entering through the respiratory pathway ( 4 , 14 ) . multidrug resistance by s. maltophilia has been well documented ( 16 - 19 ) , raising the mortality rate in some areas to as high as 44.4% ( 20 ) . although the drug of choice for s. maltophilia infections is the sulfonamides ( 21 ) , especially the synergistic form ( cotrimoxazole or trimethoprim - sulfamethoxazole ) , resistance to these antibiotics is rampant around the world among human and nonhuman animals ( 22 - 24 ) and is mediated by the sulphonamide resistance ( sul ) gene . sul3 , being the newest sulphonamide gene , has been fingered as the possible reason for new rise in sulphonamide resistance world - wide ( 25 ) . sul2 has also been the most widely reported gene in animals ( 26 - 30 ) and can be used to trace the sulphonamide resistance genes in other sources originated from animal farms . therefore , the safety of consumers is hinged on the type of bacterial flora associated with the plants and their susceptibility to antibiotics when they infect consumers . this in turn depends on the pool of genes in the rhizosphere of plants , as resistance gene(s ) may become disseminated to the indigenous bacterial community form one organism , and ultimately contribute to the clinical problems of antibiotic resistant pathogens . therefore , the aim of this study was to assess the s. maltophilia isolates from plants rhizosphere in the nkonkobe municipality , eastern cape province , south africa , for their antibiogram characteristics and the presence of antibiotic resistance genes , sul2 and sul3 , in their genomes . this study was conducted within the nkonkobe municipality of the eastern cape province , south africa . the municipality is situated in the amathole district municipality , bordering the nxuba municipality to the west and the amahlathi municipality to the east . soil butternut and grass roots in alice town environment were carefully uprooted and aseptically cut with sterile scissors into sterile containers and transported in ice to the laboratory for bacteria isolation . isolation of the bacteria from root rhizospheres was performed following the methods of bollet et al . about 1 g of the plant root sections were collected and inoculated into 10 ml of nutrient broth ( bio - merieux , marcy - letoile , france ) , supplemented with 0.5 mg of dl - methionine ( sigma chemicals , south africa ) per ml . after 24 hours of incubation at 37c , 0.1 ml was inoculated unto mueller hinton agar , spread to dry using a glass spreader , and allowed to stand for 15 minutes . thereafter , four discs of 10 g imipenem ( mast diagnostics , merseyside , uk ) were aseptically placed on the surface of the inoculated agar . after 18 hours of incubations at 37c , colonies that grew around the disc were subcultured for purity and subjected to preliminary identification . 
the purified isolates were gram stained and observed under a light microscope . the gram negative isolates were subjected to oxidase test and the oxidase negative isolates were subjected to preliminary speciation using analytic profile index 20e ( api 20 e , biomerieux , south africa ) . in addition , carbon assimilation tests and other biochemical tests were carried out in the identification process . differentiation of s. maltophilia isolates amongst the genus isolates identified above was carried out using specie - specific polymerase chain reaction ( pcr ) , using the primer sets sm1 ( 5'-cagcctgcgaaaagta-3 ' ) and sm2 ( 5'-ttaagcttgccacgaacag-3 ' ) ( inqaba biotech . , south africa ) ( 32 ) . the pcr condition was as follows : an initial denaturation of 95c for 5 minutes , a subsequent 30-cycle amplification including annealing at 58c for 10 seconds , extension at 72c for 60 seconds , and denaturation at 95c for 10 seconds . s. maltophilia dsm 50170 ( atcc 13637 , type strain t20 , berlin , germany ) was used as the control . the disc diffusion technique was employed to determine the antibiotic susceptibility pattern of the isolates . the test antibiotics included meropenem , cefuroxime , ampicillin , ceftazidime , cefepime , minocycline , kanamycin , ofloxacin , levofloxacin , moxifloxacin , ciprofloxacin , gatifloxacin , polymyxin b , cotrimoxazole , trimethoprim , aztreonam and polymyxin b. s. maltophilia 50170 was used as the positive control , and the antibiogram was performed in accordance with standards described by the national committee for clinical laboratory standards ( 34 ) and cheesebrough ( 35 ) . the multiple antibiotic resistance index ( mari ) was calculated as the ratio of the number of antibiotics to which resistance occurred by the isolates ( a ) to the total number of antibiotics to which the isolates were exposed ( b ) , ie , mari = a / b ( 36 ) . trimethoprim - sulfamethoxazole is the drug of choice in the treatment of infections caused by s. maltophilia . this , along with our initial observation of resistance to this antibiotic , informed the need for assessment of the presence of sul2 and sul3 genes in the resistant isolates , which were performed in accordance with the descriptions of blahna et al . the pcr condition for sul2 detection began with an enzyme activation ( denaturation ) stage at 94c for five minutes , followed by 30 cycles of denaturation at 94c for 40 seconds , annealing at 55c for 40 seconds and extension at 72c for 1 minute . the pcr condition was as follows : heating at 94c for five minutes , 30 cycles at 94c for 60 seconds , 55c for 60 seconds and 72c for 60 seconds , followed by one cycle at 72c for seven minutes ( 26 ) .
one hundred and twenty ( 96% ) s. maltophilia isolates were recovered from grass root rhizosphere , while 5 ( 4% ) were recovered from soil butternut rhizosphere ( table 2 ) .
about 8.9% of the isolates were resistant to meropenem , while resistance to the other antibiotics was as follows : cefuroxime 95.6% , ampicillin - sulbactam 53.9% , ceftazidime 10.7% , cefepime 29.3 % , minocycline 2.2% , kanamycin 56.9% , ofloxacin 2.9% , levofloxacin 3% , moxifloxacin ( 2.8% ) , ciprofloxacin 24.3% , gatifloxacin 1.3% , polymyxin b 2.9% and aztreonam 58% ( table 3 ) . about 88% of the isolates were susceptible to meropenem and ceftazidime , while 58.7% were susceptible to cefepime . in addition , 97.8% and 97.1% of the isolates were susceptible to minocycline and polymycin b , respectively . with regards to the fluoroquinolones , about 94.7% of the isolates were susceptible to both gatifloxacin and levofloxacin , while 90% and 87.1% were susceptible to moxifloxacin and ofloxacin , respectively ( table 3 ) . a lower resistance ( 26.1% ) to cotrimoxazole was observed in comparison with 98.6% resistance to trimethoprim ( table 3 ) , and the mari ranged 0.32 - 0.9 ( figure 1 ) . furthermore , four isolates were positive for sul3 genes while none were for sul2 gene ( table 4 ) . commensal s. maltophilia may end up as an opportunistic pathogen ( 37 ) . as revealed in this study , these bacteria are easily culturable , and appear ubiquitous , probably due to their resilience in the face of environmental stress ( 38 ) . our experience in this study suggests that the recovery of the organisms varies from place to place . as some studies have reported the isolation of this bacteria from soil butternut and walnut rhizosphere ( 39 , 40 ) , only 5 ( 4% ) were isolated from the soil butternut rhizosphere compared to 120 ( 96% ) from grass rhizosphere . the intrinsic resistance of this organism to imipenem was exploited for their isolation and api , supported by molecular identification , allowed convenient discrimination between the stenotrophomonas species and other imipenem - resistant bacteria only ( 32 ) . the recovery rate of this bacterium appears to be increasing with time compared to when the bacteria was initially discovered . this scenario is buttressed by our findings as well as those of gulmez et al . ( 41 ) which showed a higher frequency of occurrence of this specie than previously observed . s. maltophilia has been reported to be resistant to myriads of antibiotics ( 42 , 43 ) . this high resistance characteristic which was peculiar to clinical isolates has now been observed among environmental strains ( 44 , 45 ) . the resistance observed to kanamycin and trimethoprim in this study was in agreement with the report of musa et al . similarly , s. maltophilia resistance to cephalosporin was higher in this study compared to that reported previously ( 47 ) . berg et al . ( 48 ) and crossman et al . ( 49 ) also noted that resistance to conventional antibiotics would have helped s. maltophilia to compete with other rhizospheric bacteria and made them survive in their habitat . this assertion is pertinent as all the isolates here showed mari > 0.2 , which implies that they have arisen from high - risk sources where antibiotics is in constant arbitrary use resulting in high selective pressure , as reported by suresh et al . fluoroquinolone and polymycin b , both of which showed good activities against the s. maltophilia isolates , are usually the antibiotics of choice in the treatment of infections by the bacteria . the activities of these antibiotics against the bacteria have been similarly reported by gales et al . valdezate et al . 
( 52 ) observed that > 95% ( 94.7% in this study ) of the bacterial isolates in their study were susceptible to a fluoroquinolone . however , it is known that trimethoprim - sulfamethoxazole is the drug of therapeutic choice against s. maltophilia infections ( 10 , 53 - 55 ) ; but several reports have shown that the prevalence of s. maltophilia strains that are resistant to trimethoprim - sulfamethoxazole are increasing ( 56 - 58 ) . in this study , about 26% of the s. maltophilia isolates were resistant to this antibiotic compared with 2% reported elsewhere ( 10 ) . the trend continues to threaten the public health of individuals , especially in an hiv / aids infested populations where the immune system is weakened . resistance to trimethoprim - sulfamethoxazole is mediated by the sulphonamide resistance sul genes among other determinants ( 59 ) . a study in portugal by antunes et al . ( 60 ) detected sul1 , sul2 , or sul3 genes in some gram - negative isolates . this gene was earlier detected in some gram - negative isolates recovered from animals and foods in switzerland and germany ( 22 - 24 ) , suggesting commensal s. maltophilia to be as important as its clinical counterpart . the presence of sul3 genes in this study may imply that the endophytic and clinical strains possess a similar level of antibiotic resistance , which may be more extensive among some endophytic strains of s. maltophilia ( 2 ) . the rise in this sulphonamide resistance worldwide has been attributed to the newest sulphonamide resistance gene , sul3 , especially in nonclinical ( human ) specimens like fresh water and soil ( used in this study ) , sewage loving animals and animal farm ( 25 , 62 ) ; but the isolates harboring these genes can still infect human . the potential threat that such resistant isolates could be to public health , informed the call for a surveillance study of the sul gene and phenotypic sulfamethoxazole by toleman et al . commensal s. maltophilia appears to be an important commensal with comparable antibiogram characteristics to its clinical strains . it also appears to be abundant in grass and soil butternut rhizosphere in the eastern cape province of south africa . the maris of the bacterial isolates suggest that their sources have been under antibiotics selective pressure , which could be related to abuse of antibiotics . their antibiogram characteristics also suggest that the bacterium is an important reservoir of antibiotic resistant determinants ( especially sulphonamide resistance ( sul3 ) genes ) in the environment .
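The multiple antibiotic resistance index used above is plain arithmetic: MARI = a / b, where a is the number of antibiotics an isolate resists and b the number it was exposed to, with values above 0.2 read as evidence of a high-risk, antibiotic-pressured source. A minimal sketch of the calculation follows; the isolate names and counts are hypothetical and do not reproduce the study's data.

```python
def mari(resisted, tested):
    """Multiple antibiotic resistance index: antibiotics resisted (a)
    divided by antibiotics the isolate was exposed to (b)."""
    return resisted / tested

# Hypothetical resistance counts for isolates tested against a 16-drug panel.
profiles = {"isolate_a": 5, "isolate_b": 14, "isolate_c": 3}
for name, resisted in profiles.items():
    index = mari(resisted, 16)
    note = "likely high-risk source (> 0.2)" if index > 0.2 else "low selective pressure"
    print(f"{name}: MARI = {index:.2f} -> {note}")
```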
background : assessment of resistance genes is imperative , as they become disseminated to bacterial flora in plants and to the indigenous bacterial community , and thus ultimately contribute to the clinical problems of antibiotic resistant pathogens . objectives : the research aimed to assess the antibiogram characteristics and incidence of sul3 genes of stenotrophomonas maltophilia isolates recovered from plant rhizospheres in nkonkobe municipality . materials and methods : identification and assessment of resistance genes ( sul2 and sul3 genes ) were carried out using polymerase chain reaction ( pcr ) . analytical profile index ( api ) was used for biochemical characterization and identification before the pcr . antibiotic susceptibility testing was carried out using the approved guidelines and standards of the clinical and laboratory standards institute ( clsi ) . results : a total of 125 isolates were identified , composed of 120 ( 96% ) from grass root rhizosphere and 5 ( 4% ) from soil butternut root rhizosphere . in vitro antibiotic susceptibility tests showed varying resistances to meropenem ( 8.9% ) , cefuroxime ( 95.6% ) , ampicillin - sulbactam ( 53.9% ) , ceftazidime ( 10.7% ) , cefepime ( 29.3% ) , minocycline ( 2.2% ) , kanamycin ( 56.9% ) , ofloxacin ( 2.9% ) , levofloxacin ( 1.3% ) , moxifloxacin ( 2.8% ) , ciprofloxacin ( 24.3% ) , gatifloxacin ( 1.3% ) , polymyxin b ( 2.9% ) , cotrimoxazole ( 26.1% ) , trimethoprim ( 98.6% ) and aztreonam ( 58% ) . the isolates were susceptible to the fluoroquinolones ( 74.3 - 94.7% ) , polymyxin b ( 97.1% ) and meropenem ( 88.1% ) . the newest sulphonamide resistance gene , sul3 , was detected among the trimethoprim - sulfamethoxazole ( cotrimoxazole ) - resistant isolates , while the most frequent sulphonamide resistance gene in animal source isolates , sul2 , was not . conclusions : the commensal s. maltophilia isolates in the nkonkobe municipality environment harbored the resistance gene sul3 like their clinical counterparts , and may therefore serve as reservoirs of antibiotic resistance determinants .
Theater shooter James Holmes is in court this week as prosecutors outline their case. Will we finally learn what was inside his notebook? Christine Pelisek on the possible surprises. Almost six months after 12 people were killed and at least 70 were injured in a shooting rampage at a movie theater in Aurora, Colo.—and just days after a deadly hostage standoff rattled the town once more—the highly anticipated preliminary hearing is set to begin today for accused mass shooter James E. Holmes. The purpose of the preliminary hearing, which is expected to last through the week and draw hundreds of witnesses, victims, and members of the media to the Arapahoe County courthouse, is to determine if there is sufficient evidence to put the 25-year-old former University of Colorado Denver neuroscience doctoral student on trial. Holmes, who has not yet entered a plea and has made at least one suicide attempt by running headfirst into a jail cell wall, has been charged with 166 counts of first-degree murder and attempted murder and possession of explosive devices. This will be the first time that details about the shooting and Holmes’s capture by police minutes after the rampage will be revealed. In July, Arapahoe County District Judge William Sylvester issued a gag order barring attorneys and investigators from speaking publicly about the case. The contents of a notebook that Holmes sent his school psychiatrist, Dr. Lynne Fenton, on July 19—the day before the shooting—that reportedly contained violent images of an attack may also be divulged. Holmes is suspected of gunning down 12 people and injuring 70 others in a shooting spree at the midnight showing of The Dark Knight Rises at the Century 16 movie theater on July 20 in Aurora. On that fateful night, police say Holmes, with his hair dyed red as a creepy homage to Batman’s Joker, was dressed in combat gear and armed with an assault rifle, a Glock pistol, a shotgun, and two canisters of what sources say was tear gas. After Holmes was arrested at the back of the theater, police discovered that his third-floor apartment had been booby-trapped with explosives, trip wire, and gasoline. Authorities believe that Holmes had rigged his apartment so that it would kill responders when they arrived to investigate after the shooting. Police say Holmes went on his rampage one month after he withdrew from the Ph.D. program after failing a year-end exam. On that day, he bought an AR-15 semiautomatic rifle to add to his burgeoning collection of weapons. At the preliminary hearing, Arapahoe County prosecutors will likely outline their case against Holmes and present evidence to show that Holmes’s rampage was premeditated and that he methodically began preparing for the attack months earlier. In an earlier hearing, prosecutors said that in March of 2012 Holmes told a fellow classmate he wanted to kill people. Prosecutors are also expected to call to the stand police investigators, first responders, coroner officials, as well as a number of injured moviegoers who witnessed the bloodbath. In addition, prosecutors are set to play the 911 calls by dozens of frantic moviegoers as well as show some of the 30 hours of video from the theater. 
Holmes’s defense team, which has repeatedly suggested that the wiry former student suffers from mental illness, is planning to call at least one mental-health expert, and will undoubtedly take the position that Holmes, who now sports longer brown hair and a bushy beard as he sits zombielike through hearings, was insane at the time of the mass shooting and can’t be found guilty of the heinous crimes. According to ABC News, defense attorneys also plan to call two unidentified witnesses who will testify about Holmes’ mental state—a move that was heavily opposed by the prosecution at a hearing last week. The witnesses, who have never been interviewed by the defense team, are cooperating with Colorado law-enforcement authorities, ABC reported. The preliminary hearing will be held in the courthouse’s biggest courtroom, and there will also be overflow rooms capable of seating hundreds of people. Armed law-enforcement personnel will be stationed on the court’s rooftops. The hearing is expected to draw hundreds of spectators, including the macabre fans of Holmes who call themselves Holmsies. Meanwhile, the theater, which has been closed since the shooting, is set to reopen on Jan. 17. Several family members of those killed have criticized Cinemark, the theater owners, after they sent them invitations to the grand opening offering “a special evening of remembrance” followed by the showing of a movie. In a letter to Cinemark, the families wrote: “During the holiday we didn’t think anyone or anything could make our grief worse but you, Cinemark, have managed to do just that by sending us an invitation two days after Christmas inviting us to attend the re-opening of your theater in Aurora where our loved ones were massacred.” ||||| James Holmes hearing may reveal "difficult" evidence Retailers preparing for their most critical time of the year Retailers preparing for their most critical time of the year The storied football team of Gallaudet, the nation's first university for the deaf (CBS News) In Colorado, the suspect in last July's movie theater killings returns to court Monday. James Holmes will listen as prosecutors detail the evidence against him. It's information that up until now has not been released. Holmes is accused of opening fire in a crowded Aurora movie theater July 20, killing 12 people and wounding 70. Complete Coverage: Colorado Massacre James Holmes is mentally ill: lawyers Only radio transmissions between first responders that night have been made public so far. The new evidence could include testimony from victims and witnesses, as well as video and 911 calls from inside theater 9, where the shooting happened. The district attorney has warned victims' families they might not want to attend. In a letter, he asked them to "carefully consider whether or not you think that you are ready to be exposed to potentially difficult information at the hearing." In order to keep the public and press away from victims and families who do attend, officials have set up a separate courtroom where they can watch the proceedings on a closed circuit feed. Jessica Watts / CBS News Jessica Watts will be there. She's attended every hearing so far in honor of her cousin Jonathon Blunk, a husband and father of two who was killed inside the theater. "I had made a promise to him right after the shooting. I visited the crosses and told him that I would see this through to the end," she said. Holmes attorneys will challenge the evidence. 
They are also ready to call their own witnesses to describe Holmes' mental state, likely setting up an insanity defense. At the end of the week, the judge will decide if there is enough evidence for Holmes to stand trial.
– James Holmes will be back in court today for the first time in almost six months for a preliminary hearing as the prosecution lays out its case against the 25-year-old, to establish whether there is sufficient evidence to put him on trial. Holmes has been charged with 166 counts of first-degree murder and attempted murder, and hasn't yet entered a plea. Prosecutors are expected to bring a slew of police detectives, first responders, and witnesses who were inside the theater to the stand, the Daily Beast reports. And it's not going to be pretty: CBS notes that the DA has asked victims' families to "carefully consider whether or not you think that you are ready to be exposed to potentially difficult information at the hearing." Prosecutors are expected to play dozens of 911 calls from frightened moviegoers, and show video from the theater. There's also a chance the public will finally learn the contents of the notebook he sent his school psychiatrist the day before the shooting—which reportedly contains violent images. The defense, for its part, intends to call at least one mental health expert in hopes of mounting an insanity defense.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Fugitive Information Networked Database Act of 2009'' or the ``FIND Act of 2009''. SEC. 2. DEFINITION. In this Act, the term ``National Crime Information Center database'' means the computerized index of criminal justice information operated by the Federal Bureau of Investigation pursuant to section 534 of title 28, United States Code, and available to Federal, State, and local law enforcement and other criminal justice agencies. SEC. 3. GRANTS TO ENCOURAGE STATES TO ENTER FELONY WARRANTS. (a) State System.--A State Attorney General may, in consultation with local law enforcement and any other relevant government agencies, apply for a grant from the United States Attorney General to-- (1) develop and implement secure, electronic warrant management systems that permit the prompt preparation, submission, and validation of warrants and are compatible and interoperable with the National Crime Information Center database; or (2) upgrade existing electronic warrant management systems to ensure compatibility and interoperability with the National Crime Information Center database; to facilitate information sharing and to ensure that felony warrants entered into State and local warrant databases can be automatically entered into the National Crime Information Center database. The grant funds may also be used to hire additional personnel, as needed, for the validation of warrants entered into the National Crime Information Center database. (b) Eligibility.--In order to be eligible for a grant authorized under subsection (a), a State shall submit to the United States Attorney General-- (1) a plan to develop and implement, or upgrade, systems described in subsection (a); (2) a report that-- (A) details the number of felony warrants outstanding in the State; (B) describes any backlog of warrants that have not been entered into the State and local warrant databases or into the National Crime Information Center database, over the preceding 3 years (including the number of such felony warrants); (C) explains the reasons for the failure of State and local government agencies to enter felony warrants into the National Crime Information Center database; and (D) demonstrates that State and local government agencies have made good faith efforts to eliminate any such backlog; (3) guidelines for warrant entry by State and local government agencies that will ensure that felony warrants entered into State and local warrant databases will also be entered into the National Crime Information Center database and explain the circumstances in which, as a matter of policy, certain felony warrants will not be entered into the National Crime Information Center database; and (4) an assurance that the State will implement such practices and procedures as may be necessary to ensure that all felony warrants for Part I crimes (as classified for the Federal Bureau of Investigation's Uniform Crime Report) that are issued after the date of enactment of this Act are entered into the National Crime Information Center database. 
(c) Requirements.--Each State that receives a grant under this section shall, as a condition of receiving such grant, report to the Attorney General on an annual basis the number of felony warrants entered into the State and local warrant databases, the number of felony warrants entered into the National Crime Information Center database, and, with respect to felony warrants not entered into the National Crime Information Center database, the reasons for not entering such warrants. (d) Authorization.--There are authorized to be appropriated to the Attorney General $25,000,000 for each of the fiscal years 2009 and 2010 for grants to State and local government agencies for resources to carry out the requirements of this section. SEC. 4. FBI COORDINATION. The Federal Bureau of Investigation shall provide to State and local government agencies the technological standard that ensures compatibility and interoperability of all State and local warrant databases with the National Crime Information Center database. SEC. 5. REPORT REGARDING FELONY WARRANT ENTRY. (a) In General.--Not later than 270 days after the date of the enactment of this Act, the Comptroller General of the United States shall submit to the Committees on the Judiciary of the House of Representatives and the Senate a report regarding-- (1) the number of felony warrants currently active in each State; (2) the number of those felony warrants that State and local government agencies have entered into the National Crime Information Center database; (3) the number of times State and local law enforcement in each State has been contacted regarding a fugitive apprehended in another State over the preceding 3 years; and (4) the number of fugitives from each State who were apprehended in other States over the preceding 3 years but not extradited. (b) Assistance.--To assist in the preparation of the report required by subsection (a), the Attorney General shall provide the Comptroller General of the United States with access to any information collected and reviewed in connection with the grant application process described in section 3. (c) Report.--On an annual basis, the Attorney General shall submit to the Committees on the Judiciary of the House of Representatives and the Senate a report containing the information received from the States under this section 3(c). SEC. 6. ADDITIONAL RESOURCES FOR FUGITIVE TASK FORCES AND EXTRADITION. (a) Presidential Threat Protection Act of 2000.--Section 6(b) of the Presidential Threat Protection Act of 2000 (28 U.S.C. 566 note) is amended by adding at the end the following: ``There are authorized to be appropriated to the Attorney General for the United States Marshals Service to carry out the provisions of this section $20,000,000 for fiscal year 2009 and $10,000,000 for each of the fiscal years 2010 through 2014.''. (b) Justice Prisoner and Alien Transport System.--There are authorized to be appropriated to the Attorney General for the United States Marshals Service $3,000,000 for each of fiscal years 2009 through 2014 to assist in extradition of fugitives through the Justice Prisoner and Alien Transport System.
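As a purely illustrative sketch of the kind of record a state might compile for the section 3(c) annual report, the following uses hypothetical field names and figures; nothing here is prescribed by the bill.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class AnnualWarrantReport:
        """Illustrative record mirroring the Sec. 3(c) reporting elements (field names are hypothetical)."""
        state: str
        fiscal_year: int
        felony_warrants_in_state_dbs: int   # felony warrants entered into state/local databases
        felony_warrants_in_ncic: int        # felony warrants also entered into the NCIC database
        reasons_not_entered: Dict[str, int] = field(default_factory=dict)  # reason -> count

        def ncic_entry_rate(self) -> float:
            """Share of state/local felony warrants that also reached the NCIC database."""
            if self.felony_warrants_in_state_dbs == 0:
                return 0.0
            return self.felony_warrants_in_ncic / self.felony_warrants_in_state_dbs

    # example: a state's hypothetical annual submission
    report = AnnualWarrantReport(
        state="XX", fiscal_year=2009,
        felony_warrants_in_state_dbs=12000,
        felony_warrants_in_ncic=9000,
        reasons_not_entered={"extradition not authorized": 2000, "pending validation": 1000},
    )
    print(f"NCIC entry rate: {report.ncic_entry_rate():.0%}")  # -> 75%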
Fugitive Information Networked Database Act of 2009 or the FIND Act of 2009 - Permits a state attorney general to apply for Department of Justice (DOJ) grants to develop and implement or upgrade systems for the preparation, submission, and validation of state felony warrants that are compatible and interoperable with the National Crime Information Center database. Allows grant funds to be used to hire additional personnel to validate warrants entered into the database. Directs the Federal Bureau of Investigation (FBI) to provide state and local government agencies the technological standard to make state and local warrant databases compatible and interoperable with the National Crime Information Center database. Authorizes appropriations for the Fugitive Apprehension Task Forces and for the extradition of fugitives through the Justice Prisoner and Alien Transport System.
China has overtaken Germany as the world’s third-largest arms exporter and cut its dependence on imports by producing more-sophisticated weapons, according to a new report. State-owned defense suppliers, such as Norinco Group, have become prominent at arms fairs, opening new markets beyond established customers in South Asia by, for example, selling armed drones to Nigeria in its battle against Boko Haram rebels. U.S. and... ||||| The volume of US exports of major weapons rose by 23 per cent between 2005–2009 and 2010–14. The USA’s share of the volume of international arms exports was 31 per cent in 2010–14, compared with 27 per cent for Russia. Russian exports of major weapons increased by 37 per cent between 2005–2009 and 2010–14. During the same period, Chinese exports of major arms increased by 143 per cent, making it the third largest supplier in 2010–14, though still significantly behind the USA and Russia. ‘The USA has long seen arms exports as a major foreign policy and security tool, but in recent years exports are increasingly needed to help the US arms industry maintain production levels at a time of decreasing US military expenditure’, said Dr Aude Fleurant, Director of the SIPRI Arms and Military Expenditure Programme.

Imports by Gulf Cooperation Council states on the rise

Arms imports to Gulf Cooperation Council (GCC) states increased by 71 per cent from 2005–2009 to 2010–14, accounting for 54 per cent of imports to the Middle East in the latter period. Saudi Arabia rose to become the second largest importer of major weapons worldwide in 2010–14, increasing the volume of its arms imports fourfold compared with 2005–2009. ‘Mainly with arms from the USA and Europe, the GCC states have rapidly expanded and modernized their militaries’, said Pieter Wezeman, Senior Researcher with the SIPRI Arms and Military Expenditure Programme. ‘The GCC states, along with Egypt, Iraq, Israel and Turkey in the wider Middle East, are scheduled to receive further large orders of major arms in the coming years.’

Asian arms imports continue to increase

Of the top 10 largest importers of major weapons during the 5-year period 2010–14, 5 are in Asia: India (15 per cent of global arms imports), China (5 per cent), Pakistan (4 per cent), South Korea (3 per cent) and Singapore (3 per cent). These five countries accounted for 30 per cent of the total volume of arms imports worldwide. India accounted for 34 per cent of the volume of arms imports to Asia, more than three times as much as China. China’s arms imports actually decreased by 42 per cent between 2005–2009 and 2010–14. ‘Enabled by continued economic growth and driven by high threat perceptions, Asian countries continue to expand their military capabilities with an emphasis on maritime assets’, said Siemon Wezeman, Senior Researcher with the SIPRI Arms and Military Expenditure Programme. ‘Asian countries generally still depend on imports of major weapons, which have strongly increased and will remain high in the near future.’

Other notable developments

European arms imports decreased by 36 per cent between 2005–2009 and 2010–14. Developments in Ukraine and Russia may counter this trend after 2014, with several states bordering Russia increasing their arms imports. Germany’s arms exports decreased by 43 per cent between 2005–2009 and 2010–14.
However, it received several large arms orders in 2014 from Middle Eastern states. Arms imports by Azerbaijan increased by 249 per cent between 2005–2009 and 2010–14. African arms imports increased by 45 per cent between 2005–2009 and 2010–14. Between 2005–2009 and 2010–14 Algeria was the largest arms importer in Africa, followed by Morocco, whose arms imports increased elevenfold. Cameroon and Nigeria received arms from several states in order to fulfil their urgent demand for weapons to fight Boko Haram. To fight ISIS, Iraq received arms from countries as diverse as Iran, Russia and the USA in 2014. Deliveries and orders for ballistic missile defence systems increased significantly in 2010–14, notably in the GCC and North East Asia.

For editors

The SIPRI Arms Transfers Database contains information on all international transfers of major conventional weapons (including sales, gifts and production licences) to states, international organizations and armed non-state groups from 1950 to the most recent full calendar year. SIPRI data reflects the volume of deliveries of arms, not the financial value of the deals. As the volume of deliveries can fluctuate significantly year on year, SIPRI presents data for 5-year periods, giving a more stable measure of trends. The comprehensive annual update of the SIPRI Arms Transfers Database is accessible from today. ||||| BEIJING (AP) — China has overtaken Germany to become the world's third-biggest arms exporter, although its 5 percent share remains small compared to the combined 58 percent of exports from the U.S. and Russia, a new study says. China's exports of major arms rose 143 percent between 2005–2009 and 2010–14, a period during which the total volume of global arms transfers rose by 16 percent over the previous five years, the Stockholm International Peace Research Institute said in a report released Monday. Its share of the world market was up from 3 percent in 2005–2009, when China was ranked ninth among exporters of warplanes, ships, side arms and other weaponry, said the institute, known as SIPRI. The data show the growing strength of China's domestic arms industry, now producing fourth-generation fighter jets, navy frigates and a wide range of relatively cheap, simple and reliable smaller weapons used in conflicts around the globe. Responding to the study, Chinese Foreign Ministry spokesman Hong Lei said China took a "cautious approach" to arms exports and abided by relevant U.N. resolutions and domestic laws. "We follow the principle that the export of arms will help increase the recipient country's legitimate self-defense capabilities and not undermine international or regional peace and stability, and we don't intervene in their domestic affairs," Hong said. China had long been a major importer of weapons, mainly from Russia and Ukraine, but its soaring economy and the copying of foreign technology have largely reversed the trend, except for the most cutting-edge designs and sophisticated parts such as aircraft engines. China supplies weapons to 35 countries, led by Pakistan, Bangladesh and Myanmar, SIPRI said. Chinese sales included those of armored vehicles and transport and trainer aircraft to Venezuela, three frigates to Algeria, anti-ship missiles to Indonesia and unmanned combat aerial vehicles, or drones, to Nigeria, which is battling the Boko Haram insurgency in its north. China's comparative advantages include its low prices, easy financing and friendliness toward authoritarian governments, said Philip Saunders, director of the Center for the Study of Chinese Military Affairs at the U.S. National Defense University. "Generally speaking, China offers medium quality weapons systems at affordable prices, a combination attractive to cash-strapped militaries in South Asia, Africa and Latin America," Saunders said. Notable successes include a co-production deal with Pakistan to produce the JF-17 fighter, widespread sales of the basic but effective C-802 anti-ship cruise missile, and an agreement to sell the HQ-9 air defense missile system to Turkey that has run into controversy over its incompatibility with NATO weapons systems. China also has exploited niche markets such as North Korea and Iran that the West won't sell to, emphasizing its attractiveness to impoverished countries and pariah states, said Ian Easton, research fellow at The Project 2049 Institute, an Arlington, Virginia-based Asian security think tank. Both those U.S. foes appear to have received satellite jamming and cyber warfare capabilities from China, along with technologies to break into private communications and spy on government opponents, Easton said. "All of these sales should be very disconcerting to American policymakers and military leaders," he said, calling China's rise to the third-place spot among exporters a "disturbing development" that could threaten the security of the U.S. and its allies. China also offers leading-edge drone technology at competitive prices. One model, known variously as the Yilong, Wing Loong or Pterodactyl, has become especially popular with foreign buyers, although Chinese secrecy surrounding such sales makes it difficult to know how many are in use and where. Chinese state broadcaster CCTV quoted retired People's Liberation Army Gen. Xu Guangyu as saying at an air show two years ago that the unmanned aircraft, which can be armed with two guided missiles, would cost only about $1 million each. That is about 10 to 20 percent of the price of a comparable U.S. model such as the MQ-1 Predator. Rumored buyers include the United Arab Emirates, Uzbekistan and Saudi Arabia. However, China's incremental growth and the yawning gap with industry leaders America and Russia show the limitations of its aspirations. The U.S. retained a 31 percent share of the global arms market, exporting to at least 94 recipients, SIPRI said. Countries in Asia and Oceania took 48 percent of U.S. exports, followed by the Middle East with 32 percent and Europe at 11 percent, it said.
Russia was second with a 27 percent global share, 39 percent of which went to India — the world's largest arms importer overall. China took 11 percent of Russia's exports, followed by Algeria. SIPRI uses a five-year moving average to account for fluctuations in the volume of arms deliveries from year to year and doesn't provide monetary values, which are often distorted by governments providing weapons as gifts or at below-market prices. ||||| ANKARA Turkey is pressing ahead with talks with U.S. and European firms over its first long-range missile defense system, as the preferred Chinese bidder has yet to meet all requirements for the multi-billion dollar project, two officials said on Thursday. NATO member Turkey chose China Precision Machinery Import and Export Corp in 2013 as the preferred candidate for the $3.4 billion deal, prompting U.S. and Western concern about security and the compatibility of the weaponry with NATO systems. Turkey's defense minister said last week it did not plan to integrate the system with NATO infrastructure, only for the presidential spokesman to say days later that the systems would be integrated. One of the defense officials told Reuters on Thursday there were still question marks over the Chinese proposal, particularly around "technology transfer" to boost the Turkish defense industry. "Contacts on this issue are continuing. Securing technology transfer is one of the most important subjects in the tender and on this subject a full guarantee has not been provided," the official said. U.S. and NATO representatives were unhappy with Turkey's choice of China Precision Machinery, which has been under U.S. sanctions for selling items to Iran, Syria or North Korea that are banned under U.S. laws to curb the proliferation of weapons of mass destruction. In addition to bids from the U.S. firm Raytheon Co and the Franco-Italian group Eurosam, the officials told Reuters that Russia, eliminated in the first stage of the tender, was still keen on providing a surface-to-air missile system - a prospect that could also raise concerns in NATO. Eurosam, which is owned by the multinational European missile maker MBDA and France's Thales, came second in the tender. U.S.-listed Raytheon Co also put in an offer with its Patriot missile defense system, which is now operated by 13 countries around the world. One of the officials said defense representatives had gone to Italy at the end of January for talks with Eurosam. "In March, a delegation will go to the United States for talks with the other bidder. Finally, a delegation will go to China and hold talks there," he said. The sources said Russia had renewed its interest in the project. Officials previously said Russia had revised an initial bid and offered to sell Turkey its S-400 medium- to long-range anti-aircraft missile system. However Turkey is not currently holding talks with the Russians. (Reporting by Orhan Coskun and Tulay Karadeniz; Writing by Daren Butler; Editing by Kevin Liffey)
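To illustrate the period-based arithmetic described in the SIPRI note above (annual delivery volumes are aggregated over five-year windows and then compared), here is a small sketch with invented annual figures; it does not use SIPRI's actual trend-indicator values.

    # Sketch of the period-over-period comparison SIPRI describes: annual delivery
    # volumes are summed over five-year windows and the change is expressed as a
    # percentage. The numbers below are invented for illustration only.

    annual_volume = {            # hypothetical delivery volume per year
        2005: 200, 2006: 210, 2007: 190, 2008: 205, 2009: 195,
        2010: 260, 2011: 280, 2012: 300, 2013: 310, 2014: 330,
    }

    def period_total(volumes, start, end):
        """Sum annual volumes over an inclusive range of years."""
        return sum(volumes[y] for y in range(start, end + 1))

    earlier = period_total(annual_volume, 2005, 2009)
    later = period_total(annual_volume, 2010, 2014)
    change = (later - earlier) / earlier * 100
    print(f"2005-2009 total: {earlier}, 2010-14 total: {later}, change: {change:.0f}%")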
– From 2005 to 2009, China reigned as the world's top weapons importer, but since then, it's fallen to third place—while sliding into the third slot on another list: It's displaced Germany as the third-largest weapons exporter. China retains only 5% of the global export market—the US and Russia hold 31% and 27%, respectively—but a new report by the Stockholm International Peace Research Institute shows China's exports of major arms jumped 143% between the period of 2005-2009 and that of 2010-2014. "One of the concerns about China is not just that they are modernizing—we don't anticipate a conflict with China, certainly—but [that] they export," the chief weapons buyer for the Pentagon warned Congress last year, per the Wall Street Journal. China has been able to make its break by marketing weapons at international fairs—Chinese firms attended the International Defense Exhibition in the UAE last month, the Journal points out—and looking beyond its usual customer base: Turkey is looking to buy its first long-range missile defense system and China and its $3.4 billion proposal are in the lead, Reuters reports. China also sells weapons to "niche markets" that the West snubs, including North Korea and Iran, the AP notes; still, the Journal reports the majority of China's exports go to Pakistan, Bangladesh, and Burma. "The equipment you get nowadays from China is much better than 10-15 years ago," a senior researcher at SIPRI tells the Journal, adding that those seeking weapons can get a better deal buying comparable arms from China rather than Russia or the US.
in non - small cell lung cancer ( nsclc ) , proven ipsi- ( n2 ) or contralateral ( n3 ) mediastinal lymph node involvement often precludes cure by surgery . 2-deoxy-2-[f-18]fluoro - d - glucose - positron emission tomography ( fdg - pet ) is used to stage nsclc patients . the yield of whole - body pet pertains to typing the primary pulmonary lesion and to the preoperative identification of distant and lymph node metastases . moreover , pet simplifies and improves lymph node evaluation by setting the indication for biopsy and improving its yield . mediastinoscopy is the standard technique of invasive lymph node staging but the results in daily practice are quite variable . it has been suggested that the proportion of tumor - positive procedures increases if guided by pet [ 2 , 3 ] . so far , mediastinoscopy is the most often used invasive method , but more recently , endoscopic techniques [ like transesophageal ultrasound - guided fine needle aspiration ( eus - fna ) ] have been developed . because the mediastinal areas covered by mediastinoscopy and eus - fna are largely complementary , proper localization of possible malignant nodes is important to assign patients to the appropriate procedure . fdg - pet criteria of test positivity for mediastinal lymph node staging are based on recognition of focally enhanced uptake ( hot spots ) vs. background , rather than on quantitative assessment ( like the 1-cm short axis criterion with ct scanning ) . results from pet studies pertaining to its accuracy in mediastinal staging are robust , but as the technique is disseminating , observer variation and learning curves still need to be documented . the aim of the present study was to measure the observer agreement and accuracy vs. expert readings of mediastinal lymph nodes in nsclc staging with fdg - pet at various levels of complexity and as a function of experience . we used a set of 30 pet scans from the study by joshi et al . of consecutive patients referred for staging to the department of nuclear medicine and pet research of the vrije university medical centre . to obtain an adequate case mix , we included scans of patients with a range of mediastinal lymph node sizes at ct scanning : ( 1 ) ≤ 10 mm short axis diameter ( n = 10 ) , ( 2 ) 10.1–15 mm ( n = 10 ) , and ( 3 ) > 15 mm ( n = 10 ) . pet scans had been performed according to the standard protocol in our institution using a full ring bgo pet scanner ( ecat exact hr+ , cti / siemens , starting 60 min after 370 mbq fdg ) . the scans were analyzed by 14 nuclear medicine physicians who had extensive experience with spect but variable expertise with pet and mediastinal lymph node staging in nsclc : seven had no personal experience with pet ( the inexperienced group ) , whereas the others had at least 1 year of experience with pet in nsclc patients in their own clinical practice , which comprised access to mobile pet once every 1 or 2 weeks ( the more experienced group ) . on average , the inexperienced group had reviewed 0–15 pet scans each , compared to 100–150 ( with at least 50% nsclc ) each in the experienced group . prior to this study , the observers had been instructed in workshops by two expert pet readers , a pulmonologist , and a surgeon about the concepts , principles , and practice of mediastinal staging in nsclc by pet and other methods . the results of all observers were compared to the combined judgment of two expert nuclear medicine physicians ( efc and osh ) , and the latter readings were used as the gold standard .
the expert readers had been working together in the same university hospital for numerous years and had a broad experience with pet [ 6–8 ] . we developed a software tool running matlab 5.3 , which allowed simultaneous visualization of pet images in the axial , coronal , and sagittal planes ( at 5 or 10 mm slice thickness ) , with possible cross linking . each observer was requested to identify and interpret any abnormal hot spot representing primary tumor or lymph node , blinded for the results of the other readers . this software tool was installed on the personal computer of each observer , and the results were electronically stored for analysis . to be able to accurately relate the results of different observers , the coordinates of each hot spot identified by an observer were stored and linked to the assigned interpretation . because none of the observers had worked with this software before , we provided a test set ( derived from the original data set ) of three scans to each observer prior to the study . these three scans comprised 29 separate abnormal mediastinal lymph node localizations and therefore provided an adequate way to practice working with naruke 's map of lymph node localizations ( adapted from mountain and dresler ) . observers had knowledge of the clinical information provided with the original pet scan referral , except for the mediastinal stage at ct . the observers were asked to interpret abnormal hot spots pertaining to the primary tumor and lymph nodes in terms of their localization and likelihood of malignancy using the classification systems shown in table 1 . furthermore , observers were asked to formulate a recommendation with respect to the next management step to the referring clinician ( table 1 ) . in this context , we instructed them to use the following protocol : ( 1 ) recommend biopsy of mediastinal lymph nodes in case of suspected ( hilar or mediastinal ) lymph node involvement and in case of tumors adjacent to the mediastinum or hilus , ( 2 ) recommend thoracotomy in case of a peripheral primary tumor without suspicious mediastinal lymph nodes at pet , and ( 3 ) recommend an expectative ( wait and see ) policy in case pet shows no abnormal uptake in either the primary site or the lymph nodes . for the purpose of the present investigation , they were instructed to ignore possible suspicious extrathoracic localizations in these management considerations .
table 1 . classification system of tumor and lymph nodes ( lymph node stations according to the map of lymph node definitions by mountain and dresler )
  characteristic                classification
  primary tumor presence        no tumor present ; primary tumor ; second primary
  localization                  peripheral ; adjacent to mediastinum ; adjacent to hilus
  lymph node localization       no lymph nodes present ; n1 l/r ; n2 l/r ; n3 ; n4 l/r ; n5/* ; n6/* ; n7 ; n8 l/r ; n9 l/r ; n10 l/r ; clavicular l/r
  likelihood of malignancy      definitely benign ; probably benign ; equivocal ; probably malignant ; definitely malignant
  management recommendation    invasive lymph node evaluation ; thoracotomy ; expectative policy

using the individual scores of the observers , we assigned an n - stage according to pet for each observer and each patient using the following classification :
  n0 , peripheral primary tumor , no mediastinal hot spot
  n1 , peripheral primary tumor and separate hot spot considered to be a hilar lymph node
  n0 - n1 , primary tumor within hilar area , no separate mediastinal hot spot
  n0 - n2 , primary tumor adjacent to mediastinum , no separate mediastinal hot spot
  n2 , hot spot compatible with ipsilateral mediastinal lymph node
  n3 , hot spot compatible with contralateral mediastinal or clavicular lymph node
we performed a more detailed analysis of the nature of the errors in the management recommendation classification vs. the expert reading , identifying whether these errors followed the observers ' own interpretation of suspicious lymph node stations or resulted from true errors ( protocol violations ) . for example , the former situation occurred if , in case of a peripheral primary tumor , an observer considered the ipsilateral right lower tracheobronchial station to be positive at pet , whereas the expert only identified the primary lesion . the resulting discrepant management recommendations ( mediastinoscopy vs. thoracotomy , respectively ) directly flow from these classifications . if , however , this observer had advised proceeding directly to thoracotomy , this was considered a protocol violation ( p ) . we also measured how accurately readers could define and localize suspected mediastinal lymph node stations at pet . compatible with known limitations of pet with respect to spatial resolution and accounting for different levels of clinical relevance , we accepted the following differences of nodal classifications : naruke stations 1 and 2 [ left ( l ) / right ( r ) , respectively ] , 4r and 10r , 4l and 10l and 5 , and 8 and 9 ( l / r , respectively ) . using this simplified system , we analyzed whether observers defined and localized suspected lymph node metastases vs. the expert readings correctly , incorrectly , or not at all .
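To make the per-scan n - stage assignment listed above concrete, here is a minimal sketch in Python. The study applied these rules by hand to each observer's scored hot spots, so the function and its argument names are purely illustrative assumptions, not part of the original analysis.

    # Minimal sketch of the per-scan N-stage assignment rules listed above.
    # Function and argument names are illustrative only.

    def assign_n_stage(primary_location: str,
                       hilar_node: bool,
                       ipsilateral_mediastinal_node: bool,
                       contralateral_or_clavicular_node: bool) -> str:
        """Map an observer's findings for one scan to the study's N-stage classes."""
        if contralateral_or_clavicular_node:
            return "N3"      # contralateral mediastinal or clavicular hot spot
        if ipsilateral_mediastinal_node:
            return "N2"      # ipsilateral mediastinal hot spot
        if primary_location == "adjacent to mediastinum":
            return "N0-N2"   # mediastinum cannot be judged separately from the tumour
        if primary_location == "adjacent to hilus":
            return "N0-N1"   # hilus cannot be judged separately from the tumour
        if hilar_node:
            return "N1"      # peripheral tumour with a separate hilar hot spot
        return "N0"          # peripheral tumour, no nodal hot spots

    # example: peripheral tumour with one ipsilateral mediastinal hot spot
    print(assign_n_stage("peripheral", hilar_node=False,
                         ipsilateral_mediastinal_node=True,
                         contralateral_or_clavicular_node=False))  # -> N2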
statistical analysis was done with spss version 13.0 software . to determine interobserver agreement regarding management recommendation and n - stage , and to compare this to expert readings , we calculated the kappa coefficients , using agree version 7.2 . furthermore , to detect potential differences between the two groups of observers with different pet experience with respect to the nature of the management recommendation errors and the classification of separate mediastinal hot spots , we used the wilcoxon - mann - whitney test .

the 30 pet scans comprised a total of 89 locations of suspected malignancy , according to the gold standard ( expert reading ) . thirty - four represented tumor locations , 55 were lymph nodes ( 10 hilar , 39 mediastinal , and six supraclavicular ) . according to expert readers , there was a mean of three sites ( primary lesion and lymph nodes ) per patient ( range 1–13 ) . the experts classified ( according to table 1 ) 82 lesions as definitely malignant , five as probably malignant , and two as equivocal . in the final analysis , these probably and definitely malignant locations were classified as malignant . n2 , and five n3 , according to the definitions mentioned earlier . management recommendations were correct in 80% of cases ( 86 errors out of 420 recommendations , 42 in the experienced group and 44 in the inexperienced group ) . the accuracy vs. expert reading was moderate ( kappa 0.59 ) at either level of experience ( table 2 ) . the level of agreement among inexperienced observers tended to be lower but did not reach significance . four scans accounted for a total of 38 errors ( 44% ) , while not a single mistake by any observer was made in eight .

table 2 . interobserver agreement and accuracy as a function of experience with respect to the classification of n - stage and management recommendation
                                      inexperienced observers ( n = 7 )   experienced observers ( n = 7 )   overall
  management recommendation
    agreement vs. expert ( a )        0.60 ( 0.42–0.77 )                  0.58 ( 0.37–0.79 )                0.59 ( 0.42–0.76 )
    pairwise agreement ( a )          0.48 ( 0.35–0.62 )                  0.56 ( 0.41–0.71 )                0.50 ( 0.37–0.63 )
  n - stage
    agreement vs. expert ( b )        0.58 ( 0.36–0.80 )                  0.72 ( 0.55–0.88 )                0.65 ( 0.47–0.83 )
    pairwise agreement ( b )          0.56 ( 0.44–0.68 )                  0.61 ( 0.49–0.74 )                0.58 ( 0.46–0.69 )
  ( a ) kappa ( 95% confidence interval ) ; ( b ) weighted kappa ( 95% confidence interval )
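For illustration, agreement statistics of the kind shown in table 2 can be computed as follows. The study used the AGREE 7.2 program; the scikit-learn sketch below, with invented readings and linear weights for the ordered N-stage classes, is only an assumed stand-in to show the calculation.

    # Sketch of the agreement statistics reported above: Cohen's kappa for the
    # management recommendation and a weighted kappa for the ordered N-stage
    # classes. The readings below are invented; linear weights are an assumption.

    from sklearn.metrics import cohen_kappa_score

    # hypothetical per-scan readings (expert vs. one observer)
    expert_advice   = ["biopsy", "thoracotomy", "biopsy", "wait", "biopsy", "thoracotomy"]
    observer_advice = ["biopsy", "biopsy",      "biopsy", "wait", "thoracotomy", "thoracotomy"]
    print("management kappa:", round(cohen_kappa_score(expert_advice, observer_advice), 2))

    # N-stage treated as ordered categories, so a weighted kappa penalises
    # near-misses less than gross errors
    stage_order = {"N0": 0, "N0-N1": 1, "N1": 2, "N0-N2": 3, "N2": 4, "N3": 5}
    expert_stage   = ["N0", "N2", "N1", "N3", "N0", "N2"]
    observer_stage = ["N0", "N2", "N2", "N2", "N0", "N3"]
    print("weighted N-stage kappa:",
          round(cohen_kappa_score([stage_order[s] for s in expert_stage],
                                  [stage_order[s] for s in observer_stage],
                                  weights="linear"), 2))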
in the group of inexperienced readers , 29 ( of 44 ; 66% ) of the incorrect management recommendations were protocol violations ( type p ) , vs. 17 ( of 42 ; 40% ) in the experienced readers group ( p = 0.12 ) . on the contrary , errors that directly flowed from reading errors ( type m ) were significantly more prevalent in the group of experienced readers ( 25 out of 42 = 59% ) , vs. 15 out of 44 ( 34% ) in the inexperienced readers group ( p = 0.03 ) . a common error ( type p , protocol violation ) was , for example , to recommend an expectative policy or to proceed directly to thoracotomy in a patient without enhanced pet uptake in the primary tumor and mediastinal lymph nodes , although the provided clinical information stated that bronchoalveolar cell carcinoma had been proven histologically ; in such a case , mediastinal lymph node evaluation should have been recommended , because the mediastinum of a patient with adenocarcinoma without fdg uptake of the primary tumor can not be reliably evaluated , so that histological confirmation of the mediastinum is required .

n - stage classifications were correct in 68% of cases ( 286 out of 420 assigned n - stages , 138 in the inexperienced group and 148 in the experienced group ) . experienced observers tended to have a better agreement with the expert reading than inexperienced ones ( weighted kappas 0.72 and 0.58 , respectively ) . n - stages were overestimated in 17.4% ( 16.7% by the experienced and 18.1% by the inexperienced observers ) and underestimated in 14.5% of cases ( 12.9 and 16.2% , respectively ) . the individual scores of the observers ( table 3 ) reveal that errors in either direction were made by most of them .

table 3 . details on n - stage ( using the classification system described in the methods section ) in 30 scans for each observer
  observer                   n - stage classified correctly [ % ( n ) ]   n - stage overestimated [ % ( n ) ]
  inexperienced observers
    inexp 1                  70.0 ( 21 )                                  20.0 ( 6 )
    inexp 2                  56.7 ( 17 )                                  20.0 ( 6 )
    inexp 3                  70.0 ( 21 )                                  13.3 ( 4 )
    inexp 4                  63.3 ( 19 )                                  23.3 ( 7 )
    inexp 5                  66.7 ( 20 )                                  20.0 ( 6 )
    inexp 6                  66.7 ( 20 )                                  20.0 ( 6 )
    inexp 7                  66.7 ( 20 )                                  10.0 ( 3 )
    total                    65.7 ( 138 )                                 18.1 ( 38 )
  experienced observers
    exp 1                    63.3 ( 19 )                                  30.0 ( 9 )
    exp 2                    76.7 ( 23 )                                  6.7 ( 2 )
    exp 3                    73.3 ( 22 )                                  10.0 ( 6 )
    exp 5                    73.3 ( 22 )                                  13.3 ( 4 )
    exp 6                    73.3 ( 22 )                                  16.7 ( 5 )
    exp 7                    60.0 ( 18 )                                  20.0 ( 6 )
    total                    70.5 ( 148 )                                 16.7 ( 35 )
  percentages ( numbers ) of n - stages classified correctly or overestimated vs. expert reading

because we used three scans to practice on localizing mediastinal lymph nodes , 27 scans remained with 26 separate lymph node localizations . the detection rate of individual mediastinal lymph node stations was similar for inexperienced and experienced observers ( 71 and 74% , respectively ; table 4 ) , and the variation within the groups was also comparable . however , experienced readers were better at localizing the stations than inexperienced readers were ( correct in 68 vs. 51% , respectively ) . the most common mislocalizations ( table 5 ) were to classify right tracheobronchial stations ( 4r ) as upper - right paratracheal ( 2r ) , subcarinal ( 7 ) as right tracheobronchial ( 4r ) , and left para - esophageal ( 8/9l ) as left tracheobronchial ( 4l ) .
table 4 . accuracy of inexperienced and experienced observers to detect and localize the 26 mediastinal lymph node stations present according to the expert reading
  observer                   identified [ % ( n ) ]   correctly localized [ % ( n ) ]
  inexperienced observers
    inexp 1                  76.9 ( 20 )              30.0 ( 6 )
    inexp 2                  84.6 ( 22 )              63.6 ( 14 )
    inexp 3                  61.5 ( 16 )              62.5 ( 10 )
    inexp 4                  80.8 ( 21 )              23.8 ( 5 )
    inexp 5                  69.2 ( 18 )              55.6 ( 10 )
    inexp 6                  65.4 ( 17 )              64.7 ( 11 )
    inexp 7                  57.7 ( 15 )              66.7 ( 10 )
    total                    70.9 ( 129 )             51.2 ( 66 )
  experienced observers
    exp 1                    76.9 ( 20 )              65.0 ( 13 )
    exp 2                    61.5 ( 16 )              81.3 ( 13 )
    exp 3                    69.2 ( 18 )              83.3 ( 15 )
    exp 4                    73.1 ( 19 )              89.5 ( 17 )
    exp 5                    84.6 ( 22 )              77.3 ( 17 )
    exp 6                    80.8 ( 21 )              42.9 ( 9 )
    exp 7                    69.2 ( 18 )              38.9 ( 7 )
    total                    73.6 ( 134 )             67.9 ( 91 )
  identified = percentage ( number ) of the 26 nodal stations identified vs. the expert reading ; correctly localized = percentage ( number ) of the identified stations that were localized correctly ( e.g. , inexp 1 identified 20 out of the 26 stations , and 6 out of 20 were localized correctly )

table 5 . mediastinal lymph node stations by experienced and inexperienced observers , according to mountain and dresler : cross - tabulation of the expert 's stations ( rows : 2 r ( 1 r ) , 4 l ( 5 , 10 l ) , 4 r ( 10 r ) , 7 , 8 r ( 9 r ) , 8 l ( 9 l ) , sc ) against the observers ' classifications ( columns : 2r , 3 , 4l , 4r , 6 , 7 , 8r , 8l , sc , t , missed ) , using the simplified system mentioned in the materials and methods section regarding the acceptance of different lymph node classifications , consistent with clinical practice . ca = correct alternative according to the simplified system ; sc = supra- or infraclavicular lymph nodes ; t = observer identified the pertaining mediastinal lymph node as primary or second primary tumor .

observer variation is the achilles heel of diagnostic imaging and especially of tests that apply visual interpretation . it is therefore surprising that the clinical pet literature contains few studies on observer variation beyond the level of occasional reports on variation between two observers participating in an accuracy study . the present study reports on the results of 14 observers stratified by their experience with pet , and it accounts for several aspects of the clinical context of nsclc staging ( management recommendation , n - stage , nodal stations ) . we found that the accuracy ( vs. expert reading ) was moderate to substantial at moderate levels of interobserver agreement . our results suggest that clinical experience with pet improves the ability of readers to localize mediastinal hot spots correctly , and this is relevant with respect to the next clinical step , i.e. ,
to decide which invasive verification method should follow and to enhance the yield of such procedures . moreover , within the more experienced group , the agreement in assigning n - stages and management recommendations tended to be better . finally , familiarity with clinical practice and staging protocols for nsclc patients may have contributed to fewer inconsistencies in management recommendations . our management advice constructs were designed to account for generally recognized limitations of pet in mediastinal staging . with slightly different endpoints , the interobserver agreement of ct reading appears to be similar to what we have reported for pet : in ct evaluation of mediastinal lymph node size , guyatt et al . reported a kappa of 0.61 regarding the presence of any nodes greater than 1 cm on ct scan . however , agreement in different nodal groups varied widely , and it appeared to be far more difficult for the left superior mediastinal nodes . in our study , we found that some mistakes were made relatively more often regarding localizing separate lymph nodes ( table 5 ) . with the increasing number of clinical methods to verify imaging findings ( transesophageal , transbronchial eus - fna , mediastinoscopy , video - assisted thoracoscopy ) , the relevance of interpreting images at the nodal level is growing . pet - ct helps to improve the yield of pet and ct reading in patients newly presenting with lung cancer , but also in restaging after neoadjuvant therapy . using pet - ct in this study , instead of pet alone , would probably have been more clinically relevant . however , we believe that the errors related to localizing suspicious foci will improve with pet - ct , whereas this is not the case for detection and interpretation errors . other limitations of our study were the relative unfamiliarity of the observers with the display and registration software and , perhaps , the lack of standardized computer screens . in the netherlands , the availability of fdg - pet is rapidly expanding , even in smaller hospitals , and this has major implications for local nuclear medicine physicians , as well as for residents . to our knowledge , the duration of time that is needed before results on pet are adequately reviewed and interpreted ( the learning curve ) by nuclear medicine physicians is unknown . we had anticipated striking differences between experienced and inexperienced readers , but this was not the case . however , there was obvious room for improvement in the experienced group , and we suggest that optimal performance is not acquired by experience alone but requires higher levels of direct feedback . we propose that such feedback could be achieved efficiently in experimental settings like those applied in our study . we believe that data sets like that of the present study should play a key role in the training of residents , because they can learn and demonstrate improving skills at any time during their training . however , in the dutch setting , for example , this requires that residents spend more time in such skill labs and less in daily clinical production . emerging alternatives to invasively stage the mediastinum in nsclc put high demands on the skills needed to interpret pet and ct scans in these patients . observer variation of pet in mediastinal staging appears to be similar to that of ct reading , as reported in the literature , with obvious room for improvement .
training of imaging specialists may require higher levels of feedback , which can be obtained more efficiently in skill labs using existing databases than is currently achievable in local daily clinical practice .
purpose : to test the extent of variation among nuclear medicine physicians with respect to staging non - small cell lung cancer with positron emission tomography ( pet ) .
procedures : two groups of nuclear medicine physicians with different levels of pet experience reviewed 30 pet scans . they were requested to identify and localize suspicious mediastinal lymph nodes ( mln ) using standardized algorithms . results were compared between the two groups , between individuals , and with expert reading .
results : overall , we found good interobserver agreement ( kappa 0.65 ) . experience with pet translated into a better ability to localize mln stations ( 68% vs. 51% , respectively ) , and experienced readers appeared to be more familiar with translating pet readings into clinically useful statements .
conclusions : although our results suggest that clinical experience with pet increases observers ' ability to read and interpret results from pet adequately , there is room for improvement . experience with pet does not necessarily improve the accuracy of image interpretation .
preservation of primary teeth until the eruption of the permanent teeth is desirable since they help to determine the shape of the dental arches , act as natural space maintainers between teeth , prevent detrimental tongue and speech habits , conserve esthetics , and maintain chewing function . apart from that , they play an important role in the growth , development , and maturation of the entire facioskeletal complex . over the years , formocresol ( fc ) has remained the gold standard for the pulpotomy procedure due to its very high and consistent results , which date back more than a century . despite fc 's high success rate and its position as the gold standard in pulpotomy , a substantial shift away from the use of this medicament has been seen for several reasons . it contains formaldehyde , which is regarded as a potentially carcinogenic and mutagenic compound , and its use in dentistry is therefore of great concern . in 2004 , disquiet among dental professionals grew after the international agency for research on cancer classified formaldehyde as carcinogenic to humans . there is sufficient evidence linking it to cancer of the nasopharynx , nasal cavity , and paranasal air sinuses , as well as leukemia . alternative agents have been studied from 1975 , when gravenmade introduced glutaraldehyde , to as recently as 2012 , when the extract of the ankaferd blood stopper plant was evaluated , but none of these materials has been regarded as an ideal pulpotomy agent . in contrast , mineral trioxide aggregate ( mta ) , introduced by torabinejad et al . , was found to be a very successful pulpotomy agent due to its excellent biocompatibility , dentin bridge formation , and shorter procedural time , with a 100% clinical and radiographic success rate . in addition , it demonstrates superior long - term treatment outcomes in pulpotomy of primary molars compared with ferric sulfate . however , it has certain drawbacks : it is expensive and thus out of reach for a large section of the indian population , and it exhibits a short shelf - life since it is sensitive to moisture . to overcome the shortcomings of mta , attention has turned toward natural products , and various ayurvedic and medicinal plants have been extensively studied . one such plant is aloe vera ( aloe barbadensis mill , family liliaceae ) . aloe , native to africa , is also known as the lily of the desert , the plant of immortality , and the medicinal plant . around 2000 years ago , the greek scientists regarded a. vera as the universal panacea . it contains 75 potentially active constituents , such as vitamins , enzymes , minerals , sugars , anthraquinones , fatty acids , hormones , and other useful substances . due to these constituents , it is widely valued for its anti - inflammatory , anti - bacterial , anti - fungal , and anti - viral properties and for its protective action against a broad range of microorganisms . it is also used to promote the healing of skin burns and wounds . in an animal model , gala - garcia et al . evaluated the effect of a. vera on rat pulp tissue and found it to have acceptable biocompatibility and the ability to induce tertiary dentin bridge formation . in addition , in another in vivo human study , gupta et al . evaluated freshly extracted a. vera leaf gel as a pulpotomy medicament in primary molar teeth .
clinical and radiographic evaluation of all the pulpotomized teeth was carried out for about 1 month , followed by histopathological evaluation . they concluded that freshly extracted a. vera gel can be used as a successful pulpotomy agent . to the best of our knowledge , to date there are no studies available in the literature comparing the efficacy of fresh a. barbadensis plant extract and mta as pulpotomy agents in primary teeth . hence , this study was undertaken to evaluate and compare the clinical and radiographic outcomes of fresh a. barbadensis plant extract and mta as pulpotomy agents in primary molars . the objective of this study was to introduce a natural alternative to commercially available pulpotomy agents for primary teeth that may be potent , economical , and have minimal side effects . the research protocol was reviewed and approved by the institutional ethics committee prior to the beginning of this study . parents or legally responsible persons of the selected participants received detailed information about the study and the materials being used , along with their advantages , disadvantages , limitations , and drawbacks , and signed a free informed consent form permitting the participation of their children . after screening 127 children in the age group of 4–10 years who reported to the department of pedodontics and preventive dentistry , a total of 48 children who met the inclusion criteria and required a pulpotomy procedure in sixty primary molars were included in the study . the sample size was calculated after a power analysis , which yielded a power of more than 80% for this study . the selected tooth in each patient was isolated with the help of a rubber dam , followed by application of caries detector dye ( reveal caries indicator , prevest denpro limited , jammu , jammu and kashmir , india ) with an applicator tip . after 1 min , the caries detector dye was washed away with distilled water , and the stained dentin was removed with the help of a no . 557 round bur to evaluate the depth of the carious lesion and the condition of the deepest layer of carious dentin . if the deepest layer of carious dentin was darkly stained , shiny , and hard on exploration , indicating arrested caries , the caries was left in place , a permanent restoration was carried out , and the tooth was excluded from the study . in teeth where carious dentin was lightly stained , dull , and soft on exploration , the carious dentin was slowly removed , and in case of an accidental exposure of the pulp , the tooth was considered for pulpotomy . teeth with a preexisting carious pulp exposure were likewise considered for pulpotomy . once the tooth was selected for pulpotomy , local anesthesia ( lignox 2% adrenaline 1:80,000 , warren indoco remedies ltd . , mumbai , india ) was administered after removal of the rubber dam , which was later reapplied for the pulpotomy procedure . access to the pulp chamber was gained with a no . 558 straight fissure bur , and the coronal pulp was amputated with the help of a sharp sterile spoon excavator . the initial pulpal bleeding was arrested by application of pressure over the root canal orifices with a cotton pellet moistened in saline for 4 min . upon removal of the cotton pellet , if hemostasis was achieved , the tooth was assigned to either a. vera ( group a ) or mta ( group b ) pulpotomy through a randomization protocol , with an allocation ratio of 1:1 , in which the parent selected a color - coded stick , indicating the medicament , from an opaque bag . in cases where no hemostasis was achieved at the end of 4 min ,
A healthy plant of pure A. barbadensis Mill., approximately 4 years old and certified by the Indian Agricultural Research Institute, New Delhi, India, was procured at regular intervals from this institute throughout the study period. From the whole plant, a healthy leaf was selected and cut from its stem base, cleaned with 70% ethyl alcohol, and stored in distilled water for 1 h to eliminate aloin. After 1 h, the outer green rind was removed with a sterile Bard-Parker blade, and the knife was introduced into the inner mucilage layer as described by Ramachandran and Rao. The mucilage, or inner clear jelly-like substance, approximately 10 mm, was removed and washed again. In the experimental group, the A. vera gel placed over the pulp stumps was further covered with a layer of collagen sponge (AbGel, Sri Gopal Krishna Labs Pvt. Ltd., Mumbai, India), followed by a glass ionomer cement restoration (Type 2; Ketac Molar Easymix, 3M ESPE, Germany). A stainless steel crown was cemented onto the pulpotomized tooth as the final restoration in the same appointment. In the control group, MTA (Angelus Industries Limited, Brazil) powder was mixed with distilled water into a thick paste as per the manufacturer's instructions and placed onto the pulp stumps. After its initial setting, a glass ionomer cement restoration (Type 2; Ketac Molar Easymix, 3M ESPE, Germany) was placed, and the pulpotomized tooth was prepared to receive a stainless steel crown, which was cemented in the same appointment. All pulpotomized teeth were evaluated clinically and radiographically using predetermined criteria (clinical: pain, tenderness, mobility, swelling, sinus; radiographic: widening of the periodontal ligament space, radiolucency, root resorption, pulp obliteration) after 1, 3, 6, 9, and 12 months. The data obtained were tabulated and subjected to statistical analysis using the Statistical Package for the Social Sciences software, version 17 (SPSS Inc.). Pearson's chi-square test and Fisher's exact test were used to compare success between the groups, and the significance level of all statistical tests was set at p ≤ 0.05 (5%). The mean age of the children who participated in this study was 6.5 ± 1.2 years, with 33 boys and 15 girls across the two groups. Tables 2 and 3 summarize the distribution of the number of teeth with clinical and radiographic success or failure [Figures 1-4] at the various follow-up periods and the overall (clinical and radiographic) success rates observed in Group A and Group B. All teeth that experienced clinical failure were excluded from the study immediately, whereas teeth presenting with only radiographic failure were kept under observation and carried to the next follow-up period. It is evident from these tables that the combined (clinical and radiographic) success rates of A. vera pulpotomy and MTA pulpotomy were 6.9% and 71.4%, respectively, after taking the dropout teeth into consideration, and the difference was statistically significant (p < 0.001).
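For readers who want to reproduce this kind of between-group comparison, the sketch below applies Pearson's chi-square and Fisher's exact tests to a 2 × 2 success/failure table. The counts are hypothetical placeholders (the per-group denominators after dropouts are not reproduced in this text); only the procedure mirrors the analysis described above.

```python
# Hypothetical example: comparing pulpotomy success between two groups
# with Pearson's chi-square and Fisher's exact test (as in the study design).
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: Group A (A. vera), Group B (MTA); columns: success, failure.
# These counts are illustrative only, not the study's actual data.
table = np.array([[2, 27],
                  [20, 8]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"Chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi2:.4f}")
print(f"Fisher's exact test: odds ratio = {odds_ratio:.3f}, p = {p_fisher:.4f}")
print("Significant at p <= 0.05" if min(p_chi2, p_fisher) <= 0.05 else "Not significant")
```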
[Table 1: Distribution of teeth available at the various follow-up intervals (1, 3, 6, 9, and 12 months). Tables 2 and 3: Summary of the distribution of the number of teeth with clinical and radiographic success or failure at the various follow-up periods, and overall success rate (clinical and radiographic, in percent) observed in Group A (Aloe vera) and Group B (mineral trioxide aggregate). Figure 1: Composite digital image showing failure (swelling) at the 1-month follow-up of a pulpotomized tooth treated with Aloe vera plant extract. Figure 2: Composite digital radiographic image showing the postoperative state and failure within the 1-month follow-up period of a pulpotomized tooth treated with Aloe vera plant extract (Group A). Figure 3: Composite digital radiographic image showing various follow-up periods of a pulpotomized tooth treated with Aloe vera plant extract (Group A). Figure 4: Composite digital radiographic image showing the postoperative state and failure within the 1-month follow-up period of a pulpotomized tooth treated with mineral trioxide aggregate (Group B).] The ideal pulp dressing material after pulpotomy should leave the radicular pulp vital, healthy, and enclosed within an odontoblast-lined dentin chamber. It should also be bactericidal, be harmless to the radicular pulp and surrounding tissues, promote healing of the radicular pulp, and not interfere with the normal physiological process of root resorption. Many materials have been investigated as possible replacements for FC because of concerns regarding its systemic distribution and its potential for toxicity, carcinogenicity, and mutagenicity. Varying success rates and concerns regarding the safety of these materials make it clear that additional research on such pharmaco-therapeutic agents is necessary. Plants possessing medicinal properties have been widely employed in folk medicine and as a therapeutic resource in primary care. In the search for newer materials, there is rapidly increasing interest in and research on these medicinal plants, on the expectation that they will not exert any deleterious effect on the adjoining living tissues. Hence, the present study was undertaken to compare the clinical and radiographic outcomes of fresh A. barbadensis plant extract and MTA as pulpotomy medicaments in primary molar teeth. The overall success rate of MTA pulpotomy at the end of the 12-month follow-up was 71.4% after taking the dropout patients into consideration. In contrast, Sonmez et al. reported a lower success rate of 67%; they placed a wet cotton pellet over the MTA for 1 day and placed the final restoration the next day. The high success rate of MTA pulpotomy can be attributed to its dentin bridge formation, excellent biocompatibility, and alkalinity, although its good sealing ability without any sign of solubility is its most distinctive feature. Only one failure was recorded in the MTA group, at the end of the 1-month follow-up. The failure was due to internal resorption in one of the root canals of the tooth [Figure 4]. The cause of failure can be attributed to the medicament itself, which is in accordance with the results obtained by Holan et al. From ancient times, the A. vera plant extract has been used extensively by human beings as food as well as medicine for the treatment of a variety of systemic diseases; around 2000 years ago, Greek scientists regarded A. vera as a universal panacea.
Shelton reviewed its chemical and therapeutic properties and concluded that A. vera gel is nontoxic and is bactericidal, virucidal, and fungicidal against a broad range of microorganisms. It can be used as an anti-inflammatory agent, a moisturizing agent, and a wound-healing agent. In dentistry, it is used as a healing agent in treating aphthous ulcers, extraction sockets, and chronic oral lesions, as well as in the treatment of lichen planus. The anti-inflammatory role of the steroids present in A. vera gel is well established, leading to the production of low levels of prostaglandins. In view of these properties, A. vera was applied directly to the pulp in the present study. The dental pulp is a specialized loose connective tissue containing cells, fibers, ground substance, blood vessels, and nerve endings, and its components are very similar to those found in the dermal and epidermal layers of the skin. Hence, in the present study, the pulpal stump was treated as an open wound, and fresh A. vera gel was applied directly over it. No complications or side effects have been reported, clinically or histologically, when A. vera has been used in contact with animal or human pulp tissue. A. vera gel exhibits a limited shelf-life because of rapid oxidation on exposure to the external environment and because microbial interaction further degrades the gel and its constituents. The mucilaginous layer, or inner clear jelly substance, of A. vera contains the highest concentration of potentially beneficial components and holds the secret to the plant's medicinal properties; hence only this layer was placed onto the pulp stumps. Since the gel is semisolid in nature, an absorbable collagen sponge was placed over it. In contrast to these expectations, however, we observed 75% failure (both clinical and radiographic) in the A. vera group within 1 month of follow-up [Figures 1-3 and Table 2]. The overall success rate of A. vera pulpotomy at the end of the 12-month follow-up was 6.9% after taking dropout patients into consideration. These results contradict those of Gupta et al., who concluded that freshly extracted A. vera gel can be used as a successful pulpotomy agent, and of Gala-Garcia et al., who concluded that A. vera placed in direct contact with exposed rat pulp tissue has acceptable biocompatibility and can lead to tertiary dentin bridge formation; the latter attributed this result to bioactive substances such as glycoproteins, polysaccharides, and beta-sitosterol that stimulate wound healing, cell proliferation, and angiogenesis. The teeth presenting with only radiographic failures in this study were not treated immediately and were observed at further follow-up, as they were asymptomatic and showed no sign of clinical failure. The difference between the overall success rates of A. vera and MTA pulpotomy was statistically significant (p < 0.001) at all follow-up periods. Failures in pulpotomized teeth can be attributed to the medicament placed inside the pulp chamber, because the changes produced inside the radicular pulp result from the medicament-pulp interaction. Hence, further histological investigations should be conducted to ascertain the reaction between fresh A. vera gel and human dental pulp tissue.
How a processed form of A. vera behaves in the vicinity of human pulp tissue is also unknown and should be explored further in comparison with the fresh A. vera plant extract. MTA was found to be superior to fresh A. barbadensis plant extract as a pulpotomy agent in primary molars. Even though A. vera gel is a natural and economically viable medicament, it proved to be an unfavorable pulpotomy agent in primary teeth. Further studies are needed to ascertain the histological reaction between fresh A. vera gel and human dental pulp.
Background: The purpose of this study was to compare the clinical and radiographic outcomes of fresh Aloe barbadensis plant extract and mineral trioxide aggregate (MTA) as pulpotomy agents in primary molar teeth. Materials and Methods: Pulpotomy was performed in sixty primary molar teeth, which were randomly allocated to two groups, i.e., Aloe vera pulpotomy (Group A) and MTA pulpotomy (Group B). All pulpotomized teeth were evaluated clinically and radiographically at 1, 3, 6, 9, and 12 months using predetermined criteria. Results: The success rates in Groups A and B were 24.1% and 96.4% at the end of the 1st month, 57.1% and 100% at the end of the 3rd month, 75% and 100% at the end of the 6th month, 66.6% and 100% at the end of the 9th month, and 100% and 100% at the end of 12 months, respectively. The overall success rates at the end of the 12-month follow-up period were 6.9% and 71.4%, respectively, after taking dropout patients into consideration, and the difference was statistically significant (p < 0.001). Conclusions: MTA pulpotomy was found to be superior to fresh A. barbadensis plant extract pulpotomy in primary molars.
we included nine patients ( five men and four women ) with type 1 diabetes on stable continuous subcutaneous insulin pump therapy . diagnostic criteria for type 1 diabetes were past or present positive antibodies against gad or islet cells and plasma free c - peptide levels < 0.3 nmol / l . their weight had been stable for at least 3 months prior to participation in this study . a1c levels had been below 8.5% during the year prior to the start of the study . kg / m , clinical signs of autonomic neuropathy , known sleep disorders , habitual sleep duration of less than 6 h or more than 9 h , psychiatric disorders , and use of sleep medication , -blocking agents , or prokinetic drugs . all patients had normal blood pressure and serum creatinine levels and urinary microalbumin excretion rates below 30 mg/24 h. the study was approved by the medical ethical committee of the leiden university medical center , and written informed consent was obtained from all subjects prior to the study . the subjects were studied on 3 days , separated by intervals of at least 3 weeks . subjects kept a detailed diary of their diet and physical activity for 3 days prior to each study day and were asked to maintain a standardized schedule of bedtimes and mealtimes in accordance with their usual habits . actigraphy ( actiwatch aw7 ; cambridge neurotechnology , cambridge , u.k . ) was performed to objectively assess patterns of habitual active and inactive ( sleep ) periods for 7 days prior to the actual study , including one weekend . in addition , self - reported sleep duration and sleep quality were assessed using validated questionnaires ( pittsburgh sleep quality index , epworth sleepiness scale , and berlin questionnaire ) ( 810 ) . subjects were admitted to our clinical research center the night preceding each study day and spent 8.5 h in bed from 2300 h to 0730 h on all three occasions . the first study day served to let subjects become accustomed to sleep conditions in a research setting . the optimal overnight infusion rate of insulin was determined in each subject prior to the start of the study . subjects were randomly assigned to partial sleep deprivation on either the second ( n = 4 ) or third ( n = 5 ) study occasion . during the night of sleep restriction , subjects also spent 8.5 h in bed but were only allowed to sleep from 0100 h to 0500 h. they were allowed to read or watch movies in an upward position , and their wakefulness was monitored and assured if necessary . sleep was visually scored for each of the three nights according to the guidelines of the american association of sleep medicine ( aasm ) ( 11 ) . in short , scoring of sleep stages depends on electroencephalography , eye movements , and submental muscle activity . to detect possible sleep disorders that might affect the study , respiratory movements were recorded by measurement of changes in nasal pressures and of truncal respiratory movements . recordings were made using a portable polysomnography ( psg ) recorder ( titanium ; embla systems , broomfield , co ) . sleep and wake stages were visually scored in consecutive epochs of 30 s , resulting in a list of epochs spent in wake , stages i ( drowsiness ) , ii , iii , and rapid eye movement ( rem ) dream sleep . the times at which subjects went to bed and turned out the lights as well as times of getting out of bed were noted . the lists of stages were used to calculate the duration of time spent each night in the above - mentioned sleep and wake stages . 
These durations were also expressed as percentages of total sleep duration, defined as the summed duration of sleep stages I, II, III, and REM. Hyperinsulinemic euglycemic clamp studies were performed on the day after the second and third study occasions. After an overnight fast, a catheter was inserted into an antecubital vein for infusion of isotopes, glucose, and insulin, and a sampling catheter was inserted into a dorsal hand vein of the contralateral arm. For all blood samples, the heated-hand technique was used to obtain arterialized blood (12). A primed (17.6 µmol/kg), continuous (0.22 µmol/kg/min) infusion of [6,6-²H₂]glucose (Cambridge Isotope Laboratory, Andover, MA) was started at 0830 h, after basal blood samples had been taken for determination of background glucose enrichment. Labeled glucose was infused by a Pilot C syringe pump (Fresenius Vial, Brezins, France). Blood samples were obtained after 160, 170, and 180 min of [6,6-²H₂]glucose infusion for assessment of glucose kinetics in the basal state and of the concentrations of glucose and plasma nonesterified fatty acids (NEFAs). Subsequently, administration of subcutaneous insulin was stopped and infusion of intravenous insulin was started, using the method of DeFronzo et al. Briefly, this consisted of a primed (80 mU/m²/min for 5 min and subsequently 40 mU/m²/min for 5 min), followed by continuous (20 mU/m²/min), infusion of insulin (Actrapid; Novo Nordisk, Alphen aan den Rijn, the Netherlands), dissolved in sterile NaCl 0.9%, using a Pilot C syringe pump. A variable infusion of glucose 20% enriched with 3% [6,6-²H₂]glucose was started 4 min after the start of the insulin infusion. Plasma glucose concentrations were measured at 5-min intervals with a calibrated bedside glucose analyzer (Accu-Chek; Roche, Mannheim, Germany), and the infusion rate of glucose 20% was adjusted in order to keep the plasma glucose level constant at 5.0 mmol/L. Blood samples were obtained after 150, 160, 170, and 180 min of combined insulin and [6,6-²H₂]glucose infusion for assessment of glucose kinetics and of the concentrations of glucose, insulin, and plasma NEFAs. Serum concentrations of glucose were measured using a fully automated Modular P 800 analyzer (Roche/Hitachi, Mannheim, Germany) with intra-assay variation of 1%. Serum insulin concentrations were measured by enzyme-labeled chemiluminescent immunometric assay (Immulite 2500; Siemens, Bad Nauheim, Germany) with an intra-assay coefficient of variation (CV) of 4%. NEFA levels were determined spectrophotometrically by an enzymatic colorimetric acyl-CoA synthase/acyl-CoA oxidase assay (Wako Chemicals, Neuss, Germany) with an intra-assay CV of 2.7%. Enrichment of plasma [6,6-²H₂]glucose was determined in a single analytical run, using gas chromatography coupled to mass spectrometry, as described previously (14). All isotope enrichments were measured on a gas chromatograph-mass spectrometer (model 6890/5973; Hewlett-Packard, Palo Alto, CA). Isotopic steady state was achieved during the final 30 min of the basal period and the final 30 min of the hyperinsulinemic clamp study. Therefore, the rates of appearance (Ra) and disappearance (Rd) of glucose were calculated as the tracer infusion rate divided by the tracer-to-tracee ratio. Endogenous glucose production (EGP) during the basal steady state equals the Ra of glucose, whereas EGP during the hyperinsulinemic clamp study was calculated as the difference between Ra and the glucose infusion rate.
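To make the steady-state tracer arithmetic concrete, here is a small Python sketch of the calculation described above: Ra (and, at steady state, Rd) is the tracer infusion rate divided by the tracer-to-tracee ratio, basal EGP equals Ra, and EGP during the clamp is Ra minus the exogenous glucose infusion rate. All numerical values, and the handling of the enriched infusate, are hypothetical placeholders rather than data or procedures taken from this study.

```python
# Steady-state tracer-dilution bookkeeping for a hyperinsulinemic euglycemic clamp,
# following the formulas stated in the text. All numbers are hypothetical placeholders.

def rate_of_appearance(tracer_infusion_rate, ttr):
    """Ra = tracer infusion rate / tracer-to-tracee ratio (TTR); units µmol/kg/min."""
    return tracer_infusion_rate / ttr

# --- basal period: no exogenous glucose, so EGP = Ra (= Rd at isotopic steady state)
tracer_rate_basal = 0.22      # µmol/kg/min continuous [6,6-2H2]glucose infusion
ttr_basal = 0.020             # hypothetical measured enrichment (dimensionless)
egp_basal = rate_of_appearance(tracer_rate_basal, ttr_basal)

# --- clamp period: EGP = Ra - glucose infusion rate (GIR); Rd = Ra at steady state.
# The 20% glucose infusate is itself enriched with tracer, so its contribution is
# added to the tracer infusion rate (assuming here that "3% enriched" means 3 mol% tracer).
gir = 18.0                                        # µmol/kg/min exogenous glucose, hypothetical
tracer_rate_clamp = tracer_rate_basal + 0.03 * gir
ttr_clamp = 0.030                                 # hypothetical enrichment during the clamp
ra_clamp = rate_of_appearance(tracer_rate_clamp, ttr_clamp)
rd_clamp = ra_clamp
egp_clamp = ra_clamp - gir

print(f"basal EGP ≈ {egp_basal:.1f} µmol/kg/min")
print(f"clamp Rd  ≈ {rd_clamp:.1f} µmol/kg/min")
print(f"clamp EGP ≈ {egp_clamp:.1f} µmol/kg/min")
```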
Differences between the effects of the night of normal sleep duration and the night of partial sleep restriction were analyzed by the Wilcoxon signed-rank test for paired samples. All analyses were performed using SPSS for Windows, version 16.0 (SPSS, Chicago, IL).
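As a sketch of this paired, nonparametric comparison, the snippet below applies the Wilcoxon signed-rank test to paired clamp measurements from the two nights. The arrays hold hypothetical values for seven subjects, and scipy is used here as a stand-in for the SPSS procedure named in the text.

```python
# Paired comparison of clamp-derived glucose disposal rates after a normal night
# versus after partial sleep restriction (hypothetical values for 7 subjects).
import numpy as np
from scipy.stats import wilcoxon

rd_normal_sleep = np.array([27.1, 24.3, 29.8, 22.5, 26.0, 23.9, 25.2])      # µmol/kg LBM/min
rd_sleep_restricted = np.array([23.4, 21.0, 26.1, 19.8, 22.7, 21.5, 22.9])  # µmol/kg LBM/min

statistic, p_value = wilcoxon(rd_normal_sleep, rd_sleep_restricted)
relative_change = (rd_sleep_restricted.mean() / rd_normal_sleep.mean() - 1) * 100

print(f"Wilcoxon signed-rank: W = {statistic:.1f}, p = {p_value:.3f}")
print(f"Mean change in Rd after sleep restriction: {relative_change:+.1f}%")
```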
The results of two of the nine patients were excluded: one because of nocturnal hypoglycemia and subsequent nocturnal hyperglycemia during the study, and one because of a previously undetected sleep apnea syndrome. The analyses therefore included the data of seven patients (three men). The mean age of these seven subjects was 44.3 ± 6.6 years, mean weight 72.0 ± 4.0 kg, mean height 175 ± 3 cm, and mean BMI 23.5 ± 0.9 kg/m². Mean A1C was 7.6 ± 0.3%, and mean duration of diabetes was 23 ± 3.5 years. Self-reported sleep duration and habitual sleep duration recorded by actigraphy did not differ (475 ± 8 vs. 490 ± 7 min, p = 0.12). Sleep duration was considerably shorter in the night with partial sleep restriction than in the night with normal sleep duration (p = 0.02) (Table 1). Sleep in the sleep-deprived night showed a higher proportion of stage III sleep (p = 0.02) and a lower proportion of REM sleep (p = 0.04). Table 1 lists the effects of a night of normal sleep duration versus a night of sleep restricted to 4 h on sleep parameters assessed by PSG and on basal and insulin-stimulated glucose and fatty acid metabolism in the seven patients with type 1 diabetes; data are means ± SEM (LBM, lean body mass; TST, total sleep time). The mean overnight rate of subcutaneous insulin infusion was 0.7 IE/h and was identical in both conditions (Table 1). Compared with normal sleep duration, partial sleep deprivation did not alter basal levels of glucose or NEFA measured the following morning. In addition, partial sleep restriction did not affect basal EGP assessed by primed, continuous infusion of [6,6-²H₂]glucose. Steady-state glucose and insulin levels did not differ between the two clamp studies (Table 1 and Fig. 1).
Sleep restriction did not affect EGP during the clamp conditions. However, sleep restriction decreased the rate of glucose disposal (Rd) during the clamp by 14% (p = 0.04). Accordingly, the rate of glucose infusion necessary to maintain constant plasma glucose levels during the hyperinsulinemic clamp study was 21% lower after the night of reduced sleep duration than after the night of normal sleep duration (p = 0.04), reflecting decreased peripheral insulin sensitivity. Figure 1 shows the individual values obtained during the steady state of the hyperinsulinemic euglycemic clamp studies for NEFAs (a), EGP (b), glucose disposal rate (c), and glucose infusion rate (d) after a night of normal sleep duration versus after a night of partial sleep deprivation in patients with type 1 diabetes (n = 7). In this study, we assessed the effects of a single night of partial sleep restriction on insulin sensitivity in patients with type 1 diabetes.
the results indicate that a single night of partial sleep restriction reduces insulin sensitivity of insulin - stimulated glucose uptake by 1421% . we conclude that sleep duration is a determinant of peripheral insulin sensitivity in patients with type 1 diabetes . in the current study we included the data of only seven patients with type 1 diabetes . the strictly controlled design of this pathophysiological study in combination with the fact that each subject served as his / her own control enabled us to establish subtle effects of partial sleep deprivation on parameters of insulin sensitivity . nonetheless , larger numbers of subjects are required to assess the involvement of relevant patient characteristics such as sex , age , and antecedent glucoregulation on the effects of sleep restriction on insulin sensitivity . this is the first study that documents an adverse effect of partial sleep restriction on insulin sensitivity in patients with type 1 diabetes . in healthy subjects , , it can be expected that sleep restriction increases postprandial glucose levels in patients with type 1 diabetes in the absence of concurrent adaptations of the dose of exogenous insulin . several epidemiological studies documented an association between chronic partial sleep restriction and development of insulin resistance and type 2 diabetes ( 5,16,17 ) . therefore , exposure to chronic sleep restriction might contribute to insulin resistance in patients with type 1 diabetes . in turn , insulin resistance is associated with an increased risk for microvascular and macrovascular complications in type 1 diabetes ( 18 ) . unfortunately , the current study was not designed to elucidate the mechanisms involved in the induction of insulin resistance by partial sleep deprivation . a single night of partial sleep restriction to 4.5 h does not cause endocrine changes that simply explain the induction of insulin resistance ( 19 ) . subsequent nights of partial sleep deprivation induce subtle changes in cortisol and catecholamine secretion ( 7,15,20 ) . however , the relations between these effects of sleep deprivation on endocrine homeostasis and glucose tolerance are uncertain . partial sleep deprivation for a single and subsequent nights increased the sympathetic tone based on recordings of heart rate variability after sleep deprivation ( 21,22 ) . however , the relationship between elevated sympathovagal balance at the level of the heart and the sympathetic outflow to liver , muscles , and adipose tissue is uncertain ( 21 ) . interestingly , in addition to sleep duration , the composition of sleep in terms of sleep stages is also a determinant of insulin sensitivity . selective suppression of slow - wave sleep , without a change in total sleep duration , decreased glucose tolerance in healthy subjects ( 23 ) . the differential effects of altered sleep composition versus decreased total sleep duration on insulin sensitivity awaits further study . data on sleep physiology and sleep disturbances in patients with type 1 diabetes are rare . jauch - chara et al . ( 24 ) reported alterations in neuroendocrine sleep architecture and a trend toward less slow - wave sleep in 14 patients with type 1 diabetes . children with type 1 diabetes have a more disrupted sleep than healthy children ( 25 ) . if type 1 diabetes indeed causes disruption of sleep patterns , this may in turn impair glucose regulation , creating a vicious circle . 
in conclusion , the present study indicates that partial sleep restriction decreases insulin sensitivity of insulin - mediated glucose uptake in patients with type 1 diabetes . it is important to further assess the relationship between sleep physiology and glucoregulation in patients with type 1 diabetes .
Objective: Sleep restriction results in decreased insulin sensitivity and glucose tolerance in healthy subjects. We hypothesized that sleep duration is also a determinant of insulin sensitivity in patients with type 1 diabetes. Research Design and Methods: We studied seven patients (three men, four women) with type 1 diabetes: mean age 44 ± 7 years, BMI 23.5 ± 0.9 kg/m², and A1C 7.6 ± 0.3%. They were studied once after a night of normal sleep duration and once after a night of only 4 h of sleep. Sleep characteristics were assessed by polysomnography. Insulin sensitivity was measured by hyperinsulinemic euglycemic clamp studies with an infusion of [6,6-²H₂]glucose. Results: Sleep duration was shorter in the night with sleep restriction than in the unrestricted night (469 ± 8.5 vs. 222 ± 7.1 min, p = 0.02). Sleep restriction did not affect basal levels of glucose, nonesterified fatty acids (NEFAs), or endogenous glucose production. Endogenous glucose production during the hyperinsulinemic clamp was not altered during the night of sleep restriction compared with the night of unrestricted sleep (6.2 ± 0.8 vs. 6.9 ± 0.6 µmol·kg lean body mass⁻¹·min⁻¹, NS). In contrast, sleep restriction decreased the glucose disposal rate during the clamp (25.5 ± 2.6 vs. 22.0 ± 2.1 µmol·kg lean body mass⁻¹·min⁻¹, p = 0.04), reflecting decreased peripheral insulin sensitivity. Accordingly, sleep restriction decreased the rate of glucose infusion by 21% (p = 0.04). Sleep restriction did not alter plasma NEFA levels during the clamp (143 ± 29 vs. 133 ± 29 µmol/L, NS). Conclusions: Partial sleep deprivation during a single night induces peripheral insulin resistance in these seven patients with type 1 diabetes. Therefore, sleep duration is a determinant of insulin sensitivity in patients with type 1 diabetes.
device scaling has been the engine driving the microelectronics revolution as predicted by moore s law.@xcite by reducing the size of transistors , processors become faster and more power efficient at an exponential rate . currently the main challenge in device scaling is the integration of high - k oxides as gate oxides into silicon technology . the gate oxide is an integral part of a metal - oxide - semiconductor field - effect transistor ( mosfet ) . it is the dielectric of a capacitor , which is used to attract charge carriers into the channel between source and drain , and thus switches the transistor between its conducting and its non - conducting state . with a thickness of approximately 1 - 2 nm,@xcite the gate oxide is the smallest structure of a transistor . further scaling would result in an unacceptably high quantum mechanical leakage current and thus a large power consumption . in current transistors , the gate oxide is made from sio@xmath6 and sio@xmath7n@xmath8 . future transistor generations will have to employ oxides with a higher dielectric constant ( high - k ) . this allows greater physical thicknesses and thus reduces the quantum mechanical leakage currents . the main contenders for the replacement of sio@xmath6 in future transistors are , from today s point of view , oxides containing alkaline earth metals like sr or ba , third - row elements like y or la , forth - row elements like ti , zr and hf , or mixtures thereof . prominent examples are perovskite structures around srtio@xmath9@xcite and laalo@xmath9@xcite , fluorite structures like zro@xmath6 and hfo@xmath6@xcite and also y@xmath6o@xmath9 and la@xmath6o@xmath9@xcite or pyrochlore structures like la@xmath6hf@xmath6o@xmath10@xcite and la@xmath6zr@xmath6o@xmath10.@xcite recently , also promising results on pr@xmath6o@xmath9 have been published.@xcite while the first high - k - oxides will be grown with an interfacial sio@xmath6 layer , a further reduction in scale requires high - k - oxides with a direct interface to silicon . the requirement to limit interface states , and the often crystalline nature of the oxides demand an epitaxial growth of the oxides on silicon . considering layer - by - layer growth by molecular beam epitaxy ( mbe ) , the first growth step for high - k oxides is the deposition of the metal on silicon . therefore we have investigated deposition of metals out of the three most relevant classes for high - k oxides on si(001 ) . these are the divalent alkaline - earth metals and the three- and the four - valent transition metals . the results on adsorption of zr and sr have been published previously.@xcite the present paper completes the study with a description of la - adsorption on si(001 ) as example of a trivalent metal . our previous work has shown that zr tends to form silicides readily.@xcite silicide grains have been observed after zr sputtering on si(001),@xcite unless silicide formation is suppressed by early oxidation which , however , leads to interfacial sio@xmath6 . the sr silicides are less stable in contact with silicon and due to their sizable mismatch in lattice constant , nucleation does not proceed easily . 
the alkaline - earth metals sr and ba have been used in the first demonstration of an atomically defined interface between a high - k oxide , namely srtio@xmath9 and silicon.@xcite by following through the detailed steps of the formation of this interface , starting at the low - coverage structures of metal adsorption , we were able to provide a new picture for the phase diagram of sr on si(001).@xcite the phase diagram has been important to link the theoretical interface structure of srtio@xmath9 on si(001 ) to the experimental growth process.@xcite from the interface structure and its chemistry we could show in turn that the band - offset , a critical parameter for a transistor , can be engineered to match technological requirements by carefully controlling the oxidation of the interface.@xcite since many of the characteristics of sr adsorption carry over to la - adsorption let us briefly summarize the main results.@xcite sr donates its electrons to the empty dangling bonds of the si - surface . the si - dimers receive electron pairs one - by - one , and unbuckle as they become charged . when all si dangling bonds are filled , i.e. beyond @xmath11 monolayer ( ml ) , additional electrons enter the anti - bonding states of the si - dimers at the surface , and thus break up the si - dimer - row reconstruction . at low coverage , sr forms chains running at an angle of 63@xmath12 to the si - dimer rows . as the coverage increases , the chains condense first into structures at @xmath13 ml and at @xmath14 ml , which are determined by the buckling of the si - dimers and their electrostatic interaction with the positive sr ions . at @xmath11 ml a chemically fairly inert layer forms , where all dangling bonds are filled and all ideal adsorption sites in the valley between the si - dimer rows are occupied . the paper is organized similar to our previous work on sr adsorption . in sec . [ sec : compdet ] we describe the computational details of the calculation . in sec . [ sec : sisurf ] and [ sec : silicides ] we review briefly the reconstruction of the si(001 ) surface and we discuss the known bulk la silicides . [ sec : iso+dimer ] , [ sec : chains ] and [ sec : condchains ] deal with the low coverage limit , where la ad - atoms form dimers and chain structures . beyond the canonical coverage of 1/3 ml ( sec . [ sec : third ] ) we observe a change in the oxidation state of the la ad - atoms from 3 + to 2 + ( sec . [ sec:2 + 3 + ] ) . the results are placed into context in sec . [ sec : phasediagram ] where we propose a phase diagram for la on the surface . the computational supercells used for the simulation of the low - coverage structures are shown in the appendix . the calculations are based on density functional theory@xcite using a gradient corrected functional.@xcite the electronic structure problem was solved with the projector augmented wave ( paw ) method,@xcite an all - electron electronic structure method using a basis set of plane waves augmented with partial waves that incorporate the correct nodal structure . the frozen core states were imported from the isolated atom . for the silicon atoms we used a set with two projector functions per angular momentum for @xmath15 and @xmath16-character and one projector per angular momentum with @xmath4-character . the hydrogen atoms saturating the back surface had only one @xmath15-type projector function . for lanthanum we treated the 5@xmath15 and 5@xmath16 core shells as valence electrons . 
we used two projector functions per magnetic quantum number for the @xmath15 , @xmath16 , and @xmath5 angular momentum channels and one for the @xmath4 channel . the augmentation charge density has been expanded in spherical harmonics up to @xmath17 . the kinetic energy cutoff for the plane wave part of the wave functions was set to 30 ry and that for the electron density to 60 ry . a slab of five silicon layers was used as silicon substrate . this thickness was found to be sufficient in previous studies on sr adsorption.@xcite the dangling bonds of the unreconstructed back surface of the slab have been saturated by hydrogen atoms . the lateral lattice constant was chosen as the experimental lattice constant @xmath18 of silicon,@xcite which is 1 % smaller than the theoretical lattice constant . since we always report energies of adsorbate structures relative to the energy of a slab of the clean silicon surface , the lateral strain due to the use of the experimental lattice constant cancels out . the slabs repeat every 16 perpendicular to the surface , which results in a vacuum region of 9.5 for the clean silicon surface . the car - parrinello ab - initio molecular dynamics@xcite scheme with damped motion was used to optimize the electronic and atomic structures . all structures were fully relaxed without symmetry constraints . the atomic positions of the backplane of the slab and the terminating hydrogen atoms were frozen . many of the adsorption structures are metallic , which requires a sufficiently fine grid in k - space . we used an equivalent to twelve by twelve points per @xmath19 surface unit cell . previous studies have shown that a mesh of eight by eight k - points is sufficient.@xcite we have chosen a higher density here as this allows us to use commensurate k - meshes for @xmath20 and @xmath21 surface reconstructions . for metallic systems , the orbital occupations were determined using the mermin functional,@xcite which produces a fermi - distribution for the electrons in its ground state . the electron temperature was set to 1000 k. in our case this temperature should not be considered as a physical temperature but rather as a broadening scheme for the states obtained with a discrete set of k - points . the mermin functional adds an entropic term to the total energy , which is approximately canceled by taking the mean of the total energy @xmath22 and the mermin - free energy @xmath23 as proposed by gillan:@xcite @xmath24 the forces are , however , derived from the free - energy @xmath25 . the resulting error has been discussed previously.@xcite in order to express our energies in a comprehensible manner , we report all energies relative to a set of reference energies . this set is defined by bulk silicon and the lowest energy silicide lasi@xmath6 . the reference energies are listed in tab . [ tab : reference ] . the reference energy @xmath26 $ ] for a la atom , corresponding to the coexistence of bulk silicon and bulk la , is extracted from the energy @xmath27 $ ] of the disilicide calculated with a @xmath28 k - mesh for the tetragonal unitcell with @xmath29 and @xmath30 and the reference energy of bulk silicon @xmath31 $ ] as @xmath32=e[\mathrm{lasi}_2]-2e_0[\mathrm{si}].\ ] ] the bulk calculation for silicon was performed in the two atom unit cell with a ( @xmath33 ) k - mesh and at the experimental lattice constant of 5.4307 .@xcite .reference energies used in this paper without frozen core energy . see text for details of the calculation . 
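The reference-energy bookkeeping described here can be summarized in a few lines. The sketch below (with hypothetical total energies, in eV) derives the La reference energy from bulk LaSi2 and bulk Si as in the equation above, and then reports an adsorption energy per La atom relative to the clean slab. This is one consistent way to implement the stated convention, not the authors' actual script, and the helper function and all numbers are assumptions for illustration only.

```python
# Reference energies and relative adsorption energies, following the convention
# E0[La] = E[LaSi2] - 2*E0[Si].  All total energies below are hypothetical (eV).

E_si_bulk_per_atom = -107.50        # E0[Si], from a bulk Si calculation (hypothetical)
E_lasi2_per_formula_unit = -248.30  # E[LaSi2], from the silicide cell (hypothetical)

# La reference energy: coexistence of bulk Si and bulk LaSi2
E0_la = E_lasi2_per_formula_unit - 2.0 * E_si_bulk_per_atom

def adsorption_energy_per_la(E_slab_with_la, E_clean_slab, n_la):
    """Energy per La atom relative to the clean Si(001) slab and the LaSi2/Si reference."""
    return (E_slab_with_la - E_clean_slab - n_la * E0_la) / n_la

# Example: a supercell containing 2 La ad-atoms (hypothetical slab energies).
E_clean = -5120.40
E_with_la = -5187.95
print(f"E0[La] = {E0_la:.2f} eV")
print(f"adsorption energy = {adsorption_energy_per_la(E_with_la, E_clean, 2):.2f} eV per La atom")
```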
[ cols="<,>",options="header " , ] in all low - energy structures each la atom is thus surrounded by four silicon atoms having filled dangling bonds . three of them are partners of filled si - dimers while one is a buckled si - dimer with the negative si atom pointing towards the la ad - atom . on the basis of counting unbuckled si - dimers , these structures are in a 3 + oxidation state . the la - chain is the configuration with lowest energy in the low coverage limit . the lowest energy chain structures are of the order of 0.17 ev per la atom more stable than the most favorable isolated la - dimer . at elevated temperatures , entropic effects will lead to increasingly shorter chain fragments . from the energy - difference of the linear chain and the isolated la - dimer , we obtain an estimate for the chain termination energy of approximately 0.09 ev . it should be noted , that experiments often observe shorter chain sequences than predicted from thermal equilibrium as high - temperature structures are frozen in . the electronic structure of the la - chain is analogous to that of the sr single chain.@xcite the empty silicon surface has an occupied and an un - occupied band formed from the dangling bonds of the si - dimers . la donates electrons into the upper dangling bond band . those dangling bond states , which become filled , are shifted down in energy due to the change in hybridization on the one side and due to the proximity of the positive la - cations on the other side . with increasing coverage , the chains become closer packed . in the case of sr , there was a preference for a periodicity of @xmath34 surface lattice spacings along the si - dimer row direction.@xcite this restriction has been attributed to the requirement that every cation be surrounded by four si - atoms with filled dangling bonds , and that there is no frustration of the si - dimer buckling , i.e. adjacent si - dimers are buckled antiparallel . for la the situation is more complex . due to the longer periodicity of the la chains compared to those of sr , there are two families of chain packing for la as shown in fig . [ fig : lacondchain ] . in the first family the la chains are displaced only parallel to the si - dimer row direction . in the second family the chains are in addition displaced perpendicular to the si - dimer row . the first family has a preference of @xmath34 surface lattice spacings along the dimer row as in the case of sr adsorption . the spacing in the second family is arbitrary . the reason is that in family one , the buckling of every second si - dimer row is pinned on both sides by two neighboring la chains ( see fig . [ fig : lacondchain]a ) . a si - dimer is pinned , if its buckling is determined by the coulomb attraction of its raised , and thus negatively charged , si atom to a nearby la ion . since the buckling alternates along the si - dimer row , this pinning can lead to indirect , long - ranged interaction between different la - chains . in the second family the buckling of every si - dimer row is pinned only at one la - chain as seen in fig . [ fig : lacondchain]b , while there is no preference of the si - dimer buckling at the other la - chain . thus for la we find in contrast to sr@xcite arbitrary chain spacings . the closest packing of la - chains before they merge is 1/5 ml . we consider two la - chains merged if la atoms of different la chains occupy nearest - neighbor @xmath35 sites within one valley . we predict a distinct phase at this coverage as seen in fig . 
[ fig : phase_diagram ] and discussed later . this structure , shown in fig . [ fig : lacondchain]b , is derived from chains of the second family . an explanation for finding a phase at 1/5 ml is that the energy at higher coverage increases due to the electrostatic interaction of the la atoms within one valley . for the first family , the highest possible coverage before la - chains merge is 1/6 ml ( fig . [ fig : lacondchain]a ) . note that the chains can change their direction without appreciable energy cost as shown in fig . [ fig : lachain]b . experimentally measured diffraction patterns would reflect a configurational average . the layer resolved density of states is shown in fig . [ fig:5th_dos ] . we see that the fermi - level lies in a band gap of the surface . above the fermi - level and still in the band - gap of bulk si , surface bands are formed , which originate from the remaining empty dangling bonds of the buckled si - dimers . as in the case of sr , these states form flat bands in the band - gap of silicon , which approximately remain at their energetic position as the la coverage is increased . its density of states , however , scales with the number of empty dangling bonds . if the spacing of the chains is further reduced , they condense at 1/3 ml to the structure shown in fig . [ fig : thirdml ] . there are several versions of this structure type . they have a repeating sequence of two la - atoms and one vacant @xmath35 site in each valley in common . the relative displacement of this sequence from one valley to the next , however , may differ . we investigated several structures and found the one shown in fig . [ fig : thirdml ] to be the most stable . a structure with a sequence of four @xmath35 sites occupied with metal ions separated by two empty @xmath35 sites , has been the most favorable structure at this coverage in the case of sr adsorption.@xcite for la , however , this configuration is energetically unfavorable . b ) with a reduced chain spacing . the calculational supercell cell is outlined.,width=241 ] at a coverage of 1/3 ml , all silicon dangling bonds are filled due to the electrons provided by the la ad - atoms . this surface is isoelectronic to the sr covered surface at 1/2 ml.@xcite for the sr - covered silicon surface , the increased oxidation resistance of the corresponding 1/2 ml structure has been observed experimentally.@xcite similarly we suggest that the surface covered with 1/3 ml of la will have an increased oxidation resistance . in fig . [ fig:3rd_dos ] we show the layer - resolved density of states of the most stable structure at 1/3 ml . in analogy to the 1/2 ml covered sr surface , there are no surface states deep in the band gap of silicon , because all si - dimer dangling bonds are filled and shifted into the valence band due to the electrostatic attraction of the electrons to the positive la ions . note , however , that in contrast to the canonical surface coverage of sr on si(001 ) at a coverage of 1/2 ml , the canonical la surface exhibits vacant @xmath35-sites . for a discussion about the si band gap . this dos corresponds to the supercell outlined in fig . [ fig : thirdml].,width=302 ] up to the canonical coverage of 1/3 ml , all thermodynamically stable reconstructions could be explained by la being in the @xmath36 oxidation state . in contrast to the isolated la - atoms and la - dimers , the oxidation state can clearly be identified from the number of unbuckled si - dimers : each unbuckled dimer has received two electrons . 
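The electron-counting argument used throughout this section can be made explicit: each Si dimer (one per 2x1 surface cell) can accept two extra electrons into its empty dangling-bond state, i.e. one electron per 1x1 cell, so the canonical coverage at which all dangling bonds are filled is 1/(ad-atom valence) ML. The snippet below is only an illustration of this counting, assuming the ad-atom donates all of its nominal valence electrons; the function names are ours, not the authors'.

```python
# Simple dangling-bond electron counting on Si(001)-(2x1).
# One Si dimer per 2x1 cell can accept 2 extra electrons -> 1 electron per 1x1 cell.
from fractions import Fraction

def canonical_coverage(valence_electrons_donated):
    """Coverage (in ML) at which all empty dimer dangling bonds are filled."""
    return Fraction(1, valence_electrons_donated)

def filled_dimer_fraction(coverage_ml, valence_electrons_donated):
    """Fraction of Si dimers whose empty dangling-bond state is filled (capped at 1)."""
    electrons_per_1x1_cell = coverage_ml * valence_electrons_donated
    return min(Fraction(1), electrons_per_1x1_cell)

for metal, valence in [("Sr", 2), ("La", 3)]:
    print(f"{metal}: canonical coverage = {canonical_coverage(valence)} ML")

# Illustration: 1/5 ML of La fills 3/5 of the dimer dangling bonds; 1/3 ML fills all of them.
for theta in [Fraction(1, 5), Fraction(1, 3)]:
    print(f"La at {theta} ML -> filled dimer fraction = {filled_dimer_fraction(theta, 3)}")
```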
a @xmath36 oxidation state is also consistent with the density of states . if we follow the picture that emerged from sr , we would anticipate that increasing the coverage above 1/3 ml in case of la would lead to filling the si - dimer antibonds , which results in a breaking up of the dimer bonds . for la the situation is different : the la-@xmath4 band is located at much lower energies as compared to sr . therefore the energy to break the si - dimer bonds is larger than that to add electrons into the la @xmath4-shell . as a result we find that la changes its oxidation state from @xmath36 to @xmath37 . oxidation states of la that are even lower are unfavorable due to the coulomb repulsion of electrons within the la-@xmath4 and @xmath5 shells . thus the structures above 1/3 ml can be explained in terms of la@xmath3 ions and are similar to those found for sr.@xcite it may be instructive to compare two structures with different oxidation states of la . a good example is found at a coverage of 2/3 ml : the lowest energy structure is a @xmath38 reconstruction already found for sr@xcite and depicted in fig . [ fig:2_3rds]a . this is a clear 2 + structure . since every si - dimer only accepts two electrons , they can just accommodate two of the three valence electrons of la . the lowest structure with formal la@xmath2 ions , which can clearly be identified as having all si - dimer bonds broken , is shown in fig . [ fig:2_3rds]b . it has an energy which is 0.36ev per la atom higher than the structure with la@xmath3 ions . at 1/2 ml , we find a structure where all @xmath35 sites are occupied to be most stable . there the la @xmath4-states are partially occupied . we confirmed that the system is not spin polarized . the crossover of the energy surfaces of the 2 + and the 3 + structures is shown in fig . [ fig : crossover ] using a set of surface reconstructions , for which the charge state can be determined unambiguously . it can be clearly seen that the 2 + structures become significantly more stable above 1/2 ml . from fig . [ fig : enevscov ] it is apparent that the energy rises sharply as the la atoms cross over to the 2 + oxidation state beyond the canonical interface at a coverage of 1/3 ml . based on the surface energies composed in fig . [ fig : enevscov ] we extracted the zero - kelvin phase diagram shown in fig . [ fig : phase_diagram ] . the slope of the line - segments of the lower envelope in fig . [ fig : enevscov ] corresponds to the chemical potential , at which the two neighboring phases coexist ( for a more elaborate discussion , refer to ref.@xcite ) . the stable phases are defined by the coverages where two line segments with different slopes meet . the zero for the la chemical potential has been chosen as the value at which lasi@xmath6 and silicon coexist . consequently , all phases in regions of positive chemical potentials are in a regime where the formation of bulk silicides is thermodynamically favorable . b , [ fig : thirdml ] and [ fig:2_3rds]a for the structures at 1/5 , 1/3 and 2/3 ml , respectively . at 0.42 ml we predict a @xmath39 reconstruction which originates from the half - ml structural template with a la vacancy concentration of 17% ( see discussion in the text).,width=302 ] below a chemical potential of @xmath40ev we expect single chain structures as described in sec . [ sec : chains ] . at 1/5 ml we predict a distinct phase since this is the highest possible coverage without la ad - atoms at nearest neighbor @xmath35 sites ( compare fig . 
[ fig : lacondchain]b ) . at a chemical potential of -0.19ev the stability region of the 1/3 ml coverage ( fig . [ fig : thirdml ] ) starts . the transition from the phase at 1/3 ml to the @xmath41 reconstructed surface at 1/2 ml , where all @xmath35-sites are filled , can be described by a decrease of la - vacancies ( compare fig . [ fig : thirdml ] of this manuscript and fig . 9 of ref . @xcite ) . from this point of view , the phase at 1/3 ml can be described by an ordered array of la - vacancies in the 1/2 ml structure . there is an effective repulsion between la - vacancies due to the repulsion between la - atoms on neighboring @xmath35-sites . we describe the total energy by an empirical model energy of the form @xmath42 , where @xmath43 is the concentration of la vacancies , @xmath44 is the energy of the structure with all @xmath35-sites filled ( 1/2 ml ) , @xmath45 is the formation energy of an isolated la - vacancy , and @xmath46 describes the repulsion between vacancies . coexistence between the two phases would result from a negative value of @xmath46 . in that case , adding an additional ad - atom to a phase requires more energy than starting a new phase with the next higher coverage . between 1/3 and 1/2 ml , however , @xmath46 is positive as filling a portion of vacancies is favorable compared to creating patches of pure 1/2 ml coverage . we calculated the energy of an adsorption structure with three la atoms on neighboring @xmath35 sites separated by one vacancy within one valley . la triplets in different valleys have been arranged , so that the distance between vacancies is maximized in order to minimize the repulsive energy . based on the energies at 1/3 and 1/2 ml as well as at the intermediate coverage of 3/8 ml just described , we can determine the three parameters @xmath44 , @xmath45 and @xmath46 to be 0.05 , -0.56 and 0.26ev , respectively . at a certain vacancy concentration of @xmath47% ( i.e. a la - coverage of 0.42 ml ) we find a phase boundary with the next stable phase at 2/3 ml at a chemical potential of 0.94ev . according to our phase diagram , the pure surface reconstruction at 1/2 ml is never formed . the shaded region in fig . [ fig : phase_diagram ] corresponds to 1/2 ml structural template with variable vacancy concentration . as seen in the phase diagram shown in fig . [ fig : phase_diagram ] bulk silicide formation becomes thermodynamically stable within the stability region of the 1/3 ml coverage . in a growth experiment we would expect the formation of bulk silicide grains to be delayed beyond a coverage of 1/3ml . the nucleation of silicide grains may suffer from the large mismatch between bulk silicide phases and silicon . this is of particular importance during the initial stages of nucleation because the strained interface region occupies most of the volume of the grain . thus it may be of interest to know the stability of silicide thin films on si(001 ) . we found one such silicide layer which is shown in fig . [ fig : silicide ] . it consists of a @xmath48 silicon surface in contact with two la layers that sandwich a layer of si@xmath49 ions in between . while we have not performed a thorough search of other candidates , the energy of this silicide layer indicates that silicide formation will at the latest be initiated beyond a coverage of 2/3 ml . we can thus only pin down the onset of silicide formation within a coverage interval between 1/3 ( thermodynamically ) and 2/3 ml ( including kinetic considerations ) . 
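The construction of the zero-temperature phase diagram from the energy-versus-coverage data can be sketched as a lower convex hull: phases on the hull are stable, and the slope between two neighboring hull points is the chemical potential at which they coexist. The coverages and energies below are hypothetical placeholders used only to illustrate the construction, not the computed surface energies of this work.

```python
# Zero-temperature phase diagram from energy-vs-coverage data:
# stable phases lie on the lower convex hull of (coverage, energy) points, and the
# slope between neighboring hull points is the coexistence chemical potential.
# Energies are per 1x1 surface cell, relative to the clean surface; values are hypothetical.

def lower_hull(points):
    """Return the lower convex hull of (coverage, energy) points, sorted by coverage."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Drop the middle point if it lies on or above the chord -> not on the lower hull.
            if (y2 - y1) * (p[0] - x1) >= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# (coverage in ML, energy in eV per 1x1 cell) -- illustrative numbers only
data = [(0.0, 0.0), (1/6, -0.10), (1/5, -0.14), (1/3, -0.20), (0.5, -0.22), (2/3, -0.05)]

hull = lower_hull(data)
print("stable phases (ML):", [round(c, 3) for c, _ in hull])
for (c1, e1), (c2, e2) in zip(hull, hull[1:]):
    mu = (e2 - e1) / (c2 - c1)
    print(f"coexistence of {c1:.3f} and {c2:.3f} ML at mu = {mu:+.2f} eV")
```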
in this paper we have investigated the surface structures of la adsorbed on si(001 ) as a function of coverage . we propose a theoretical phase diagram by relating the phase boundaries at zero temperature to chemical potentials , which can be converted into partial pressure and temperature in thermal equilibrium . our findings elucidate the chemistry of third row elements on si(001 ) and the phases of la on si(001 ) , and are expected to provide critical information for the growth of a wide class of high - k oxides containing la . the phase diagram may be used as a guide for the growth of la - based oxides on si(001 ) . .,width=302 ] .,width=302 ] [ fig : dimerorientation ] shows the three possible la - dimer orientations on the si(001 ) surface . we did not draw the periodic images introduced by the calculational supercell in order to emphasize that fact that this local arrangement corresponds to an isolated dimer . [ fig : dimersupercells ] sketches the supercells used . they were chosen in order to avoid frustration of si dimers due to periodic images . tab . [ tab : liniearchainenergies ] summarizes the energetics of chains structures built from la dimers . the supercells used in the corresponding total energy calculations are sketched in fig . [ fig : supercells ] . we thank a. dimoulas , j. fompeyrine , j .- loquet , g. norga and c. wiemer for useful discussions . this work has been funded by the european commission in the project `` invest '' ( integration of very high - k dielectrics with cmos technology ) and by the aurora project of the austrian science fond . parts of the calculations have been performed on the computers of the `` norddeutscher verbund fr hoch- und hchstleistungsrechnen ( hlrn ) '' . this work has benefited from the collaborations within the esf programme on electronic structure calculations for elucidating the complex atomistic behavior of solids and surfaces. 99 corresponding author : [email protected] g.e . moore , spie * 2438 * , 2 ( 1995 ) international technology roadmap for semiconductors , 2001 ed . http://public.itrs.net/ r.a . mckee , f.j . walker and m.f . chisholm , phys . * 81 * , 3014 ( 1998 ) . m. v. cabaas , c. v. ragel , f. conde , j. m. gonzlez - calbet , m. vallet - reg , solid state ionics , * 101 * , 191 , ( 1997 ) . m. nieminen , t. sajavaara , e. rauhala , m. putkonen and l. niinist , j. mater . , * 11 * , 2340 , ( 2001 ) . b .- e . park and h. ishiwara , appl . phys . lett . , * 79 * , 806 , ( 2001 ) . wilk , r.m . wallace and j.m . anthony , j. appl . phys . * 89 * , 5243 2001 . s. guha , n.a . bojarczuk , and v. narayanan , appl . lett . * 80 * , 766 ( 2002 ) s. guha , e. cartier , m.a . gribelyuk , n.a . bojarczuk , and m.c . copel , appl . . lett . * 77 * , 2710 ( 2000 ) g. apostolopoulos , g. vellianitis , a. dimoulas , j.c . hooker and t. conard app . * 84 * , 260 ( 2004 ) . j. fompeyrine , g. norga , a. guiller , c. marchiori , j. w. seo , j .- p . locquet , h. siegwart , d. halley , c. rossel in _ proceedings of the wodim 2002 , 12th workshop on dielectrics in microelectronics _ ( imep , grenoble , france , 2002 ) , p. 65 . j. w. seo , j. fompeyrine , a. guiller , g. norga , c. marchiori , h. siegwart , and j .- locquet , app . lett , * 83 * , 5211 , ( 2003 ) . osten , j.p . liu , h .- j . mssig and p. zaumseil , microelectronics reliability * 41 * , 991 ( 2001 ) c.j . frst , k. schwarz and p.e . blchl , comp . mater . * 27 * , 70 ( 2003 ) . ashman , c.j . frst , k. schwarz and p.e . blchl , phys . b * 69 * , 75309 ( 2004 ) . 
sun , j. lozano , h. ho , h.j . park , s. veldmann and j.m . white , appl . surf . sci . * 161 * , 115 ( 2000 ) c.j . frst , c.r . ashman , k. schwarz and p.e . blchl , nature * 427 * , 53 ( 2004 ) . norga , a. guiller , c. marchiori , j.p . locquet , h. siegwart , d. halley , c. rossel , d. caimi , j.w . seo , and j. fompeyrine , materials research society symp . * 786 * , e 7.3.1 ( 2004 ) . p. hohenberg and w. kohn , phys . rev . * 136 * , b864 ( 1964 ) . w. kohn and l.j . sham , phys . * 140 * , a1133 ( 1965 ) . perdew , k. burke , and m. ernzerhof , phys . lett . * 77 * , 3865 ( 1996 ) . blchl , phys . b * 50 * , 17953 ( 1994 ) . peter e. blchl , clemens j. frst and johannes schimpl , bull . * 26 * , 33 ( 2003 ) r. c. weast , crc handbook of chemistry and physics , 83@xmath50 ed . , crc press , inc . , boca raton , 2002 , p. * 4 * -164 . r. car and m. parrinello , phys . lett . * 55 * , 2471 ( 1985 ) . n. d. mermin , phys . rev . * 137 * , a1441 ( 1965 ) . m. gillan , j. phys : cond . mat . * 1 * , 689 ( 1989 ) e. zintl , angew . chem . * 52 * , 1 ( 1939 ) ; w. klemm , proc . london * 1958 * , 329 ; e. bussmann , z. anorg . allg . chem . * 313 * , 90 ( 1961 ) . h. nakano , s. yamanaka , j. sol . stat . chem . * 108 * , 260 - 266 , ( 1994 ) . d. hohnke , e. parthe , act . a * 20 * , 572 - 582 , ( 1966 ) h. mattausch , o. oeckler , a. simon , z. anorgan . chemie a * 625 * , 1151 - 1154 , ( 1999 ) . e. i. gladyshevskii , ivnam * 1 * , 648 - 651 , ( 1965 ) . the isolated la atom has one unpaired electron . hence the calculations have been performed spin polarized . due to the periodic boundary conditions , the unpaired electron enters a delocalized , partially filled band . as a result most of the calculations on isolated defects produced no net spin . y. liang , s. gan , and m. engelhard , app . lett . , * 79 * , 3591 ( 2001 ) . the adsorption energy per @xmath19 unit cell is defined as @xmath51-ne_0[\mathrm{m\ layer\mbox{-}si\mbox{-}slab}]\big]-e_0[\mathrm{la } ] \big\}\cdot x$ ] , where @xmath52 $ ] is the total energy of the supercell used for the specific surface reconstruction , @xmath53 is the number of @xmath54 surface unit cells contained in that supercell and @xmath55 is the slab - thickness in units of silicon layers of the supercell . @xmath56 denotes the number of la atoms in the supercell and @xmath57 the la coverage for that reconstruction . this energy can be alternatively calculated using the energy per la atom multiplied with the coverage @xmath57 . the reference energies @xmath44 are listed in tab . [ tab : reference ] .
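the displayed formula in the adsorption-energy endnote above did not survive extraction, but the verbal definition is complete enough for a small bookkeeping helper. the sketch below is one plausible reading of it: subtract the clean m-layer si slab and the free la atom references, normalise per la atom, and multiply by the coverage to obtain an energy per (1x1) surface cell. the function name and all numbers in the call are made up, and sign or normalisation conventions may differ from the paper.

def adsorption_energy_per_cell(E_supercell, n_cells, E_slab_per_cell, n_la, E_la_atom):
    """energy per (1x1) surface unit cell upon la adsorption (one reading of the
    verbal definition in the endnote; conventions may differ from the paper)."""
    coverage = n_la / n_cells
    e_per_la = (E_supercell - n_cells * E_slab_per_cell) / n_la - E_la_atom
    return e_per_la * coverage   # "energy per la atom multiplied with the coverage"

# purely illustrative numbers (eV)
print(adsorption_energy_per_cell(E_supercell=-1234.50, n_cells=8,
                                 E_slab_per_cell=-153.20, n_la=4,
                                 E_la_atom=-0.90))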
this paper reports state - of - the - art electronic structure calculations of la adsorption on the si(001 ) surface . we predict la chains in the low coverage limit , which condense in a stable phase at a coverage of @xmath0 monolayer . at @xmath1 monolayer we predict a chemically rather inert , stable phase . la changes its oxidation state from la@xmath2 at lower coverages to la@xmath3 at coverages beyond @xmath1 monolayer . in the latter oxidation state , one electron resides in a state with a considerable contribution from la-@xmath4 and @xmath5 states .
computer chess has been an active area of research since shannon s @xcite seminal paper , where he suggested the basic minimax search strategies and heuristics , which have been refined and improved over the years . the many advances since then in improving the search engine algorithms , the static evaluation of chess positions , the representation and learning of chess knowledge , the use of large opening and endgame databases , and the exploitation of computer hardware including parallel processing and special - purpose hardware , have resulted in the development of powerful computer - chess programs , some of which are of top - grandmaster - level strength @xcite . the ultimate test for a chess - playing program is to play a match against the world champion or one of the leading grandmasters . in may 1997 an historic match was played between ibm s deep blue chess computer and garry kasparov , then world champion , resulting in a spectacular win for the machine , 3.5 - 2.5 . ( see www.chess.ibm.com for the archived web site of this historic match . ) despite the computer winning the match , there has been an ongoing debate since then on whether the highest ranking computer - chess programs are at the world - championship level , though most seem to agree that it is inevitable that eventually chess machines will dominate . using special - purpose hardware and parallelization , deep blue @xcite was capable of analysing up to 200 million positions per second , while the only other chess program to date known to run at a comparable speed is hydra @xcite . the hydra team consider their program to be the successor of deep blue , and their goal is to create the strongest chess - playing computer , one which can convincingly defeat the human world chess champion . a recent six - game match played in june 2005 between hydra and leading british grandmaster michael adams resulted in a convincing win for the machine , 5.5 - 0.5 . ( see http://tournament.hydrachess.com for the web site archiving the match . ) there have been other recent man - machine chess matches against top performing multiprocessor chess engines capable of analysing several million positions per second , with the results against world champions still being inconclusive @xcite . it remains to be seen whether the era of chess man - machine contests is nearing its end ; nonetheless , with machines having ever - growing computing resources , the future looks bleak for any human contestants in such matches . here we concentrate on the late opening / early middle - game phase of the game , and , in particular , the research question we address is whether the opening books used by modern chess engines in machine versus machine competitions are `` comparable '' to those used by chess players in human versus human competitions . for humans , opening preparation is known to be very important , as can be seen , for example , by the large proportion of chess books concentrating on the opening phase of the game . modern chess players also use software packages , such as those developed by chessbase ( www.chessbase.com ) or chess assistant ( www.chessassistant.com ) , to assist them in their opening preparation for matches . these packages typically make use of large databases of opening positions , referred to as _ opening books _ , whose positions can be searched and are linked to recent databases of games ( some of which may be annotated by experts ) .

this combined with the use of state - of - the - art chess engines for position analysis , provides players with extremely powerful tools for opening study and preparation . it is often recommended that chess students combine the study of openings with typical middle game motifs and endgame structures which may arise from the openings in question , and computer chess software can be very useful for this purpose @xcite . opening theory has become so developed that it is common between expert chess players to play the first 15 moves or so from memory ; see , for example the encyclopedia of chess openings marketed by chess informant ( www.sahovski.co.yu ) . as an antidote to the study of opening theory the former world chess champion , bobby fischer , suggested a chess variant known as fischer random chess or chess960 ( www.chessvariants.org/diffsetup.dir/fischer.html ) , where the initial position of the chess pieces is randomised . due to the 960 different starting positions in chess960 , knowledge of current chess opening theory is not very useful , and thus the strongest player will win without having to memorise lengthy opening variations . chess960 is becoming a popular variant of chess but , at least in the near future , it is unlikely to replace classical chess which still fascinates millions world wide . as the top chess engines now compete at grandmaster level , the opening book has become an important feature contributing to their success . these days it quite normal for an opening book specialist to be an integral part of the development team of a chess engine . as an example of computer chess opening preparation , back in 1995 fritz 3 defeated a prototype of deep blue in the world computer chess championship when deep blue made a crucial mistake as it went out of its opening book and had to assess the position using its search and evaluation engine . for its matches against garry kasparov in 1997 , the deep blue team included grandmaster joel benjamin , who was responsible for developing deep blue s knowledge and fine tuning its opening book @xcite . since then it has become common practice to include a grandmaster level chess player in the chess engine s team especially for high profile man - machine matches . due to the importance of the opening book as a component of a chess engine , computer chess programmers have been developing automated methods for improving the quality of their books . for a game such as chess , having a rich and well developed opening theory , a good starting point for building an opening book is to have available a large database of high - quality games with a sizeable proportion of recent game records . game statistics including a summary of the game results when the move was played , the popularity of the move in the database , how strong are the players of the move , and how recently the move was played , can be weighted to produce an evaluation of the `` goodness '' of a move that can inform the chess engine s evaluation function . deep blue utilised these statistics to automatically extend its relatively small hand - crafted opening book , which consisted of about 4000 positions @xcite , and , similarly hydra extends its relatively small opening book , typically containing about 10 moves per variation ( www.hydrachess.com/hydrachess/faq.php ) . 
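as a concrete illustration of how such game statistics might be folded into a single "goodness" number, the sketch below normalises score percentage, popularity, player strength and recency and combines them linearly. the weights, the normalisation constants and the move counts are all invented, and this is not deep blue s or hydra s actual formula; as discussed next, deep blue combines the factors non-linearly and hydra linearly.

from dataclasses import dataclass

@dataclass
class BookMoveStats:
    score_pct: float   # percentage score achieved when the move was played (0-100)
    games: int         # popularity: number of games in which the move occurs
    avg_elo: float     # average rating of the players who chose the move
    avg_year: float    # average year of those games (recency)

def goodness(stats, total_games, weights=(0.5, 0.2, 0.2, 0.1)):
    # linear combination of normalised factors; the weights are placeholders
    w_score, w_pop, w_elo, w_rec = weights
    score   = stats.score_pct / 100.0
    pop     = stats.games / max(total_games, 1)
    elo     = min(stats.avg_elo / 2800.0, 1.0)
    recency = min(max((stats.avg_year - 1990.0) / 35.0, 0.0), 1.0)
    return w_score * score + w_pop * pop + w_elo * elo + w_rec * recency

# made-up statistics for three first moves
moves = {
    "e4": BookMoveStats(54.7, 510_000, 2395, 2001),
    "d4": BookMoveStats(55.3, 380_000, 2410, 2002),
    "c4": BookMoveStats(54.9,  90_000, 2420, 2000),
}
total = sum(m.games for m in moves.values())
for mv, st in sorted(moves.items(), key=lambda kv: -goodness(kv[1], total)):
    print(f"{mv}: goodness = {goodness(st, total):.3f}")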
while deep blue combines the `` goodness '' factors in a non - linear fashion in order to influence the choice of move in the absence of information from its opening book @xcite , hydra combines the `` goodness '' factors in a linear fashion to influence the choice of playable moves @xcite . an opening book and its extension constructed by the above method will not be perfect , simply due to the fact that opening theory is still dynamic , and the statistics often reflect what is fashionable rather than what is objectively best . hydra takes this factor into account by adjusting the thinking time of a move when the engine chooses a book move in preference to a move selected by the engine . in games such as awari , where large databases of games do not exist , the above techniques can not be applied , so the opening book needs to be constructed by using the game engine to perform deep searches and generate the evaluation of the positions to be stored in the book @xcite . a best - first search method was proposed by buro @xcite , where at each level the move with the highest score is expanded first . in this way bad moves are ignored and no human intervention is necessary . however , this method ignores moves that are not much worst than the best move , thus allowing an opponent to `` drop out '' of the book after a few moves and forcing the chess engine to assess positions using its search and evaluation algorithms . to avoid this situation lincke @xcite proposed to expand moves in such a way that priority is given to moves at lower search levels , whose score is within a tolerance level from the best move . it is also useful to incorporate some form of learning to tune the opening book in order to avoid playing the same mistake repeatedly . a method developed by hyatt @xcite looks at the next ten move evaluations in games it played after the opening book was left , and extrapolates an approximation of the true value to store in the book as a learnt value for the position . the learning is conditioned upon the depth of searches that produced the learnt value , the strength of the opposition in the game played , and whether the engine or its opponent made a mistake . we now give an overview of the experiment we have carried out . for the purpose of data analysis we used the _ nunn2 _ test suite devised by grandmaster dr . john nunn to test chess engines strength on a variety of late opening / early middle - game positions ; the nunn2 test is distributed by chessbase together with its chess engine , fritz . the nunn2 test was chosen , since its 25 positions arise from a variety of openings with different characteristics , and for all these positions there are several reasonable candidate moves . we augmented the nunn2 test with the initial position , as we were interested to find out which first moves do humans and machines prefer . to compare the choices of humans to those of engines , we made use of two high - quality opening books : _ powerbook 2005 _ , marketed by chessbase , derived from a large collection of human versus human games , and _ comp2005 _ derived from a large collection of machine versus machine games compiled by walter eigenmann ( www.beepworld.de/members38/eigenmann ) . for each position we collected the statistics related to the move choices for the position from both opening books , including the rank of each move choice , the number of games in the database in which the move choice was played , and the percentage score achieved for this choice . 
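a minimal sketch of the tolerance-based ("drop-out" style) book expansion just described is given below, on a toy game whose "engine evaluation" is a deterministic hash: moves whose score is within a tolerance of the best sibling are added to the book, and shallow positions are expanded first via a depth penalty. the move generator, the evaluation, the tolerance and the budget are all placeholders; the actual methods of buro, lincke and hyatt differ in their details.

import heapq
import zlib

def toy_moves(pos):
    """toy move generator: a position is just the string of moves played; depth <= 4."""
    return [] if len(pos) >= 4 else [pos + m for m in "abc"]

def toy_eval(pos):
    """stand-in for a deep engine search: a deterministic pseudo-score in [-1, 1]."""
    return (zlib.crc32(pos.encode()) % 2001 - 1000) / 1000.0

def expand_book(root="", tolerance=0.3, depth_weight=0.2, budget=40):
    book = {root: toy_eval(root)}
    heap = [(0.0, 0, root)]                     # (priority, depth, position)
    while heap and len(book) < budget:
        _, depth, pos = heapq.heappop(heap)
        children = toy_moves(pos)
        if not children:
            continue
        scores = {child: toy_eval(child) for child in children}
        best = max(scores.values())
        for child, s in scores.items():
            if best - s <= tolerance:           # keep near-best moves too, so an
                book[child] = s                 # opponent cannot easily drop out of book
                priority = (best - s) + depth_weight * (depth + 1)
                heapq.heappush(heap, (priority, depth + 1, child))
    return book

book = expand_book()
print(f"book holds {len(book)} positions, up to {max(len(p) for p in book)} plies deep")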
the analysis we carried out from the data compared both the distribution of the move choices and the ranks of the choices implied by this distribution . the ranks of the moves made by humans and engines were compared using a nonparametric association measure we have used in previous studies , where we compared move choices of different chess engines @xcite and the ranking of search results by web search engines @xcite . the measure is a weighted version of spearman s footrule @xcite , which we call the m - measure . the distributions of move choices were compared using the jensen - shannon divergence ( jsd ) nonparametric measure @xcite , which allows us to measure the similarity between two distributions . in addition , for each position we computed the degree of overlap between move choices of humans and engines and the expected percentage score for the position , i.e. out of the total number of games played from the position what was the percentage of wins and draws . the results show a surprisingly close association between humans and machines opening books . the m - measure is over 0.75 while the jsd is just above 0.70 , on average , on a scale between 0 and 1 . it is also shown that , apart from two outliers , the m - measure and jsd are highly correlated with a correlation coefficient just above 0.65 . moreover , the degree of overlap between move choices is also just above 0.60 on average , so despite the strong association between humans and machines choice of opening moves there are also differences , although these disparate moves do not tend to be the highly ranked moves . finally , for the positions we investigated , the expected scores from white s point of view were similar , on average over 55% , for both humans and machines , which indicates a significant advantage to white . the rest of the paper is organised as follows . in section [ sec : measures ] we describe the measures we used to compare the rankings induced by the two opening books . in section [ sec : data ] we give the detail of the data collection phase . in section [ sec : analysis ] we present the data analysis carried out and interpret the results . in section [ sec : discuss ] we discuss possible extensions and applications of the comparison techniques we have used , and finally , in section [ sec : concluding ] we give our concluding remarks . we used several nonparametric measures @xcite to test the correspondence between the two opening books . to illustrate the measures consider the initial chess position and assume that only the top-10 move choices were recorded in powerbook 2005 ( pb ) and in comp2005 ( comp ) . ( in the experiment 20 moves were actually recorded in each opening book for the initial position . ) the data collected is shown in table [ table : top10 ] , where the second and fourth column indicate the rank of the move in pb and comp , respectively , while the third and fifth columns indicate the popularity ( i.e. the number of games in the database in which the move was played ) in pb and comp , respectively ; a zero entry in a column implies that no games were recorded for that move in the corresponding opening book . 12 pt .[table : top10 ] data for top-10 . [ cols="^,^,>,^,>",options="header " , ] possible extensions and applications of the comparison techniques we have presented are : 1 . applying the technique to more comprehensive test sets such as the don dailey test @xcite , which consists of 200 positions all of which are 5 moves for each player from the initial position . 
a more principled approach could also be taken by collecting positions from the encyclopedia of chess openings classification system . 2 . comparing opening books of two individuals , be they human or machine . to carry out such a comparison we need to have game databases of sufficient size , from which we can construct the respective opening books . 3 . comparing how an opening book changes over time . for example , we could compare powerbook 2005 to the newer powerbook 2006 . extend the technique to middle game and endgame positions with the aid of test sets such as the wm test of gurevich and schumacher of positions from world champion games ; the wm test can be downloaded from www.computerschach.de . this is more applicable to comparing the move choices of two available chess engines , which can display the ranking of the top - n move choices being considered , since , in general , there may be several reasonable moves from such positions and in game records we have access only to the move that was chosen . 5 . applying the similarity m - measure to tuning the weights of evaluation function features such as material balance , mobility , development , pawn structure , and king safety @xcite to those of a specific chess engine . the principle underlying such a technique is to compare , via the m - measure , the top - n move choices of the evaluation function we are training to the top - n move choices of the chess engine we are learning from , and to apply a gradient descent ( or hill climbing ) method to adjust the weights in the direction of the function we are learning from ( cf . we have compared the opening books of humans and computers using nonparametric measures . it seems that there is a strong association between the two books , as the m - measure is over 0.75 and the jsd just above 0.70 , on average . the degree of overlap of move choices is just above 0.60 , on average , so despite the correspondence there are also significant differences . moreover , for the positions we investigated , the expected scores from white s point of view were , on average over 55% , for both humans and machines , which indicates a significant advantage to white for the positions we considered . more experiments need to be carried out on different test sets covering either a wider range of opening variations or , alternatively , specialising within a small number of popular opening variations . as mentioned in section [ sec : discuss ] the method we have presented can be used to compare two individuals opening choices , be they human or machine . apart from a better understanding of the difference between human and machine players such a comparison may help detect anomalies in an opening book that could be exploited during a match . finally , the m - measure , or a refinement of it , does not rely on statistics being readily available , so could be used as a similarity measure in learning the evaluation function of an opponent .
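the two statistics used above can be sketched directly in code. the jensen-shannon divergence below is the standard base-2 version; the m-measure is only approximated here by one possible rank-weighted footrule rescaled to [0, 1], since the exact weighting is defined in the cited references and may differ. the five-move "books" and their counts are invented.

import numpy as np

def weighted_footrule_similarity(rank_a, rank_b, max_rank=None):
    """a spearman-footrule distance in which disagreements near the top of the
    rankings weigh more, rescaled to a similarity in [0, 1]; only an
    approximation of the m-measure used in the paper."""
    moves = set(rank_a) | set(rank_b)
    if max_rank is None:
        max_rank = len(moves) + 1          # rank assigned to moves missing from a book
    def r(ranking, m):
        return ranking.index(m) + 1 if m in ranking else max_rank
    dist = sum(abs(r(rank_a, m) - r(rank_b, m)) / min(r(rank_a, m), r(rank_b, m))
               for m in moves)
    worst = len(moves) * (max_rank - 1)    # crude normalisation constant
    return 1.0 - dist / worst

def jensen_shannon_divergence(p, q):
    """jsd between two discrete distributions, in bits (0 = identical, 1 = disjoint)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# made-up popularity of the same first moves in a "human" and a "computer" book
human    = {"e4": 510, "d4": 380, "Nf3": 120, "c4": 90, "g3": 15}
computer = {"e4": 450, "d4": 400, "c4": 130, "Nf3": 110, "b3": 10}
moves = sorted(set(human) | set(computer))
p = [human.get(m, 0) for m in moves]
q = [computer.get(m, 0) for m in moves]
rank_h = sorted(human, key=human.get, reverse=True)
rank_c = sorted(computer, key=computer.get, reverse=True)
print("weighted footrule similarity:", round(weighted_footrule_similarity(rank_h, rank_c), 3))
print("jensen-shannon divergence   :", round(jensen_shannon_divergence(p, q), 3))

the weight-tuning idea in item 5 of the extensions above can also be sketched: generate feature vectors for the candidate moves of some toy positions, rank them with a hidden "reference engine" weight vector, and hill-climb a second weight vector until its move ordering agrees with the reference, using a (here unweighted) footrule-style agreement score as the objective. the features, weights and positions are synthetic; this shows only the idea, not a procedure from the paper.

import random
random.seed(1)

# each candidate move in each toy position is described by a feature vector
# (material, mobility, development, pawn structure, king safety)
N_FEATURES, N_POSITIONS, N_MOVES = 5, 30, 6
positions = [[[random.uniform(-1, 1) for _ in range(N_FEATURES)]
              for _ in range(N_MOVES)] for _ in range(N_POSITIONS)]
target_w = [1.0, 0.3, 0.2, 0.25, 0.4]          # pretend this is the reference engine

def ranking(weights, pos):
    scores = [sum(w * f for w, f in zip(weights, feats)) for feats in pos]
    return sorted(range(len(pos)), key=lambda i: -scores[i])

def footrule_similarity(a, b):
    """simple rank-agreement score in [0, 1] (an unweighted footrule here)."""
    pos_b = {m: r for r, m in enumerate(b)}
    dist = sum(abs(r - pos_b[m]) for r, m in enumerate(a))
    worst = len(a) * (len(a) - 1)              # loose upper bound on the distance
    return 1 - dist / worst

reference = [ranking(target_w, p) for p in positions]

def objective(weights):
    return sum(footrule_similarity(ranking(weights, p), ref)
               for p, ref in zip(positions, reference)) / N_POSITIONS

# hill climbing: perturb one weight at a time, keep changes that improve the
# average rank agreement with the reference engine's move ordering
w, step = [1.0] * N_FEATURES, 0.2
for it in range(400):
    trial = list(w)
    trial[random.randrange(N_FEATURES)] += random.choice([-step, step])
    if objective(trial) >= objective(w):
        w = trial
    if it % 100 == 99:
        step *= 0.5
print("learned weights:", [round(x, 2) for x in w], "agreement:", round(objective(w), 3))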
the opening book is an important component of a chess engine , and thus computer chess programmers have been developing automated methods to improve the quality of their books . for chess , which has a very rich opening theory , large databases of high - quality games can be used as the basis of an opening book , from which statistics relating to move choices from given positions can be collected . in order to find out whether the opening books used by modern chess engines in machine versus machine competitions are `` comparable '' to those used by chess players in human versus human competitions , we carried out an analysis on 26 test positions using statistics from two opening books , one compiled from humans games and the other from machines games . our analysis , using several nonparametric measures , shows that , overall , there is a strong association between humans and machines choices of opening moves when using a book to guide their choices .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Social Security Solvency Act of 2004''. SEC. 2. ADJUSTMENT TO RATE OF INCREASE IN CONTRIBUTION AND BENEFIT BASE. Section 230(b)(2) of the Social Security Act (42 U.S.C. 430(b)(2)) is amended to read as follows: ``(2) the sum of-- ``(A) the ratio (expressed as a percentage) of (i) the national average wage index (as defined in section 209(k)(1)) for the calendar year before the calendar year in which the determination under subsection (a) is made to (ii) the national average wage index (as so defined) for 1992, plus ``(B) for purposes of determining the contribution and benefit base effective with respect to remuneration paid during calendar years after 2005 and before 2037 and self-employment income derived in taxable years beginning with or during such calendar years, 2 percentage points,''. SEC. 3. APPLICATION OF THE CHAINED CONSUMER PRICE INDEX FOR ALL URBAN CONSUMERS IN DETERMINING COST-OF-LIVING INCREASES IN BENEFITS. (a) In General.--Section 215(i)(1) of the Social Security Act (42 U.S.C. 425(i)(1)) is amended-- (1) in subparagraph (G), by striking the period and inserting ``; and''; and (2) by adding at the end the following new subparagraph: ``(H) the term `Consumer Price Index' means the chained consumer price index for all urban consumers, published by the Bureau of Labor Statistics.''. (b) Effective Date.--The amendments made by this section shall apply with respect to increases described in section 215(i)(2)(A) of the Social Security Act effective with the month of December of calendar years after 2005. SEC. 4. RETENTION OF ESTATE TAX; TRANSFERS TO SOCIAL SECURITY TRUST FUND. (a) Exclusion Equivalent Made Permanent at 2009 Amount.--The item relating to 2009 in the table in section 2010(c) of the Internal Revenue Code of 1986 (relating to applicable credit amount) is amended by striking all that follows ``the applicable exclusion amount'' and inserting ``. For purposes of the preceding sentence, the applicable exclusion amount is $3,500,000.''. (b) Conforming Amendments.-- (1) Subtitles A and E of title V of the Economic Growth and Tax Relief Reconciliation Act of 2001, and the amendments made by such subtitles, are hereby repealed; and the Internal Revenue Code of 1986 shall be applied as if such subtitles, and amendments, had never been enacted. (2)(A) Subsection (a) of section 901 of the Economic Growth and Tax Relief Reconciliation Act of 2001 is amended by striking ``this Act'' and all that follows and inserting ``this Act (other than title V) shall not apply to taxable, plan, or limitation years beginning after December 31, 2010.''. (B) Subsection (b) of such section 901 is amended by striking ``, estates, gifts, and transfers''. (3) Subsections (d) and (e) of section 511 of the Economic Growth and Tax Relief Reconciliation Act of 2001, and the amendments made by such subsections, are hereby repealed; and the Internal Revenue Code of 1986 shall be applied as if such subsections, and amendments, had never been enacted. (c) Transfers to Trust Fund.-- (1) In general.--There are hereby appropriated to the Federal Old-Age and Survivors Insurance Trust Fund amounts equivalent to the taxes received in the Treasury under chapters 11 and 13 of the Internal Revenue Code of 1986 (relating to estate tax and tax on generation-skipping transfers, respectively). 
(2) Transfers.--The amounts appropriated by paragraph (1) shall be transferred from time to time (but not less frequently than quarterly) from the general fund of the Treasury on the basis of estimates made by the Secretary of the Treasury of the amounts referred to in such paragraph. Any such quarterly payment shall be made on the first day of such quarter. Proper adjustments shall be made in the amounts subsequently transferred to the extent prior estimates were in excess of or less than the amounts required to be transferred. (3) Reports.--The Secretary of the Treasury shall submit annual reports to the Congress and to the Commissioner of Social Security regarding-- (A) the transfers made under this subsection during the year, and the methodology used in determining the amount of such transfers, and (B) the anticipated operation of this subsection during the next 5 years. SEC. 5. FUTURE ADJUSTMENT OF EMPLOYMENT TAX RATES TO KEEP SOCIAL SECURITY TRUST FUNDS IN BALANCE. (a) Statement of Projected Insolvency in Annual Report of Board of Trustees.--Section 201(c) of the Social Security Act (42 U.S.C. 401(c)) is amended, in the second sentence following clause (5), by striking ``Trustees).'' and inserting ``Trustees), the Board's best estimate of the date as of which, using intermediate assumptions, the Trust Funds will, with no change in rates of tax under chapters 2 and 21 of the Internal Revenue Code of 1986, first have assets insufficient to pay scheduled benefits in full on a timely basis, and, if such date is within 2 years after the date of the filing of the report, the minimum increase necessary in such rates of tax (using such assumptions and assuming pro rata adjustments in the taxes applicable under sections 1401(a), 3101(a), and 3111(a) of such Code) necessary to take effect (effective for the calendar year and applicable taxable years in which such date occurs) to preclude such an insufficiency (rounded, if not a multiple of 0.01 percent, to the next higher multiple of 0.01 percent).''. (b) Employee Contribution.--Subsection (a) of section 3101 of the Internal Revenue Code of 1986 (relating to rate of tax for old-age, survivors, and disability insurance) is amended by adding at the end the following flush sentence: ``In the case of the year in which occurs the date determined under section 201(c) of the Social Security Act to be the date as of which the Trust Funds will first have assets insufficient to pay scheduled benefits in full on a timely basis, the rate in effect under the preceding sentence for such year and each year thereafter (without regard for this sentence) shall be increased to the extent determined under section 201(c) of such Act to be necessary to preclude such an insufficiency. Such increase shall be prescribed by the Secretary.''. (c) Employer Contribution.--Subsection (a) of section 3111 of such Code (relating to rate of tax for old-age, survivors, and disability insurance) is amended by adding at the end the following flush sentence: ``In the case of the year in which occurs the date determined under section 201(c) of the Social Security Act to be the date as of which the Trust Funds will first have assets insufficient to pay scheduled benefits in full on a timely basis, the rate in effect under the preceding sentence for such year and each year thereafter (without regard for this sentence) shall be increased to the extent determined under section 201(c) of such Act to be necessary to preclude such an insufficiency. 
Such increase shall be prescribed by the Secretary.''. (d) Self-Employment Contribution.--Subsection (a) of section 1401 of such Code (relating to rate of tax for old-age, survivors, and disability insurance) is amended by adding at the end the following flush sentence: ``In the case of the year in which occurs the date determined under section 201(c) of the Social Security Act to be the date as of which the Trust Funds will first have assets insufficient to pay scheduled benefits in full on a timely basis, the rate in effect under the preceding sentence for such year and each year thereafter (without regard for this sentence) shall be increased to the extent determined under section 201(c) of such Act to be necessary to preclude such an insufficiency. Such increase shall be prescribed by the Secretary.''.
Social Security Solvency Act of 2004 - Amends title II (Old Age, Survivors, and Disability Insurance) of the Social Security Act (SSA) to: (1) revise requirements for calculating the contribution and benefit base with respect to remuneration paid (including self-employment income derived) during the calendar years between 2005 and 2037; and (2) apply the chained consumer price index for all urban consumers in determining cost-of-living increases in benefits. Amends the Internal Revenue Code, with respect to the unified credit against the estate tax, to set the applicable exclusion amount permanently at $3.5 million (the amount currently set for 2009). Makes appropriations (and requires transfers from the General Fund) at least quarterly to the Federal Old-Age and Survivors Insurance Trust Fund equivalent to estate taxes and taxes on generation-skipping transfers. Amends SSA title II to require the statement of projected insolvency in the annual report of the Board of Trustees of the Social Security Trust Funds to include: (1) the Board's best estimate of the date as of which the Trust Funds will, with no change in employment tax rates, first have assets insufficient to pay scheduled benefits in full on a timely basis; and (2) if such date is within two years after the filing of the report, the minimum increase necessary in such tax rates necessary to preclude such an insufficiency period. Amends the Internal Revenue Code to provide for future adjustment of employment tax rates to keep the Social Security Trust Funds in balance.
polynomial optimization problem ( pop ) is a problem for minimizing a polynomial objective function over a basic closed semialgebraic set defined by polynomial inequalities and equalities : @xmath0 where and @xmath1 are real polynomial functions of @xmath2 . pop represents various kinds of optimization problems and can be solved efficiently under moderate assumptions by semidefinite programming ( sdp ) relaxations developed by several authors , in particular lasserre @xcite and parrilo @xcite ; see , for recent developments with equality constraints @xcite and references therein . let @xmath3 $ ] denote the polynomial ring @xmath4 $ ] and @xmath3_k$ ] be the set of polynomials with degree up to @xmath5 . the method constructs sequences @xmath6 of optimization problems and their dual problems @xmath7 from pop ( [ pop ] ) ; @xmath8_k\to { \operatorname{\mathbb{r } } } , \text { linear};\\ & l(1)=1 , l(m_k)\subset [ 0,\infty)\\ ( { \mathcal{d}}_k)\quad \text{maximize } & q\\ \text{subject to } & f - q\in m_k,\ q\in { \operatorname{\mathbb{r}}},\end{aligned}\ ] ] where @xmath5 is an integer greater than or equal to @xmath9 , and @xmath10 is defined from the constraint system of pop ( [ pop ] ) : @xmath11 ^ 2 , r_j\in { \operatorname{\mathbb{r}}}[x ] , \deg(\sigma_ig_i)\leq k , \deg(r_jh_j)\leq k\right\}.\ ] ] here @xmath12 and @xmath13 ^ 2 $ ] is the set of the sum of square polynomials . the union @xmath14 of all @xmath10 is called the quadratic module generated by @xmath15 and @xmath16 . let @xmath17 and @xmath18 be the optimal values of @xmath19 and @xmath20 , respectively . lasserre @xcite formulated them as sdp problems , and showed that the sequences @xmath21 and @xmath22 converge to the optimal value of the given pop under moderate assumptions . in addition , he showed that if the feasible region has nonempty interior ( no equality constraints ) , the equality @xmath23 holds for any @xmath24 ; sdp has no duality gap . on the other hand , marshall @xcite focused on the quadratic module @xmath14 from pop ( [ pop ] ) instead of the basic semialgebraic set , and then proved that @xmath23 if @xmath25 holds , where @xmath26 ; the feasible region of pop ( [ pop ] ) and @xmath27\mid p(x)=0 , \forall x\in k\}$ ] ; the vanishing ideal of @xmath28 . however , it is difficult to check this assumption for a given pop . in this paper , we study the method through investigations on vanishing ideals of general semialgebraic sets in @xmath29 . let @xmath30 for an ideal @xmath31 $ ] . when we deal with a polynomial ring over @xmath32 , the hilbert s nullstellensatz describes a relationship between varieties and ideals . on the other hand , for an ideal @xmath33 in @xmath3 $ ] , the real nullstellensatz says that @xmath34 if @xmath33 is real ; see section 3 for the details . we give elementary proofs for some criteria on reality of ideals ( theorem [ realcond ] and theorem [ condtopdim ] ) . then we discuss equivalent conditions for the equality @xmath35 , where @xmath36 is a semialgebraic set ( theorem [ main_condition ] and [ condition ] ) . both conditions @xmath34 and @xmath35 are verifiable and closely related to duality of sdp ( proposition [ duality ] ) . in addition , we propose an algorithm to calculate generators of @xmath37 , using some techniques of the cylindrical algebraic decomposition ( cad ) after collins @xcite . applying these , one can equivalently modify any pop so that associated semidefinite programming relaxation problems have no duality gap . 
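to make the dual problem (d_k) concrete, here is a toy pop with a hand-checked membership certificate, verified symbolically: minimize f(x) = x over the set where g(x) = x(1 - x) >= 0, i.e. the interval [0, 1]. the optimal value 0 is certified at relaxation order k = 2 because f - 0 = x**2 + 1*g(x) lies in m_2, with sigma_0 = x**2 and sigma_1 = 1 sums of squares of admissible degree. the example and the symbol names are ours, not taken from the paper.

import sympy as sp

x = sp.symbols('x')

f = x                       # objective of the toy pop
g = x * (1 - x)             # single inequality constraint g >= 0, i.e. x in [0, 1]
q = 0                       # candidate lower bound (also the true minimum here)

sigma0 = x**2               # sum of squares, degree 2
sigma1 = sp.Integer(1)      # sum of squares, degree 0

# membership certificate f - q = sigma0 + sigma1*g in the truncated module m_2
assert sp.expand(sigma0 + sigma1 * g - (f - q)) == 0
print("certificate checks out: x =", sp.expand(sigma0 + sigma1 * g))
print("hence q = 0 is feasible for (d_2) and therefore a lower bound on the pop")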
no duality gap of sdp is an important property theoretically and practically . for example , it is one of fundamental conditions for convergence of interior point methods , or it is used to confirm optimality of a solution . this paper is organized as follows : in section 2 , we present a relationship between the vanishing ideal of @xmath28 and the duality of sdp for pop ( [ pop ] ) . in section 3 , elementary proofs are given for some criteria on reality of ideals and equivalent conditions are obtained for the equality @xmath35 . algorithms for deciding reality of ideals and for calculating generators of @xmath38 are given in section 4 . we discuss the duality of sdp relaxation problems for pop ( [ pop ] ) . we rewrite pop ( [ pop ] ) as follows . @xmath39 where @xmath40 and @xmath41 is the ideal generated by @xmath42 in @xmath3 $ ] . the following proposition is a sufficient condition for the duality . [ duality ] suppose that @xmath43 . then @xmath23 . moreover if there is a feasible point in @xmath44 , it has an optimal solution . if @xmath10 is closed in the euclidean topology , the similar arguments in ( * ? ? ? * corollary 21 ) ensure @xmath45 and existence of an optimal solution . closedness of @xmath10 is shown in the later paragraphs of this section ( theorem [ closed ] ) . it should be noted that marshall @xcite has shown a similar result . under the assumption @xmath25 , he showed closedness of @xmath46_k)$ ] . although our assumption is slightly stronger than his , it can be verified directly as given below . the following theorem gives one of verifiable conditions for our assumption . the proof is given in section 3 and a decision algorithm for the condition ( 2 ) is given in section 4 . [ main_condition ] let @xmath36 be a semialgebraic set in @xmath29 , @xmath33 be an ideal in @xmath3 $ ] and let @xmath47 be the prime decomposition of @xmath33 . the following conditions are equivalent . 1 . @xmath35 ; 2 . for any @xmath48 ( @xmath49 ) , @xmath50 holds , where @xmath51 is the topological dimension of @xmath52 . here the dimension @xmath53 of an ideal @xmath33 is defined in section 3 ( @xmath54 = \dim^{{\rm top } } \emptyset = -1 $ ] ) . theorem [ main_condition ] becomes simpler if @xmath55 is nonempty , where @xmath56 is the interior of @xmath36 . the following is a corollary of theorem 3.13 in section 3 . [ duality_of_prime ] suppose that @xmath41 is prime . if there exists a feasible point @xmath57 for pop @xmath58 such that @xmath59 and the rank of the jacobian matrix @xmath60 is equal to @xmath61 , then @xmath23 . we consider the following pop . @xmath62 where @xmath63 $ ] , @xmath64 and @xmath65 . we assume that @xmath66 is nonempty . then it follows from corollary [ duality_of_prime ] that sdp relaxation problems for pop ( [ pop_ex ] ) have no duality gap . indeed , it is clear that @xmath67 is prime . in addition , @xmath68=n-\dim i$ ] . to show closedness of @xmath10 , we start with the following technical lemma . for @xmath69 , let @xmath70 . for @xmath71 , let @xmath72 , @xmath73 , and @xmath74_{d_i}^{\lambda(d_i)}\times \prod_{j=1}^m { \operatorname{\mathbb{r}}}[x]_{e_j}$ ] . we define a mapping @xmath75 by @xmath76 it is known that @xmath77 is surjective ( the gram matrix description of sums of squares ) ; see for instance @xcite . [ lemma_assump ] under the same assumption of proposition [ duality ] , @xmath78 if and only if @xmath79 belongs to the quotient of ideals @xmath80\mid sg_i\in i\}$ ] for all @xmath81 and @xmath82 . . 
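the rank condition appearing in corollary [duality_of_prime] is easy to check in practice. the sketch below takes two hypothetical equality constraints in three variables (a sphere intersected with a plane, not the paper's example), verifies a candidate feasible point, and computes the rank of the jacobian there with sympy; what that rank has to equal in the corollary involves the masked quantity @xmath61, so the printed value is only the raw ingredient of the test.

import sympy as sp

x, y, z = sp.symbols('x y z')

# hypothetical equality constraints h_j = 0
h = [x**2 + y**2 + z**2 - 1,
     x + y + z - 1]

J = sp.Matrix([[sp.diff(hj, v) for v in (x, y, z)] for hj in h])

point = {x: 1, y: 0, z: 0}                      # lies on both hypersurfaces
assert all(hj.subs(point) == 0 for hj in h)

print("jacobian at the point:")
print(J.subs(point))
print("rank at the point:", J.subs(point).rank())   # 2 here: independent constraints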
then @xmath83 belongs to @xmath33 and hence vanishes on @xmath28 . since @xmath84 , we have each @xmath85 on @xmath28 and thus @xmath86 . therefore the assumption implies @xmath87 . the converse is obvious . let @xmath3_k$ ] be endowed with euclidean topology . the following theorem is a slight modification of theorem 3.1 of marshall @xcite . [ closed ] under the same assumption of proposition [ duality ] , @xmath10 is closed . let @xmath88_{d_i}\mid sg_i\in i\}$ ] and @xmath89_{e_j}$ ] . then @xmath90 and @xmath91 are closed subspaces of vector spaces @xmath3_{d_i}$ ] and @xmath3_{e_j}$ ] respectively . we define @xmath92 , where @xmath93_{d_i}/j_{d_i}$ ] for @xmath94 , @xmath82 and @xmath95_{e_j}/i_{e_j}$ ] for @xmath96 . then @xmath97 is a normed space . let @xmath98 be the induced mapping by @xmath77 . then we have @xmath99 is surjective , @xmath100 for @xmath101 , and @xmath102 by lemma [ lemma_assump ] . hence we have @xmath103 , where @xmath104 is the image of the unit sphere in @xmath97 under @xmath99 . in addition , @xmath104 is compact and does not contain zero element . now we suppose @xmath105 and @xmath106 in @xmath3_k$ ] . let @xmath107 and @xmath108 be the cosets of @xmath109 and @xmath110 respectively . then there exist @xmath111 and @xmath112 such that @xmath113 . by compactness of @xmath104 , we may assume @xmath114 converges to some element @xmath115 . then the limit of @xmath116 exists , since @xmath117 converges to @xmath118 . therefore we have @xmath119 and hence @xmath120 . the first main result of this section is theorem [ realcond ] , which is called the simple point criterion for reality of ideals and has already been proved by m. marshall @xcite . in this section we give an elementary proof of this theorem and introduce another equivalent condition ( theorem [ condtopdim ] ) . using these , we also obtain some conditions equivalent to @xmath121 or to @xmath35 for a semialgebraic set @xmath36 ( theorem [ condition ] and theorem [ main_condition ] ) . let @xmath122 and @xmath123 or @xmath32 . for an ideal @xmath33 of @xmath124 $ ] , @xmath125 denotes the set of all zero points of @xmath33 in @xmath126 . for a subset @xmath104 of @xmath126 , @xmath127 denotes the set of all polynomials witch vanish on @xmath104 . a semialgebraic set in @xmath29 is a subset of the form @xmath128 where @xmath129 $ ] and @xmath130 is either @xmath131 , @xmath132 or @xmath133 . for an ideal @xmath33 of @xmath124 $ ] , the dimension of @xmath33 , @xmath53 , is the transcendence degree of @xmath33 . if @xmath33 is @xmath124 $ ] itself , @xmath134 $ ] is defined as @xmath135 . if @xmath33 is prime , @xmath53 coincides with the depth of @xmath33 ( krull dimension of @xmath124/i$ ] ) . for an ideal @xmath33 of @xmath124 $ ] with primary decomposition @xmath136 , @xmath53 is @xmath137 . the dimension does not depend on extensions of the coefficient field . see @xcite for more details . finally , the rank of an ideal @xmath138 is the maximal rank of the jacobian @xmath139 in @xmath125 . the topological dimension @xmath140 of @xmath125 is the maximal dimension as manifolds . for a polynomial @xmath141 $ ] , @xmath142 or @xmath143 denotes @xmath144 . note that @xmath145 for @xmath146 . the results and the proofs of this section are still valid if @xmath147 and @xmath32 are replaced by real closed field and its extension with @xmath148 respectively . an ideal @xmath33 in @xmath3 $ ] is called real if the following equivalent conditions are satisfied . 1 . @xmath34 ; 2 . 
for any integer @xmath149 and any @xmath150 $ ] , the equation @xmath151 implies @xmath152 for all @xmath153 . we give the equivalent condition for an ideal to be real . the following proposition is well known ( see lemma 2.5 of @xcite for example ) . [ decomposition ] let @xmath154 be a primary decomposition of an ideal @xmath155 $ ] . then @xmath33 is real if and only if each @xmath156 is prime and real . for a while , we assume that ideals are prime , and investigate the reality . [ theorem 1.11 of @xcite][iprime ] if a prime ideal @xmath33 in @xmath3 $ ] is real , then @xmath157 i:=\{ph\mid p\in { \operatorname{\mathbb{c}}}[x],h\in i\}$ ] is prime in @xmath158 $ ] . by assumption , @xmath159 $ ] . suppose @xmath160 $ ] to be prime in @xmath3 $ ] and @xmath161 not to be prime in @xmath158 $ ] . there exist @xmath162 $ ] such that @xmath163 and @xmath164),\end{aligned}\ ] ] and thus , @xmath165 these equations yield that @xmath166 are contained in @xmath33 . since @xmath167 , both of @xmath168 and @xmath169 are not in @xmath33 . hence @xmath170 belongs to @xmath33 . by reality of @xmath33 , both @xmath171 and @xmath172 belong to @xmath33 . this implies that @xmath173 belongs to @xmath161 , which is a contradiction . [ df3.4 ] if a prime ideal @xmath33 in @xmath3 $ ] is real , then @xmath174 . fix the generators @xmath175 of @xmath33 . from lemma [ iprime ] , @xmath161 is prime in @xmath158 $ ] , and thus @xmath176 holds , because @xmath32 is algebraically closed . suppose @xmath177 on @xmath125 and let @xmath178 s be the sub - determinants of jacobian of size @xmath179 . from the assumption , there exists @xmath153 such that @xmath178 is identically zero on @xmath125 but is not identically zero on @xmath180 . such @xmath178 does not belong to @xmath161 , thus @xmath181 , which contradicts reality of @xmath33 . the following lemma can be shown similarly to the proof of lemma [ iprime ] , [ gg * ] if @xmath33 is a prime ideal in @xmath3 $ ] and @xmath157 i$ ] is not prime in @xmath158 $ ] . then , there exists an irreducible polynomial @xmath182 in @xmath158\setminus i ' $ ] such that @xmath183 . by assumption , @xmath159 $ ] . suppose @xmath160 $ ] to be prime and @xmath161 not to be prime . similar to the proof of lemma [ iprime ] , there exist @xmath184 $ ] such that @xmath185 and @xmath186 . . then @xmath188\setminus i'$ ] and @xmath189 . if @xmath190 is factorized as @xmath191 , then @xmath192 and @xmath193 . since @xmath33 is prime , @xmath194 or @xmath195 belongs to @xmath33 reset @xmath190 as suitable one . \(i ) the ideal @xmath196 is decomposed as @xmath197 . here , @xmath198 is prime , while @xmath161 is decomposed as @xmath199 , thus @xmath33 is not real , neither is @xmath200 . + ( ii ) the ideal @xmath201 is prime and @xmath161 is also prime . however , since the rank of @xmath33 is zero , @xmath33 is not real . + ( iii ) the ideal @xmath202 is prime in @xmath3 $ ] , while @xmath161 is decomposed as @xmath203 , thus @xmath33 is not real . this example is essential for the proof of the following proposition . the following proposition is essentially a special case of theorem 9.3 of @xcite , which is related to going - up theorem . we give an elementary and constitutive proof . [ ig ] if @xmath33 is a prime ideal in @xmath3 $ ] and @xmath157 $ ] is not prime in @xmath158 $ ] . 
then , there exist irreducible polynomials @xmath204\setminus i ' $ ] such that @xmath205 , @xmath206 , @xmath207 belong to @xmath33 and @xmath208 ( denoted as @xmath209 ) is the prime decomposition of @xmath161 . from lemma [ gg * ] , there exists an irreducible polynomial @xmath210\setminus i ' $ ] s.t . @xmath211 and @xmath212 . if we set @xmath213 , since @xmath33 is prime , we have @xmath214\setminus i$ ] . we decompose @xmath161 inductively with respect to @xmath5 . suppose @xmath161 is decomposed as @xmath215 ( @xmath216 ) as the assertion . if we set @xmath217 , since @xmath33 is prime , @xmath218\setminus i$ ] . if both @xmath219 and @xmath220 are prime , the assertion follows . suppose @xmath219 is not prime ( which implies @xmath220 is not prime either ) . there exist @xmath221\setminus i_g'$ ] s.t . we have @xmath223 = i,\ ] ] @xmath224 \mbox { and } h_{k+1}\ol{h_{k+1 } } \in { \operatorname{\mathbb{r}}}[x].\ ] ] since @xmath33 is prime , @xmath225 or @xmath226 belongs to @xmath33 . we can assume @xmath227 . similar to the proof of lemma [ gg * ] , we can also assume @xmath228 is irreducible . set @xmath229 ( @xmath230\setminus i$ ] ) . then we have , for @xmath231 , @xmath232 when @xmath233 for some @xmath153 , we have @xmath234 . thus , @xmath235 or @xmath236 is in @xmath33 . since @xmath33 is prime , the former implies @xmath237 and the latter implies @xmath238 , each of which is a contradiction . similarly , @xmath239 yields a contradiction . suppose @xmath240 for all @xmath241 . @xmath161 is decomposed as @xmath242 where @xmath243 we show @xmath244 , i.e. @xmath245 belong to @xmath161 . this means @xmath246 belong to @xmath161 , which is obvious by the assumption . similarly , the assumption that @xmath247 for all @xmath231 yields @xmath248 . exchanging @xmath228 and @xmath249 , we have @xmath250 since @xmath251 is an ascending chain , this procedure terminates in finite steps . finally , we show that if @xmath161 is primarily decomposed as , this is the prime decomposition . suppose @xmath252 for some @xmath253 and @xmath254 . then , @xmath255=i.$ ] since @xmath33 is prime , @xmath256 . by construction , we have @xmath257 . let @xmath33 be a prime ideal in @xmath3 $ ] , then @xmath258 holds for @xmath157 i$ ] . moreover if @xmath161 is not prime in @xmath158 $ ] , then @xmath259 and @xmath260 hold . if @xmath161 is prime , the assertion is obvious . we assume @xmath161 not to be prime . we show the latter assertion , which leads the former immediately . by the general theory , @xmath261 holds . now , by symmetry , it is clear that @xmath262 . next , we show the equation for the rank . we can assume @xmath263 . by the symmetry of @xmath264 it implies @xmath265 . set @xmath217 . if @xmath266 , then @xmath267=i$ ] , which is a contradiction . since @xmath268 and @xmath219 is prime , we have @xmath269 for all @xmath153 , and hence @xmath270 using @xmath271 , we have @xmath272 the opposite inequality @xmath273 is obvious . the following lemma is very elementary , but the authors could not find it in any literature . let @xmath274 and @xmath275 the prime decomposition of an ideal @xmath33 in @xmath158 $ ] . then the rank of @xmath33 on @xmath276 is less than @xmath61 . without loss of generality we can assume @xmath277 . suppose @xmath278 for some @xmath279 . then , there exist @xmath280 such that @xmath281 and from the implicit function theorem , the topological dimension of @xmath282 in a neighborhood @xmath283 of @xmath284 is @xmath169 . 
since @xmath285 is prime in @xmath158 $ ] , @xmath286 is also @xmath169 , which implies that a polynomial @xmath287 is identically zero on @xmath288 , unless otherwise @xmath289 . from the inclusion @xmath290 , @xmath190 is also identically zero on @xmath291 for all @xmath231 . since @xmath292 is prime , @xmath293 on @xmath294 , which implies @xmath295 . thus , @xmath296 , which implies @xmath297 ; contradicts @xmath274 . [ notprime ] if @xmath33 is a prime ideal in @xmath3 $ ] and @xmath157 i$ ] is not prime in @xmath158 $ ] . then @xmath298 by proposition [ ig ] , @xmath161 is decomposed as . by the above lemma , it is enough to show @xmath299 . since @xmath300 for all @xmath153 , @xmath301 implies @xmath302 and hence @xmath303 . [ df3.9 ] for a prime ideal @xmath33 in @xmath124 $ ] , @xmath33 is real if and only if @xmath304 . [ topdim ] assume that @xmath33 is prime in @xmath3 $ ] and @xmath161 is prime in @xmath158 $ ] , then @xmath305 implies @xmath306 . set @xmath307 , then @xmath308 is @xmath169 . let @xmath309 be a point in @xmath125 such that @xmath310 let @xmath311 satisfy @xmath312 . by a suitable reordering of the variables , the equations @xmath313 can be solved for the first @xmath314 variables as functions of the last @xmath169 variables in a neighborhood of @xmath309 . let @xmath315 be such solution functions . we write @xmath316 , @xmath317 and @xmath318 for @xmath319 $ ] . then @xmath320 holds for all @xmath321 in a neighborhood of @xmath322 . let @xmath323 belong to @xmath161 , then we have @xmath324 for all @xmath321 , unless otherwise the dimension of the manifold @xmath180 is less than @xmath169 , which is a contradiction . hence @xmath321 is a coordinate of the manifold @xmath180 in a neighborhood of @xmath309 . moreover , since @xmath325 $ ] , @xmath321 is also a coordinate of the real manifold @xmath125 . [ realcond ] let @xmath33 be a prime ideal in @xmath3 $ ] + i ) if @xmath33 is real , then @xmath161 is prime and @xmath305 . + ii ) if @xmath305 , then @xmath33 is real . + i ) it follows from lemma [ iprime ] and lemma [ df3.4 ] . + ii ) from proposition [ notprime ] , @xmath161 is prime . we set @xmath326 and notations as in the proof of lemma [ topdim ] . suppose @xmath327 on @xmath282 . the derivatives of @xmath328 are zero , so @xmath329 and hence @xmath330,\end{aligned}\ ] ] which implies @xmath331 . from lemma [ df3.9 ] , @xmath33 is real . [ condtopdim ] a prime ideal @xmath33 in @xmath3 $ ] is real if and only if @xmath332 . the `` only if '' part follows from theorem [ realcond ] i ) and lemma [ topdim ] . we show the `` if '' part . suppose @xmath333 . if @xmath157 $ ] is not prime , then @xmath125 is included in @xmath334 as in the proof of proposition [ notprime ] , and hence we have @xmath335 . thus @xmath161 is prime in @xmath158 $ ] . we show that if a polynomial @xmath336 $ ] is identically zero on @xmath125 , then @xmath337 on @xmath180 ; which implies @xmath338=i$ ] i.e. @xmath33 is real . let @xmath175 generate @xmath33 and denote by @xmath339 the field obtained by the extension of @xmath340 by the coefficients of @xmath341 . there exists a point @xmath284 in @xmath342 whose transcendence degree is @xmath53 on @xmath339 . such a point is a generic point of @xmath180 on @xmath339 , from theorem 2 of chapter iv of @xcite . thus @xmath323 is identically zero on @xmath180 . \(i ) the ideal @xmath343 is decomposed as @xmath344 . @xmath345 implies @xmath346 is real prime , and similarly @xmath347 is real prime . hence @xmath33 is real . 
+ ( ii ) the ideal @xmath348 is decomposed as @xmath349 , where @xmath350 . for each , @xmath351 implies @xmath200 is real prime , and @xmath352 implies @xmath353 is real prime . hence @xmath33 is real . now we return to the semialgebraic set @xmath36 . we recall that @xmath354 is the interior of a semialgebraic set @xmath36 in @xmath29 . [ condition ] suppose that @xmath355 is nonempty and that @xmath47 is the prime decomposition of @xmath33 . then the following conditions are equivalent . + ( i ) @xmath356 ; + ( ii ) for any @xmath48 ( @xmath49 ) , there exists @xmath357 in @xmath358 such that @xmath359 ; + ( iii ) for any @xmath48 ( @xmath49 ) , @xmath360 holds . \(i ) @xmath361 ( ii ) suppose that there exists @xmath48 ( @xmath362 ) such that @xmath363 for any @xmath364 . from theorem [ realcond ] ( i ) , there exists @xmath357 in @xmath365 such that @xmath366 , and hence the set @xmath367 is a proper subvarietiy of @xmath365 . hence , there exists a polynomial @xmath368 identically zero on @xmath358 and not identically zero on @xmath365 . . set @xmath336 $ ] as @xmath370 , where @xmath371 for @xmath372 . then @xmath323 is identically zero on @xmath373 and @xmath374 , which is a contradiction . + ( ii ) @xmath361 ( iii ) suppose that there exists @xmath357 in @xmath358 such that @xmath359 . from the proof of theorem [ realcond ] , there exists a neighborhood @xmath375 of @xmath357 in @xmath29 such that @xmath376 . + ( iii ) @xmath361 ( i ) it is clear that @xmath377 implies @xmath378 . the assertion follows from theorem [ condtopdim ] and proposition [ decomposition ] . we give a proof of theorem [ main_condition ] . [ proof of theorem [ main_condition ] ] ( i ) @xmath361 ( ii ) suppose that @xmath379 for some @xmath48 . from theorem [ condtopdim ] , @xmath380 . thus , @xmath381 is included in some proper subvarieties of @xmath365 . hence , there exists a polynomial @xmath368 such that @xmath382 on @xmath381 and @xmath368 is not identically zero on @xmath365 . thus @xmath383 , which yields a contradiction similarly to the proof of theorem [ condition ] ( i ) @xmath361 ( ii ) . + ( ii ) @xmath361 ( i)replace @xmath358 by @xmath381 in the proof of theorem [ condition ] ( iii ) @xmath361 ( i ) . the condition obtained by replacing @xmath56 by @xmath36 in ( ii ) of theorem [ condition ] : `` for any @xmath48 ( @xmath49 ) , there exists @xmath357 in @xmath381 such that @xmath359 . '' does not guarantee ( i ) of theorem [ main_condition ] . indeed , set @xmath384 and @xmath385 , then @xmath386 , the origin o is in @xmath125 and @xmath387 . however @xmath388 is not included in @xmath33 . in this section we propose an algorithm to calculate generators of ideal @xmath389 . applying it , one can obtain an equivalent problem to pop ( [ pop ] ) such that the resulting problem satisfies condition ( ii ) of theorem [ main_condition ] . the algorithm uses a part of the cylindrical algebraic decomposition ( cad ) after g. e. collins ( see @xcite and references therein for basic literature ) . the following algorithm is for detecting whether the condition of @xmath390 holds or not . note that if this condition holds , @xmath33 itself should be real , because @xmath391 . we omit details of the cad procedures , which are illustrated in the examples below . 1 . compute the primary decomposition of @xmath33 , @xmath136 . if @xmath158i_t$ ] is not prime in @xmath158 $ ] for some @xmath48 , then @xmath33 is not real , otherwise , go to ( 2 ) . 2 . 
for each @xmath395 choose coordinates @xmath396 , where @xmath397 , such that @xmath398 . here @xmath399 denotes the field extended by @xmath400 from @xmath32 . 2 . let @xmath401 and @xmath402 denote the set of polynomials @xmath403 and @xmath404 respectively . execute the projection of cad for the polynomial sets @xmath401 and @xmath402 from @xmath29 to @xmath405 , where @xmath406 . for @xmath407 let @xmath408 and @xmath409 denote the set of irreducible factors of resulting polynomials on @xmath410 from @xmath401 and from @xmath411 respectively . also let @xmath412 denote the set of cells in @xmath410 from @xmath402 . 3 . for any open cell @xmath413 , take a sample point @xmath414 . lift @xmath415 to the point where some polynomial @xmath416 is zero , i.e. @xmath417 . denote @xmath418 ( @xmath419 can be more than one point ) . 5 . iterate the above step to the top level . condition ( ii ) of theorem [ main_condition ] holds if and only if there exists a point @xmath415 such that the point can be lifted to a point @xmath420 belonging to @xmath36 . set @xmath424 , @xmath425 , @xmath426 and @xmath427 . applying algorithm 4.1 , we have the following : + ( 1 ) @xmath428 is prime and @xmath158i$ ] is also prime . + ( 2 ) the dimension of @xmath33 is 1 and @xmath429 . the set of irreducible factors of the resultants and sub - resultants for @xmath33 with respect to @xmath430 is @xmath431 . that for @xmath33 and @xmath432 is @xmath433 ( fig . [ ex4.1 ] ) . the irreducible factors of the resultants and sub - resultants for @xmath434 with respect to @xmath435 is approximately @xmath436 let @xmath437 as a sample point of the interval @xmath438 . by lifting it to @xmath439-plane by @xmath440 , we obtain two points @xmath441 . further , by lifting it to @xmath442-plane by @xmath33 , we obtain only one point @xmath443 , which satisfies @xmath444 . thus , @xmath390 holds . 1 . 1 . compute the primary decomposition of @xmath33 , @xmath136 . 2 . if @xmath156 is not prime in @xmath3 $ ] , replace @xmath156 by its associated prime @xmath446 . if @xmath447i_t$ ] is not prime in @xmath158 $ ] , @xmath448 is decomposed as proposition [ ig ] . add @xmath449 to @xmath33 and go back to ( a ) . 2 . if necessary , operate an invertible linear transformation on @xmath29 so that @xmath450 holds for all @xmath48 , where @xmath451 denotes @xmath452 . 3 . let @xmath453 be one of the numbers satisfying @xmath454 and @xmath455 , where @xmath456 . such @xmath453 can be calculated by algorithm 4.1 . 4 . for each @xmath156 , if @xmath457 let @xmath458 be a polynomial in @xmath459 , otherwise do for @xmath395 : 1 . let @xmath401 and @xmath402 denote the set of polynomials @xmath403 and @xmath404 respectively . execute the projection of cad for the polynomial sets @xmath401 and @xmath402 from @xmath29 to @xmath405 , eliminating @xmath460 in this ordering . for @xmath461 , let @xmath408 and @xmath409 denote the set of irreducible factors of resulting polynomials on @xmath410 from @xmath401 and from @xmath411 . also let @xmath412 denote the set of cells in @xmath410 from @xmath402 . 2 . let @xmath462 denotes the set of cells in @xmath463 such that @xmath464 is included in @xmath381 . by assumption @xmath465 , the dimension of @xmath466 ( the projection of @xmath464 to @xmath467 ) is less than @xmath451 . define @xmath468 as one of the irreducible defining polynomials of @xmath466 in @xmath469 , and let @xmath458 be the least common multiple of @xmath468 s for all @xmath470 . 
since @xmath471 for all @xmath472 such that @xmath473 , while @xmath474 , the polynomial @xmath458 is not included in @xmath475 for all @xmath472 such that @xmath473 . 5 . replace @xmath33 by @xmath476 ( @xmath392 are not changed ) and return to algorithm 4.1 . we show @xmath477 , which implies @xmath478 . suppose @xmath479 , then since @xmath480 is prime , @xmath458 belongs to @xmath480 for some @xmath48 . if @xmath481 , then by the beginning of ( 4 ) , @xmath458 does not belong to @xmath480 . thus , we have @xmath482 . hence by ( 4.b ) , @xmath458 does not belong to @xmath475 for all @xmath472 satisfying @xmath473 , while we have supposed @xmath483 . thus we have @xmath484 , which contradicts @xmath482 by the definition of @xmath453 in ( 3 ) . [ ex_4.1 ] set @xmath485 and @xmath486 . applying algorithm 4.2 , we have the following . + ( 1 ) @xmath487 , where @xmath488 and @xmath489 , is the prime decomposition and @xmath158i_1 $ ] and @xmath158i_2 $ ] are prime . + ( 2 ) both dimensions of @xmath285 and @xmath490 are 2 and @xmath491 . the set of irreducible factors of the resultants and sub - resultants for @xmath285 and @xmath432 with respect to @xmath430 is @xmath492 . the irreducible factors of the resultants and sub - resultants for @xmath493 with respect to @xmath435 is @xmath494 , thus we have @xmath495 by lifting @xmath496 to @xmath439-plane by @xmath493 , we obtain @xmath497 further , by lifting it to @xmath442-plane by @xmath498 , we obtain only one point @xmath499 satisfying @xmath444 . we take defining polynomial @xmath500 of @xmath501 from @xmath434 , e.g. @xmath502 . similarly , from @xmath490 and @xmath432 we obtain @xmath503 . by lifting them to @xmath442-plane by @xmath504 , we obtain @xmath505 , among which only @xmath506 satisfy @xmath444 . we take defining polynomial @xmath507 of @xmath508 from @xmath434 , i.e. @xmath509 . we replace @xmath33 by @xmath510 ( this does not satisfy @xmath390 yet ) . from @xmath511 , we should add @xmath512 to @xmath285 . thus we should replace @xmath33 by @xmath513 now , if we set @xmath514 and @xmath515 , then @xmath390 is satisfied . y. t and t. t. appreciate the financial support from the faculty of marin technology , tokyo university of marine sciences and technology . they are also supported by the japan society for the promotion of science as grand - in - aid for young scientists ( b ) . h. w was supported by grant - in - aid for jsps fellows 20003236 . d. w. dubois , g. efroymson , algebraic theory of real varieties . i. , _ 1970 studies and essays _ ( presented to yu - why chen on his 60th birthday , april 1 ) , math . center , nat . taiwan univ . , taipei , 1970 , 107135 .
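The projection phase of CAD that Algorithms 4.1 and 4.2 rely on can be sketched with standard resultant and discriminant computations. The polynomials below are hypothetical stand-ins (the paper's own examples are not recoverable from this extraction), and the projection shown is a simplified one that keeps only discriminants and pairwise resultants with respect to the eliminated variable:

```python
from sympy import symbols, discriminant, resultant, factor_list

x, y = symbols("x y")

# Hypothetical stand-ins for a generator of the ideal I and an
# inequality constraint g >= 0 of the POP.
f = y**2 - x**3 + x
g = x + y - 1

# Simplified projection from (x, y) down to x: the discriminant of f and
# the pairwise resultant of f and g, taken with respect to y.
projection = [discriminant(f, y), resultant(f, g, y)]

# The irreducible factors of the projected polynomials delimit the cells
# on the x-axis over which the real roots in y vary continuously; sample
# points in the open cells are then lifted back through the zero sets,
# as in the lifting steps of Algorithm 4.1.
for p in projection:
    print(factor_list(p))
```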
We study the ideal generated by the polynomials vanishing on a semialgebraic set and propose an algorithm, based on techniques from the cylindrical algebraic decomposition, for computing its generators. Applying this algorithm, a polynomial optimization problem with polynomial equality constraints can be reformulated equivalently so that the associated semidefinite programming relaxation has no duality gap. Elementary proofs of some criteria for the reality of ideals are also given.
Evidence that the impact of a kilometers-wide asteroid ravaged Earth's ecosystems and wiped out the dinosaurs millions of years ago is now stronger than ever. Using a high-resolution dating technique that measures the ratios of two argon isotopes, researchers analyzed 14 samples of material that had been flung from the impact site just north of the Yucatán Peninsula. Those dates, when combined with similar analyses reported previously, pin down the event—dubbed the Chicxulub impact after a small Mexican village closest to the offshore impact site—to approximately 66,038,000 years ago. And argon-argon dating of volcanic ash samples unearthed in Montana from a bed of coal lying just a few centimeters above the iridium-rich layer deemed to contain fallout from the asteroid impact—and just 5 cm above rocks containing large amounts of dino-era pollen—suggests that mass extinctions occurred about 66,043,000 years ago. Considering the statistical errors in the two analyses, the impact and the dino die-offs may have occurred at the same time, or they may have occurred no more than 32,000 years apart, the researchers report online today in Science. Regardless, the team says, the new results certainly knock a hole in the notion that the mass extinctions, including the dinosaurs, occurred as much as 300,000 years before the asteroid impact, which some other researchers have contended for decades. ||||| Fresh Clues In Dinosaur Whodunit Point To Asteroid Some 66 million years ago, about 75 percent of species on Earth disappeared. It wasn't just dinosaurs but most large mammals, fish, birds and plankton. Scientists have known this for a long time just from looking at the fossil record. If you dig deep enough, you find lots of dinosaur bones. And then a few layers up, they're gone. But scientists couldn't figure out exactly what had caused this phenomenon. Of course, there were lots of theories. "Some of them are pretty wacky," says J. David Archibald, an evolutionary biologist at San Diego State University who wrote the book Dinosaur Extinction and the End of an Era. "The really weird ones, of course, are that space hunters came and killed them all off, they died of constipation, mammals ate their eggs." Then, in 1980, a new theory surfaced. "It's the one that everybody hears about all the time because it's most dramatic," Archibald says. Near what is now the town of Chicxulub in the Yucatan Peninsula, an asteroid more than 5 miles across slammed into the Earth. It caused tsunamis and earthquakes, and threw up a cloud of dust that smothered the world. It sounds like a movie premise, but the Chicxulub impact left behind evidence. It threw up small blobs of black glass that were later found in Haiti. It dusted the world with iridium, an element that is rare on Earth but common in meteorites. It left a barely detectable imprint on the Yucatan Peninsula. Many scientists came to believe that the Chicxulub asteroid alone killed off the dinosaurs — and the public ate it up. "We have this thing for big glitz and dramatic things," Archibald says. "Instantaneous is better." But Princeton professor Gerta Keller wasn't convinced. She has her own theories about the mass extinction. "Vulcanism has played a major role," Keller says. In the hundreds of thousands of years before the Chicxulub impact, volcanoes in a region of India known as the Deccan Traps erupted repeatedly.
They spewed sulfur and carbon dioxide, poisoning the atmosphere and destabilizing ecosystems. Keller says the dinosaurs were already on death's door by the time the asteroid hit. And there is confusion about when that actually happened. "If [the impact] is the cause, it had to be precisely at the time of the mass extinction," Keller says. "It can't be before and it can't be afterwards." Enlarge this image toggle caption Courtesy of Courtney Sprain Courtesy of Courtney Sprain Keller's data suggest that the impact happened about 100,000 years before the mass extinction. Previous studies, on the other hand, put it 180,000 years after the dinosaurs died off. Enter Paul Renne, a geologist from the University of California, Berkeley. To pin down the date, he headed out to the badlands of northeastern Montana. "It's a region that has yielded a huge number of dinosaur fossils over the years," Renne says. "It's very famous for that." Renne collected samples of ash that were deposited at the time of the mass extinction just above that treasure trove of fossils. He also obtained some of the glass blobs left by the Chicxulub impact. Measuring the rate of decay of radioactive potassium from these two samples, Renne was able to estimate the age of the impact and the age of the extinction. "And lo and behold they are exactly the same," Renne says. "The impact clearly occurred right at the extinction level." His results are published in the journal Science. They reinforce an idea that many scientists have held for years: The Chicxulub asteroid was the straw that broke the dinosaurs' back. Gerta Keller thinks Renne's method was admirably precise, but she doesn't agree with some of his conclusions. She says his data are contradicted by other samples from Texas where a similar age date shows the Chicxulub impact predates the KT boundary — the point in time between the Cretaceous and Tertiary periods when the dinosaurs are believed to have gone extinct. Still, there is one thing that Keller and Renne agree on: The asteroid isn't the whole story. "There were significant extinctions and ecological perturbations going on a million or 2 million years before the impact, so we think that something else was already happening," Renne says. "What caused those things? There is an outstanding candidate — the early eruptions of the Deccan Traps." The next step will be to find the age of these eruptions. "We need to be able to place that set of eruptions into a time framework," Renne says. Then they can better piece together what happened to the dinosaurs — and the rest of the species that went extinct. Renne and Keller will join Archibald and dozens of their colleagues at the Natural History Museum in London at the end of March to talk over their ideas. "I'm looking forward to rather spirited discussions," Keller says. ||||| Artist’s impression of a 6-mile-wide asteroid striking the Earth. Scientists now have fresh evidence that such a cosmic impact ended the age of dinosaurs near what is now the town of Chixculub in Mexico. The idea that a cosmic impact ended the age of dinosaurs in what is now Mexico now has fresh new support, researchers say. The most recent and most familiar mass extinction is the one that finished the reign of the dinosaurs — the end-Cretaceous or Cretaceous-Tertiary extinction event, often known as K-T. The only survivors among the dinosaurs are the birds. 
Currently, the main suspect behind this catastrophe is a cosmic impact from an asteroid or comet, an idea first proposed by physicist Luis Alvarez and his son geologist Walter Alvarez. Scientists later found that signs of this collision seemed evident near the town of Chicxulub (CHEEK-sheh-loob) in Mexico in the form of a gargantuan crater more than 110 miles (180 kilometers) wide. The explosion, likely caused by an object about 6 miles (10 km) across, would have released as much energy as 100 trillion tons of TNT, more than a billion times more than the atom bombs that destroyed Hiroshima and Nagasaki. However, further work suggested the Chicxulub impact occurred either 300,000 years before or 180,000 years after the end-Cretaceous mass extinction. As such, researchers have explored other possibilities, including other impact sites, such as the controversial Shiva crater in India, or even massive volcanic eruptions, such as those creating the Deccan Flats in India. Timing of an impact New findings using high-precision radiometric dating analysis of debris kicked up by the impact now suggest the K-T event and the Chicxulub collision happened no more than 33,000 years apart. In radiometric dating, scientists estimate the ages of samples based on the relative proportions of specific radioactive materials within them. [Wipe Out: History's Most Mysterious Mass Extinctions] "We've shown the impact and the mass extinction coincided as much as one can possibly demonstrate with existing dating techniques," researcher Paul Renne, a geochronologist and director of the Berkeley Geochronology Center in California, told LiveScience. Doctoral student Bill Mitchell collecting a volcanic ash sample from the coal bed just above the final dinosaur extinction level. Credit: Image courtesy of Paul Renne "It's gratifying to see these results, for those of us who've been arguing a long time that there was an impact at the time of this mass extinction," geologist Walter Alvarez at the University of California at Berkeley, who did not participate in this study, told LiveScience. "This research is just a tour de force, a demonstration of really skillful geochronology to resolve time that well." The fact the impact and mass extinction may have been virtually simultaneous in time supports the idea that the cosmic impact dealt the age of dinosaurs its deathblow. "The impact was clearly the final straw that pushed Earth past the tipping point," Renne said. "We have shown that these events are synchronous to within a gnat's eyebrow, and therefore, the impact clearly played a major role in extinctions, but it probably wasn't just the impact." The new extinction date is precise to within 11,000 years. "When I got started in the field, the error bars on these events were plus or minus a million years," added paleontologist William Clemens at the University of California at Berkeley, who did not participate in this research. "It's an exciting time right now, a lot of which we can attribute to the work that Paul and his colleagues are doing in refining the precision of the time scale with which we work." Final blow Although the cosmic impact and mass extinction coincided in time, Renne cautioned this does not mean the impact was the only cause of the die-offs. For instance, dramatic climate swings in the preceding million years, including long cold snaps in the general hothouse environment of the Cretaceous, probably brought many creatures to the brink of extinction. 
The volcanic eruptions behind the Deccan Traps might be one cause of these climate variations. Rock layers near Jordan, Mont., exposing the level (lower arrow) where the dinosaurs and many other animals and plants went extinct. The arrows point to coal beds which contain thin volcanic ash layers that were dated. Credit: [Image courtesy of Klaudia Kuiper "These precursory phenomena made the global ecosystem much more sensitive to even relatively small triggers, so that what otherwise might have been a fairly minor effect shifted the ecosystem into a new state," Renne said. The cosmic impact then proved the deathblow. "What we really need to do is to understand better what was going on before the impact — what was the level of ecological stress that existed that allowed the impact to be the straw that broke the camel's back?" Renne said. "We also need better dates for the massive volcanism at the Deccan Flats to better understand when it first started and how fast it occurred." The scientists detailed their findings in the Feb. 8 issue of the journal Science. Follow LiveScience on Twitter @livescience. We're also on Facebook & Google+. ||||| Asteroid strike did in the dinosaurs: study (AFP) – Feb 7, 2013 WASHINGTON — Scientists said Thursday they are a step closer to proving the death blow for dinosaurs 66 million years ago was a gigantic comet or asteroid that struck near Mexico. Although a catastrophic impact has long been thought to be involved earlier work left doubts about just when the object, estimated at some six miles (10 kilometers) in diameter, struck in relation to when dinosaurs disappeared. But in a study out Thursday in the US journal "Science," researchers used updated techniques to get a more precise date for the impact -- 66,038,000 million years ago -- which they said was accurate within 11,000 years. "When I got started in the field, the error bars on these events were plus or minus a million years," said paleontologist William Clemens, a UC Berkeley professor emeritus who was not directly involved in the study. The researchers also updated their estimate for the time the mass dinosaur extinction, and found that the date was within the same margin of error -- in other words, at around the same time as the asteroid impact. "We have shown that these events are synchronous to within a gnat's eyebrow," said Paul Renne, of the Berkeley Geochronology Center at the University of California, Berkeley. "The impact was clearly the final straw that pushed Earth past the tipping point," he said. But there were other factors as well, he said, including dramatic climate variation over the previous million years, which probably brought many species to the brink of extinction. "These precursory phenomena made the global ecosystem much more sensitive to even relatively small triggers, so that what otherwise might have been a fairly minor effect shifted the ecosystem into a new state," he said. "The impact was the coup de grace." The dinosaur extinction -- which wiped out the large land-based behemoths as well as many ocean creatures -- was first linked to an asteroid or comet strike in 1980, by UC Berkeley professor Luis Alvarez and his son Walter. The impact created a crater, now called Chicxulub, some 110 miles (177 kilometers) wide in the Caribbean, off the coast of Mexico. Copyright © 2014 AFP. All rights reserved. More »
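As an aside on the dating arithmetic described in the articles above (our illustration only, not the study's actual 40Ar/39Ar procedure, which measures neutron-irradiated samples against a standard of known age): the basic closed-system relation behind radiometric ages is t = ln(1 + D/P) / λ, where D/P is the measured daughter-to-parent isotope ratio and λ the decay constant. A small sketch with an approximate 40K half-life, ignoring the branching of 40K decays:

```python
import math

# Illustrative decay arithmetic only.  The real 40Ar/39Ar technique
# compares irradiated samples against a standard and corrects for the
# branching of 40K decays (only roughly 10% produce 40Ar), which this
# simplified sketch ignores.
HALF_LIFE_40K_YEARS = 1.25e9                 # approximate value
DECAY_CONSTANT = math.log(2) / HALF_LIFE_40K_YEARS

def age_in_years(daughter_over_parent: float) -> float:
    """Age implied by a measured daughter/parent isotope ratio."""
    return math.log(1.0 + daughter_over_parent) / DECAY_CONSTANT

# Under these assumptions a ratio near 0.0373 works out to roughly
# 66 million years.
print(f"{age_in_years(0.0373):.3e} years")
```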
– Paleontologists and dinosaur nerds have long debated what killed off the "terrible lizards"—one meteor or many, volcanoes, or something else. Now one researcher says he has the answer: an asteroid believed to be about six miles wide that landed in the Caribbean about 66 million years ago. The Chicxulub impact was first floated as a theory in 1980, but as a Princeton professor explains to NPR, "If [the impact] is the cause, it had to be precisely at the time of the mass extinction. It can't be before and it can't be afterwards." And as LiveScience notes, previous research suggested the asteroid hit 300,000 years before or 180,000 years after the mass extinction event. Now, a researcher says the asteroid and die-off happened "within a gnat's eyebrow" of each other, reports the AFP. Paul Renne, a University of California geologist, performed high-precision radiometric dating on ash deposited above fossils buried in Montana and on glass blobs found around the Chicxulub crater to determine the age of the events. "And lo and behold they are exactly the same," Renne says. The analyzed ash suggests the mass die-off occurred about 66,043,000 years ago, while the asteroid hit approximately 66,038,000 years ago. "Considering the statistical errors," the two events "may have occurred no more than 32,000 years apart," reports Science, where the study was published. "When I got started in the field, the error bars on these events were plus or minus a million years," marvels one scientist.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Special Olympics Sport and Empowerment Act of 2004''. SEC. 2. FINDINGS AND PURPOSES. (a) Findings.--Congress finds the following: (1) Special Olympics celebrates the possibilities of a world where everybody matters, everybody counts, every person has value, and every person has worth. (2) The Government and the people of the United States recognize the dignity and value the giftedness of children and adults with intellectual disabilities. (3) The Government and the people of the United States are determined to end the isolation and stigmatization of individuals with intellectual disabilities. (4) For more than 36 years, Special Olympics has encouraged skill, sharing, courage, and joy through year-round sports training and athletic competition for children and adults with intellectual disabilities. (5) Special Olympics provides year-round sports training and competitive opportunities to 1,500,000 athletes with intellectual disabilities in 26 sports and plans to expand the joy of participation through sport to hundreds of thousands of individuals with intellectual disabilities within the United States and worldwide over the next 5 years. (6) Special Olympics has demonstrated its ability to provide a major positive effect on the quality of life of individuals with intellectual disabilities, improving their health and physical well-being, building their confidence and self-esteem, and giving them a voice to become active and productive members of their communities. (7) In society as a whole, Special Olympics has become a vehicle and platform for breaking down artificial barriers, improving public health, changing negative attitudes in education, and helping athletes with intellectual disabilities overcome the prejudice that individuals with intellectual disabilities face in too many places. (8) The Government of the United States enthusiastically supports Special Olympics, recognizes its importance in improving the lives of individuals with intellectual disabilities, and recognizes Special Olympics as a valued and important component of the global community. (b) Purposes.--The purposes of this Act are to-- (1) provide support to Special Olympics to increase athlete participation in and public awareness about the Special Olympics movement; (2) dispel negative stereotypes about individuals with intellectual disabilities; (3) build athletic and family involvement through sport; and (4) promote the extraordinary gifts of individuals with intellectual disabilities. SEC. 3. ASSISTANCE FOR SPECIAL OLYMPICS. (a) Education Activities.--The Secretary of Education may award grants to, or enter into contracts or cooperative agreements with, Special Olympics to carry out the following: (1) Activities to promote the expansion of Special Olympics, including activities to increase the participation of individuals with intellectual disabilities within the United States. (2) The design and implementation of Special Olympics education programs, including character education and volunteer programs that support the purposes of this Act, that can be integrated into classroom instruction and are consistent with academic content standards. (b) International Activities.--The Secretary of State may award grants to, or enter into contracts or cooperative agreements with, Special Olympics to carry out the following: (1) Activities to increase the participation of individuals with intellectual disabilities in Special Olympics outside of the United States. 
(2) Activities to improve the awareness outside of the United States of the abilities of individuals with intellectual disabilities and the unique contributions that individuals with intellectual disabilities can make to society. (c) Healthy Athletes.-- (1) In general.--The Secretary of Health and Human Services may award grants to, or enter into contracts or cooperative agreements with, Special Olympics for the implementation of onsite health assessments, screening for health problems, health education, data collection, and referrals to direct health care services. (2) Coordination.--Activities under paragraph (1) shall be coordinated with activities of private health care providers, authorized programs of State and local jurisdictions, and activities of the Department of Health and Human Services, as applicable. (d) Limitation.--Amounts appropriated to carry out this section shall not be used for direct treatment of diseases, medical conditions, or mental health conditions. Nothing in the preceding sentence shall be construed to limit the use of non-Federal funds by Special Olympics. SEC. 4. APPLICATION AND ANNUAL REPORT. (a) Application.-- (1) In general.--To be eligible for a grant, contract, or cooperative agreement under subsection (a), (b), or (c) of section 3, Special Olympics shall submit an application at such time, in such manner, and containing such information as the Secretary of Education, Secretary of State, or Secretary of Health and Human Services, as applicable, may require. (2) Content.--At a minimum, an application under this subsection shall contain the following: (A) Activities.--A description of activities to be carried out with the grant, contract, or cooperative agreement. (B) Measurable goals.--Information on specific measurable goals and objectives to be achieved through activities carried out with the grant, contract, or cooperative agreement. (b) Annual Report.-- (1) In general.--As a condition on receipt of any funds under subsection (a), (b), or (c) of section 3, Special Olympics shall agree to submit an annual report at such time, in such manner, and containing such information as the Secretary of Education, Secretary of State, or Secretary of Health and Human Services, as applicable, may require. (2) Content.--At a minimum, each annual report under this subsection shall describe the degree to which progress has been made toward meeting the goals and objectives described in the applicable application submitted under subsection (a). SEC. 5. AUTHORIZATION OF APPROPRIATIONS. There are authorized to be appropriated-- (1) for grants, contracts, or cooperative agreements under section 3(a), $5,500,000 for fiscal year 2005, and such sums as may be necessary for each of the 4 succeeding fiscal years; (2) for grants, contracts, or cooperative agreements under section 3(b), $3,500,000 for fiscal year 2005, and such sums as may be necessary for each of the 4 succeeding fiscal years; and (3) for grants, contracts, or cooperative agreements under section 3(c), $6,000,000 for each of fiscal years 2005 through 2009.
Special Olympics Sport and Empowerment Act of 2004 - Authorizes the Secretaries of Education, of State, and of Health and Human Services to award grants to, or enter into contracts or cooperative agreements with, Special Olympics for specified education, international, and health activities, including ones promoting Special Olympics and a greater understanding of contributions to society by individuals with intellectual disabilities both within and outside of the United States.
the first color - magnitude diagrams ( cmd ) obtained by baade for the dwarf spheroidal ( dsph ) companions of the milky way , and in particular for the draco system ( baade & swope 1961 ) , showed all of the features present in the cmd s of globular clusters . this , together with the presence of rr lyrae stars ( baade & hubble 1939 ; baade & swope 1961 ) led to the interpretation that dsph galaxies are essentially pure population ii systems . but baade ( 1963 ) noted that there are a number of characteristics in the stellar populations of dsph galaxies that differentiate them from globular clusters , including extreme red horizontal branches and the distinct characteristics of the variable stars . when carbon stars were discovered in dsph galaxies , these differences were recognized to be due to the presence of an intermediate - age population ( cannon , niss & norgaard nielsen 1980 ; aaronson , olszewski & hodge 1983 ; mould & aaronson 1983 ) . in the past few years this intermediate - age population has been shown beautifully in the cmds of a number of dsph galaxies ( carina : mould & aaronson 1983 ; mighell 1990 ; smecker - hane , stetson & hesser 1996 ; hurley - keller , mateo & nemec 1998 ; fornax : stetson , hesser & smecker - hane 1998 ; leo i : lee et al . 1993 , l93 hereinafter ; this paper ) . other dsph show only a dominant old stellar population in their cmds ( ursa minor : olszewski & aaronson 1985 ; martnez - delgado & aparicio 1999 ; draco : carney & seitzer 1986 ; stetson , vandenbergh & mcclure 1985 ; grillmair et al . 1998 ; sextans : mateo et al . 1991 ) . an old stellar population , traced by a horizontal - branch ( hb ) , has been clearly observed in all the dsph galaxies satellites of the milky way , except leo i , regardless of their subsequent star formation histories ( sfh ) . in this respect , as noted by l93 , leo i is a peculiar galaxy , showing a well populated red - clump ( rc ) but no evident hb . this suggests that the first substantial amount of star formation may have been somehow delayed in this galaxy compared with the other dsph . leo i is also singular in that its large galactocentric radial velocity ( 177@xmath53 km @xmath6 , zaritsky et al . 1989 ) suggests that it may not be bound to the milky way , as the other dsph galaxies seem to be ( fich & tremaine 1991 ) . byrd et al . ( 1994 ) suggest that both leo i and the magellanic clouds seem to have left the neighborhood of the andromeda galaxy about 10 gyr ago . it is interesting that the magellanic clouds also seem to have only a small fraction of old stellar population . leo i presents an enigmatic system with unique characteristics among local group galaxies . from its morphology and from its similarity to other dsph in terms of its lack of detectable quantities of hi ( knapp , kerr & bowers 1978 , see section [ leoi_prev ] ) it would be considered a dsph galaxy . but it also lacks a conspicuous old population and it has a much larger fraction of intermediate - age population than its dsph counterparts , and even , a non - negligible population of young ( @xmath7 1 gyr old ) stars . in this paper , we present new _ hst _ f555w ( @xmath1 ) and f814w ( @xmath2 ) observations of leo i. in section [ leoi_prev ] , the previous work on leo i is briefly reviewed . in section [ obs ] , we present the observations and data reduction . 
in section [ phot ] we discuss the photometry of the galaxy , reduced independently using both allframe and dophot programs , and calibrated using the ground - based photometry of l93 . in section [ cmd ] we present the cmd of leo i , and discuss the stellar populations and the metallicity of the galaxy . in section [ discus ] we summarize the conclusions of this paper . in a companion paper , ( gallart et al . 1998 , paper ii ) we will quantitatively derive the sfh of leo i through the comparison of the observed cmd with a set of synthetic cmds . leo i ( ddo 74 ) , together with leo ii , was discovered by harrington & wilson ( 1950 ) during the course of the first palomar sky survey . the distances to these galaxies were estimated to be @xmath8 200 kpc , considerably more distant than the other dsph companions of the milky way . it has been observed in hi by knapp et al . ( 1978 ) using the nrao 91-m telescope , but not detected . they set a limit for its hi mass of @xmath9 in the central 10(@xmath8 780 pc ) of the galaxy . recently , bowen et al . ( 1997 ) used spectra of three qso / agn to set a limit on the hi column density within 24 kpc in the halo of leo i to be @xmath10 . they find no evidence of dense flows of gas in or out of leo i , and no evidence for tidally disrupted gas . the large distance to leo i and the proximity on the sky of the bright star regulus have made photometric studies difficult . as a consequence , the first cmds of leo i were obtained much later than for the other nearby dsphs ( fox & pritchet 1987 ; reid & mould 1991 ; demers , irwin & gambu 1994 ; l93 ) . from the earliest observations of the stellar populations of leo i there have been indications of a large quantity of intermediate - age stars . hodge & wright ( 1978 ) observed an unusually large number of anomalous cepheids , and carbon stars were found by aaronson et al . ( 1983 ) and azzopardi , lequeux & westerlund ( 1985 , 1986 ) . a prominent rc , indicative in a low z system of an intermediate - age stellar population , is seen both in the @xmath11 $ ] cmd of demers et al . ( 1994 ) and in the @xmath12 $ ] cmd of l93 . the last cmd is particularly deep , reaching @xmath13 ( @xmath14 ) , and suggests the presence of a large number of intermediate age , main sequence stars . there is no evidence for a prominent hb in any of the published cmd s . l93 estimated the distance of leo i to be @xmath15 based on the position of the tip of the red giant branch ( rgb ) ; we will adopt this value in this paper . they also estimated a metallicity of [ fe / h ] = [email protected] dex from the mean color of the rgb . previous estimates of the metallicity ( aaronson & mould 1985 ; suntzeff , aaronson & olszewski 1986 ; fox & pritchet 1987 ; reid & mould 1991 ) using a number of different methods range from [ fe / h]=1.0 to 1.9 dex . with the new _ hst _ data presented in this paper , the information on the age structure from the turnoffs will help to further constrain the metallicity . we present wfpc2 _ hst _ @xmath1 ( f555w ) and @xmath2 ( f814w ) data in one 2.6@xmath16 2.6 field in leo i obtained in march 5 , 1994 . the wfpc2 has four internal cameras : the planetary camera ( pc ) and three wide field ( wf ) cameras . they image onto a loral 800@xmath16800 ccd , which gives an scale of 0.046 pixel@xmath17 for the pc camera and 0.10 pixel@xmath17 for the wf cameras . at the time of the observations the camera was still operating at the higher temperature of 77.0 @xmath18c . 
figure [ carta ] shows the location of the wfpc2 field superimposed on digitized sky survey image of leo i. the position of the ground - based image of l93 is also shown . the position was chosen so that the pc field was situated in the central , more crowded part of the galaxy . three deep exposures in both f555w ( @xmath1 ) and f814w ( @xmath2 ) filters ( 1900 sec . and each , respectively ) were taken . to ensure that the brightest stars were not saturated , one shallow exposure in each filter ( 350 sec . in f555w and 300 sec in f814w ) was also obtained . figure [ mosaic ] shows the @xmath1 and @xmath2 deep ( 5700 sec . and 4800 sec . respectively ) wf chip2 images of leo i. all observations were preprocessed through the standard stsci pipeline , as described by holtzmann et al . in addition , the treatment of the vignetted edges , bad columns and pixels , and correction of the effects of the geometric distortion produced by the wfpc2 cameras , were performed as described by silbermann et al . photometry of the stars in leo i was measured independently using the set of daophot ii / allframe programs developed by stetson ( 1987 , 1994 ) , and also with a modified version of dophot ( schechter , mateo & saha 1993 ) . we compare the results obtained with each of these programs below . allframe photometry was performed in the 8 individual frames and the photometry list in each band was obtained by averaging the magnitudes of the corresponding individual frames . in summary , the process is as follows : a candidate star list was obtained from the median of all the images of each field using three daophot ii / allstar detection passes . this list was fed to allframe , which was run on all eight individual frames simultaneously . we have used the psfs obtained from the public domain _ hst _ wfpc2 observations of the globular clusters pal 4 and ngc 2419 ( hill et al . the stars in the different frames of each band were matched and retained if they were found in at least three frames for each of @xmath1 and @xmath2 . the magnitude of each star in each band was set to the error - weighted average of the magnitudes for each star in the different frames . the magnitudes of the brightest stars were measured from the short exposure frames . a last match between the stars retained in each band was made to obtain the @xmath19 photometry table . dophot photometry was obtained with a modified version of the code to account for the _ hst _ psf ( saha et al . dophot reductions were made on average @xmath1 and @xmath2 images combined in a manner similar to that described by saha et al . ( 1994 ) in order to remove the effects of cosmic rays . photometry of the brightest stars was measured from the @xmath1 and @xmath2 short exposure frames . the dophot and allframe calibrated photometries ( see section [ transjohn ] ) show a reasonably good agreement . there is a scatter of 2 - 3% for even the brightest stars in both @xmath1 and @xmath2 . no systematic differences can be seen in the @xmath1 photometry . in the @xmath2 photometry there is good systematic agreement among the brightest stars , but a small tendency for the dophot magnitudes to become brighter compared to the allframe magnitudes with increasing @xmath2 magnitude . this latter effect is about 0.02 magnitudes at the level of the rc , and increases to about 0.04 - 0.05 mag by @xmath2 = 26 . we can not decide from these data which program is ` correct ' . 
however , the systematic differences are sufficiently small compared to the random scatter that our final conclusions are identical regardless of which reduction program is used . in the following we will use the star list obtained with daophot / allframe . our final photometry table contains a total of 31200 stars found in the four wfpc2 chips after removing stars with excessively large photometric errors compared to other stars of similar brightness . the retained stars have @xmath20 , chi@xmath21 and @xmath22sharp@xmath23 . for our final photometry in the johnson - cousins system we will rely ultimately in the photometry obtained by l93 . before this last step though , we transformed the profile fitting photometry using the prescription of holtzmann et al . ( 1995 ) . in this section , we will describe both steps and discuss the differences between the _ hst_-based photometry and the ground based photometry . the allframe photometry has been transformed to standard magnitudes in the johnson - cousins system using the prescriptions of holtzmann et al . ( 1995 ) and hill et al . ( 1998 ) as adopted for the _ hst _ @xmath24 key project data . psf magnitudes have been transformed to instrumental magnitudes at an aperture of radius 0.5 " ( consistent with holtzmann et al . 1995 and hill et al . 1998 ) by deriving the value for the aperture correction for each frame using daogrow ( stetson 1990 ) . the johnson - cousins magnitudes obtained in this way were compared with ground - based magnitudes for the same field obtained by l93 by matching a number of bright ( @xmath25 ) , well measured stars in the _ hst _ ( @xmath26 , @xmath27 ) and ground - based photometry ( @xmath28 , @xmath29 ) . the zero - points between both data sets have been determined as the median of the distribution of ( @xmath30 ) and ( @xmath31 ) . in table [ zeros ] the values for the median of ( @xmath30 ) , ( @xmath31 ) and its dispersion @xmath32 are listed for each chip ( no obvious color terms are observed , as expected , since both photometry sets have been transformed to a standard system taking into account the color terms where needed of the corresponding telescope - instrument system ) . @xmath33 is the number of stars used to calculate the transformation . although the value of the median zero point varies from chip to chip , it is in the sense of making the corrected @xmath1 magnitudes brighter by @xmath34 mag than @xmath26 and the corrected @xmath2 magnitudes fainter than @xmath27 by about the same amount . therefore , the final @xmath35 colors are one tenth of a magnitude bluer in the corrected photometry . lcccc chip & filter & median & @xmath32 & n + chip 1 & f555w & -0.037 & 0.100 & 17 + chip 2 & f555w & -0.110 & 0.103 & 59 + chip 3 & f555w & -0.080 & 0.063 & 43 + chip 4 & f555w & -0.059 & 0.067 & 57 + chip 1 & f814w & 0.035 & 0.075 & 17 + chip 2 & f814w & 0.013 & 0.064 & 53 + chip 3 & f814w & 0.076 & 0.042 & 43 + chip 4 & f814w & 0.080 & 0.041 & 53 + note that the cte effect , which may be important in the case of observations made at the temperature of @xmath36 c , could contribute to the dispersion on the zero - point nevertheless , if the differences @xmath37 , @xmath38 are plotted for different row intervals , no clear trend is seen , which indicates that the error introduced by the cte effect is not of concern in this case . the fact that the background of our images is considerable ( about 70 @xmath39 ) can be the reason for the cte effect not being noticeable . 
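A minimal sketch of the zero-point determination described in this section: match bright, well-measured stars between the HST and ground-based catalogs and take the median (and scatter) of the magnitude differences, chip by chip. The input file and column names below are hypothetical placeholders, not the actual data products:

```python
import numpy as np

# Zero points as the median of (ground - HST) magnitude differences for
# matched bright stars, computed per chip and per filter.
matched = np.genfromtxt("matched_stars.csv", delimiter=",", names=True)

for chip in np.unique(matched["chip"]):
    on_chip = matched[matched["chip"] == chip]
    for band, ground, hst in (("F555W", "V_ground", "V_hst"),
                              ("F814W", "I_ground", "I_hst")):
        diff = on_chip[ground] - on_chip[hst]
        print(f"chip {int(chip)} {band}: median {np.median(diff):+.3f}, "
              f"sigma {np.std(diff, ddof=1):.3f}, N = {diff.size}")
```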
we adopt the l93 calibration because it was based on observations of a large number of standards from graham ( 1981 ) and landolt ( 1983 ) and because there was very good agreement between independent calibrations performed on two different observing runs and between calibrations on four nights of one of the runs . in addition , the holtzmann et al . ( 1995 ) zero points were derived for data taken with the wide field camera ccd s operating at a lower temperature compared to the present data set . in figure [ 4cmd ] we present four @xmath12 $ ] cmds for leo i based on the four wfpc2 chips . leo i possesses a rather steep and blue rgb , indicative of a low metallicity . given this low metallicity , its very well - defined rc , at @xmath40 21.5 , is characteristic of an intermediate - age stellar population . the main sequence ( ms ) , reaching up to within 1 mag in brightness of the rc , unambiguously shows that a considerable number of stars with ages between @xmath8 1 gyr and 5 gyr are present in the galaxy , confirming the suggestion by l93 that the faintest stars in their photometry might be from a relatively young ( @xmath8 3 gyr ) intermediate - age population . our cmd , extending about 2 magnitudes deeper than the l93 photometry and reaching the position expected for the turnoffs of an old population , shows that a rather broad range in ages is present in leo i. a number of yellow stars , slightly brighter and bluer than the rc , are probably evolved counterparts of the brightest stars in the ms . finally , the lack of discontinuities in the turnoffs / subgiant region indicate a continuous star formation activity ( with possible changes of the star formation rate intensity ) during the galaxy s lifetime . we describe each of these features in more detail in section [ compaiso ] , and discuss their characteristics by comparing them with theoretical isochrones and taking into account the errors discussed in section [ photerrors ] . we will quantitatively study the sfh of leo i in paper ii by comparing the distribution of stars in the observed cmd with a set of model cmds computed using the stellar evolutionary theory as well as a realistic simulation of the observational effects in the photometry ( see gallart et al . 1996b , c and aparicio , gallart & bertelli 1997 a , b for different applications of this method to the study of the sfh in several lg dwarf irregular galaxies ) . before proceeding with an interpretation of the features present in the cmd , it is important to assess the photometric errors . to investigate the total errors present in the photometry , artificial star tests have been performed in a similar way as described in aparicio & gallart ( 1994 ) and gallart , aparicio & vlchez ( 1996a ) . for details on the tests run for the leo i data , see paper ii . in short , a large number of artificial stars of known magnitudes and colors were injected into the original frames , and the photometry was redone again following exactly the same procedure used to obtain the photometry for the original frames . the injected and recovered magnitudes of the artificial stars , together with the information of the artificial stars that have been lost , provides us with the true total errors . in figure [ errors ] , artificial stars representing a number of small intervals of magnitude and color have been superimposed as white spots on the observed cmd of leo i. enlarged symbols ( @xmath16 , @xmath41 , @xmath42 ) show the recovered magnitudes for the same artificial stars . 
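A rough sketch of how the injected-versus-recovered comparison from such artificial-star tests is usually summarized, per bin of injected magnitude; the arrays below are synthetic stand-ins, not the paper's actual test output:

```python
import numpy as np

# Synthetic stand-ins for artificial-star test output: injected
# magnitudes, recovered magnitudes, and a flag for stars lost in the
# re-reduction.  Real tests would read these from the photometry tables.
rng = np.random.default_rng(42)
injected = rng.uniform(20.0, 27.0, 5000)
sigma = 0.02 * np.exp(0.6 * (injected - 24.0))      # toy error growth
recovered = injected + rng.normal(0.0, sigma)
lost = rng.random(injected.size) < np.clip((injected - 25.0) / 4.0, 0.0, 0.9)

# Completeness and total photometric error per 0.5 mag bin of injected
# magnitude -- the quantities the artificial-star tests are meant to give.
edges = np.arange(20.0, 27.5, 0.5)
for lo_edge, hi_edge in zip(edges[:-1], edges[1:]):
    in_bin = (injected >= lo_edge) & (injected < hi_edge)
    if not in_bin.any():
        continue
    kept = in_bin & ~lost
    completeness = 1.0 - lost[in_bin].mean()
    err = np.std((recovered - injected)[kept]) if kept.any() else float("nan")
    print(f"{lo_edge:4.1f}-{hi_edge:4.1f}: completeness {completeness:4.2f}, "
          f"sigma {err:5.3f} mag")
```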
the spread in magnitude and color shows the error interval in each of the selected positions . this information will help us in the interpretation of the different features present in the cmd ( section [ compaiso ] ) . a more quantitative description of these errors and a discussion of the characteristics of the error distribution will be presented in appendix a of paper ii . we will adopt , here and in paper ii the distance obtained by l93 @xmath43 from the position of the tip of the rgb . since the ground based observations of l93 cover a larger area than the _ hst _ observations presented in this paper , and therefore sample the tip of the rgb better , they are more suitable to derive the position of the tip . on another hand , since we derive the callibration of our photometry from theirs , we do nt expect any difference in the position of the tip in our data . the adopted distance provides a good agreement between the position of the different features in the cmd and the corresponding theoretical position ( figures [ leoi_isopa ] and [ leoi_isoya ] ) , and its uncertainty does not affect the ( mostly qualitative ) conclusions of this paper . in figure [ leoi_isopa ] , isochrones of 16 gyr ( @xmath44 ) , 3 and 1 gyr ( @xmath45 ) from the padova library ( bertelli et al . 1994 ) have been superimposed upon the global cmd of leo i. in figure [ leoi_isoya ] , isochrones of the same ages and metallicities from the yale library ( demarque et al . 1996 ) are shown . in both cases ( except for the padova 1 gyr old , z=0.001 isochrone ) , only the evolution through the rgb tip has been displayed ( these are the only phases available in the yale isochrones ) . in figure [ leoi_isoclump ] , the hb agb phase for 16 gyr ( @xmath44 ) and the full isochrones for 1 gyr , 600 and 400 myr ( @xmath45 ) from the padova library are shown . a comparison of the yale and padova isochrones in figures [ leoi_isopa ] and [ leoi_isoya ] shows some differences between them , particularly regarding the shape of the rgb ( the rgb of the padova isochrones are in general _ steeper _ and redder , at the base of the rgb , and bluer near the tip of the rgb , than the yale isochrones for the same age and z ) and the position of the subgiant branches of age @xmath46gyr ( which is brighter in the padova isochrones ) . in spite of these differences , the general characteristics deduced for the stellar populations of leo i do not critically depend on the set chosen . however , based on these comparisons , we can gain some insight into current discrepancies between two sets of evolutionary models widely used , and therefore into the main uncertainties of stellar evolution theory that we will need to take into account when analyzing the observations using synthetic cmds ( paper ii ) . in the following , we will discuss the main features of the leo i cmd using the isochrones in figures [ leoi_isopa ] to [ leoi_isoz ] . this will allow us to reach a qualitative understanding of the stellar populations of leo i , as a starting point of the more quantitative approach presented in paper ii . the broad range in magnitude in the ms turnoff region of leo i cmd is a clear indication of a large range in the age of the stars populating leo i. the fainter envelope of the subgiants coincides well with the position expected for a @xmath8 1015 gyr old population whereas the brightest blue stars on the main sequence ( ms ) may be as young as 1 gyr old , and possibly younger . 
figure [ leoi_isoclump ] shows that the blue stars brighter than the 1 gyr isochrone are well matched by the ms turnoffs of stars a few hundred myr old . one may argue that a number of these stars may be affected by observational errors that , as we see in figure [ errors ] , tend to make stars brighter . they could also be unresolved binaries comprised of two blue stars . nevertheless , it is very unlikely that the brightest blue stars are stars @xmath8 1 gyr old affected by one of these situations , since one has to take into account that : a ) a 1 gyr old binary could be only as bright as @xmath47 in the extreme case of two identical stars , and b ) none of the blue artificial stars at @xmath48 ( which are around 1 gyr old ) got shifted the necessary amount to account for the stars at @xmath49 and only about 4% of them have been shifted a maximum of 0.5 mag . we conclude , therefore , that some star formation has likely been going on in the galaxy from 1 gyr to a few hundreds myr ago . the presence of the bright yellow stars ( see subsection [ yellow ] below ) , also supports this conclusion . concerning the age of the older population of leo i , the present analysis of the data using isochrones alone does not allow us to be much more precise than the range given above ( 1015 gyr ) , although we favour the hypothesis that there may be stars older than 10 gyr in leo i. in the old age range , the isochrones are very close to one another in the cmd and therefore the age resolution is not high . in addition , at the corresponding magnitude , the observational errors are quite large . nevertheless , the characteristics of the errors as shown in figure [ errors ] make it unlikely that the faintest stars in the turnoff region are put there due to large errors because i ) a significant migration to fainter magnitudes of the stars in the @xmath810 gyr turnoff area is not expected and , ii ) because of the approximate symmetric error distribution , errors affecting intermediate - age stars in their turnoff region are not likely to produce the well defined shape consistent with a 16 gyr isochrone ( see figure [ leoi_isopa ] ) . finally , the fact that there are not obvious discontinuities in the turnoff / subgiant region suggests that the star formation in leo i has proceeded in a more or less continuous way , with possible changes in intensity but no big time gaps between successive bursts , through the life of the galaxy . these possible changes will be quantified , using synthetic cmds , in paper ii . core he - burning stars produce two different features in the cmd , the hb and the rc , depending on age and metallicity . very old , very low metallicity stars distribute along the hb during the core he - burning stage . the rc is produced when the core he - burners are not so old , or more metal - rich , or both , although other factors may also play a role ( see lee 1993 ) . rc area in leo i differs from those of the other dsph galaxies in the following two important ways . first , the lack of a conspicuous hb may indicate , given the low metallicity of the stars in the galaxy , that leo i has only a small fraction of very old stars . there are a number of stars at @xmath50 , @xmath51 that could be stars on the hb of an old , metal poor population , but their position is also that of the post turn - off @xmath8 1 gyr old stars ( see figure [ leoi_isoclump ] ) . 
the relatively large number of these stars and the discontinuity that can be appreciated between them and the rest of the stars in the herszprung - gap supports the hypothesis that hb stars may make a contribution . this possible contribution will be quantified in paper ii . second , the leo i rc is very densely populated and is much more extended in luminosity than the rc of single - age populations , with a width of as much as @xmath52 1 mag . the intermediate - age lmc populous clusters with a well populated rc ( see e.g. bomans , vallenari & de boer 1995 ) have @xmath53 values about a factor of two smaller . the rcs of the other dsph galaxies with an intermediate - age population ( fornax : stetson et al . 1998 ; carina : hurley keller , mateo & nemec 1998 ) are also much less extended in luminosity . the leo i rc is more like that observed in the cmds of the general field of the lmc ( vallenari et al . 1996 ; zaritzky , harris & thompson 1997 ) . a rc extended in luminosity is indicative of an extended sfh with a large intermediate age component . the older stars in the core he burning phase lie in the lower part of the observed rc , younger rc stars are brighter ( bertelli et al . 1994 , their figure 12 ; see also caputo , castellani & deglinnocenti 1995 ) . the brightest rc stars may be @xmath8 1 gyr old stars ( which start the core he - burning phase in non - degenerate conditions ) in their blue loop phase . the stars scattered above the rc , ( as well as the brightest yellow stars , see subsection [ yellow ] ) , could be a few hundred myr old in the same evolutionary phase ( see figure 1 in aparicio et al . 1996 ; gallart 1998 ) . the rc morphology depends on the fraction of stars of different ages , and will complement the quantitative information about the sfh from the distribution of sub - giant and ms stars ( paper ii ) . there are a number of bright , yellow stars in the cmd ( at @xmath54 mag and @xmath55 mag ) . l93 indicate that a significant fraction of these stars show signs of variability , and two of the stars in their sample were identified by hodge & wright ( 1978 ) to be anomalous cepheids ) estimated for them implies that they should be relatively young stars , or mass transfer binaries . since the young age hypothesis appeared incompatible with the idea of dsph galaxies being basically population ii systems , it was suggested that anomalous cepheids could be products of mass - transfer binary systems . nevertheless , we know today that most dsph galaxies have a substantial amount of intermediate - age population , consistent with anomalous cepheids being relatively young stars that , according to various authors ( gingold 1976 , 1985 ; hirshfeld 1980 ; bono et al . 1997 ) , after undergoing the he - flash , would evolve towards high enough effective temperatures to cross the instability strip before ascending the agb . ] . some of them also show signs of variability in our _ hst _ data . in figure [ leoi_isoclump ] however , it is shown that these stars have the magnitudes and colors expected for blue loop stars of few hundred myr . this supports our previous conclusion that the brightest stars in the ms have ages similar to these . given their position in the cmd , it is interesting to ask whether some of the variables found by hodge & wright ( 1978 ) in leo i could be classical cepheids instead of anomalous cepheids . from the bertelli et al . 
( 1994 ) isochrones , we can obtain the mass and luminosity of a 500 myr blue - loop star , which would be a representative star in this position of the cmd . such a star would have a mass , @xmath56 , and a luminosity , l@xmath8 350 l@xmath57 . from eq . 8 of chiosi et al . ( 1992 ) we calculate that the period that corresponds to a classical cepheid of this mass and metallicity is 1.2 days , which is compatible with the periods found by hodge & wright ( 1978 ) , which range between 0.8 and 2.4 days . we suggest that some of these variable stars may be similar to the short period cepheids in the smc ( smith et al . 1992 ) , i.e. classical cepheids in the lower extreme of mass , luminosity and period . if this is confirmed , it would be of considerable interest in terms of understanding the relationship between the different types of cepheid variables . a new wide field survey for variable stars , more accurate and extended to a fainter magnitude limit ( both to search for cepheids and rr lyrae stars ) would be of particular interest in the case of leo i. the rgb of leo i is relatively blue , characteristic of a system with low metallicity . assuming that the stars are predominantly old , with a small dispersion in age , l93 obtained a mean metallicity [ fe / h ] = -2.02 @xmath5 0.10 dex and a metallicity dispersion of @xmath58 } < -1.8 $ ] dex . this estimate was based on the color and intrinsic dispersion in color of the rgb at @xmath59 using a calibration based on the rgb colors of galactic globular clusters ( da costa & armandroff 1990 ; lee , freedman & madore 1993b ) . for a younger mean age of about 3.5 gyr , they estimate a slightly higher metallicity of [ fe / h ] = -1.9 , based on the difference in color between a 15 and a 3.5 gyr old population according to the revised yale isochrones ( green et al . ) . other photometric measurements give a range in metallicity of [ fe / h ] = -1.85 to -1.0 dex ( see l93 and references therein ) . the metallicity derived from moderate resolution spectra of two giant stars by suntzeff ( 1992 , unpublished ) is [ fe / h]@xmath60 dex . since leo i is clearly a highly composite stellar population with a large spread in age , the contribution to the width of the rgb from such an age range may no longer be negligible compared with the dispersion in metallicity . therefore , an independent estimate of the age range from the ms turnoffs is relevant in the determination of the range in metallicity . in the following , we will discuss possible limits on the metallicity dispersion of leo i through the comparison of the rgb with the isochrones shown in figures [ leoi_isopa ] through [ leoi_isoz ] . as we noted in the introduction of section [ cmd ] , there are some differences between the padova and the yale isochrones , but their positions coincide in the zone about 1 magnitude above the rc . we will use only this position in the comparisons discussed below . we will first check whether the whole width of the rgb can be accounted for by the dispersion in age . in subsection [ ms ] above , we have shown that the ages of the stars in leo i range from 10 - 15 gyr to less than 1 gyr . in figure [ leoi_isoz ] we have superimposed padova isochrones of z=0.0004 and ages 10 , 1 and 0.5 gyr on the leo i cmd . this shows that the full width of the rgb above the rc can be accounted for by the dispersion in age alone . a similar result is obtained for a metallicity slightly lower or higher . this provides a lower limit for the metallicity range , which could be negligible .
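as a rough order - of - magnitude cross - check of the 1.2 day period quoted above ( this is not eq . 8 of chiosi et al . 1992 ; the mass , effective temperature and pulsation constant used here are illustrative assumptions , not values taken from the isochrones ) , the classical period - mean - density relation for radial pulsators ,

\[ P \,\sqrt{\bar\rho/\bar\rho_\odot} \;\simeq\; Q , \qquad Q \approx 0.04\ \mathrm{d} , \]

with , say , $M \approx 1.5\,M_\odot$ , $L \approx 350\,L_\odot$ ( the luminosity quoted above ) , $T_{\mathrm{eff}} \approx 6000$ k and $T_\odot \approx 5772$ k , gives

\[ \frac{R}{R_\odot} = \Bigl(\frac{L}{L_\odot}\Bigr)^{1/2}\Bigl(\frac{T_\odot}{T_{\mathrm{eff}}}\Bigr)^{2} \approx 17 , \qquad \frac{\bar\rho}{\bar\rho_\odot} = \frac{M/M_\odot}{(R/R_\odot)^3} \approx 3\times 10^{-4} , \qquad P \approx \frac{Q}{\sqrt{\bar\rho/\bar\rho_\odot}} \approx 2\ \mathrm{d} , \]

i.e. a period of order one to a few days , consistent with the 0.8 - 2.4 day range of hodge & wright ( 1978 ) and with the value derived in the text .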
the agb of the 0.5 gyr isochrone appears to be too blue compared with the stars in the corresponding area of the cmd . however , these agbs are expected to be poorly populated because a ) stars are short lived in this phase and b ) the fraction of stars younger than 1 gyr is small , if not zero . second , we will discuss the possible range in z at different ages from a ) the position of the rgb , taking into account the fact that isochrones of the same age are redder when they are more metal rich and isochrones of the same metallicity are redder when they are older and b ) the fact that the extension of the blue - loops depends on metallicity : a ) for stars of a given age , the lower limit of z is given by the blue edge of the rgb area we are considering : isochrones of any age and z=0.0001 have colors in the rgb above the rc within the observed range . therefore , by means of the present comparison only , we cannot rule out the possibility that there may be stars in the galaxy with a range of ages and z as low as z=0.0001 . the oldest stars of this metallicity would be at the blue edge of the rgb , and would be redder as they are younger . the upper limit for the metallicity of stars of a given age is given by the red edge of the rgb : for old stars , the red edge of the observed rgb implies an upper limit of z @xmath7 0.0004 ( see figure [ leoi_isoz ] ) , since more metal rich stars would have colors redder than observed . for intermediate - age stars up to @xmath8 3 gyr old we infer an upper limit of z=0.001 , and for ages @xmath8 3 - 1 gyr old an upper limit of z=0.004 . b ) we can use the position of the bright yellow stars to constrain z : the fact that there are a few stars in blueward extended blue loops implies that their metallicity is as low as z @xmath8 0.001 or even lower ( figure [ leoi_isoclump ] ) , because higher metallicity stars do not produce blueward extended blue - loops at the observed magnitude . this does not exclude the possibility that a fraction of young stars have metallicity up to z=0.004 . these upper limits are compatible with z slowly increasing with time from z @xmath8 0 to z @xmath8 0.001 , on the scale of the padova isochrones . in summary , we conclude that the width of the leo i rgb can be accounted for by the dispersion in age of its stellar population and , therefore , the metallicity dispersion could be negligible . alternatively , considering the variation in color of the isochrones depending on both age and metallicity , we set a maximum range of metallicity of @xmath62 0.0001 - 0.004 : a lower limit of z=0.0001 is valid for any age , and the upper limit varies from z=0.0004 to z=0.004 , increasing with time . these upper limits are quite broad ; they will be better constrained , and some information on the chemical enrichment law gained , from the analysis of the cmd using synthetic cmds in paper ii . from the new _ hst _ data and the analysis presented in this paper , we conclude the following about the stellar populations of leo i : 1 ) the broad ms turnoff / subgiant region and the wide range in luminosity of the rc show that star formation in leo i has extended from at least @xmath8 10 - 15 gyr ago to less than 1 gyr ago . a lack of obvious discontinuities in the ms turnoff / subgiant region suggests that star formation proceeded in a more or less continuous way in the central part of the galaxy , with possible intensity variations over time , but no big time gaps between successive bursts , through the life of the galaxy .
2 ) a conspicuous hb is not seen in the cmd . given the low metallicity of the galaxy , this reasonably implies that the fraction of stars older than @xmath8 10 gyr is small , and indicates that the beginning of a substantial amount of star formation may have been delayed in leo i in comparison to the other dsph galaxies . it is unclear from the analysis presented in this paper whether leo i contains any stars as old as the milky way globular clusters . 3 ) there are a number of bright , yellow stars in the same area of the cmd where anomalous cepheids have been found in leo i. these stars also have the color and magnitude expected for the blue - loops of low metallicity , a few hundred myr old stars . we argue that some of these stars may be classical cepheids in the lower extreme of mass , luminosity and period . 4 ) the evidence that the stars in leo i have a range in age complicates the determination of limits to the metallicity range based on the width of the rgb . in one extreme , if the width of the leo i rgb is attributed to the dispersion in age of its stellar population alone , the metallicity dispersion could be negligible . alternatively , considering the variation in color of the isochrones depending on both age and metallicity , we set a maximum range of metallicity of @xmath62 0.0001 - 0.004 : a lower limit of z=0.0001 is valid for any age , and the ( broad ) upper limit varies from z=0.0004 to z=0.004 , increasing with time . in summary , leo i has unique characteristics among local group galaxies . due to its morphology and its lack of detectable quantities of hi , it can be classified as a dsph galaxy . but it appears to have the youngest stellar population among them , both because it is the only dsph lacking a conspicuous old population , and because it seems to have a larger fraction of intermediate - age and young population than the other dsph galaxies . the star formation seems to have proceeded until almost the present time , without evidence of intense , distinct bursts of star formation . important questions about leo i still remain . an analysis of the data using synthetic cmds will give quantitative information about the strength of the star formation at different epochs . further observations are needed to characterize the variable - star population in leo i , and in particular , to search for rr lyrae variable stars . this will address the issue of the existence or not of a very old stellar population in leo i. it would be interesting to check for variations of the star formation across the galaxy and to determine whether the hb is also missing in the outer parts of leo i. answering these questions is important not only to understand the formation and evolution of leo i , but also in relation to general questions about the epoch of galaxy formation and the evolution of galaxies of different morphological types . the determination of the strength of the star formation in leo i at different epochs is important to assess whether it is possible that during intervals of high star formation activity , leo i would have been as bright as the faint blue galaxies observed at intermediate redshift . in addition , the duration of such a major event of star formation may be important in explaining the number counts of faint blue galaxies . we want to thank allan sandage for many very useful discussions and a careful reading of the manuscript . we also thank nancy b. silbermann , shoko sakai and rebecca bernstein for their help through the various stages of the _ hst _ data reduction .
support for this work was provided by nasa grant go-5350 - 03 - 93a from the space telescope science institute , which is operated by the association of universities for research in astronomy inc . under nasa contract nas5 - 26555 . c.g . also acknowledges financial support from a small research grant from nasa administered by the aas and a theodore dunham jr . grant for research in astronomy . thanks the carnegie observatories for their hospitality . is supported by the ministry of education and culture of the kingdom of spain , by the university of la laguna and by the iac ( grant pb3/94 ) . m.g.l . is supported by the academic research fund of the ministry of education , republic of korea , bsri-97 - 5411 . the digitized sky surveys were produced at the space telescope science institute under u.s . government grant nag w-2166 . the images of these surveys are based on photographic data obtained using the oschin schmidt telescope on palomar mountain and the uk schmidt telescope .
we present deep @xmath0 f555w ( @xmath1 ) and f814w ( @xmath2 ) observations of a central field in the local group dwarf spheroidal ( dsph ) galaxy leo i. the resulting color - magnitude diagram ( cmd ) reaches @xmath3 and reveals the oldest @xmath4 gyr old turnoffs . nevertheless , a horizontal branch is not obvious in the cmd . given the low metallicity of the galaxy , this likely indicates that the first substantial star formation may have been somehow delayed in leo i in comparison with the other dsph satellites of the milky way . the subgiant region is well and uniformly populated from the oldest turnoffs up to the 1 gyr old turnoff , indicating that star formation has proceeded in a continuous way , with possible variations in intensity but no big gaps between successive bursts , over the galaxy's lifetime . the structure of the red - clump of core he - burning stars is consistent with the large amount of intermediate age population inferred from the main sequence and the subgiant region . in spite of the lack of gas in leo i , the cmd clearly shows star formation continuing until 1 gyr ago and possibly until a few hundred myr ago in the central part of the galaxy . subject headings : galaxies : individual ( leo i ) ; galaxies : evolution ; galaxies : stellar content ; galaxies : photometry ; stars : hertzsprung - russell ( hr - diagram ) .
in 1987 , harris et al . proposed that the combination of clinical features , including both venous and arterial occlusive events , recurrent spontaneous abortions and thrombocytopenia with antiphospholipid antibodies ( apl ) , should be termed the antiphospholipid antibody syndrome ( aps ) . here , we describe a case of secondary aps in a systemic lupus patient who presented with cerebral infarction in the presence of anticardiolipin antibodies . a 23-year - old korean man was admitted with a chief complaint of weakness and decreased sensation of the left arm and leg for 7 days . he had suffered from polyarthralgia and generalized edema for 2 years , and took herbal medicine occasionally . he had a history of hair loss , but he did not complain of allergies , photosensitivity , skin rash , oral ulcer , dry mouth , dry eye or raynaud's phenomenon . on examination , he was normotensive ( 110/70 mmhg ) but febrile ( 37.8 c ) . there were no pathologic lesions in his ears , eyes , nasal or oral mucosa . his joints were unremarkable , there was no cutaneous vasculitis , and all peripheral pulses were present with no arterial bruits . neurologically , he had left - side paresthesia , hemiparesis and a brisk deep tendon reflex , but other abnormal neurologic signs were not noted . on admission , hematocrit was 15% , hemoglobin 5.5 gm / dl , and the reticulocyte count was 3.5% . urinalysis showed 100 mg / dl protein , 57 white cells and many red cells . renal function evaluation revealed blood urea nitrogen 26 mg / dl , creatinine 1.5 mg / dl , 24 hr urine protein 3.4 gm / day , and the creatinine clearance was 55 ml / min . the ast was 34 iu / l , alt 42 iu / l , alkaline phosphatase 48 iu / l , total bilirubin 0.7 mg / dl , total protein 5.8 gm / dl and albumin 2.7 gm / dl . the platelets were 2210/ml , pt 10.7/100 ( sec/% ) and aptt 34/28 ( sec , patient / control ) . rheumatoid factor was negative and ana was positive ( speckled pattern , titer 1:160 ) . the anti - dsdna antibody was above 100 u / ml ( n ; 0 - 25 u / ml ) . the anti - sm , anti - rnp , anti - ro and anti - la antibodies were all positive . complement levels were decreased at c3 26.5 mg / dl ( n ; 52.6 - 120 mg / dl ) , c4 8.0 mg / dl ( n ; 20.5 - 49 mg / dl ) . anticardiolipin antibodies ( acl ) were positive for both igg and igm ( by elisa ) and vdrl was reactive , tpha nonreactive and fta - abs negative . the levels of protein c , protein s and antithrombin iii were within normal limits . plain chest radiography was normal . abdominal ultrasonography revealed a moderate amount of ascites , splenomegaly and diffusely increased renal parenchymal echogenicity . magnetic resonance imaging ( mri ) was performed . kidney biopsy showed a mixed class iii and v ( focal and segmental proliferative glomerulonephritis and membranous lupus glomerulonephritis ) lesion with an activity score of 4/24 and a chronicity score of 1/21 . lupus anticoagulant , which was described in the 1950s by conley and hartman , was first associated with thrombotic events by bowie et al . in 1963 . in 1987 , harris et al . proposed that the combination of clinical features , including both venous and arterial occlusive events , recurrent spontaneous abortions and thrombocytopenia with antiphospholipid antibodies , identified as moderate to high titers of igg or igm anticardiolipin antibody or the lupus anticoagulant , should be termed the antiphospholipid syndrome . minor manifestations continue to be described .
cutaneous manifestations , including livedo reticularis and leg ulcers not related to venous insufficiency , are well described . our patient , who met the diagnostic criteria for sle , was found to have anticardiolipin antibody , a false positive vdrl , a prolonged partial thromboplastin time , thrombocytopenia and cerebral thrombosis . although the majority of thrombotic episodes in patients with apl are venous , when thrombosis occurs in the arterial circulation , the brain is affected most often ; arterial thrombosis can also result in ocular complications , peripheral arterial disease and livedo reticularis . the average prevalence of lupus anticoagulant in sle was reported as 34% , and that of anticardiolipin as 44% . antiphospholipid antibodies ( apl ) are defined by solid - phase immunoassay and phospholipid - dependent coagulation tests . when detected by solid - phase immunoassay , they may be named for the specific negatively charged phospholipid , such as cardiolipin , that is used as the antigen . when detected by phospholipid - dependent coagulation tests , they are called lupus anticoagulant ( lac ) . a prolonged activated partial thromboplastin time ( aptt ) is the most useful screening test for lac , but it is considered to be an insensitive test . in general , coagulation tests with the least amount of phospholipid in the test system are the most sensitive . the protocols for performing solid - phase assays for apl most commonly utilize a standard elisa technique . however , the two antibodies appear to be distinct and may be directed against different epitopes . the consensus of opinion is that antiphospholipid antibodies have a pathogenetic role in the vasculopathy of the antiphospholipid syndrome , but the mechanism is unknown . any or all of the major components of the clotting system may be involved in apl pathogenicity , including the coagulation cascade ( many of these steps are phospholipid dependent ) , platelet activation and aggregation , and endothelial cell function . interference with each of these has been postulated as a possible mechanism . laboratory abnormalities that can be seen in patients with apl include a biologically false positive vdrl test , a prolonged activated partial thromboplastin time ( aptt ) and thrombocytopenia ; an elevated erythrocyte sedimentation rate , positive antinuclear antibody ( ana ) and elevated anti - dna titers occur occasionally . in a patient with thrombosis or fetal loss , the presence of any or all of these findings should be considered as clues that prompt evaluation for apl . there are no convincing data supporting the use of any specific treatment modality in patients with aps . for the treatment of patients who have had thrombotic events , long - term anticoagulation may be preferable . a major question is whether , or how , to treat patients with sle ( or other conditions ) who have apl but have never had a clinical event . regarding the clinical correlations of igg and igm acl in patients with aps in sle , it has been reported that there is a high correlation between igg acl titer and itp , recurrent venous thrombosis and fetal loss , and between igm acl and livedo reticularis and chronic leg ulceration . therefore , in patients with rheumatologic conditions , the occurrence of ischemic events , itp , renal abnormalities or hypertension should arouse the suspicion of aps and apl .
consideration should be given to prophylactic aspirin therapy in patients with moderately or strongly positive igg acl , even in the absence of any symptomatology , because there is a strong correlation ( in patients with sle ) of a moderately or strongly positive igg acl with thrombosis .
antiphospholipid antibody syndrome is a newly - defined clinical entity of arterial thrombosis , venous thrombotic events , recurrent spontaneous abortion and thrombocytopenia in the presence of antiphospholipid antibodies . we report the case of a 23-year - old male sle patient with positive anticardiolipin antibody who presented with left hemiparesis and paresthesia . the clinical and laboratory findings were compatible with the criteria for sle , and he was found to have anticardiolipin antibody , thrombocytopenia , prolonged partial thromboplastin time and cerebral thrombosis . initially , he was treated with high - dose steroid and warfarin , and he is now being followed up on warfarin and steroid .
let @xmath31 be a compact manifold and @xmath3 be a @xmath1 flow , @xmath87 , @xmath88 . the flow is _ anosov _ if the tangent space to @xmath31 has a continuous decomposition @xmath89 which is invariant , @xmath90 , @xmath91 , and for some @xmath92 and @xmath93 fixed @xmath94 where @xmath95 is given by a smooth riemannian metric on @xmath96 . note that we do not assume that the dimensions of @xmath97 and @xmath98 are the same . fix a smooth volume form @xmath99 on @xmath100 . we present here some basic results : an upper bound on the number of closed trajectories of @xmath101 ( lemma [ l : dyn-4 ] ) and on the volume of the set of trajectories that return to a small neighbourhood of their originating point after a given time ( lemma [ l : dyn-3 ] ) . these bounds are used in the proof of lemma [ l : appr ] . see appendix [ s : dyn ] for the proofs . the constant @xmath102 is defined in . [ l : dyn-3 ] define the following measure on @xmath103 : @xmath104 and fix @xmath105 . then there exists @xmath62 such that for each @xmath106 , and @xmath107 , @xmath108 in particular , by letting @xmath109 , we get a bound on the number of closed trajectories : [ l : dyn-4 ] let @xmath110 be the number of closed trajectories of @xmath101 of period no more than @xmath111 . then @xmath112 let @xmath113 be as in [ dyns ] and @xmath114 be defined by @xmath115 on the vector bundle of differential forms of all orders on @xmath100 , see . let @xmath116 be the smooth invariant subbundle of @xmath117 given by all differential @xmath60-forms @xmath118 satisfying @xmath119 , where @xmath120 denotes the contraction operator by a vector field see also ( * ? ? ? * ( 3.5 ) ) . we recall the trace formula of guillemin ( * ? ? ? * theorem 8 , ( ii.22 ) ) which is valid for any flow with nondegenerate periodic trajectories see appendix [ s : guillemin ] for a self - contained proof in the anosov case . in our notation it says that @xmath121 where @xmath23 s are periodic orbits , @xmath122 is the linearized poincar map , @xmath24 is the period of @xmath11 , and @xmath25 is the primitive period . see [ tft ] for definition and properties of the flat trace @xmath123 . by the anosov property , and since we use negative times in the definition of @xmath124 , the eigenvalues of @xmath125 satisfy @xmath126 , therefore @xmath127 . similarly if @xmath98 is orientable , then @xmath129 ; since @xmath130 , @xmath131 that is holds with @xmath132 . we now assume for some integer @xmath133 . consequently we relate the expressions on the right hand side of to the ruelle zeta function using @xmath134 this is a standard argument going back to ruelle @xcite but the particular determinants here seem to be rather different than the one related to his transfer operators : @xmath135 we note that thanks to lemma [ l : dyn-4 ] the sums on the right hand side converge for @xmath8 . in this section we present concepts and facts from microlocal / semiclassical analysis which are needed in the proofs . their proofs and detailed references are provided in appendix [ a : wf ] . let @xmath31 be a manifold . for a distribution @xmath136 , a phase space description of its singularities is given by the wave front set @xmath137 , a closed conic subset of @xmath138 . 
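before turning to the semiclassical refinements of the wave front set , it may help to record , for orientation , conventional renderings of the dynamical objects introduced earlier in this section ( these are reconstructions on our part : the symbols below are standard choices and need not coincide with the conventions hidden behind the @xmath placeholders ) . the anosov condition asks for a continuous , flow - invariant splitting

\[ T_xM = E_0(x) \oplus E_s(x) \oplus E_u(x) , \qquad E_0(x) = \mathbb{R}\,V(x) , \]
\[ |d\varphi_t(x)v| \le C e^{-\theta t} |v| \ \ ( v \in E_s(x) ,\ t \ge 0 ) , \qquad |d\varphi_t(x)v| \le C e^{-\theta |t|} |v| \ \ ( v \in E_u(x) ,\ t \le 0 ) . \]

guillemin's trace formula , in the scalar case , then reads

\[ \operatorname{tr}^\flat \varphi_{-t}^{*} \;=\; \sum_{\gamma} \frac{ T_\gamma^{\#}\, \delta ( t - T_\gamma ) }{ | \det ( I - \mathcal P_\gamma ) | } , \qquad t > 0 , \]

the sum running over closed trajectories $\gamma$ , with period $T_\gamma$ , primitive period $T_\gamma^{\#}$ and linearized poincaré map $\mathcal P_\gamma$ taken with negative times ( the @xmath23 , @xmath24 , @xmath25 and @xmath122 of the text ) . writing the ruelle zeta function as a product over primitive orbits ,

\[ \zeta_\rho(\lambda) = \prod_{\gamma^{\#}} \bigl( 1 - e^{ -\lambda T_{\gamma^{\#}} } \bigr) , \qquad \frac{d}{d\lambda} \log \zeta_\rho(\lambda) = \sum_{\gamma} T_{\gamma^{\#}}\, e^{ -\lambda T_\gamma } \quad ( \operatorname{re} \lambda \ \text{large} ) , \]

and using $\det ( I - \mathcal P_\gamma ) = \sum_k (-1)^k \operatorname{tr} \Lambda^k \mathcal P_\gamma$ , each term $e^{ -\lambda T_\gamma }$ can be expanded ( up to signs governed by the orientability of the unstable bundle ) into an alternating sum over form degrees of exactly the expressions appearing on the right - hand side of the trace formula ; this is the factorization of the zeta function referred to above .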
a more general object is the semiclassical wave front set defined using a ( small ) asymptotic parameter @xmath139 for @xmath140-tempered families of distributions @xmath141 : @xmath142 where @xmath143 is the fiber - radially compactified cotangent bundle , a manifold with interior @xmath144 and boundary @xmath145 , the cosphere bundle . in addition to singularities , @xmath146 measures oscillations on the @xmath140-scale . the relation of the two wave front sets is the following : if @xmath147 is an @xmath140-independent distribution , then @xmath148 see [ a : wf-2 ] and , for a more general statement , ( * ? ? ? * ( 8.4.8 ) ) . for operators we define the wave front set @xmath149 ( or @xmath150 for @xmath140-dependent families of operators ) using the schwartz kernel see . this way @xmath151 , the diagonal in @xmath152 , rather than @xmath153 , the conormal bundle to the diagonal in @xmath154 . the following result , proved in [ a : wf-2 ] , will allow us to calculate @xmath155 , and thus , by , @xmath156 . it states that away from fiber infinity , the semiclassical wave front set of an operator is characterized using its action on distributions : [ l : wfs ] let @xmath157 be an @xmath140-tempered family of operators . a point @xmath158 does not lie in @xmath150 if and only if there exist neighbourhoods @xmath159 of @xmath160 and @xmath161 of @xmath162 such that @xmath163 for each @xmath140-tempered family of functions @xmath164 . we next state several semiclassical estimates used in [ micro ] . to be able to work with differential forms , we consider a semiclassical pseudodifferential operator @xmath165 acting on @xmath140-tempered families of distributions @xmath166 with values in a vector bundle @xmath117 over @xmath100 . for simplicity , we assume below that @xmath100 is a compact manifold . we provide estimates in semiclassical sobolev spaces @xmath167 ( denoted @xmath168 for simplicity ) and the corresponding restrictions on wave front sets . each of the estimates , , , is understood as follows : if the right - hand side is well - defined , then for @xmath140 small enough , the left - hand side is well - defined and the estimate holds . for example , in the case of , if @xmath169 and @xmath170 , then we have @xmath171 . see [ a : wf-3 ] for the proofs . [ l : elliptic ] ( elliptic estimate ) let @xmath166 be @xmath140-tempered . then : 1 . if @xmath172 ( acting on @xmath173 diagonally ) and @xmath174 is elliptic on @xmath175 , then for each @xmath176 , @xmath177 2 . if @xmath178 denotes the elliptic set of @xmath174 , then @xmath179 . ( figure [ f : hyperbolic ] , referred to below , displays the wave front sets of @xmath180 and the flow lines of @xmath181 . ) [ l : hyperbolic ] ( propagation of singularities ) assume that @xmath182 and the semiclassical principal symbol @xmath183 is diagonal with entries @xmath185 ( more precisely , with some representative of the equivalence class @xmath184 satisfying the specified conditions ) , with @xmath186 independent of @xmath140 and @xmath187 everywhere . assume also that @xmath188 is homogeneous of degree @xmath189 in @xmath190 , for @xmath191 large enough . let @xmath192 be the hamiltonian flow of @xmath188 on @xmath193 and @xmath166 be an @xmath140-tempered family of distributions . then ( see figure [ f : hyperbolic ] ) : 1 . assume that @xmath194 and for each @xmath195 , there exists @xmath196 with @xmath197 and @xmath198 for @xmath199 . then for each @xmath176 , @xmath200 2 .
if @xmath201 is a flow line of @xmath181 , then for each @xmath202 , @xmath203 ) \cap \mathrm{WF}_h ( \mathbf{P u} ) = \emptyset \ \Longrightarrow\ \gamma ( 0 ) \notin \mathrm{WF}_h ( \mathbf u ) . propagation of singularities states in particular that if @xmath204 and @xmath205 microlocally near some @xmath206 , then @xmath205 microlocally near @xmath207 for @xmath208 ; in other words , regularity can be propagated forward along the hamiltonian flow lines . ( if @xmath209 instead , then regularity could be propagated backward . ) we next state less standard estimates guaranteeing regularity of @xmath118 near sources / sinks , provided that @xmath118 lies in a sufficiently high sobolev space . denote by @xmath210 the natural projection map . let @xmath188 be a real - valued function on @xmath144 ; for simplicity , we assume that it is homogeneous of degree 1 in @xmath190 . assume that @xmath211 is a closed conic set invariant under the flow @xmath192 and there exists an open conic neighbourhood @xmath159 of @xmath212 with the following properties for some constant @xmath213 : @xmath214 we call @xmath215 a _ radial source_. a _ radial sink _ is defined analogously , reversing the direction of the flow . the following propositions come essentially from the work of melrose ( * ? ? ? * propositions 9,10 ) and vasy ( * ? ? ? * propositions 2.3,2.4 ) . the first one shows that for sufficiently regular distributions the wave front set at radial sources is controlled . ( figure [ f : radial ] : ( a ) the assumptions of proposition [ l : radial1 ] ; ( b ) the assumptions of proposition [ l : radial2 ] . here @xmath216 is the boundary of @xmath193 and the flow lines of @xmath181 are pictured . ) [ l : radial1 ] assume that @xmath217 is as in proposition [ l : hyperbolic ] and @xmath218 is a radial source . then there exists @xmath219 such that ( see figure [ f : radial](a ) ) 1 . for each @xmath220 elliptic on @xmath221 , there exists @xmath172 elliptic on @xmath222 such that if @xmath223 is @xmath140-tempered , then for each @xmath224 , @xmath225 2 . if @xmath223 is @xmath140-tempered and @xmath220 is elliptic on @xmath222 , then @xmath226 the second result shows that for sufficiently low regularity we have a propagation result at radial sinks analogous to . [ l : radial2 ] assume that @xmath217 is as in proposition [ l : hyperbolic ] and @xmath218 is a radial sink . then there exists @xmath219 such that for each @xmath220 elliptic on @xmath222 , there exists @xmath172 elliptic on @xmath222 and @xmath227 with @xmath228 , such that if @xmath223 is @xmath140-tempered , then for each @xmath229 ( see figure [ f : radial](b ) ) @xmath230 * remarks * . ( i ) in the case @xmath231 , we can replace @xmath174 by @xmath232 in propositions [ l : radial1 ] and [ l : radial2 ] to make both of them apply to sources and sinks . ( ii ) the precise value of the threshold @xmath233 can be computed by being slightly more careful in the proofs ( using a regularizer @xmath234 for small @xmath235 in place of @xmath236 and an additional regularization procedure to justify ) ; see for example ( * ? ? ? * propositions 2.3,2.4 ) . we now consider an operator @xmath237 satisfying @xmath238 on a compact manifold @xmath100 , and define the flat trace @xmath239 here @xmath240 is its schwartz kernel with respect to the density @xmath241 on @xmath100 ; the trace @xmath242 does not depend on the choice of the density .
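in standard notation ( a reconstruction ; the precise hypothesis is the one labelled by the placeholder @xmath238 above ) , the flat trace just defined is

\[ \operatorname{tr}^\flat B := \int_M K_B ( x , x ) \, d\mu ( x ) = \langle \iota_\Delta^{*} K_B , 1 \rangle , \qquad \iota_\Delta : x \mapsto ( x , x ) , \]

which makes sense as soon as the twisted wave front set

\[ \operatorname{WF}' ( B ) := \{ ( x , \xi , y , -\eta ) : ( x , \xi , y , \eta ) \in \operatorname{WF} ( K_B ) \} \]

does not meet the diagonal $\Delta ( T^*M ) = \{ ( \rho , \rho ) : \rho \in T^*M \setminus 0 \}$ , since this is exactly the condition under which the pullback of the kernel by the diagonal map , discussed in the next paragraph , is well defined .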
the pullback @xmath243 of the schwartz kernel @xmath244 is defined under the condition as in ( * ? ? ? * theorem 8.2.4 ) . to obtain a concrete expression for @xmath245 we use traces of regularized operators . for that we introduce a family of mollifiers . let @xmath246 be the geodesic distance for @xmath247 in a neighbourhood of @xmath248 with respect to some fixed riemannian metric . let @xmath249 ) $ ] be equal to 1 near 0 . we define @xmath250 , @xmath251 where @xmath252 is chosen so that @xmath253 and satisfies @xmath254 . we have @xmath255 the next lemma shows that the flat trace is well approximated by regular traces see [ a : wf-1 ] for a proof . [ l : appr1 ] for @xmath256 satisfying and @xmath257 given by we have @xmath258 where the trace on the right hand side is well - defined since @xmath259 is smoothing and thus trace class on @xmath260 . if an operator @xmath261 instead acts on sections of a smooth vector bundle , @xmath262 , and satisfies , then we can define the trace of @xmath261 by the formula @xmath263 if @xmath264 is a local frame of @xmath117 and @xmath261 is supported in the domain of the local frame the general case is handled by a partition of unity and the independence of the choice of the frame is easily verified . in this section we use the anisotropic sobolev spaces @xmath77 and the propagation results recalled in [ wfs ] to describe the microlocal structure of the meromorphic continuation of the resolvent . our proof is different that the argument in @xcite in the sense that we use a less refined weight to define anisotropic sobolev spaces and derive the fredholm property of @xmath265 from propagation of singularities . anisotropic sobolev spaces appeared in the study of anosov flows in the works of baladi @xcite , baladi tsujii @xcite , gouzel liverani @xcite , liverani @xcite , and other authors . however , the use of microlocally defined exponential weights allows a more direct study using pde methods . let @xmath266 be as in [ dyns ] and consider the vector bundle , @xmath117 , of differential forms of all orders on @xmath100 . ( the resolvents on forms of different degree are decoupled from each other , however we treat them as a single resolvent to simplify notation . ) consider the first order differential operator @xmath267 where @xmath161 is the generator of the flow @xmath101 , @xmath268 denotes the lie derivative , and @xmath118 is a differential form on @xmath100 . the principal symbol @xmath269 , as defined in [ a : wf-1 ] , is diagonal and homogeneous of degree 1 : @xmath270 , @xmath271 . this follows immediately from the fact that for any basis @xmath264 of @xmath117 , and all @xmath272 , @xmath273 where the second term in the sum is a differential operator of order 0 . the hamilton flow is @xmath274 . define the decomposition @xmath275 where @xmath276 are dual to @xmath277 . from it follows that @xmath278 here @xmath279 is the projection defined before . moreover , under the assumptions of we have @xmath280 , and the convergence in and the constant @xmath62 are locally uniform in @xmath160 . in particular implies that , in the sense of definition , the closed conic sets @xmath281 and @xmath282 are a radial source and a radial sink , respectively see figure [ f : dynamics ] below . anisotropic sobolev spaces have a long tradition in microlocal analysis going back to the work of duistermaat @xcite and unterberger @xcite . 
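to fix ideas , the operator and the splitting of the cotangent bundle introduced above can be written , in one common normalization ( an assumption on our part : the text's @xmath267 may differ by a sign or a factor ) , as

\[ \mathbf P = -i \mathcal L_V \ \text{on sections of } \mathcal E , \qquad \sigma ( \mathbf P ) ( x , \xi ) = \xi ( V ( x ) ) =: p ( x , \xi ) , \]

so that the hamiltonian flow of $p$ is the symplectic lift of the anosov flow ,

\[ e^{ t H_p } ( x , \xi ) = \bigl( \varphi_t ( x ) , ( d\varphi_t ( x )^{-1} )^{T} \xi \bigr) , \]

and the dual decomposition is

\[ T_x^{*} M = E_0^{*} ( x ) \oplus E_s^{*} ( x ) \oplus E_u^{*} ( x ) , \]

each summand being the annihilator of the other two summands of $T_xM = E_0 \oplus E_s \oplus E_u$ ( so , with this labelling , $E_u^{*}$ pairs with $E_u$ and kills $E_0 \oplus E_s$ ) . a short computation with the anosov bounds shows that , under the forward lifted flow , covectors in the summand paired with $E_u$ contract exponentially while covectors in the summand paired with $E_s$ expand exponentially ; hence , at fiber infinity , one of these two conic sets is a radial source and the other a radial sink , which is the statement made above ( which label goes with which placeholder depends on the annihilator convention , so we do not fix it here ) .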
to define a version on which @xmath283 is a fredholm operator , we use a function @xmath284 ) $ ] , homogeneous of degree @xmath285 and such that @xmath286 a function with these properties , supported in a small neighbourhood of @xmath287 , can be constructed using part 1 of lemma [ l : radialesc ] . a more refined version , not needed here , can be found in ( * ? ? ? * lemma 1.2 ) . with @xmath288 in place we choose a pseudodifferential operator @xmath289 satisfying @xmath290 where @xmath291 is any smooth norm on the fibers of @xmath144 . then , using @xcite as in ( * ? ? ? * ( 3.9 ) ) , @xmath292 for any @xmath293 . the anisotropic sobolev spaces are defined using this exponential weight : @xmath294 note that @xmath295 . define the domain , @xmath296 , of @xmath297 as the set of @xmath298 such that the distribution @xmath299 is in @xmath300 . the hilbert space norm on @xmath301 is given by @xmath302 . here we state the properties of the resolvent of @xmath174 : [ l : resolvent - meromorphic ] fix a constant @xmath303 . then for @xmath304 large enough depending on @xmath305 , @xmath306 is a fredholm operator of index 0 in the region @xmath307 . [ l : our - stuff - actually - makes - sense ] let @xmath304 be fixed as in proposition [ l : resolvent - meromorphic ] . then there exists a constant @xmath308 depending on @xmath309 , such that for @xmath310 , the operator @xmath306 is invertible and @xmath311 where @xmath312 is the pullback operator by @xmath313 on differential forms and the integral on the right - hand side converges in operator norm @xmath314 and @xmath315 . the fredholm property and the invertibility of @xmath265 for large @xmath316 show that the resolvent @xmath317 is a meromorphic family of operators with poles of finite rank see for example ( * ? ? ? * proposition d.4 ) . note that ruelle pollicott resonances , the poles of @xmath318 in the region @xmath319 , are then the poles of the meromorphic continuation of the schwartz kernel of the operator given by the right - hand side of , and thus are independent of the choice of @xmath309 and the weight @xmath320 . microlocal structure of @xmath318 is described in [ l : resolvent - properties ] let @xmath33 and @xmath321 be as in proposition [ l : resolvent - meromorphic ] and assume @xmath322 . then for @xmath68 near @xmath323 , @xmath324 where @xmath325 holomorphic near @xmath323 , @xmath326 is the commuting projection onto the kernel of @xmath327 , and @xmath328 where @xmath329 is the diagonal and @xmath330 is the positive flow - out of @xmath192 on @xmath331 : @xmath332 in [ s : micro-1 ] , we construct a semiclassical nontrapping parametrix and study its @xmath140-wave front set . in [ s : micro-2 ] , we express @xmath318 via the parametrix and use the results of [ s : micro-1 ] to finish the proofs of propositions [ l : resolvent - meromorphic][l : resolvent - properties ] . we will modify @xmath333 by a complex absorbing potential which will eliminate trapping and guarantee invertibility of the modified operator . it is convenient now to introduce a semiclassical parameter @xmath140 and use the algebra @xmath334 of semiclassical pseudodifferential operators , see [ a : wf-2 ] . if @xmath174 is defined in , then @xmath335 is a semiclassical differential operator with principal symbol @xmath336 . the original operator @xmath174 is independent of @xmath140 . 
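schematically ( with the usual caveat that the exact normalizations are hidden in the placeholders ) , the space constructed above is an exponentially weighted space

\[ H_{sG} := e^{ -s \widehat G } L^2 ( M ; \mathcal E ) , \qquad \widehat G = \operatorname{Op} \bigl( m ( x , \xi ) \log |\xi| \bigr) \ \text{for large } |\xi| , \]

with $m$ homogeneous of degree $0$ , valued in $[ -1 , 1 ]$ , and arranged so that $H_{sG}$ agrees microlocally with $H^{s}$ near the radial source and with $H^{-s}$ near the radial sink ; this is precisely the regularity pattern required by propositions [ l : radial1 ] and [ l : radial2 ] . for the resolvent , writing $\mathbf P = -i \mathcal L_V$ as in the sketch above , one has , for $\operatorname{Im} \lambda$ large ,

\[ \mathbf R ( \lambda ) = ( \mathbf P - \lambda )^{-1} = i \int_0^{\infty} e^{ i \lambda t } \, \varphi_{-t}^{*} \, dt , \]

the integral converging in operator norm because $\| \varphi_{-t}^{*} \|_{ H_{sG} \to H_{sG} } \le C e^{ C_1 t }$ ; this is the content of proposition [ l : our - stuff - actually - makes - sense ] . note that , as remarked above , @xmath174 itself does not depend on @xmath140 ; the semiclassical parameter only enters through the weights and through the parametrix discussed next .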
however , the parameter @xmath140 enters in the parametrix @xmath337 defined in proposition [ l : parametrix - properties ] below , which is a convenient tool to show the fredholm property of @xmath265 . moreover , the semiclassical wavefront set of @xmath337 can be computed by studying the dependence of @xmath338 on @xmath339 ; this is not possible for nonsemiclassical wavefront sets as we lose information on how the lengths of covectors in @xmath340 and @xmath341 are related . therefore , semiclassical methods are convenient for the proof of proposition [ l : resolvent - properties ] , which is the key component of the present paper . we need a semiclassical adaptation , @xmath342 , of the operator @xmath320 , such that @xmath343 where @xmath344 is equal to 1 near the zero section , and @xmath345 does not intersect the zero section . note that , since @xmath346 is homogeneous of degree zero , @xmath347 define the space @xmath348 . for each fixed @xmath349 , the operator @xmath350 lies in @xmath351 and @xmath352 ; therefore , @xmath353 is bounded as @xmath354 . by ( * theorem 8.8 ) , @xmath355 and the norms are equivalent , with the constant depending on @xmath140 . we also use the semiclassical analogue of the space @xmath296 , with the norm @xmath356 we modify @xmath357 by adding an @xmath140-pseudodifferential _ complex absorbing potential _ @xmath358 , which provides a localization to a neighbourhood of the zero section : @xmath359 here @xmath291 is a fixed norm on the fibers of @xmath144 . the action of @xmath360 on @xmath361 is equivalent to the action on @xmath362 of the conjugated operator @xmath363 + \mathcal O ( h^2 )_{ \Psi^{-1+}_h } , where the asymptotic expansion follows from @xcite see ( * ? ? ? * ( 3.11 ) ) . we note that @xmath364 = \mathcal O ( h^\infty )_{ \Psi^{-\infty} } for small enough @xmath365 , because @xmath345 does not intersect the zero section . ( figure [ f : dynamics ] : the dynamics on @xmath366 , projected onto the fibers of @xmath367 ; the shaded region is the wave front set of @xmath368 . ) we now use the propagation of semiclassical singularities and the elimination of trapping due to the complex absorbing potential to establish existence and properties of the inverse of @xmath369 . the relation between propagation and solvability has a long tradition , see @xcite . although the details below may look complicated , the idea is simple and natural , given the dynamics of the flow pictured in figure [ f : dynamics ] : given bounds on @xmath370 , we first establish bounds on @xmath118 microlocally near the sources @xmath371 by proposition [ l : radial1 ] . by ellipticity ( proposition [ l : elliptic ] ) we can also estimate @xmath118 on @xmath372 and in @xmath373 , where the latter is made possible by the potential @xmath368 . the resulting estimates can be propagated forward along the flow @xmath192 , using proposition [ l : hyperbolic ] , to the whole @xmath374 ; finally , to bound @xmath118 microlocally near @xmath375 , we use proposition [ l : radial2 ] . the spaces @xmath376 provide the correct regularity for propositions [ l : radial1 ] and [ l : radial2 ] . [ l : parametrix - properties ] fix a constant @xmath303 and @xmath377 . then for @xmath304 large enough depending on @xmath305 and @xmath140 small enough , the operator \( \mathbf P_\delta ( z ) : D_{ sG ( h ) } \to H_{ sG ( h ) } , \ -C_0 h \leq \operatorname{Im} z \leq 1 , \) is invertible , and the inverse , @xmath337 , satisfies @xmath379 with @xmath380 defined in proposition [ l : resolvent - properties ] , and @xmath381 is defined in [ a : wf-2 ] .
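in outline ( the notation here is a hypothetical but standard rendering , not necessarily the one behind the placeholders ) , the modified operator has the form

\[ \mathbf P_\delta ( z ) := \mathbf P ( z ) - i \mathbf Q_\delta , \qquad \mathbf Q_\delta \in \Psi^0_h , \quad \sigma_h ( \mathbf Q_\delta ) \ge 0 , \]

with $\mathbf Q_\delta$ elliptic in a $\delta$ - neighbourhood of the zero section and with wave front set disjoint from the radial sets , and the conclusion of proposition [ l : parametrix - properties ] is a nontrapping - type bound , schematically

\[ \| \mathbf P_\delta ( z )^{-1} \|_{ H_{ sG ( h ) } \to H_{ sG ( h ) } } \le C h^{-1} , \qquad -C_0 h \le \operatorname{Im} z \le 1 , \]

together with the wave front set statement quoted there . the five cases in the proof that follows simply track an arbitrary point of the compactified cotangent bundle : near the radial source one applies proposition [ l : radial1 ] ; on the elliptic set of $\mathbf P_\delta ( z )$ ( which , thanks to $\mathbf Q_\delta$ , now includes a neighbourhood of the zero section ) one applies proposition [ l : elliptic ] ; the control obtained in these two regions is then propagated forward along the flow by proposition [ l : hyperbolic ] , and the radial sink is handled last by proposition [ l : radial2 ] .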
we first prove the bound @xmath382 without loss of generality , we assume that @xmath383 . by a microlocal partition of unity , it suffices to obtain bounds on @xmath384 , where @xmath172 falls into one of the following five cases : * case 1 * : @xmath385 . then @xmath386 is elliptic on @xmath175 . we have @xmath387 , where @xmath388 and @xmath389 . by proposition [ l : elliptic ] , @xmath390 where @xmath391 is microlocalized in a neighbourhood of @xmath175 . putting @xmath392 , we obtain @xmath393 * case 2 * : @xmath175 is contained in a small neighbourhood of @xmath371 , where @xmath394 is the natural projection . by ( * ? ? ? * theorem 8.6 ) , @xmath395 and @xmath396 near @xmath371 . therefore , @xmath397 is microlocally equivalent to the space @xmath398 near @xmath371 in the sense that @xmath399 for each @xmath227 with @xmath400 contained in a neighbourhood of @xmath371 and each @xmath140-tempered @xmath401 . since @xmath402 , we get @xmath403 . the set @xmath404 is a radial source ( see the discussion following ) and we can apply proposition [ l : radial1 ] and to obtain , for @xmath309 sufficiently large , @xmath405 where @xmath220 is some operator with @xmath406 in a neighbourhood of @xmath371 . * case 3 * : @xmath175 is contained in a small neighbourhood of some @xmath407 , where @xmath408 is the closure of @xmath409 in @xmath193 . then by and the discussion following it , @xmath410 in @xmath193 as @xmath411 . therefore , for any fixed neighbourhood @xmath159 of @xmath371 , there exists @xmath227 with @xmath412 and @xmath202 such that @xmath413 . from , and the fact that @xmath414 , @xmath415 applying proposition [ l : hyperbolic ] to the operator @xmath386 and arguing similarly to case 1 , we get @xmath416 , where @xmath417 is microlocalized in a small neighbourhood of @xmath418}e^{th_p}(\wfh(a))$ ] . now , @xmath419 can be estimated by case 2 , yielding @xmath420 where @xmath220 is microlocalized in a small neighbourhood of @xmath371 . * case 4 * : @xmath175 is contained in a small neighbourhood of some @xmath421 . then @xmath422 converges to the zero section as @xmath411 ; therefore , there exists @xmath202 such that @xmath423 . similarly to case 3 , by propagation of singularities we find @xmath416 , where @xmath424 and @xmath425 is contained in a small neighbourhood of @xmath418}e^{th_p}(\wfh(a))$ ] . estimating @xmath419 by case 1 , we get @xmath426 where @xmath427 is microlocalized in a small neighbourhood of @xmath428 . * case 5 * : @xmath175 is contained in a small neighbourhood of @xmath375 . note that the space @xmath397 is microlocally equivalent to the space @xmath429 near @xmath375 , similarly to case 2 . since @xmath409 is a radial sink , by proposition [ l : radial2 ] we get , for @xmath309 sufficiently large , @xmath430 , where @xmath431 are microlocalized in a small neighbourhood of @xmath375 and @xmath432 . then @xmath419 can be estimated by a combination of the preceding cases , using a microlocal partition of unity ; this gives @xmath433 combining , , we get . for for the dynamics of @xmath434 , @xmath404 is a sink and @xmath409 a source . hence the proof of applies to @xmath435 , and we obtain the adjoint bound @xmath436 we now show that @xmath437 is invertible @xmath438 . injectivity follows immediately from ; we also get the bound on the inverse once surjectivity is proved . 
to see surjectivity , note first that implies that if @xmath439 and @xmath440 is a cauchy sequence in @xmath397 , then @xmath441 is a cauchy sequence in @xmath397 as well ; since the operator @xmath437 is closed on @xmath397 with domain @xmath442 , we see that the image of @xmath437 is a closed subspace of @xmath397 . now , @xmath443 is the dual to @xmath397 under the @xmath362 pairing ( fixing an inner product on the fibers of @xmath117 ) see ( * ? ? ? * ( 8.3.11 ) ) . therefore , it suffices to show that if @xmath444 and @xmath445 for all @xmath446 , then @xmath447 . taking @xmath448 , we see that @xmath449 ; it remains to use . to show the restriction on the wave front set of @xmath337 , by lemma [ l : wfs ] it is enough to show that for each @xmath450 , there exist neighbourhoods @xmath159 of @xmath160 and @xmath161 of @xmath451 such that for each @xmath140-tempered @xmath452 and @xmath453 , if @xmath454 , then @xmath455 . this follows similarly to the proof of part 2 of proposition [ l : elliptic ] from the estimates , , , keeping in mind that @xmath456 . we assume that @xmath68 varies in some compact subset of @xmath307 and choose @xmath140 small enough so that @xmath457 satisfies @xmath458 , @xmath459 . proposition [ l : resolvent - meromorphic ] follows immediately from proposition [ l : parametrix - properties ] , given that @xmath460 are topologically isomorphic to @xmath461 and @xmath462 is smoothing and thus compact . to show proposition [ l : our - stuff - actually - makes - sense ] , we first note that since derivatives of the flow @xmath101 are bounded exponentially in @xmath463 , we have @xmath464 , where @xmath308 is a constant depending on @xmath309 . therefore , if @xmath310 , @xmath465 , and @xmath466 , then we see @xmath467 where the integrals converge in @xmath468 . this implies that @xmath265 is injective @xmath469 and thus invertible , and holds . for in proposition [ l : resolvent - properties ] we note that the fredholm property shows that , near a pole @xmath470 , @xmath471 , where @xmath472 are operators of finite rank see for instance @xcite . we have @xmath473 @xmath474 = 0 $ ] and , using cauchy s theorem , @xmath475 . equating powers of @xmath476 in the equation @xmath477 shows that @xmath478 , and @xmath479 . finally , to show we use the formula @xmath480 where @xmath481 , @xmath482 , and @xmath457 . now , by proposition [ l : parametrix - properties ] , and since @xmath368 is pseudodifferential , we get @xmath483 to handle the remaining term in , we first assume that @xmath68 is not a pole of @xmath484 . applying again proposition [ l : parametrix - properties ] , we see that @xmath485 therefore , @xmath486 . since @xmath318 does not depend on @xmath365 and @xmath140 , by , @xmath487 as claimed . in a neighbourhood of a pole @xmath323 of @xmath484 , we replace @xmath318 in by @xmath488 . arguing as before , we get @xmath489 uniformly in @xmath68 near @xmath323 . by taking @xmath490 derivatives at @xmath491 we obtain the first part of . by taking @xmath492 derivatives at @xmath491 , we get @xmath493 , which implies the second part of . the proof is based on which relates the resolvent and the propagator . the description of the wave front set of @xmath494 allows us to take the flat trace of the left hand side composed with @xmath495 and that formally gives the meromorphic continuation . 
to justify this we first use the mollifiers @xmath496 to obtain trace class operators to which lemma [ l : appr1 ] can be applied : [ l : appr ] suppose that @xmath496 is given by and that @xmath497 . then there exists a constant @xmath498 , independent of @xmath499 such that @xmath500 we replace @xmath501 by @xmath502 ( considering the flow in the opposite time direction ) . the first estimate follows from @xmath503 provided @xmath504 . here @xmath505 is any fixed riemannian metric on @xmath100 . for the second estimate in we use the definition of @xmath506 : @xmath507 where the last estimate comes from lemma [ l : dyn-3 ] . we now complete the proof of the meromorphic continuation of @xmath32 . thanks to formula we need to show that @xmath508 has a meromorphic continuation to @xmath509 for any @xmath33 , with poles that are simple and residues which are integral . fix @xmath510 such that @xmath511 for all @xmath50 and put @xmath512 where @xmath116 is defined in [ s : trace - identities ] . for large @xmath202 , take @xmath513 such that @xmath514 near @xmath515 $ ] and @xmath516 everywhere . integrating against the function @xmath517 , we get @xmath518 using the bound on the number of closed geodesics given in lemma [ l : dyn-4 ] together with , we see that for @xmath519 , @xmath520 we can change the order in which limits are taken by ; we can replace the domain of integration by @xmath521 since @xmath522 for @xmath523 small enough and @xmath524 $ ] . let @xmath525 , where @xmath318 is the inverse of @xmath265 on the anisotropic sobolev space @xmath526 , studied in [ stuff ] , and @xmath309 is large depending on @xmath305 . by proposition [ l : our - stuff - actually - makes - sense ] , we have for @xmath519 , @xmath527 because of the choice of @xmath528 ( @xmath529 for all @xmath11 ) , and as @xmath530 is contained in the graph of @xmath531 , proposition [ l : resolvent - properties ] shows that @xmath532 satisfies the assumptions of lemma [ l : appr1 ] with the poles handled as in . hence , by another application of , @xmath533 which is a meromorphic function . finally , to see that @xmath534 has simple poles and integral residues , we use the following elementary result based on the fact that traces of nilpotent operators are @xmath285 : [ l : fred ] suppose that that a linear map @xmath535 satisfies @xmath536 for some @xmath537 . then for @xmath538 holomorphic near @xmath539 we have @xmath540 where @xmath541 is defined by the power series expansion at @xmath542 ( which is finite ) . from we have near a pole @xmath323 of @xmath543 , @xmath544 where @xmath545 is holomorphic near @xmath546 and @xmath547 is given by : @xmath548 here we use the fact that @xmath549 and @xmath550 agree on finite rank operators ( as follows from an approximation statement and the fact that the trace of a smoothing operator is the integral of its schwartz kernel over the diagonal , see ) . we now apply lemma [ l : fred ] with @xmath551 and @xmath552 . in this appendix we provide proofs of statements made in [ dyns ] . it follows immediately from the anosov property that ( with @xmath553 denoting the identity operator ) @xmath554 indeed , if @xmath555 and @xmath556 , then @xmath557 for all @xmath558 , implying by that @xmath559 . the following lemma is a generalization of to the case when @xmath560 is close to @xmath561 . we fix a smooth distance function @xmath562 on @xmath100 and a smooth norm @xmath291 on the fibers of @xmath563 . 
[ l : dyn-1 ] let @xmath564 and @xmath565 be a continuous family of invertible linear transformations such that @xmath566 and @xmath567 maps @xmath568 onto @xmath569 . then there exist @xmath570 and @xmath62 such that @xmath571 we first note that it suffices to prove for sufficiently large @xmath463 . indeed , if @xmath572 is a large fixed integer , @xmath555 , and @xmath573 and @xmath574 are both small , then @xmath575 and @xmath576 are small as well ; applying for @xmath577 in place of @xmath463 , we get that @xmath578 is small . assume that the conditions of are satisfied and put @xmath579 , where @xmath580 . for @xmath463 large enough , the anosov property implies \( |v_u| \leq \tfrac{1}{2} |d\varphi_t(x)v_u| \) . for @xmath365 small enough , @xmath582 are close to 1 , and we get @xmath583 where the last inequality is due to the fact that @xmath584 , @xmath585 . fix a constant @xmath586 such that for some choice of the norm on the space @xmath587 of twice differentiable functions on @xmath100 , there exists a constant @xmath62 such that @xmath588 such @xmath102 exists since @xmath100 is compact and @xmath101 is a one - parameter group . as a consequence of ( since it gives a bound on the lipschitz norm of @xmath101 ) , we get @xmath589 the next lemma in particular implies ( by letting @xmath109 ) that two different closed trajectories of nearby periods @xmath590 have to be at least @xmath591 away from each other , where @xmath365 is a small constant . [ l : dyn-2 ] fix @xmath105 . then there exist @xmath592 such that for each @xmath593 , @xmath594 without loss of generality , we may assume that @xmath595 is small depending on @xmath365 . by , we see that @xmath596 whenever @xmath597 . therefore , we may operate in a coordinate neighbourhood containing @xmath598 , identified with a ball in @xmath599 . we replace @xmath600 with @xmath601 for some @xmath602 so that @xmath603 by , we have for all @xmath604 , @xmath605 using the taylor expansion of @xmath560 in @xmath561 , we see that @xmath606 next , @xmath607 ; by taylor expanding @xmath608 in @xmath463 , we get @xmath609 together , these give @xmath610 since @xmath611 and @xmath612 , we get @xmath613 let @xmath567 be a family of transformations satisfying the conditions of lemma [ l : dyn-1 ] ; it can be defined for example using parallel transport along geodesics with respect to some riemannian metric and projectors corresponding to the decomposition @xmath614 . then @xmath567 maps @xmath615 onto @xmath616 . since @xmath617 , we get for @xmath595 small enough depending on @xmath365 , @xmath618 . since @xmath619 , we find @xmath620 . then @xmath621 now , by , @xmath622 ; since this space is transverse to @xmath623 , and by lemma [ l : dyn-1 ] , we get @xmath624 it remains to choose @xmath365 small enough so that @xmath625 .
by lemma [ l : dyn-2 ] , @xmath635 is contained in an @xmath640 sized tubular neighbourhood of the trajectory @xmath641 . therefore , we get @xmath642 , finishing the proof . let @xmath643 be a closed trajectory of period @xmath644 . then for each @xmath593 , we have by , @xmath645 moreover , for @xmath646 and @xmath595 small enough depending on @xmath111 , the tubular neighbourhoods on the right - hand side of for different closed trajectories do not intersect . the volume ( in @xmath647 ) of each tubular neighbourhood is bounded from below by @xmath648 ; it remains to let @xmath109 and apply lemma [ l : dyn-3 ] . in this appendix , we give a self - contained proof of guillemin s trace formula ( including the special case ) in the case of anosov flow @xmath113 on a compact manifold @xmath100 . the proof is somewhat simplified by the fact that @xmath615 is a subbundle of @xmath563 transversal to @xmath649 and invariant under the flow . if @xmath643 is a closed trajectory with period @xmath650 ( here @xmath644 need not be the _ primitive _ period ) , then the linearized poincar map is defined by @xmath651 note that @xmath652 is invertible by . the maps @xmath653 are conjugate to each other by @xmath654 for all @xmath309 , therefore the expressions @xmath655 and @xmath656 , used in , are independent of the choice of the base point on @xmath50 . fix a density @xmath241 on @xmath100 and let @xmath657 be the schwartz kernel of @xmath658 with respect to this density , that is for @xmath659 , @xmath660 to be able to define the flat trace of @xmath501 as a distribution in @xmath661 , we need to take some @xmath662 and show that the operator @xmath663 satisfies the condition , that is @xmath664 does not intersect the diagonal . by the formula for the wave front set of a pushforward ( * ? ? ? * theorem 8.2.12 ) , we know that @xmath665 and thus it suffices to show that @xmath666 note that is exactly the condition under which one can define the pullback @xmath667 of @xmath668 by the map @xmath669 , and @xmath670 now , @xmath657 is a delta function on the surface @xmath671 , therefore by ( * ? ? ? * theorem 8.2.4 ) its wave front set is contained in the conormal bundle to that surface : @xmath672 then to prove , we need to show that if @xmath673 , @xmath674 , @xmath675 , and @xmath676 , then @xmath677 ; this follows immediately from . the principal component of the proof of the trace formula is the following [ l : guillemin - local ] let @xmath678 and @xmath650 be such that @xmath679 . then there exists @xmath680 and a neighborhood @xmath681 of @xmath682 such that @xmath683 for @xmath684 and for each @xmath685 , we have @xmath686 where @xmath124 is defined in . we choose a local coordinate system @xmath687 , @xmath688 , where @xmath689 is a neighborhood of @xmath682 , such that @xmath690 we next choose small @xmath691 such that for @xmath692 and @xmath693 , we have @xmath694 . we define the maps @xmath695 and @xmath696 by the formulas @xmath697 , we have @xmath698 moreover , @xmath699 and @xmath700 . since the flat trace does not depend on the choice of density on @xmath100 , we may choose the density @xmath241 so that @xmath701 is the standard density on @xmath599 . 
then for @xmath693 and @xmath702 , we have @xmath703 the left - hand side of is @xmath704 integrating out @xmath463 , we get @xmath705 now , @xmath706 is conjugated by the map @xmath707 to the poincar map @xmath124 , therefore @xmath708 is invertible and for @xmath523 small enough and @xmath709 , the equation @xmath710 has exactly one root at @xmath711 . we then integrate out @xmath712 to get @xmath713 which finishes the proof . by lemma [ l : guillemin - local ] and a partition of unity , we see that for each @xmath714 , we have @xmath715 where the sum is over all closed trajectories @xmath50 with period @xmath49 and @xmath716 refers to the measure @xmath717 on @xmath643 . by taking @xmath718 , we obtain . to show the more general , it suffices to prove a local version similar to : @xmath719 where @xmath720 is the schwartz kernel of the operator @xmath721 , @xmath722 , and @xmath723 are the operators defined by @xmath724 here @xmath264 is a local frame of @xmath725 defined near @xmath682 . define the functions @xmath726 on @xmath727 by @xmath728 then @xmath729 , which means that @xmath730 with @xmath731 defined in . then by lemma [ l : guillemin - local ] , @xmath732 it remains to note that @xmath733 in this appendix , we provide details and references for the concepts and facts listed in [ wfs ] . all the proofs are essentially well known but we include them for the reader s convenience . in standard microlocal analysis the asymptotic parameter is given by @xmath734 , where @xmath735 is fiber variable ( here the norm is with respect to some smooth metric on the compact manifold @xmath2 ) . we start our presentation with the review of that theory . in the semiclassical setting a small parameter @xmath736 is added to measure the wave length of oscillations . we are then concerned in asymptotics as both @xmath737 and @xmath738 . that is one reason for which the fiber compactification is useful as that provides a uniform setting for such asymptotics . in specific applications the operators depend on additional parameters , in our case the spectral parameter @xmath739 or its rescaled version @xmath740 . if the classical objects ( symbols ) satisfy uniform estimates with respect to the parameters , so do their quantizations ( operators ) , as do the derivatives in @xmath68 . that is implicit in many statements but is not stated in order not to clutter the already complicated notation . let @xmath100 be a manifold with a fixed volume form . we use the algebra of pseudodifferential operators @xmath741 , @xmath742 , with symbols lying in the class @xmath743 : @xmath744 see for example @xcite for the basic properties of operators in @xmath745 . in particular , each @xmath746 is bounded between sobolev spaces @xmath747 , or simply @xmath748 if @xmath100 is compact . the wave front set @xmath749 of @xmath750 is a closed conic subset of @xmath138 , with @xmath751 denoting the zero section ; the complement of @xmath749 consists of points in whose conic neighbourhoods the full symbol of @xmath752 is @xmath753 , see the discussion following ( * ? ? ? * proposition 18.1.26 ) . the wave front set @xmath754 of a distribution @xmath755 is defined as follows : a point @xmath756 does not lie in @xmath137 if there exists a conic neighbourhood @xmath159 of @xmath160 such that @xmath757 for each @xmath758 with @xmath759 see ( * ? ? ? * ( 18.1.35 ) and theorem 18.1.27 ) . an equivalent definition ( see ( * ? ? ? 
* definition 8.1.2 ) ) is given in terms of the fourier transform : @xmath760 if and only if there exists @xmath761 with @xmath762 contained in some coordinate neighbourhood and @xmath763 such that @xmath764 for @xmath765 in a conic neighbourhood of @xmath190 ; here @xmath766 is considered a function on @xmath599 using some coordinate system and @xmath190 is accordingly considered as vector in @xmath599 . the wave front set @xmath767 of an operator @xmath157 is defined using its schwartz kernel @xmath768 : @xmath769 here we use the fixed smooth density on @xmath100 to define the schwartz kernel as a distribution on @xmath770 ; however , this choice does not affect the wave front set . if @xmath771 , then the set defined in is the image of the wave front set @xmath772 of @xmath256 as a pseudodifferential operator under the diagonal embedding @xmath773 , see ( * ? ? ? * ( 18.1.34 ) ) . we first show that @xmath774 with seminorm estimates independent of @xmath73 . for that we use melrose s characterization of pseudodifferential operators @xcite : it is enough to show that for any set of vector fields @xmath775 tangent to the diagonal , we have @xmath776 with norm bounded uniformly in @xmath595 . this can be done in local coordinates , writing @xmath777 , where @xmath778 is a smooth function on @xmath779 , compactly supported in the second argument . we have @xmath780 , where @xmath781 is the jacobian , and the support of the integrand lies @xmath640 close to @xmath561 . then @xmath782 ; indeed , one can rewrite the @xmath561 derivatives falling on the second argument of @xmath778 as derivatives in @xmath783 and integrate by parts . this implies that @xmath784 . locally , vector fields tangent to the diagonal are generated by @xmath785 and @xmath786 and we see that they preserve the class of smooth functions of @xmath787 . therefore , for @xmath788 where @xmath789 are smooth functions . the right hand side is in @xmath790 uniformly in @xmath595 which proves the claim . to obtain in @xmath1 for @xmath791 , and that @xmath496 is uniformly bounded in _ some _ @xmath792 . ] @xmath793 in @xmath794 we apply the same argument to @xmath795 . let @xmath796 and let @xmath797 be the complement of a small conic neighbourhood of the conormal bundle @xmath798 . since @xmath799 by we can choose @xmath797 so that @xmath800 . this means that @xmath801 where the last space consists of all distributions @xmath802 with @xmath803 . if we write @xmath804 then @xmath805 , and hence @xmath806 , @xmath807 since @xmath793 in @xmath808 , @xmath809 in @xmath810 for @xmath811 . hence @xmath812 , @xmath813 , and consequently @xmath814 in @xmath815 . to show that @xmath816 in @xmath817 , we adapt ( * ? ? ? * definition 8.2.2 ) and it suffices to show that for each @xmath818 with @xmath819 , @xmath820 is bounded in @xmath821 uniformly in @xmath822 . in fact , @xmath823 where @xmath824 and @xmath825 denote the operator @xmath257 acting on @xmath561 and @xmath783 variables in @xmath826 , and the superscript @xmath463 denotes the transpose . since @xmath496 is uniformly bounded in @xmath794 and @xmath827 is contained in a small neighbourhood of @xmath828 , @xmath829 is in @xmath830 with seminorms uniformly bounded with respect to @xmath73 , and with @xmath831 . are _ not _ pseudifferential operators on @xmath832 . however , the localization to a region where @xmath833 and @xmath834 are comparable makes the composition into a pseudodifferential operator . ] hence @xmath835 uniformly in @xmath836 and thus @xmath837 in @xmath838 . 
we now invoke ( * ? ? ? * theorem 8.2.4 ) to conclude that @xmath839 in @xmath840 . hence @xmath841 as @xmath842 , proving the lemma . if @xmath117 is a smooth @xmath843-dimensional vector bundle over @xmath100 ( see for example ( * ? ? ? * definition 6.4.2 ) ) , then we can consider distributions @xmath844 with values in @xmath117 . the wave front set @xmath845 , a closed conic subset of @xmath138 , is defined as follows : @xmath846 if and only if for each local basis @xmath847 of @xmath117 defined in a neighbourhood @xmath159 of @xmath561 , and for @xmath848 , @xmath849 , we have @xmath850 for all @xmath634 . similarly , one can define @xmath851 for an operator @xmath261 with values in some smooth vector bundle over @xmath770 . an operator @xmath852 is said to be pseudodifferential in the class @xmath741 , denoted @xmath853 , if @xmath854 for all @xmath855 and , for each local basis @xmath847 over some open @xmath681 , we have on @xmath159 , @xmath856 where @xmath857 . as before , the wave front set @xmath858 on @xmath159 is defined as the union of @xmath859 over all @xmath860 . the principal symbol @xmath861 is defined using the standard notion of the principal symbol @xmath862 ( see the discussion following ( * ? ? ? * definition 18.1.20 ) ) as follows : @xmath863 the operator @xmath864 is called _ elliptic _ in the class @xmath745 at some point @xmath756 , if @xmath865 is invertible ( as a homomorphism @xmath866 ) uniformly as @xmath867 for @xmath868 in a conic neighbourhood of @xmath160 ; equivalently , @xmath869 in a conic neighbourhood of @xmath160 . the ( open conic ) set of all elliptic points of @xmath864 is denoted @xmath870 . we now introduce the algebra @xmath871 of _ semiclassical _ pseudodifferential operators , depending on a parameter @xmath349 tending to zero @xcite . the corresponding symbols @xmath872 ( denoted @xmath873 ) satisfy @xmath874 uniformly in @xmath140 as @xmath875 , with the class @xmath876 defined in . each @xmath877 has a semiclassical wave front set @xmath175 , a closed ( and not necessarily conic ) subset of the fiber - radially compactified cotangent bundle @xmath193 ( see @xcite ) ; a point @xmath878 does not lie in @xmath175 if and only if the full symbol @xmath879 of @xmath752 satisfies @xmath880 for @xmath140 small enough and @xmath881 in a neighbourhood of @xmath160 in @xmath193 . the elements of @xmath871 act between semiclassical sobolev spaces @xmath882 with norm @xmath883 , see @xcite . using operators in @xmath871 , we define the semiclassical wave front set @xmath884 for an @xmath140-tempered family of distributions @xmath885 , see for example @xcite , @xcite . similarly to @xmath137 , the set @xmath886 can be characterized using the fourier transform as follows : @xmath887 if and only if there exists @xmath761 supported in some coordinate neighbourhood , with @xmath763 , and a neighbourhood @xmath888 of @xmath190 in @xmath193 , such that @xmath889 for @xmath890 . this characterization immediately implies . similarly , one can define the wave front set @xmath891 of an @xmath140-tempered family of operators @xmath892 . the semiclassical principal symbol of @xmath893 , denoted @xmath894 , lies in the space @xmath895 see ( * ? ? ? * theorem 14.1 ) . note that this encodes the behaviour of the full symbol of @xmath752 at @xmath896 everywhere on @xmath193 , as well as the behaviour at the fiber infinity @xmath897 for small , but positive , values of @xmath140 see @xcite . 
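two textbook examples may help to fix the two notions of wave front set recalled above ( they play no role in the proofs ) . first , for the heaviside function \( H \) on the real line and any cutoff \( \chi\in C_c^\infty(\mathbb R) \) , repeated integration by parts gives \[ \widehat{\chi H}(\xi)=\int_0^\infty e^{-ix\xi}\chi(x)\,dx=\frac{\chi(0)}{i\xi}+\frac{\chi'(0)}{(i\xi)^2}+\cdots,\qquad |\xi|\to\infty, \] so the localized fourier transform decays rapidly when \( \chi \) vanishes near the jump and only like \( |\xi|^{-1} \) when \( \chi(0)\neq 0 \) ; hence \( \operatorname{WF}(H)=\{(0,\xi):\xi\neq 0\} \) , the conormal directions over the singular point . second , in the semiclassical setting , for the \( h \) - dependent family \( u_h(x)=e^{ix\cdot\xi_0/h} \) on \( \mathbb R^n \) and any \( \chi\in C_c^\infty(\mathbb R^n) \) , \[ \mathcal F_h(\chi u_h)(\xi)=\int e^{-\frac{i}{h}x\cdot(\xi-\xi_0)}\chi(x)\,dx=\hat\chi\Big(\frac{\xi-\xi_0}{h}\Big)=O(h^\infty\langle\xi\rangle^{-\infty})\quad\text{away from }\xi=\xi_0, \] so \( \operatorname{WF}_h(u_h)=\mathbb R^n\times\{\xi_0\} \) and no points at fiber infinity occur ; by contrast an \( h \) - independent singularity such as \( \delta_0 \) has semiclassical wave front set equal to the whole compactified fiber over the singular point , in particular reaching fiber infinity .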
we can not use the more convenient space of classical operators , whose principal symbol is just a function on @xmath144 ( see @xcite ) because the symbol of the operator @xmath898 ( see [ micro ] ) has the form @xmath899 , with @xmath900 and @xmath901 narrowly missing the class @xmath902 . the ( open ) elliptic set @xmath903 is defined as follows : @xmath904 if @xmath905 for @xmath140 small enough and all @xmath881 in a neighbourhood of @xmath160 in @xmath193 . similarly to [ a : wf-1 ] , we can study operators and distributions with values in smooth vector bundles over @xmath100 . using local coordinates , we reduce to the case @xmath906 , @xmath907 . assume first that there exist neighbourhoods @xmath908 such that holds . take @xmath909 with @xmath910 , and neighbourhoods @xmath911 of @xmath912 , such that @xmath913 . let @xmath914 , and take arbitrary @xmath915 ( depending on @xmath140 ) . then @xmath916 where @xmath917 denotes the semiclassical fourier transform @xcite . we have @xmath918 ( see ( * ? ? ? * ( 8.4.7 ) ) ) and thus by , @xmath919 . it follows that @xmath920 and thus by the semiclassical analog of ( * ? ? ? * proposition 8.1.3 ) , @xmath921 for @xmath922 , yielding , by the characterization of @xmath146 via the fourier transform , @xmath923 . now , assume that @xmath923 . take @xmath909 such that @xmath924 on a neighbourhood @xmath925 of @xmath561 , @xmath926 on a neighbourhood @xmath927 of @xmath783 , and neighbourhoods @xmath911 of @xmath912 , such that @xmath928 put @xmath929 , @xmath930 , and assume that @xmath931 is an @xmath140-tempered family of distributions on @xmath100 such that @xmath918 . by fourier inversion formula together with the characterization of @xmath932 via the fourier transform , @xmath933 therefore , if @xmath914 , then for bounded @xmath934 , @xmath935 however , we have by , @xmath936 for @xmath937 ; therefore , @xmath938 for @xmath922 , implying that @xmath939 in this subsection , we denote by boldface letters distributions with values in @xmath117 or operators acting on such distributions , and with regular letters , scalar distributions and operators . note that any @xmath877 can be viewed as an element of @xmath940 via the diagonal action . part 2 follows immediately from part 1 and the definition of @xmath146 . indeed , assume that @xmath941 ; it suffices to prove that @xmath942 . take a neighbourhood @xmath159 of @xmath160 such that @xmath943 , and choose @xmath227 such that @xmath944 and @xmath945 . then @xmath946 is elliptic on @xmath159 and @xmath947 for all @xmath176 ; by part 1 , applied to the operator @xmath946 in place of @xmath174 , we get @xmath948 for all @xmath176 and all @xmath172 such that @xmath949 , as required . it remains to prove part 1 . similarly to the proof of ( * ? ? ? * theorem 18.1.9 ) ( reducing to local frames of @xmath117 and either using cramer s rule or repeatedly differentiating the equation @xmath950 ) , we see that the inverse @xmath951 of @xmath184 in @xmath952 is well - defined and lies in @xmath953 for @xmath140 small enough and @xmath954 . using a cutoff function in @xmath193 , we can then construct @xmath955 such that @xmath956 near @xmath175 . take @xmath957 such that @xmath958 , then @xmath959 microlocally near @xmath175 , where @xmath960 . using asymptotic neumann series exactly as in the proof of ( * ? ? ? * theorem 18.1.9 ) to invert @xmath961 , we construct @xmath962 such that @xmath963 then @xmath964 , implying . similarly to proposition [ l : elliptic ] , it is enough to prove part 1 . 
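schematically , and suppressing the vector bundle , the microlocal cutoffs and the low - frequency truncation of the symbol , the neumann - series inversion used in the elliptic parametrix above runs as follows : if \( |p|\ge c\langle\xi\rangle^{m} \) on a neighbourhood of the relevant wave front set , then \[ P\,\mathrm{Op}_h(p^{-1})=I-R,\qquad R\in h\,\Psi^{-1}_h\ \text{microlocally there}, \] and summing the neumann series asymptotically , \[ Q:=\mathrm{Op}_h(p^{-1})\,(I+R+R^{2}+\cdots),\qquad PQ=I+O(h^\infty)\ \text{microlocally}, \] so that applying \( Q \) to \( Pu \) recovers \( u \) up to an \( O(h^\infty) \) error . this is only a sketch of the standard construction ; the precise bundle - valued version is the one carried out above .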
moreover , by a partition of unity , we may assume that @xmath175 is contained in a small neighbourhood of some fixed @xmath965 . let @xmath966 and take @xmath196 such that @xmath967 ; we may then assume that @xmath968.\ ] ] it is enough to prove the estimate @xmath969 indeed , without loss of generality we may assume that each for each @xmath970 , there exists @xmath199 $ ] such that @xmath971 ; one can then apply with @xmath752 replaced by @xmath972 and replace @xmath973 by @xmath974 for certain @xmath417 microlocalized near @xmath975)$ ] ; repeating this process , and recalling that @xmath118 is @xmath140-tempered , we can ultimately make this term @xmath976 . in addition to a smooth density on @xmath100 , we fix a smooth inner product on the fibers of @xmath117 ; this defines a hilbert inner product @xmath977 on @xmath978 . we denote @xmath979 so that @xmath980 are symmetric and @xmath981 . we will use an _ escape function _ @xmath982 , such that @xmath983 and @xmath984 here @xmath303 is a large constant to be chosen later . to construct such @xmath931 , we use and identify a tubular neighbourhood of @xmath975)$ ] contained in @xmath985 with @xmath986 for small @xmath235 , so that @xmath181 is mapped to @xmath987 . we then put @xmath988 , where @xmath989)$ ] satisfies @xmath514 on @xmath990 , and @xmath991 satisfies @xmath992 everywhere , @xmath993 , and @xmath994 outside of @xmath995 . ( to construct @xmath996 we first choose @xmath997 such that @xmath998 , @xmath999 , and @xmath1000 on @xmath1001 . we then put @xmath1002 . ) we now prove by a positive commutator argument , going back to @xcite . because @xmath175 might intersect the fiber infinity @xmath897 , we have to put in regularizing pseudodifferential operators . assume that @xmath1003 , @xmath1004 , quantizes the symbol @xmath1005 note that @xmath1006 is bounded uniformly in @xmath1007 for @xmath1008 . take @xmath1009 such that @xmath1010 and @xmath1011 , and put @xmath1012 , so that @xmath1013 . assume that @xmath1014 . for each @xmath593 @xmath1015\mathbf u,\mathbf u\rangle + { 1\over 2}\langle ( f_\epsilon^*f_\epsilon\im\mathbf p+(\im\mathbf p)f_\epsilon^*f_\epsilon)\mathbf u,\mathbf u\rangle,\ ] ] where the product on the left - hand side makes sense because @xmath1016 , @xmath1017 and @xmath1018 . we now estimate the terms on the right - hand side of . denote @xmath1019\in \psi_h^{2m-2}(x;\operatorname{hom}(\mathcal e ) ) , \ ] ] which is bounded in @xmath1020 , uniformly in @xmath595 . the principal symbol of @xmath1021 in @xmath1022 is independent of @xmath140 and diagonal with entries @xmath1023 since @xmath1024 , we get @xmath1025 uniformly in @xmath1026 . therefore , for @xmath305 large enough depending on @xmath176 , and some large constant @xmath62 , implies that @xmath1027 the sharp garding inequality ( * ? ? ? * theorem 9.11 ) applied to the operator @xmath1028 , where @xmath1029 , gives , uniformly in @xmath595 , @xmath1030 we next claim that , uniformly in @xmath595 , @xmath1031 where @xmath308 is a constant independent of the choice of @xmath931 . indeed , the left - hand side of can be written as @xmath1032-[f_\epsilon^*,\im\mathbf p]f_\epsilon)\mathbf u , \mathbf u\rangle.\ ] ] since @xmath1033 is diagonal and nonpositive , the first term is bounded from above by @xmath1034 by the sharp garding inequality . the second term is bounded by @xmath1035 , since the principal symbol calculus shows that @xmath1036-[f_\epsilon^*,\im\mathbf p]f_\epsilon\in h^2\psi^{2m-1}_h\ ] ] uniformly in @xmath595 . 
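the commutator argument above follows a standard pattern , and the following one - dimensional model computation ( a sketch only , with \( P=hD_x \) standing in for the operator and a real function \( f \) playing the role of the escape function ) shows where the favourable sign comes from . for \( u\in C_c^\infty(\mathbb R) \) , integration by parts gives \[ 2\operatorname{Im}\langle hD_xu,\,f^2u\rangle=\frac1i\big\langle[f^2,hD_x]u,\,u\big\rangle=h\int_{\mathbb R}\partial_x\!\big(f^2\big)\,|u|^2\,dx, \] so if \( \partial_x(f^2)\le -c<0 \) on the support of \( u \) then \( ch\|u\|_{L^2}^2\le 2\|hD_xu\|_{L^2}\|f^2u\|_{L^2} \) , i.e. \( u \) is controlled by \( Pu \) . in general the sign condition only holds on part of phase space and the remaining terms have to be absorbed , with the quantized escape function constructed above playing the role of \( f^2 \) and the sharp gårding inequality replacing the pointwise sign ; the estimates just established are combined next .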
combining , , , taking @xmath1037 , we get uniformly in @xmath595 , @xmath1038 therefore , we have uniformly in @xmath595 , @xmath1039 now , @xmath1040 and @xmath1041 in @xmath1042 as @xmath109 ; therefore , @xmath1043 in @xmath1044 . since @xmath1045 is bounded uniformly in @xmath595 , by the compactness of the unit ball in @xmath362 in the weak topology we get @xmath1046 ; therefore , @xmath1047 , and @xmath1048 it remains to apply the elliptic estimate together with . to obtain part 1 we adapt the proof of ( * ? ? ? * lemma 2.1 ) . let @xmath1058 , where @xmath1059 is the natural projection . since @xmath1060 is homogeneous of degree @xmath1054 , @xmath1061 is a smooth vector field on @xmath1062 , and the closed set @xmath1063 is invariant under the flow @xmath1064 . we will construct @xmath1065 ) $ ] such that @xmath1066 , @xmath1067 and @xmath1068 on a neighbourhood of @xmath1063 . then @xmath1069 will be a function satisfying the condition in part 1 . to obtain @xmath1070 , fix @xmath1071)$ ] such that @xmath1072 near @xmath222 and @xmath1073 . by the first assumption in , we have for @xmath202 large enough , @xmath1074 and by the invariance of @xmath1075 by the flow , @xmath1076 for all @xmath1077 . furthermore , @xmath1078 for all @xmath1079 ; indeed , if @xmath1080 , then @xmath1081 and otherwise @xmath1082 , and @xmath1083 everywhere . then the function @xmath1084 satisfies the required assumptions . the proof of part 2 is `` orthogonal '' to the proof of part 1 in the sense that we are concerned about the radial component of @xmath76 . to find @xmath1085 , fix a smooth norm @xmath291 of the fibers of @xmath144 . by the second part of , we have for @xmath1086 large enough , @xmath1087 then the function @xmath1088 is homogeneous of degree 1 , @xmath1089 everywhere , and @xmath1090 for @xmath1091 . as before , it is enough to prove part 1 . similarly to , it suffices to prove that for each @xmath1092 elliptic on @xmath222 , there exists @xmath1093 elliptic on @xmath222 such that for each @xmath224 , @xmath1094 indeed , without loss of generality we may assume that @xmath1095 ; then by , each backward flow line of @xmath181 starting on @xmath406 reaches @xmath1096 . combining with propagation of singularities ( proposition [ l : hyperbolic ] ) , we see that for each @xmath1097 elliptic on @xmath222 , there exists @xmath1093 elliptic on @xmath222 such that for each @xmath224 , @xmath1098 iterating this estimate , we arrive to @xmath1099 and the @xmath1100 error term can be trivially removed provided that @xmath1101 . to prove , we shrink the conic neighbourhood @xmath159 of @xmath102 so that @xmath1102 ; here @xmath394 is the natural projection to the fiber infinity . let @xmath1103 be given by lemma [ l : radialesc ] and consider @xmath1104 large enough so that @xmath1105 . let @xmath1106)$ ] satisfy @xmath1107 , @xmath514 on @xmath1108 , and @xmath1109 everywhere . define @xmath1110 by @xmath1111 it follows from lemma [ l : radialesc ] that @xmath1112 , @xmath1113 near @xmath222 , and @xmath1114 everywhere . we now proceed as in the proof of proposition [ l : hyperbolic ] , putting @xmath1115 here @xmath1116 is positive everywhere and is equal to @xmath1117 for large @xmath191 , in particular for @xmath1118 . 
if @xmath1119 , then similarly to , we find @xmath1120 since @xmath1121 and @xmath1122 on @xmath1123 , we see that for any fixed @xmath303 , @xmath233 large enough depending on @xmath305 , and @xmath224 , @xmath1124 moreover , @xmath233 can be chosen independently of @xmath972 . for @xmath1125 defined by , the sharp garding inequality gives , uniformly in @xmath595 , @xmath1126 arguing as in the proof of proposition [ l : hyperbolic ] , we obtain with @xmath1127 . we proceed as in the proof of proposition [ l : radial1 ] , showing that for each @xmath1092 elliptic on @xmath222 , there exists @xmath172 elliptic on @xmath222 and @xmath227 with @xmath1128 such that for @xmath229 , @xmath1129 take @xmath1130)$ ] such that @xmath1112 and @xmath1113 near @xmath222 , and define @xmath1131 using lemma [ l : radialesc ] with the sign of @xmath188 reversed , so that @xmath1132 on @xmath1123 . we define @xmath1133 as in the proof of proposition [ l : radial2 ] and analyse the terms on the right - hand side of . the first term vanishes near @xmath222 since @xmath1113 there . using the second term , we see that for each @xmath305 , @xmath233 large enough depending on @xmath305 , and @xmath229 , @xmath1134 for some choice of @xmath1135 with @xmath1136 . by sharp garding inequality , we have uniformly in @xmath595 @xmath1137 arguing as in the proof of proposition [ l : hyperbolic ] , we obtain with @xmath1127 . we would like to thank colin guillarmou and frderic naud for helpful discussions and in particular for pointing out that our result holds under the condition and not just for contact flows . we would also like to thank the anonymous referees for corrections and valuable suggestions . partial support by the national science foundation under the grant dms-1201417 is also gratefully acknowledged .
the purpose of this paper is to give a short microlocal proof of the meromorphic continuation of the ruelle zeta function for @xmath0 anosov flows . more general results have been recently proved by giulietti liverani pollicott @xcite but our approach is different and is based on the study of the generator of the flow as a semiclassical differential operator . the purpose of this article is to provide a short microlocal proof of the meromorphic continuation of the ruelle zeta function for @xmath1 anosov flows on compact manifolds : * theorem . * _ suppose @xmath2 is a compact manifold and @xmath3 is a @xmath0 anosov flow with orientable stable and unstable bundles . let @xmath4 denote the set of primitive orbits of @xmath5 , with @xmath6 their periods . then the ruelle zeta function , @xmath7 which converges for @xmath8 has a meromorphic continuation to @xmath9 . _ in fact the proof applies to any anosov flow for which linearized poincar maps @xmath10 for closed orbits @xmath11 satisfy @xmath12 a class of examples is provided by @xmath13 where @xmath14 is a compact orientable negatively curved manifold with @xmath5 the geodesic flow see ( * ? ? ? * lemma b.1 ) . for methods which can be used to eliminate the orientability assumptions see ( * ? ? ? * appendix b ) . the meromorphic continuation of @xmath15 was conjectured by smale @xcite and in greater generality it was proved very recently by giulietti , liverani , and pollicott @xcite . another recent perspective on dynamical zeta functions in the contact case has been provided by faure and tsujii @xcite . our motivation and proof are however different from those of @xcite : we were investigating trace formul for pollicott ruelle resonances @xcite which give some lower bounds on their counting function . sharp upper bounds were given recently in @xcite . to explain the trace formula for resonances suppose first that @xmath16 is a compact riemann surface . then the selberg trace formula combined with the guillemin trace formula @xcite gives @xmath17 see @xcite for an accessible presentation in the physics literature and @xcite for the case of higher dimensions . on the left hand side @xmath18 is the set of resonances of @xmath19 where @xmath20 is the generator of the flow , @xmath21 where @xmath22 s are the zeros of the selberg zeta function included according to their multiplicities . on the right hand side @xmath23 s are periodic orbits , @xmath10 is the linearized poincar map , @xmath24 is the period of @xmath11 , and @xmath25 is the primitive period . the point of view of faure sjstrand @xcite stresses the analogy between analysis of the propagator @xmath26 with scattering theory for elliptic operators on non - compact manifolds : for flows , the fiber infinity of @xmath27 is the analogue of spatial infinity for scattering on non - compact manifolds . melrose s poisson formula for resonances valid for euclidean infinities @xcite and some hyperbolic infinities @xcite suggests that should be valid for general anosov flows but that seems to be unknown . in general , the validity of follows from the finite order ( as an entire function ) of the analytic continuation of @xmath28 the @xmath29 s appearing on the left hand side of are the zeros of @xmath30 see @xcite or @xcite for an indication of this simple fact . 
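for orientation , the riemann surface case mentioned above can be made completely explicit . writing the zeta function in the classical variable \( s \) ( the exact relation between \( s \) and the spectral parameter depends on the convention chosen in the paper ) , the product over primitive closed geodesics of lengths \( \ell_\gamma \) factors through the selberg zeta function : \[ \prod_{\gamma}\big(1-e^{-s\ell_\gamma}\big)=\frac{Z(s)}{Z(s+1)},\qquad Z(s)=\prod_{\gamma}\prod_{k\ge 0}\big(1-e^{-(s+k)\ell_\gamma}\big), \] so the zeros and poles of the ruelle zeta function are read off from the zeros of \( Z \) ; this is the mechanism behind matching the resonances on the left hand side of the trace formula above with the zeros of the selberg zeta function .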
under certain analyticity assumptions on @xmath31 and @xmath5 , rugh @xcite and fried @xcite showed that the ruelle zeta function @xmath32 is a meromorphic function of finite order but neither @xcite nor our paper suggest the validity of such a statement in general . one reason to be interested in in the general case is the following consequence based on @xcite : the counting function for the pollicott ruelle resonances in wide strips can not be sublinear . more precisely , there exists a constant @xmath33 such that for each @xmath34 , \[ \#\,\big\{\,\mu\in\operatorname{Res}(P)\ :\ \operatorname{Im}\mu>-C_0/\epsilon\,,\ \dots\big\}\ \dots \] see @xcite and comments below .

we arrived at the proof of main theorem while attempting to demonstrate for @xmath0 anosov flows . we now indicate the idea of that proof in the case of analytic continuation of @xmath36 given by . it converges for @xmath8 see lemma [ l : dyn-4 ] for convergence and below for the connection to the ruelle zeta function . the starting point is guillemin's formula , @xmath37 where the trace is defined using distributional operations of pullback by @xmath38 and pushforward by @xmath39 : @xmath40 where @xmath41 denotes the distributional kernel of an operator . the pullback is well - defined in the sense of distributions @xcite because the wave front set of @xmath42 satisfies @xmath43 where @xmath44 is the diagonal and @xmath45 is the conormal bundle . see appendix [ s : guillemin ] and @xcite for details . since @xmath46 it is enough to show that the right hand side has a meromorphic continuation to @xmath47 with simple poles and residues which are non - negative integers . for that it is enough to take @xmath48 smaller than @xmath49 for all @xmath50 ( note that @xmath51 on @xmath52 ) and consider a continuation of @xmath53 we now note that @xmath54 with a justification provided by a simple approximation argument ( see the proof of ( * ? ? * theorem 19.4.1 ) for a similar construction ) ; it is then sufficient to continue @xmath55 meromorphically . as recalled in [ stuff ] , @xmath56 continues meromorphically so to check the meromorphy of we only need to check the analogue of the wave front set relation for the distributional kernel of @xmath57 , namely that this wave front set does not intersect @xmath58 . but that follows from an adaptation of propagation results of duistermaat hörmander @xcite , melrose @xcite , and vasy @xcite . the faure sjöstrand spaces @xcite provide the a priori regularity which allows an application of these techniques . in fact , we use somewhat simpler anisotropic sobolev spaces in our argument and provide an alternative approach to the meromorphic continuation of the resolvent see [ on - forms ] , [ stuff ] .

* remarks*. ( i ) if the coefficients of the generator of the flow are merely @xmath59 for large enough @xmath60 , then microlocal methods presented in this paper show that the ruelle zeta function can still be continued meromorphically to a strip @xmath61 , where @xmath62 is a constant independent of @xmath60 . that follows immediately from the fact that wavefront set statements in @xmath63 regularity depend only on a finite number of derivatives of the symbols involved . in @xcite a more precise estimate on the width of the strip was provided .

( ii ) one conceptual difference between @xcite and the present paper is the following . in ( * ? ? ?
* ( 2.11 ) , ( 2.12 ) ) , the resolvent @xmath64 is decomposed into two pieces , one of which corresponds to resonances in a large disk and the other one to the rest of the resonances ; using an auxiliary determinant ( * ? ? ? * ( 2.7 ) ) , it is shown that it is enough to study mapping properties of large iterates of @xmath64 , which implies that resonances outside the disk can be ignored in a certain asymptotic regime . in our work , however , we show directly that @xmath64 lies in a class where one can take the flat trace . in terms of the expression , this requires uniform control of the wavefront set of @xmath65 as @xmath66 . such a statement does not follow from the analysis for bounded times and this is where the matters are considerably simplified by using radial source / sink estimates originating in scattering theory . \(iii ) in this paper we only provide analysis at bounded frequencies , but do not discuss the behavior of @xmath67 as @xmath68 goes to infinity . however , a high frequency analysis of the zeta function is possible using the methods of semiclassical analysis , which recover the structure of @xmath64 modulo @xmath69 , rather than just compact , errors . an example is provided by the bounds on the number of pollicott ruelle resonances in @xcite . since this paper was first posted related results have appeared . in @xcite the authors showed that pollicott ruelle resonances are the limits of eigenvalues of @xmath70 , as @xmath71 , where @xmath72 is any laplace beltrami operator on @xmath31 . in addition , for contact anosov flows the spectral gap is uniform with respect to @xmath73 . in @xcite , jin zworski proved that for any anosov flow there exists a strip with infinitely many resonances and a counting function which can not be sublinear . for weakly mixing flows the estimate for the size of that strip in terms of topological pressure was provided by naud in the appendix to @xcite . guillarmou @xcite used the methods of @xcite and of this paper to study regularity properties of cohomological equations and to provide applications . meromorphic continuation ( of @xmath74 and of zeta fucntions ) for flows on non - compact manifolds ( or manifolds with boundary ) with compact hyperbolic trapped sets was recently established by dyatlov guillarmou @xcite . that required a development of new microlocal methods as the escape on the cotangent bundle can occur both at fiber infinity ( as in this paper ) _ and _ at the manifold infinity . a surprising application was given by guillamou @xcite who established deformation lens rigidity for a class of manifolds including manifolds with negative curvature and strictly convex boundary . that is the first result of that kind in which trapping is allowed . in [ prelim ] we list the preliminaries from dynamical systems and microlocal analysis . precise definitions , references and proofs of the statements in [ prelim ] are given in the appendices . they are all standard and reasonably well known but as the paper is interdisciplinary in spirit we provide detailed arguments . except for references to texts @xcite , the paper is self - contained . in [ micro ] we simultaneously prove the meromorphic continuation and describe the wave front set of the schwartz kernel of @xmath75 . this is based on results about propagation of singularities . 
the vector field @xmath76 has _ radial - like sets _ , that is invariant conic closed sets which are sources / sinks for the flow they correspond to stable / unstable directions in the anosov decomposition . away from those sets the results are classical and due to duistermaat hrmander see for instance @xcite . at the radial points we use the more recent propagation results of melrose @xcite and vasy @xcite . the a priori regularity needed there is provided by the properties of the spaces @xmath77 . finally , in [ t1 ] we give our proof of the main theorem which is a straightforward application of the results in [ micro ] and the more standard results recalled in [ prelim ] . we use the following notation : @xmath78 means that @xmath79 where the norm ( or any seminorm ) is in the space @xmath80 , and the constant @xmath81 depends on @xmath82 . when either @xmath82 or @xmath83 are absent then the constant is universal or the estimate is scalar , respectively . when @xmath84 then the operator @xmath85 has its norm bounded by @xmath86 .
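finally , the simplest discrete - time analogue of the meromorphic continuation proved in this paper may be worth keeping in mind ( a toy model with an illustrative choice of matrix , not an object from the paper ) . for a hyperbolic toral automorphism given by an integer matrix \( A \) with \( \det A=1 \) , the number of fixed points of the \( n \) - th iterate is \( N_n=|\det(A^n-I)| \) , and the zeta function \( \exp\big(\sum_{n\ge1}N_nz^n/n\big) \) , which converges only in a small disc , continues to a rational function of \( z \) . the sketch below compares the truncated series with the closed rational form :

    # toy model of meromorphic continuation of a dynamical zeta function: for
    # the cat map A on the 2-torus, N_n = |det(A^n - I)| counts fixed points of
    # the n-th iterate and zeta(z) = exp(sum_n N_n z^n / n) is rational.
    # this is an illustrative discrete-time analogue, not the flow in the paper.
    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 1.0]])       # arnold's cat matrix, det = 1
    lam = (3 + np.sqrt(5)) / 2                   # unstable eigenvalue of A

    def N(n):
        # number of fixed points of the n-th iterate on the torus
        return abs(np.linalg.det(np.linalg.matrix_power(A, n) - np.eye(2)))

    def zeta_truncated(z, nmax=30):
        # partial sum of the defining series, convergent for |z| < 1/lam ~ 0.38
        return np.exp(sum(N(n) * z**n / n for n in range(1, nmax + 1)))

    def zeta_rational(z):
        # closed form obtained by summing the series: the continuation of zeta
        return (1 - z)**2 / ((1 - lam * z) * (1 - z / lam))

    for z in [0.05, 0.1, 0.2, 0.3]:
        print(z, zeta_truncated(z), zeta_rational(z))   # the two values agree

the rational right - hand side , with its zeros at \( z=1 \) and poles at \( z=1/\lambda \) and \( z=\lambda \) , is the exact counterpart , in this toy setting , of the meromorphic continuation that the main theorem provides for the ruelle zeta function of an anosov flow .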