between january 2001 and april 2005 , sonographically guided 14-gauge core needle biopsies were performed on 1,566 consecutive lesions at our institution , and 76 ( 4.9% ) of these were diagnosed as papillary lesions . using an outcomes audit of our biopsy database , we retrospectively identified all the cases of papillary lesions , including papilloma , papillomatosis , atypical papilloma , noninvasive papillary carcinoma and invasive papillary carcinoma ( 6 ) . seven of the 76 cases were lost to follow - up and so they were excluded from this study . the remaining 69 papillary lesions in 69 women ( age range : 25 - 74 years , mean age : 51.7 years ) constituted our study population . of the 69 patients , 29 ( 42% ) presented with a palpable mass , three ( 4% ) with breast pain , nine ( 13% ) with nipple discharge , 11 ( 16% ) with screening mammographic abnormalities and 17 ( 25% ) with sonographic abnormalities detected in mammographically dense breasts . surgical excision was performed for 44 ( 64% ) of the 69 papillary lesions . the remaining 25 ( 36% ) lesions underwent imaging follow - up and the mean duration of follow - up was 17.9 months ( range : 6 - 46 months ) . all biopsies were guided using high - resolution sonography units with 10- or 12-mhz linear transducers ( voluson 730 , kretz , austria ; hdi 5000 , advanced technology laboratories , bothell , wa ) with the patient in the supine or supine oblique position . the biopsies were performed using a freehand technique with a 14-gauge automated needle ( bard peripheral technologies , covington , ga ) and a spring - loaded biopsy gun ( pro - mag 2.2 , manan medical products , northbrook , il ) . the biopsy procedures were performed by four fellows and three staff radiologists ; the radiologists each had between 2 and 8 years of experience with breast imaging . the mean number of core samples obtained with the 14-gauge automated gun was five ( range : 3 - 10 ) . the imaging findings for all lesions were retrospectively reviewed by two radiologists working in consensus . the us appearances of the lesions were characterized according to the american college of radiology ( acr ) breast imaging reporting and data system ( bi - rads ) lexicon and the final assessment categories ( 11 , 12 ) . the pathology results were prospectively compared with the relevant imaging findings to determine concordance or discordance of the biopsy results . benign pathology was considered concordant if no imaging features that were highly suspicious for malignancy were present , accurate targeting of the needle was shown , adequate samples were obtained and the pathology results suggested a process known to manifest as a mass ( 7 , 8) . the findings were discordant when a bi - rads category of 5 ( highly suggestive of malignancy ) was given to the lesion at imaging and the corresponding histologic finding was benign . if a lesion 's imaging was determined to be concordant with benign pathology , then 6 , 12 or 24 months of imaging follow - up was recommended . all the pathology results that revealed atypia , intraductal or invasive carcinoma received recommendations for surgical excision . those benign lesions for which an operation was requested by the pathologist and radiologist , due to imaging - pathologic discordance , were also excised .
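to make the concordance rule above easier to follow , here is a minimal python sketch of the decision logic ; the function name and boolean inputs are illustrative assumptions , not part of the study protocol .

```python
# illustrative sketch of the concordance rule described above; the helper
# and its field names are hypothetical, not taken from the original study.
def is_concordant_benign_result(birads_category: int,
                                accurate_targeting: bool,
                                adequate_samples: bool,
                                pathology_explains_mass: bool) -> bool:
    """Return True if a benign core-biopsy result is imaging-concordant."""
    if birads_category == 5:  # highly suggestive of malignancy
        return False          # a benign result would be discordant
    return accurate_targeting and adequate_samples and pathology_explains_mass

# example: a category 4a lesion with accurate targeting, adequate samples and
# a mass-forming benign diagnosis would go to 6-, 12- or 24-month follow-up.
print(is_concordant_benign_result(4, True, True, True))  # True
```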
" histologic underestimation " was defined as a lesion yielding atypical ductal hyperplasia ( adh ) upon percutaneous biopsy and carcinoma upon surgery ( adh underestimation ) , or a lesion that yielded ductal carcinoma in situ ( dcis ) upon core needle biopsy and invasive carcinoma upon surgery ( dcis underestimation ) ( 13 , 14 ) . the underestimation rate of adh or dcis was determined by dividing the total number of lesions , for which excisional biopsy was performed , by the number of lesions that proved to be dcis or invasive carcinoma upon excision . when repeat biopsy was performed , the time interval and the reason for the second biopsy were recorded . an immediate rebiopsy was considered when the repeat biopsy was performed before the first sonographic follow - up . a delayed rebiopsy was considered when the repeat biopsy was performed after sonographic follow - up . the repeat biopsy rate was determined by dividing the total number of core needle biopsies by the number of repeat biopsies . the false negative rate was determined by dividing the total number of carcinomas found upon subsequent biopsy by the number of those lesions that were previously benign upon core needle biopsy ( 15 - 17 ) . between january 2001 and april 2005 , sonographically guided 14-gauge core needle biopsies were performed on 1,566 consecutive lesions at our institution , and 76 ( 4.9% ) of these were diagnosed as papillary lesions . using an outcomes audit of our biopsy database , we retrospectively identified all the cases of papillary lesions , including papilloma , papillomatosis , atypical papilloma , noninvasive papillary carcinoma and invasive papillary carcinoma ( 6 ) . seven of the 76 cases were lost to follow - up and so they were excluded from this study . the remaining 69 papillary lesions in 69 women ( age range : 25 - 74 years , mean age : 51.7 years ) constituted our study population . of the 69 patients , 29 ( 42% ) presented with palpable mass , three ( 4% ) with breast pain , nine ( 13% ) with nipple discharge , 11 ( 16% ) with screening mammographic abnormalities and 17 women ( 25% ) with mammographic dense breast had sonographic abnormality . surgical excision was performed for 44 ( 64% ) of 69 papillary lesions . the remaining 25 ( 36% ) lesions underwent imaging follow - up and the mean duration of follow - up was 17.9 months ( range : 6 - 46 months ) . all biopsies were guided using high - resolution sonography units with 10- or 12-mhz linear transducers ( voluson 730 , kretz , austria ; hdi 5000 , advanced technology laboratories , bothell , wa ) with the patient in the supine or supine oblique position . the biopsies were performed using a freehand technique with a 14-gauge automated needle ( bard peripheral technologies , convinton , ga ) and a spring - loaded biopsy gun ( pro - mag 2.2 , manan medical products , northbrook , il ) . the biopsy procedures were performed by four fellows and three staff radiologists ; the radiologists each had between 2 - 8 years of experience with breast imaging . the mean number of core samples obtained with the 14-gauge automated gun was five ( range : 3 - 10 ) . the imaging findings for all lesions were retrospectively reviewed by two radiologists working in consensus . the us appearances of the lesions were characterized according to the american college of radiology ( acr ) breast imaging reporting and data system ( bi - rads ) lexicon and the final assessment categories ( 11 , 12 ) . 
forty - three benign papillomas ( 62% ) , 18 atypical papillomas ( 26% ) , seven intraductal papillary carcinomas ( 10% ) , and one invasive papillary carcinoma ( 2% ) were diagnosed upon core needle biopsy . all the lesions with a histologic diagnosis of malignancy ( n = 8 ) or atypia ( n = 18 ) upon core needle biopsy , except one atypical papilloma , were surgically excised . of the 43 lesions that were diagnosed as benign upon core needle biopsy , 19 were also surgically excised . the histologic findings upon core needle biopsy and surgical excision are given in tables 1 and 2 . at sonography , the mean size of all the papillary lesions was 1.3 cm ( range : 0.5 - 3.8 cm ) . these lesions manifested sonographically as hypoechoic solid masses in 40 patients , intracystic masses in three patients , complex masses with solid and cystic components in nine patients , and as a mass within a dilated duct in 17 patients . the final assessment of the 69 lesions , based on the combined mammographic and sonographic findings , was category 3 ( probably benign ) for nine masses ( 13% ) , category 4a ( low suspicion ) for 47 masses ( 68% ) , category 4b or 4c ( moderate suspicion ) for 12 masses ( 18% ) and category 5 ( highly suggestive of malignancy ) for one mass ( 1% ) ( table 2 ) .
the imaging findings and the histopathologic findings for 42 of the 43 benign lesions ( fig . 1 ) , for all 18 atypical lesions , and for all eight malignant lesions were concordant . there was discordance between the imaging findings and the histopathologic findings for one of the benign lesions upon core needle biopsy ( fig . 2 ) . the mammograms and sonographic findings demonstrated a 1.6 cm irregular hypoechoic mass and it was assessed as category 5 , that is , highly suggestive of malignancy . immediate surgical biopsy was recommended for this lesion because of the imaging - histologic discordance , and surgery revealed intraductal papillary carcinoma . for the 17 atypical papillomas that underwent surgical excision , the surgical biopsy results revealed benign findings without atypia in two lesions ( 12% ) , atypical papilloma in seven lesions ( 41% ) , intraductal papillary carcinoma in six lesions ( 35% ) , and invasive papillary carcinoma in two lesions ( 12% ) ( table 1 ) . of the seven lesions yielding intraductal papillary carcinoma upon core needle biopsy , surgical excision revealed intraductal papillary carcinoma in five lesions ( 71% ) , invasive papillary carcinoma in one lesion ( 14% ) and atypia without carcinoma in one lesion ( 14% ) . repeat biopsy was performed for 19 of the 43 patients with benign papillary lesions and for 17 of the 18 patients with atypical papillary lesions . for repeat biopsy , surgical excision was done in all cases , and this was performed because of the patient 's or physicians ' concern ( n = 18 ) , imaging - histologic discordance ( n = 1 ) , and atypia findings ( n = 17 ) . all the repeat biopsies , except one , were performed immediately after the first core needle biopsy . delayed repeat biopsy was performed for only one benign lesion because of the patient 's concern . the repeat biopsy rate was 52% ( 36/69 ) ; the false negative rate was 6.3% ( 1/19 ) . twenty - four ( 56% ) of the 43 lesions that were diagnosed as benign were not surgically excised and they underwent imaging follow - up ( range : 6 - 46 months ; mean : 17.9 months ) . one atypical papilloma that was not surgically excised underwent 24 months of imaging follow - up . all the followed lesions were stable ( n = 20 ) or they had decreased in size ( n = 5 ) . in our series , 4.9% ( 76/1,566 ) of the sonographically guided core needle biopsies performed at our institution revealed papillary lesions . this rate can be compared with that of a previously reported series ( 7 ) , and it is higher than the 3.2% ( 34/1,077 ) reported by mercado et al . ( 18 ) . our study included 17 ( 25% ) sonographic abnormalities in patients with mammographically dense breasts , while most of the cases in the other studies were detected as mammographic abnormalities . lesions that are neither palpable nor mammographically visible can be detected on sonography during the evaluation of unrelated mammographic or clinical findings ( 10 ) . recent studies have reported that women with mammographically dense breasts have a considerably increased rate of breast cancer detection when whole - breast sonography is performed ( 19 - 21 ) . the results of the previously published series , including our own data , are shown in table 3 . the outcomes of sonographically guided 14-gauge core needle biopsy in our study were not inferior to those of stereotactically guided 14- and 11-gauge needle biopsies in terms of the repeat biopsy rate and the false negative rate . in a study by mercado et al . ( 18 ) , an 11-gauge vacuum - assisted biopsy device was used to perform all the percutaneous biopsies . previous studies have demonstrated that a larger tissue sample is obtained per core specimen with the 11-gauge stereotactic directional vacuum - assisted biopsy technique versus the 14-gauge core biopsy technique ( 14 , 16 ) .
this provides for a larger lesion sample , which may help to characterize lesions more accurately . therefore , mercado 's outcomes would seem to be better than those of some of the other studies in terms of the repeat biopsy rate and the adh underestimation rate . controversy still persists regarding the need for excision of papillomas that are diagnosed upon percutaneous breast biopsy ( 22 , 23 ) . in a recent study by liberman et al . ( 23 ) , cancer was found in five ( 14% ) of the 35 lesions that yielded a benign , concordant diagnosis of papilloma upon percutaneous biopsy . surgery also revealed six ( 17% ) high - risk lesions , including atypical ductal hyperplasia ( n = 3 ) , radial scar ( n = 2 ) , and lobular carcinoma in situ ( n = 1 ) . however , our findings support the previous reports that lesions diagnosed as benign papillomas upon core needle biopsy can be safely managed with clinical and imaging follow - up , and they do not necessarily require surgical excision ( 8) . in our series , one papillary lesion diagnosed as benign upon sonographically guided 14-gauge core needle biopsy was subsequently found to be malignant . immediate repeat biopsy was recommended in this case due to the imaging - histologic discordance . in a study by mercado ( 22 ) , two papillary lesions that were diagnosed as benign upon core needle biopsies were subsequently diagnosed as malignant by surgical excision . immediate repeat biopsy was recommended in these cases due to the imaging - histologic discordance . their results and our results confirm the importance of imaging - histologic correlation after core needle biopsy for managing papillary lesions that are diagnosed as benign . in our study , eight ( 47% ) of the 17 atypical biopsies were upgraded to intraductal or invasive carcinoma after surgical excision . hence , the diagnosis of atypical papilloma or atypical features upon sonographically guided 14-gauge core needle biopsy clearly calls for surgical excision . underestimation upon core biopsy is an inherent limitation of this technique and this has been well documented ( 16 , 17 ) . this understaging of malignancy upon sonographically guided core biopsy has been shown to decrease with an increasing number of tissue samples , with the use of larger biopsy needles and with the use of a vacuum - assisted biopsy device ( 14 , 15 ) . seven of the 76 papillary lesions were lost to follow - up and so they were excluded from this study . excluding these cases could have increased the apparent accuracy of 14-gauge sonographically guided core needle biopsy . when a lesion was diagnosed as benign with core needle biopsy , we recommended imaging follow - up , and 56% ( 24/43 ) of the benign lesions in our study underwent imaging follow - up with a mean duration of 17.9 months ( range : 6 - 46 months ) . sonographically guided 14-gauge core needle biopsy of a benign papillary lesion can be reliable when the histological / pathological result is concordant with the imaging characteristics . surgical excision is indicated for atypical papillary lesions because performing only percutaneous biopsy may underestimate the degree of disease in these cases , yet further study is necessary to confirm our findings .
objective : we wanted to assess the need for surgically excising papillary lesions of the breast that were diagnosed upon sonographically guided 14-gauge core needle biopsy . materials and methods : sixty - nine women ( age range : 25 - 74 years , mean age : 51.7 years ) with 69 papillary lesions ( 4.9% of 1,566 consecutive core needle biopsies ) were diagnosed and followed after sonographically guided 14-gauge core needle biopsies were performed . surgical excision was performed for 44 ( 64% ) of the 69 papillary lesions , and 25 lesions were followed with imaging studies ( range : 6 - 46 months , mean : 17.9 months ) . the histologic findings upon core biopsy were compared with the surgical , imaging and follow - up findings . results : core needle biopsies of the 69 lesions yielded tissue that was classified as benign for 43 lesions , atypical for 18 lesions and malignant for eight lesions . of the 43 lesions that yielded benign papilloma upon core needle biopsy , one had intraductal papillary carcinoma found upon surgery ; an immediate surgical biopsy had been recommended for this lesion because of the imaging - histologic discordance . no additional carcinoma was found during the imaging follow - up . surgical excision was performed for 17 atypical papillary lesions , and this revealed intraductal ( n = 6 ) or invasive ( n = 2 ) papillary carcinoma in 8 ( 47% ) lesions . of the seven intraductal papillary carcinomas , surgery revealed invasive papillary carcinoma in one ( 14% ) . conclusion : our results suggest that papillary lesions of the breast that are diagnosed as benign upon sonographically guided 14-gauge core needle biopsy can be followed when the results are concordant with the imaging findings .
Atlanta officials are reassuring the public that operations will continue as normal as they deal with a cyberattack on the city's systems. While most of the city’s websites are working normally, a number of web pages that customers use to pay bills began to be affected Thursday morning. Access to court information was also affected. Whoever is behind the attack is asking for a $50,000 ransom. As the city struggled to contain the spread of the attack, city officials have been forced to take down web pages in other departments and literally unplug city computers. Some city workers aren’t even receiving email. Mayor Keisha Lance Bottoms said that her office is working with the FBI. “We are continuing to work with our federal partners and other stakeholders who continue to advise us on how best to navigate and approach this,” Bottoms said. City leaders stress that there have been no impacts to police, water service, 911 and Atlanta’s Hartsfield International Airport. The city, they point out, was built before computers. As a protective measure, Wi-Fi at the airport has been turned off. Security wait time signs and flight information signs may not be accurate as a result, officials cautioned. The greatest impacts appear to be at municipal court and the city detention center -- with computers down, many taken down protectively, city employees are having to manually admit inmates, handle tickets and warrants. The city court currently cannot validate warrants or process ticket payments online or in person. Customers will not be penalized for late payments, the city said. The city government isn’t getting specific about who the demands are from, what kind of data has been stolen and what’s being held hostage, but it’s clear that the city’s systems have been severely compromised. Bottoms did not say Friday whether the city planned to pay the $50,000 ransom, but already city council members are promising her millions if she needs to build a new secure system from the ground up. She referenced similar ransom attacks on corporations, and on other government agencies in Colorado and North Carolina. “What we know is that someone is in our system, and that there is a weakness there,” Bottoms said. “It is absolutely not what we wanted to have happened in the city of Atlanta. But to the extent that there are changes and upgrades that we need to make to our system, we need to do it now.” She added: “This is a massive inconvenience to the city.” ||||| (Reuters) - Atlanta is still struggling with its ability to collect online payments of bills and fees, officials said on Monday, four days after a ransomware attack snarled the computer system of Georgia’s capital city. Hackers caused outages of services offered through the city’s website and broader computer system while demanding a ransom of $51,000 paid in bitcoin to unlock the system. “This is much bigger than a ransomware attack, this really is an attack on our government,” Mayor Keisha Lance Bottoms told a news conference. “We are dealing with a (cyber) hostage situation.” She did not say whether Atlanta would pay the ransom. Atlanta officials said they have determined the hackers’ identity but declined to elaborate.
City representatives were not immediately available for further comment. Bottoms said only that the hackers entered the city’s digital system remotely as opposed to having had internal access. Ransomware is a type of malware that infects computers or computer networks and then freezes them, with the attackers demanding a ransom in order to restore services. The initial assault often comes via a phishing link that someone within the network opens on their email. As the disruption in Atlanta persists, the city is losing out financially, Bottoms told an earlier news conference on Friday. It was unclear how much it stands to lose or when the city expects to get its computer system fully operational again. ||||| Atlanta is the latest local government to have its computer system attacked by hackers. The criminals are demanding $51,000 in Bitcoin to remove their ransomware. It’s been a quiet few months on the ransomware front. The last notable attack targeted a hospital in Indiana back in January, but now a new target has been hit. The city government of Atlanta, Georgia, reports that their computer system was targeted by hackers in a ransomware attack. The City of Atlanta is currently experiencing outages on various customer facing applications, including some that customers may use to pay bills or access court-related information. We will post any updates as we receive them. pic.twitter.com/kc51rojhBl — City of Atlanta, GA (@Cityofatlanta) March 22, 2018 No Joy in Dixie The attack hit the city’s computer system early Thursday morning. The systems that were targeted were those that people use to pay bills as well as access data from the court system. City employees were handed printouts when they showed up for work, stating that they should not use their computers until the city’s IT department cleared them. City officials have warned employees and citizens who have used the computer system to monitor their bank accounts and to change their passwords. Officials do not know who was behind the ransomware attack, but they have received a demand from the hacker(s). The city of Atlanta can pay $6,800 in Bitcoin per unit to unlock individual systems or pay a grand total of $51,000 to unlock everything. Working Hard to Fix City officials have no plans to pay off the hackers over the ransomware attack and are working to resolve the issue. In a statement, the local government said: At this time, our Atlanta Information Management team is working diligently with support from Microsoft to resolve the issue. We are confident that our team of technology professionals will be able to restore applications soon. Our city website, Atlantaga.gov, remains accessible and we will provide updates as we receive them. The city of Atlanta is working with the Department of Homeland Security, Microsoft, Cisco, and the FBI to determine exactly which data has been compromised and to, hopefully, find a solution. Atlanta is just the latest in a growing fraternity whose membership relies upon being the victim of a ransomware attack. As stated above, a hospital in the state of Indiana was hit by hackers early this year. The latter half of 2017 saw multiple ransomware attacks: credit agency Equifax targeted in September with a ransom demand for $2.3 million, the Sacramento Regional Transit system in November to the tune of a single bitcoin, and the county government of Mecklenburg, North Carolina, getting hit in December with hackers demanding a payment of two bitcoins.
– Atlanta is being held hostage, by computer hackers who want more than $50,000 in bitcoin to stop their siege. "This is much bigger than a ransomware attack, this really is an attack on our government," Mayor Keisha Lance Bottoms said at a Monday presser about the e-attack, per Reuters, adding, "We are dealing with a [cyberhostage] situation." Bitcoinist reports the hack began Thursday morning, and it has taken down Atlanta's online bill payment system from some remote location, says Bottoms, who's staying mum over whether the ransom will be paid. (Bitcoinist notes, however, the city has "no plans" to pay up.) The FBI, Homeland Security, Cisco, and Microsoft are all teaming up to help the city figure out what data has been breached and what steps to take next in what Bottoms has deemed a "massive inconvenience," reports ABC News. Ransomware is a form of malware that brings individual computers or entire networks to a halt until money is paid to unfreeze them. The city's Twitter account posted Monday that "at this time, there is no evidence to show that customer or employee data has been compromised," though it did warn people to take standard precautions to protect their info. In Cobb County, that included instructions to county workers not to open any emails from the City of Atlanta, reports the Atlanta Journal-Constitution. Per Reuters, the hackers have already been IDed, though no names have yet been given. Over the weekend, 11Alive posted a list of non-impacted services, and WSBTV reports that customers won't be charged late fees if they're unable to pay bills online.
supergiant fast x - ray transients ( sfxts ) are a recently discovered subclass of high mass x - ray binaries . these sources display sporadic outbursts lasting from minutes to hours with peak luminosities of @xmath010@xmath1 - 10@xmath2 erg s@xmath3 , and spend long time intervals at lower x - ray luminosities , ranging from @xmath010@xmath4 erg s@xmath3 to @xmath010@xmath5 erg s@xmath3 . proposed models to interpret the sfxt behavior generally involve accretion onto a neutron star ( ns ) immersed in the clumpy wind of its supergiant companion @xcite . the x - ray luminosity during lower activity states is likely due to accretion onto the ns at a much reduced rate compared with that in outburst @xcite . here we report on the first long , high sensitivity observation ( @xmath032 ks ) of the sfxt 16479 . this observation was carried out shortly after the discovery of a very bright outburst from this source @xcite , and was aimed at investigating the low level emission of this source and gaining insight into the physical mechanisms that drive this activity . 16479 was observed between march 21 14:40:00 ut and march 22 01:30:00 ut @xcite . we show in figure [ fig : lcurve ] the 2 - 10 kev epic - pn light curve and spectra of the observation . in the first @xmath04 ks 16479 was caught during the decay from a higher ( `` a '' ) to a lower ( `` b '' ) flux state . the shape of the above light curves and the evolution of the spectrum across the state a - state b transition presented remarkable similarities to the eclipse ingress of eclipsing x - ray sources such as oao1657 - 415 @xcite . moreover the slower decay of the soft x - ray light curve ( in turn similar to that observed in oao1657 - 415 ) suggested that 16479 is seen through an extended dust - scattering halo @xcite . to confirm this we analysed the radial distribution of the x - ray photons detected from 16479 and compared it with the point spread function ( psf ) of the telescope / epic - pn camera . we found that a 30 extended dust scattering halo located halfway between us and 16479 agrees well with the e - folding decay time of the soft x - ray light curves . motivated by the above findings we considered a spectral model that has been used in studies of eclipsing x - ray binaries seen through a dust - scattering halo , that is : @xmath6=@xmath7@xmath8@xmath9+@xmath10[@xmath11@xmath12 + @xmath13@xmath14+@xmath15@xmath16 . in practice we fitted a model with two continuum components , a single power law of slope @xmath17 ( with normalisation @xmath11 and column density @xmath18 ) and a power law of slope @xmath19 ( with normalisation @xmath8 and column density @xmath20 ) . the former component was taken to approximate the sum of two @xmath17-slope power laws : the first represents the direct component dominating before the eclipse ingress ( i.e. the component due to the accretion process onto the ns ) , whereas the second represents the wind scattered component dominating during the eclipse @xcite . the second power law component , with photon index @xmath19=@xmath17 + 2 , originates from small angle scattering of ( mainly ) direct photons off interstellar dust grains along the line of sight @xcite . the gaussians in the above equation represent iron features around @xmath06.4 kev and @xmath07.0 kev , arising from the reprocessing of the source radiation in the supergiant wind . we kept @xmath17 fixed at 0.98 , and thus @xmath19=2.98 @xcite . the best fit parameters are reported in the caption of fig . [ fig : lcurve ] .
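to make the structure of the fitted model concrete , the python sketch below assembles two absorbed power laws ( photon indices 0.98 and 2.98 , as fixed in the fit described above ) plus two gaussian iron lines near 6.4 and 7.0 kev . the simple absorption law and all normalisations , column densities and line strengths used here are illustrative placeholders , not the published best - fit values .

```python
# schematic sketch of a two-component absorbed power-law model of the kind
# described in the text; every numerical value here is a placeholder.
import numpy as np

GAMMA = 0.98           # hard-component photon index fixed in the fit
GAMMA_S = GAMMA + 2.0  # dust-scattered component (index gamma + 2)

def absorbed_powerlaw(energy, norm, gamma, nh):
    # crude stand-in for a photoelectric absorption cross-section
    sigma = 2.0e-22 * energy ** -3
    return norm * energy ** -gamma * np.exp(-nh * sigma)

def gaussian_line(energy, norm, centre, width):
    return norm * np.exp(-0.5 * ((energy - centre) / width) ** 2)

def model(energy, n_hard, nh_hard, n_soft, nh_soft):
    continuum = (absorbed_powerlaw(energy, n_hard, GAMMA, nh_hard)
                 + absorbed_powerlaw(energy, n_soft, GAMMA_S, nh_soft))
    lines = (gaussian_line(energy, 1.0e-4, 6.4, 0.1)     # fe fluorescence line
             + gaussian_line(energy, 3.0e-5, 7.0, 0.1))  # second fe feature
    return continuum + lines

energies = np.linspace(2.0, 10.0, 200)  # 2-10 kev band, as in the light curves
spectrum = model(energies, n_hard=1e-2, nh_hard=1e23, n_soft=1e-2, nh_soft=1e21)
```

in a real analysis the absorption would be a tabulated photoelectric cross - section from a standard x - ray fitting package and the parameters would come from the fit , but the sketch shows how the direct / wind - scattered and dust - scattered components combine .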
in state a the relatively high source flux and the small ew of the iron fluorescence line testify that ( most of ) the emission is likely due to the direct component . in state b the ratio @xmath11/@xmath8 is larger than the corresponding value obtained during state a , whereas the ew of the fe - line at @xmath21 kev increased from @xmath0150 ev to @xmath0770 ev . we also found evidence ( @xmath02@xmath22 ) for an additional fe - line at @xmath23 kev with a @xmath0300 ev ew , consistent with being the @xmath24 . the most natural interpretation of this is that in state b the direct emission component is occulted along our line of sight , while the spectrum we observe is the sum of a dust scattered component , dominating at lower energies , and a wind scattered component characterised by a high absorption . the marked increase in the ew of the fe - line at @xmath06.5 kev across the state a - state b transition testifies that the region where the line is emitted is larger than the occulting body ( the supergiant companion , if we are dealing with an eclipse ) ; a simple estimate of this effect is sketched below . this also provides further evidence that the uneclipsed emission in state b at hard x - ray energies ( the @xmath17-slope power law ) arises mostly from photons scattered by the wind in the immediate surroundings of the source . we conclude that 16479 is the first sfxt that has displayed evidence for an x - ray eclipse . further observations of 16479 and other sfxts in quiescence will improve our knowledge of the low level emission of these sources , and will help clarify whether x - ray eclipses are common in sfxts . in fig . [ fig : simbolx ] we show the x - ray spectrum of 16479 during state b that would be observed by simbol - x . we used an exposure time of 28 ks and the latest simbol - x simulation tool available ( see http://www.apc.univ-paris7.fr/simbolx2008/ ) .
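a simple numerical reading of the equivalent - width change quoted above , assuming ( as the argument implies ) that the fe - line flux itself stays roughly constant across the transition :

```latex
% equivalent width is the line flux divided by the local continuum level
\mathrm{EW} \equiv \frac{F_{\mathrm{line}}}{F_{\mathrm{cont}}(E_{\mathrm{line}})}
\quad\Longrightarrow\quad
\frac{F_{\mathrm{cont}}^{A}}{F_{\mathrm{cont}}^{B}}
\approx \frac{\mathrm{EW}_{B}}{\mathrm{EW}_{A}}
\approx \frac{770\ \mathrm{eV}}{150\ \mathrm{eV}} \approx 5
```

that is , under this assumption the continuum near the line energy dropped by roughly a factor of five when the direct component was occulted , consistent with the line photons coming from a region much larger than the occulting body .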
we report on the first long ( @xmath032 ks ) pointed observation of the supergiant fast x - ray transient 16479 . results from the timing , spectral and spatial analysis of this observation show that the x - ray source 16479 underwent an episode of sudden obscuration , possibly an x - ray eclipse by the supergiant companion . we also found evidence for a soft x - ray extended halo around the source that is most readily interpreted as due to scattering by dust along the line of sight to 16479 .
During the ongoing conflict in Syria, UNITAR’s UNOSAT programme has been supporting the humanitarian community with satellite imagery derived analysis. While conducting damage assessments to civilian infrastructure in Syria, it became evident that there was also wide-spread destruction and damage to cultural heritage locations. This report is the result of a dedicated effort to assess the current status of 18 larger cultural heritage areas, in which 290 locations were found to have been affected during the last three years, of which 24 destroyed and 104 severely damaged. As the conflict continues, it is of utmost importance to better protect the invaluable treasures these areas, including UNESCO World Heritage Properties, bring in terms of common heritage to human-kind. The world cannot afford to let the destruction and looting that UNOSAT has reported here continue. We call on all relevant institutions, both nationally and internationally to ensure the current damage and looting cease, with special attention and support to the work UNESCO carries out in Syria and the Middle-East region. ||||| Rebels and state forces blame each other for toppling tower of 12th century Umayyad mosque in Unesco world heritage site The minaret of a famed 12th-century Sunni mosque in the northern Syrian city of Aleppo was destroyed on Wednesday, leaving the once-soaring stone tower a pile of rubble and twisted metal scattered in the tiled courtyard. President Bashar al-Assad's regime and anti-government activists traded blame for the attack on the Umayyad mosque in the heart of Aleppo's walled Old City, a Unesco World Heritage site. It was the second time in just over a week that a historic Sunni mosque in Syria has been seriously damaged. Mosques served as a launching-pad for anti-government protests in the early days of the Syrian uprising, and many have been targeted. Syria's state news agency, Sana, said that rebels from the al-Qaida-linked Jabhat al-Nusra group blew it up, while Aleppo-based activist Mohammed al-Khatib said a Syrian army tank fired a shell that "totally destroyed" the minaret. The mosque fell into rebel hands earlier this year after heavy fighting that damaged the historic compound. The area around it, however, remains contested. Syrian state troops are about 200 metres away. An amateur video posted online by the anti-government Aleppo Media Centre activist group showed the mosque's archways, charred from earlier fighting, and a pile of rubble where the minaret used to stand. Standing inside the mosque's courtyard, a man who appears to be a rebel fighter says regime forces recently fired seven shells at the minaret, but failed to bring it down. He said that on Wednesday the shells hit their target. "We were standing here today and suddenly shells started hitting the minaret. They [the army] then tried to storm the mosque but we pushed them back," the man says. The video appeared to be genuine and corresponded with other Associated Press reporting. The destruction in Aleppo follows a similar incident in the southern city of Daraa, where the minaret of the historic Omari Mosque was destroyed more than a week ago. The Daraa mosque was built during the Islamic conquest of Syria in the days of Caliph Omar ibn al-Khattab in the 7th century. In that instance as well, the opposition and regime blamed each other for the damage. Sana also accused Jabhat al-Nusra of positioning cameras around the area to record the event in that case. 
Syria's civil war poses a grave threat to the country's rich cultural heritage. Last year, the medieval market in Aleppo, which is located near the Umayyad Mosque, was gutted by fire sparked by fighting. Both rebels and regime forces have turned some of Syria's significant historic sites into bases, including citadels and Turkish bath houses, while thieves have stolen artefacts from museums. Five of Syria's six World Heritage sites have been damaged in the fighting, according to Unesco, the UN's cultural agency. Looters have broken into one of the world's best-preserved Crusader castles, Crac des Chevaliers, and ruins in the ancient city of Palmyra have been damaged. The damage is just part of the wider devastation caused by the country's crisis, which began more than two years ago with largely peaceful protests but morphed into a civil war as the opposition took up arms in the face of a withering government crackdown. The fighting has exacted a huge toll on the country, killing more than 70,000 people, laying waste to cities, towns and villages and forcing more than a million people to flee their homes and seek refuge abroad. ||||| 23 December 2014, Geneva, Switzerland - UNITAR today highlighted a new and comprehensive report by its UNOSAT programme that has revealed large scale destruction and damage to cultural heritage sites in Syria, including UNESCO World Heritage Properties. The study, carried out by experts on Syria cultural heritage and UNOSAT satellite image analysts, reviewed 18 different areas inside which a total of 290 locations were found to be directly affected by the ongoing conflict. UNOSAT based its analysis on a combination of commercially available very high resolution satellite images, UNESCO reports, information from archaeological experts on Syria as well as traditional and social media. “At this point in time we found it important to issue a comprehensive status report to alert decision-makers and the public of deterioration to many of the rich cultural heritage areas in Syria. The wide-spread destruction and damage we have observed call for increased protection efforts and support to the ongoing work of UNESCO”, says Einar Bjorgo, UNOSAT’s manager. Satellite imagery is often one of a few sources for objective information over conflict-areas. Few independent experts have access to these areas due to ongoing fighting. The imagery taken from space therefore brings timely evidence of what is happening and covers large areas to ensure as comprehensive a study as possible. “Satellite technology and images such as those analysed by UNOSAT in this report are essential to assess the state of the cultural heritage in Syria and Iraq”, says Alfredo Peréz de Armiñán, Assistant Director-General for Culture at UNESCO in the foreword to the report. “UNESCO […] welcomes this report, which will help it fulfil its mandate for the protection of the cultural heritage, and commends UNOSAT on this timely initiative, paving the way for similar collaborative efforts across the UN system. “ A dedicated web-site has been set up where the report can be downloaded in full, or by chapter, and various imagery-samples for media illustrating the observed damage are available for use. Old City of Aleppo: Multiple historical sites can be seen destroyed in this image as of 22 October 2014, such as the Carlton hotel, where craters are present. Other damaged locations include the Great Umayd Mosque in the lower right corner of the image. 
The Great Mosque’s minaret has been destroyed, in addition to severe damages to the wall and courtyard. Details can be found in the Aleppo section of the UNOSAT Syria Cultural Heritage Sites Report. Image copyright: US Department of State, Humanitarian Information Unit, NextView License (DigitalGlobe). Satellite image analysis by UNITAR-UNOSAT.
– Insurgents camping inside the 900-year-old Crac des Chevaliers Crusader medieval castle and snipers firing from atop Aleppo's Citadel illustrate the sad plight of Syria's oldest cultural heritage sites—some of them dating back to what Reuters calls the "dawn of civilization." Satellite images have revealed that 290 of these historic locations and buildings have sustained "large scale destruction and damage" in Syria's civil war, according to UNITAR, the United Nations' training and research institute. The most devastated areas include Damascus, Raqqa, Palmyra, and Aleppo, where some settlements have been around for more than 7,000 years, the UNITAR report released today notes. The 12th-century Umayyad Mosque (Great Mosque of Damascus) is one of the structures in that city that has been damaged, losing its famed minaret in April 2013. Examining commercially available images of 18 different areas, UNITAR discovered that 189 sites were moderately or severely damaged, while another 24 have been destroyed altogether; another 77 have been "possibly destroyed" (meaning debris is visible). The damage has been caused by fighting between rebels and Bashar al-Assad's forces, as well as by Sunni Muslim militants who believe some of the sites are "heretical," Reuters notes. The images are an "alarming testimony of the ongoing damage that is happening to Syria's vast cultural heritage," the report states. "National and international efforts for the protection of these areas need to be scaled up in order to save as much as possible of this important heritage to human-kind." A dedicated website has been set up that shows which sites have been damaged. (The "world's worst Nazi" reportedly died in Syria four years ago.)
SECTION 1. SHORT TITLE. This Act may be cited as the ``Officer Dale Claxton Bulletproof Police Protective Equipment Act of 2001''. SEC. 2. FINDINGS; PURPOSE. (a) Findings.--Congress finds that-- (1) Officer Dale Claxton of the Cortez, Colorado, Police Department was shot and killed by bullets that passed through the windshield of his police car after he stopped a stolen truck, and his life may have been saved if his police car had been equipped with bullet-resistant equipment; (2) the number of law enforcement officers who are killed in the line of duty would significantly decrease if every law enforcement officer in the United States had access to additional bullet-resistant equipment; (3) according to studies, between 1990 and 2000, 1,700 law enforcement officers in the United States were shot and killed in the line of duty; (4) the Federal Bureau of Investigation estimates that the risk of fatality to law enforcement officers while not wearing bullet-resistant equipment, such as an armor vest, is 14 times higher than for officers wearing an armor vest; and (5) the Executive Committee for Indian Country Law Enforcement Improvements reports that violent crime in Indian country has risen sharply despite a decrease in the national crime rate, and has concluded that there is a ``public safety crisis in Indian country''. (b) Purpose.--The purpose of this Act is to save lives of law enforcement officers by helping State, local, and tribal law enforcement agencies provide officers with bullet-resistant equipment and video cameras. SEC. 3. MATCHING GRANT PROGRAM FOR LAW ENFORCEMENT BULLET-RESISTANT EQUIPMENT. (a) In General.--Part Y of title I of the Omnibus Crime Control and Safe Streets Act of 1968 is amended-- (1) by striking the part designation and part heading and inserting the following: ``PART Y--MATCHING GRANT PROGRAMS FOR LAW ENFORCEMENT ``Subpart A--Grant Program for Armor Vests''; (2) by striking ``this part'' each place that term appears and inserting ``this subpart''; and (3) by adding at the end the following: ``Subpart B--Grant Program for Bullet-Resistant Equipment ``SEC. 2511. PROGRAM AUTHORIZED. ``(a) In General.--The Director of the Bureau of Justice Assistance is authorized to make grants to States, units of local government, and Indian tribes to purchase bullet-resistant equipment for use by State, local, and tribal law enforcement officers. ``(b) Uses of Funds.--Grants awarded under this section shall be-- ``(1) distributed directly to the State, unit of local government, or Indian tribe; and ``(2) used for the purchase of bullet-resistant equipment for law enforcement officers in the jurisdiction of the grantee. ``(c) Preferential Consideration.--In awarding grants under this subpart, the Director of the Bureau of Justice Assistance may give preferential consideration, if feasible, to an application from a jurisdiction that-- ``(1) has the greatest need for bullet-resistant equipment based on the percentage of law enforcement officers in the department who do not have access to a vest; ``(2) has a violent crime rate at or above the national average as determined by the Federal Bureau of Investigation; or ``(3) has not received a block grant under the Local Law Enforcement Block Grant program described under the heading `State and Local Law Enforcement Assistance' of the Departments of Commerce, Justice, and State, the Judiciary, and Related Agencies Appropriations Act, 2001 (Public Law 106-553). 
``(d) Minimum Amount.--Unless all eligible applications submitted by any State or unit of local government within such State for a grant under this section have been funded, such State, together with grantees within the State (other than Indian tribes), shall be allocated in each fiscal year under this section not less than 0.50 percent of the total amount appropriated in the fiscal year for grants pursuant to this section except that the United States Virgin Islands, American Samoa, Guam, and the Northern Mariana Islands shall each be allocated 0.25 percent. ``(e) Maximum Amount.--A qualifying State, unit of local government, or Indian tribe may not receive more than 5 percent of the total amount appropriated in each fiscal year for grants under this section, except that a State, together with the grantees within the State may not receive more than 20 percent of the total amount appropriated in each fiscal year for grants under this section. ``(f) Matching Funds.--The portion of the costs of a program provided by a grant under subsection (a) may not exceed 50 percent. Any funds appropriated by Congress for the activities of any agency of an Indian tribal government or the Bureau of Indian Affairs performing law enforcement functions on any Indian lands may be used to provide the non-Federal share of a matching requirement funded under this subsection. ``(g) Allocation of Funds.--At least half of the funds available under this subpart shall be awarded to units of local government with fewer than 100,000 residents. ``SEC. 2512. APPLICATIONS. ``(a) In General.--To request a grant under this subpart, the chief executive of a State, unit of local government, or Indian tribe shall submit an application to the Director of the Bureau of Justice Assistance in such form and containing such information as the Director may reasonably require. ``(b) Regulations.--Not later than 90 days after the date of enactment of this subpart, the Director of the Bureau of Justice Assistance shall promulgate regulations to implement this section (including the information that must be included and the requirements that the States, units of local government, and Indian tribes must meet) in submitting the applications required under this section. ``(c) Eligibility.--A unit of local government that receives funding under the Local Law Enforcement Block Grant program, described under the heading `State and Local Law Enforcement Assistance' of the Departments of Commerce, Justice, and State, the Judiciary, and Related Agencies Appropriations Act, 2001 (Public Law 106-553), during a fiscal year in which it submits an application under this subpart shall not be eligible for a grant under this subpart unless the chief executive officer of such unit of local government certifies and provides an explanation to the Director that the unit of local government considered or will consider using funding received under the block grant program for any or all of the costs relating to the purchase of bullet-resistant equipment, but did not, or does not expect to use such funds for such purpose. ``SEC. 2513. DEFINITIONS. 
``In this subpart-- ``(1) the term `equipment' means windshield glass, car panels, shields, and protective gear; ``(2) the term `State' means each of the 50 States, the District of Columbia, the Commonwealth of Puerto Rico, the United States Virgin Islands, American Samoa, Guam, and the Northern Mariana Islands; ``(3) the term `unit of local government' means a county, municipality, town, township, village, parish, borough, or other unit of general government below the State level; ``(4) the term `Indian tribe' has the same meaning as in section 4(e) of the Indian Self-Determination and Education Assistance Act (25 U.S.C. 450b(e)); and ``(5) the term `law enforcement officer' means any officer, agent, or employee of a State, unit of local government, or Indian tribe authorized by law or by a government agency to engage in or supervise the prevention, detection, or investigation of any violation of criminal law, or authorized by law to supervise sentenced criminal offenders.''. (b) Authorization of Appropriations.--Section 1001(a) of the Omnibus Crime Control and Safe Streets Act of 1968 (42 U.S.C. 3793(a)) is amended by striking paragraph (23) and inserting the following: ``(23) There are authorized to be appropriated to carry out part Y-- ``(A) $25,000,000 for each of fiscal years 2002 through 2004 for grants under subpart A of that part; and ``(B) $40,000,000 for each of fiscal years 2002 through 2004 for grants under subpart B of that part.''. SEC. 4. SENSE OF CONGRESS. In the case of any equipment or products that may be authorized to be purchased with financial assistance provided using funds appropriated or otherwise made available by this Act, it is the sense of Congress that entities receiving the assistance should, in expending the assistance, purchase only American-made equipment and products. SEC. 5. TECHNOLOGY DEVELOPMENT. Section 202 of title I of the Omnibus Crime Control and Safe Streets Act of 1968 (42 U.S.C. 3722) is amended by adding at the end the following: ``(e) Bullet-Resistant Technology Development.-- ``(1) In general.--The Institute is authorized to-- ``(A) conduct research and otherwise work to develop new bullet-resistant technologies (i.e., acrylic, polymers, aluminized material, and transparent ceramics) for use in police equipment (including windshield glass, car panels, shields, and protective gear); ``(B) inventory bullet-resistant technologies used in the private sector, in surplus military property, and by foreign countries; and ``(C) promulgate relevant standards for, and conduct technical and operational testing and evaluation of, bullet-resistant technology and equipment, and otherwise facilitate the use of that technology in police equipment. ``(2) Priority.--In carrying out this subsection, the Institute shall give priority in testing and engineering surveys to law enforcement partnerships developed in coordination with high-intensity drug trafficking areas. ``(3) Authorization of appropriations.--There is authorized to be appropriated to carry out this subsection $3,000,000 for fiscal years 2002 through 2004.''.
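To see how the funding limits in proposed section 2511(d)-(f) interact, here is a small hypothetical Python check; the helper and the dollar figures in the example are illustrative only and are not part of the bill text.

```python
# Illustrative check of the subpart B funding caps summarized in Sec. 2511:
# a 5% per-grantee cap, a 20% per-State cap, and a 50% federal matching limit.
# This helper is a reading aid, not statutory language.
def check_award(total_appropriation: float, program_cost: float,
                proposed_award: float, state_total_with_grantees: float):
    problems = []
    if proposed_award > 0.05 * total_appropriation:
        problems.append("single grantee exceeds 5% of the appropriation")
    if state_total_with_grantees > 0.20 * total_appropriation:
        problems.append("State plus its grantees exceed 20% of the appropriation")
    if proposed_award > 0.50 * program_cost:
        problems.append("federal share exceeds the 50% matching limit")
    return problems

# Example: against the $40,000,000 authorized for subpart B in a fiscal year,
# a $100,000 award toward a $250,000 equipment program passes all three caps.
print(check_award(40_000_000, 250_000, 100_000, 1_000_000))  # -> []
```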
Officer Dale Claxton Bulletproof Police Protective Equipment Act of 2001 - Amends the Omnibus Crime Control and Safe Streets Act of 1968 to authorize the Director of the Bureau of Justice Assistance to make grants to States, local governments, and Indian tribes to purchase bullet-resistant equipment for use by law enforcement officers. Sets forth provisions regarding permissible uses of grant funds, preferential consideration, minimum and maximum allocations, matching funds, awards to local governmental units with fewer than 100,000 residents, and application requirements. Expresses the sense of Congress that entities receiving assistance under this Act should purchase only American-made equipment and products. Authorizes the National Institute of Justice (NIJ) to: (1) conduct research and otherwise work to develop new bullet-resistant technologies for use in police equipment; (2) inventory bullet-resistant technologies used in the private sector, in surplus military property, and by foreign countries; and (3) promulgate relevant standards for, and conduct technical and operational testing and evaluation of, bullet-resistant technology and equipment, and otherwise facilitate the use of that technology in police equipment. Directs NIJ to give priority in testing and engineering surveys to law enforcement partnerships developed in coordination with high-intensity drug trafficking areas.
physically interesting superconducting loops with junctions are the superconducting quantum interference device ( squid ) and the superconductor / normal / superconductor loop with a long normal sector ( sns loop ) . both are of mesoscopic size and yield persistent currents when a magnetic flux threads the loop . the thickness of the josephson junctions in a squid loop is much smaller than the superconducting coherence length , but the thickness of the normal segment @xmath1 of an sns loop can be larger than that . the persistent current in the former system flows by tunneling through the josephson junction , while that in the latter flows by the long - range proximity effect . since these two loops are very different in nature , we can anticipate a different current - phase relation for the sns loop , but this was not explicitly demonstrated in previous studies for the superconductor / normal / superconductor hybrid junction @xcite and for the sns loop @xcite in connection with the andreev reflection process @xcite . the current described by the motion of a pair of an electron and a hole in the normal sector changes into that carried by a cooper pair in the superconducting sector . there is an intermediate region near the edge of the superconductor , where the current is described by quasiparticles and quasiholes . then the current carried by the electrons and the holes in the normal sector should be the same as that carried by the quasiparticles and quasiholes in the intermediate region . this current conservation condition can be satisfied by allowing the two wave vectors of the quasiparticle and the quasihole to differ from each other , like those of the electron and the hole in the normal sector . in solving the bogoliubov - de gennes ( bdg ) equation , we consider the average energy of a pair of particles to be a dynamic variable rather than a constant chemical potential . thus the values of the dynamic variables are determined by minimizing the free energy of the sns loop . recently an experiment on a high-$T_c$ superconductor ( hts ) junction with an interlayer much thicker than the superconducting correlation length has been reported @xcite , where the hts junction was incorporated in a superconducting loop with threading external magnetic flux $\Phi_{ext}$ . in this single - junction interferometer experiment , the current - phase relation shows highly non - sinusoidal behavior : as temperature goes down , the slope of the current for zero external flux becomes larger than that for $\Phi_{ext} = \Phi_0/2$ , with $\Phi_0$ the superconducting unit flux quantum . the tunneling currents are thought to take place through the long - range proximity effect across the thick prba2cu3o7 ( pbco ) interlayer . since the cuo chains in the pbco layer are metallic at low temperature , the above junction can be considered as an sns junction incorporated in a superconducting loop . in this study we calculate the persistent current of the above sns loop considering the current conservation and free energy minimum conditions , and show that the experimentally observed non - sinusoidal type of current can emerge through the long - range proximity effect in the sns loop . the quasiparticles in an sns loop with threading external magnetic flux can be described by the bogoliubov - de gennes ( bdg ) equation @xmath8 , where @xmath9 , with the electron mass @xmath10 ; @xmath11 is the vector potential , with @xmath12 the circumference of the loop , and @xmath13 is the pair potential as a function of the spatial coordinate @xmath14 .
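the elided equation tokens above presumably stand for the standard one - dimensional bdg problem ; as a reference point ( with $u$ , $v$ , $\Delta$ , $\mu$ and $A$ being assumed notation rather than the paper's own ) , the textbook form reads
$$ \begin{pmatrix} H_0 & \Delta(x) \\ \Delta^{*}(x) & -H_0^{*} \end{pmatrix} \begin{pmatrix} u(x) \\ v(x) \end{pmatrix} = E \begin{pmatrix} u(x) \\ v(x) \end{pmatrix} , \qquad H_0 = \frac{1}{2m}\left( -i\hbar\,\partial_x - \frac{e}{c}A \right)^{2} - \mu , $$
where , for a thin loop of circumference $L$ threaded by the flux $\Phi_{ext}$ , the vector potential along the loop can be taken as $A \simeq \Phi_{ext}/L$ .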
for a superconducting loop without a junction threaded by an aharonov - bohm flux , the quasiparticle wave functions @xmath15 and @xmath16 contain an extra factor due to gauge invariance such as @xmath17 and @xmath18 , where @xmath19 . a uniform flow of persistent current is derived by the bdg equation with the pair potential @xmath20 @xcite . if the superconducting loop is interrupted by a normal sector as shown in fig . [ fig : loop ] , the wave function @xmath21 of a pair of an electron and a hole in the normal sector @xmath22 and of a pair of a quasiparticle and a quasihole in the intermediate region of the superconducting sector @xmath23 is given by @xmath24 , where @xmath25 is the phase shift due to andreev reflection at the ns interface @xcite ; when a pair of quasiparticles passes through the ns interface , each quasiparticle acquires an additional phase equal to @xmath25 . since we do not make any assumption about the sign of the wave vectors @xmath26 and @xmath27 , this wave function can describe both the excitations moving clockwise and counterclockwise . a notable point here is that the wave vectors of the quasiparticle and the quasihole in the previous work @xcite were set to be the same . it is , however , natural to discriminate the wave vectors , like those of the particles @xmath26 and @xmath27 in the normal sector , in order to satisfy the current conservation condition . therefore , we introduce different wave vectors , @xmath28 and @xmath29 , for the quasiparticle and the quasihole in the intermediate region of the superconducting sector . this equation can be solved easily for the normal sector , in which @xmath30 , and yields the relations for @xmath31 and @xmath32 such that @xmath33 . in the intermediate region of the superconducting sector , however , the bdg equation must be solved with the pair potential @xmath34 . then the bdg equation for @xmath35 becomes @xmath36 and @xmath37 . representing @xmath38 and @xmath39 such as @xmath40 and @xmath41 , we get an expression for @xmath31 and @xmath32 , @xmath42 , with @xmath43 , @xmath44 and @xmath45 . for @xmath46 , we can also solve the bdg equation and find that @xmath47 and @xmath48 . the phase matching conditions for the wave function of eq . ( [ wfns ] ) at @xmath49 and @xmath50 are given by @xmath51 and @xmath52 , where @xmath53 and @xmath54 represent the phase parts of the coefficients @xmath55 and @xmath56 , respectively . the condition for the existence of a solution leads to the boundary condition @xmath57 . when an electron becomes andreev - reflected at the superconductor / normal interface , the transmitted quasiparticle pair obtains the additional phase @xmath58 . the phase @xmath59 in the boundary condition in eq . ( [ contns ] ) is the sum of the two phase changes @xmath58 due to the andreev reflections at each interface . the condition of current conservation at the ns interface will be given by using the representation of the flux @xmath60 . in the wave function @xmath61 in eq . ( [ wfns ] ) , @xmath15 is the electronlike wave function and @xmath16 is the holelike wave function , which is the complex conjugate of the electronlike wave function . the current representation for a pair of electrons thus should be obtained with the wave function @xmath62 , and we get the relation @xmath63 . the cooper pairs in the superconducting sector then should carry the current @xmath64 , with the density of cooper pairs @xmath65 and @xmath66 .
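as a point of reference for the andreev phase shift invoked above ( denoted @xmath25/@xmath58 in the elided notation ) , the standard result for sub - gap energies is
$$ \chi(E) = \arccos\!\left( \frac{E}{\Delta} \right) , \qquad |E| < \Delta , $$
so that the two ns interfaces together contribute $2\chi(E)$ to the phase appearing in the boundary condition ; this is the textbook expression and is offered here only as an assumed stand - in for the paper's own definition .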
when we calculate the current through the sns junction , we may be able to set the average energy @xmath31 of the particle and the hole in eq . ( [ normalu ] ) equal to the chemical potential . in the sns loop , however , @xmath31 in eq . ( [ normalu ] ) need not be the constant chemical potential . for example , consider a simple normal loop with a threading flux : the average energy of two particles @xmath31 at the fermi level is different from the chemical potential @xmath67 , such that @xmath68 . since @xmath26 and @xmath27 depend on the dynamic variable @xmath25 as well as @xmath69 in the sns loop , @xmath31 cannot be set as a constant chemical potential but should also be a dynamic variable to be determined . since an extra variable @xmath31 is introduced , we need one more independent relation , which is given by the requirement of the free energy minimum @xcite . since the intermediate region of the superconducting sector is very thin , we neglect the energy of this region in the free energy expression and consider only the energy of the cooper pairs . in the superconducting sector the cooper pairs carry the persistent current corresponding to the cooper pair wave vector @xmath70 , and the energy of a cooper pair can be written as @xmath71 with @xmath66 . therefore the total free energy per particle @xmath72 can be written as @xmath73 . using eqs . ( [ normalu ] ) and ( [ u ] ) , the total free energy can be represented in terms of @xmath74 and @xmath75 . solving the coupled equations ( [ normalu ] ) , ( [ normale ] ) , ( [ u ] ) , ( [ e ] ) , ( [ contns ] ) , ( [ current ] ) and ( [ ce ] ) numerically , we obtain the energy levels corresponding to the two solutions of the bdg equation in eq . ( [ bdg ] ) as a function of the external flux @xmath86 , as shown in fig . [ fig : endiagram](a ) , where the solid and the dashed lines denote the lower levels and the other lines the higher levels . the ground state corresponds to the solid ( dashed ) line for @xmath87 . the persistent currents in fig . [ fig : endiagram](b ) are represented by the same lines as those of the corresponding states in fig . [ fig : endiagram](a ) . the persistent current @xmath64 can be written as @xmath88 , where @xmath89 , and @xmath90 is a gap potential chosen arbitrarily . since the cooper pairs in the superconductor are in the coherent condensate state , each cooper pair carries the same superconducting current . a cooper pair changes into a pair of normal electrons in the normal sector via a pair of quasiparticles in the intermediate region . since the current in the loop should be conserved , a current equal to the macroscopic persistent current in the superconducting sector flows in the normal sector . in fig . [ fig : endiagram](b ) , we can see several persistent currents , of which the persistent current of the ground state corresponds to the solid line for @xmath91 and the dashed line for @xmath92 , i.e. , the saw - tooth type of current . recently a single - junction interferometer experiment was performed on an hts junction @xcite ( @xmath93/@xmath94/@xmath93 ) incorporated into a superconducting loop with a penetrating magnetic flux . since there is no misorientation angle between the two d - wave superconductors across the interlayer in this experiment , the phase difference across the interlayer can be brought about only by the threading magnetic flux . they obtained the current - phase relation at finite temperature , where the slope for zero external flux is larger than that for the external flux @xmath95 .
in this experiment , the thickness of the pbco interlayer of the junction is as large as hundreds of @xmath96 , which is larger than the high-$T_c$ superconducting correlation length by an order of magnitude , and thus the cooper pairs cannot directly tunnel through the junction . these long - range proximity effects have been observed in many experiments on hts junctions @xcite and are considered a characteristic of sns junctions . thus the experimental results may be explained by calculating the currents of the sns loop as a function of the threading external flux . here one can raise the question of whether this is a real proximity effect , i.e. , whether there are filaments of pinholes or microshorts so that the cooper pairs can be transported by resonant tunneling through localized states in the filaments . recently an experiment on a trilayer hts junction @xcite has been reported , in which atomically smooth hts films and a uniform trilayer junction were synthesized ; the authors concluded that the long - range proximity effect does not originate from resonant tunneling through energy - aligned states in microshorts , but is an intrinsic property of the interlayer . in fact , the supercurrents are known to flow via the metallic cuo chains in the pbco interlayer @xcite . from the currents in fig . [ fig : endiagram](b ) corresponding to the states in fig . [ fig : endiagram](a ) we can obtain the persistent currents of the sns loop at finite temperatures . at finite temperature , since the current state with energy @xmath98 has a probability proportional to @xmath99 , we can obtain the thermally averaged persistent currents from those current states in fig . [ fig : endiagram ] , as shown in fig . [ fig : pcsin ] . we can observe highly non - sinusoidal currents in fig . [ fig : pcsin ] , where the amplitude maxima approach the point @xmath100 and the slope of the current at @xmath100 becomes larger than that at @xmath95 as temperature goes down . the current takes the sinusoidal form only after the temperature goes up such that @xmath101 , which corresponds to a temperature near $T_c$ . actually , similar behavior can also be seen in a loop interrupted by a grain - boundary josephson junction rather than by an sns junction ; in that case , however , it comes from the phase difference across the grain - boundary josephson junction due to the misorientation angle between the two d - wave superconductors @xcite . in sns junctions the wave functions acquire the andreev reflection phase shift @xmath25 , which appears in the boundary condition of eq . ( [ contns ] ) . the andreev phase shift takes the place of the misorientation angle of the grain - boundary josephson junction and thus results in the non - sinusoidal current - phase relation shown here . in ref . @xcite the authors explained their own experimental results by assuming thermal fluctuations which induced the highly non - sinusoidal current - phase relation at low temperatures ; at higher temperatures near the critical temperature they recover the sinusoidal current - phase relation . in the present study , however , we show that the non - sinusoidal current - phase relation emerges naturally at low temperature without assuming thermal fluctuations . we have studied the current - phase relation of the sns loop with a threading magnetic flux . the cooper pairs in the superconducting sector become electron pairs in the normal sector via the quasiparticle states in the intermediate region .
the net current in the sns loop should be conserved , and thus we introduce different wave vectors for the pair of quasiparticles in order to satisfy the current conservation condition . furthermore , since the average energy of a pair of particles should be a dynamic variable , we also consider the free energy minimum condition . we obtained the persistent currents of the ground and excited states and found that we can explain the experimentally observed highly non - sinusoidal current - phase relation with maxima near zero external flux at low temperature .
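for completeness , the thermal averaging invoked in the finite - temperature discussion above is presumably the usual boltzmann - weighted sum over the flux - dependent levels ; writing the level energies as $E_n(\Phi_{ext})$ and the corresponding currents as $I_n(\Phi_{ext})$ ( assumed notation for the elided symbols ) , it reads
$$ \langle I(\Phi_{ext}) \rangle_T = \frac{ \sum_n I_n(\Phi_{ext})\, e^{-E_n(\Phi_{ext})/k_B T} }{ \sum_n e^{-E_n(\Phi_{ext})/k_B T} } , $$
which reduces to the saw - tooth ground - state current as $T \to 0$ and washes out toward a sinusoidal form as the temperature approaches $T_c$ , consistent with the behavior described above .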
we study the current - phase relation of the superconductor / normal / superconductor ( sns ) junction imbedded in a superconducting loop . considering the current conservation and free energy minimum conditions , we obtain the persistent currents of the sns loop . at finite temperature we can explain the experimentally observed highly non - sinusoidal currents which have maxima near the zero external flux .
recent data have provided convincing evidence for multiple interacting genetic factors as the main causative determinants of autism . tuberous sclerosis complex ( tsc ) is an autosomal dominant hamartomatosis with multisystem involvement including the brain , skin , heart , and kidney . the disease is characterized by a broad physical phenotypic spectrum with epilepsy , mental retardation , renal dysfunction , and dermatologic abnormalities . if autism and tsc were independently occurring disorders , it would be unlikely to identify many patients with both diseases , given their individual prevalence rates of 1/110 and 1/6000 . the basis for this association is not well understood , although elucidation of the mechanisms would throw light on the brain basis of idiopathic autism , for which there is strong evidence of a genetically determined neurobiological abnormality but little understanding of its nature . awareness of the relationship between autism spectrum disorder and tuberous sclerosis complex is important during the evaluation of individuals with either disorder . we report on a cohort of children with autism to determine the prevalence rate of tuberous sclerosis complex in autistic disorder in our region . all patients in this study came from the autistic disorder registry at our center between june 1 , 2005 , and may 31 , 2009 . all the patients who were eligible to participate in this study had to have the typical triad of symptoms of autism : social deficits , communication impairment , and rigid ritualistic interests . diagnosis of autism spectrum disorder was based on the criteria of the diagnostic and statistical manual of mental disorders - iv and the autistic disorder diagnostic interview - revised . in all children with autistic disorder , we routinely examined for any features of tuberous sclerosis complex by looking for neurocutaneous markers such as depigmented spots . in those with infantile spasm or epilepsy , the clinical features of tuberous sclerosis complex cases were ascertained by one of us ( wen - jun tu ) during the course of the study . estimates of the children 's abilities were made using standardized cognitive tests appropriate for age and/or ability ( the mullen scales , the wechsler scales , raven 's coloured matrices and the british picture vocabulary scale ) . unfortunately , none of the cases had genetic analysis because genetic analysis for tuberous sclerosis complex is not available in beijing . a low mental age was used as an exclusion criterion because it precluded confident diagnosis of an autism spectrum disorder and because parents of such children had poor compliance with follow - up in this study . the details of the study were explained to the parents of the participating children , and written informed consent was obtained from the parents . statistical analyses were performed using spss version 17.0 ( spss inc , usa ) for windows ( microsoft corporation , usa ) . during the 4-year period ( 2005 - 2009 ) , 632 patients were admitted to our center . we collected a database of 429 children ( 390 boys and 39 girls ; male to female ratio 10:1 ) with autistic disorder and pervasive developmental disorders . of these , only four cases were non - chinese : one was american , one was korean , and two were european .
[ table : demographic characteristics of the autistic and tsc patients ; sd : standard deviation ] we routinely examined all children with autistic disorder or pervasive developmental disorders for any features of tuberous sclerosis complex by looking for neurocutaneous markers , such as depigmented spots , which appear in 43% of children with tuberous sclerosis complex by the age of 30 months . of these , five had tuberous sclerosis complex . four out of the five tsc patients had bilateral lesions , while 23 of the 424 non - tsc patients had temporal lobe lesions . our china rehabilitation research center receives referrals from mainland china for children as young as 4 months with developmental problems such as autism and speech delay . thus , our registry for any disorder , such as autistic disorder and pervasive developmental disorders , can only represent the profile within a single region . the rate of 1.17% of tuberous sclerosis complex in autistic disorder can be considered a reliable estimate of the genuine association of autistic disorder and tuberous sclerosis complex in our study . however , as this was a hospital - based study , we might be underestimating the real incidence of tuberous sclerosis complex in autistic disorder , since not all autistic children were routinely examined in order to detect signs for diagnosing tsc . this has also been reported in other population studies of autistic disorder [ 9 - 11 ] . in a hospital - based survey from the tuberous sclerosis complex registry in hong kong island , with autistic disorder defined by the autistic disorder diagnostic interview - revised and the diagnostic and statistical manual of mental disorders - iv , autistic disorder was identified in 7 of 44 children with tuberous sclerosis complex , and seven of 753 children with autistic disorder had tuberous sclerosis complex . smalley reported that among autistic populations , the frequency of tsc is 1 - 4% and perhaps as high as 8 - 14% among the subgroup of autistic individuals with a seizure disorder , and gillberg and coleman estimated that 9% of children with autistic disorder have tuberous sclerosis complex . the prevalence rate of tuberous sclerosis complex in children with autistic disorder in our region was relatively low at 1.17% . bias between different studies might be introduced depending on whether the studies are cohort - or population - based . different authors might come from different subspecialties and have different backgrounds ; thus , the age at which these children were diagnosed varies as well . in addition , the study population and sample size could also produce considerable bias . according to our study and other research , autism spectrum disorder is a condition that might be associated with the development of tuberous sclerosis complex . the results of a population - based survey among 152,732 finnish children and adolescents showed that autistic disorder was associated with tuberous sclerosis . tuberous sclerosis complex results from mutations in one of two genes , tsc1 and tsc2 , coding for hamartin and tuberin , respectively . chopra and lawson reported that there was a trend towards greater severity for patients with tsc2 mutations compared with their tsc1 counterparts , particularly for autistic spectrum disorder , but this did not reach statistical significance . can autism spectrum disorder introduce the mutation in the tsc1 or tsc2 gene ?
more work is needed to clarify the mechanisms that underlie this unique association and the variability in expression before one can understand the biologic processes involved in the etiology of both diseases . the disparity in the patients ' sex ( male to female ratio 10:1 ) could have biased the results ; it is possible that in our region boys receive more parental attention . in addition , the prevalence rate of tuberous sclerosis complex in autistic disorder in our region was 1.17% . autism spectrum disorder is a condition that might be associated with the development of tuberous sclerosis complex . it is unfortunate that we could not perform genetic analysis for our cohort because this test is not available locally . furthermore , this survey was only a small cohort study ; a large multicenter approach would be necessary to obtain the necessary knowledge .
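for reference , the quoted prevalence follows directly from the cohort counts reported above ( 5 children with tuberous sclerosis complex among the 429 children with autistic disorder ) :
$$ \hat{p} = \frac{5}{429} \approx 0.0117 = 1.17\% . $$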
objective : to study the prevalence rate of tuberous sclerosis complex in autistic disorder . methods : we studied one cohort of children with autistic disorder , followed up from 2005 until 2009 , to determine the incidence of tuberous sclerosis complex . we established an autistic disorder registry in 2005 at china rehabilitation research center . during the 4-year period ( 2005 - 2009 ) , we collected a database of 429 children ( 390 boys and 39 girls ; male to female ratio 10:1 ) with autistic disorder and pervasive developmental disorders . we routinely examined all children with autistic disorder for any features of tuberous sclerosis complex by looking for neurocutaneous markers such as depigmented spots . in those with infantile spasm or epilepsy , the clinical features of tuberous sclerosis complex were monitored regularly during follow - up . findings : of these , five had tuberous sclerosis complex . thus , the prevalence rate of tuberous sclerosis complex in autistic disorder is 1.17% . all of these children were mentally retarded with moderate to severe grades ; their iq or developmental quotient was less than 70 . conclusion : the prevalence rate of tuberous sclerosis complex in autistic disorder was 1.17% in our region ; autism spectrum disorder is a condition that might be associated with development of tuberous sclerosis complex .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Arlington National Cemetery Burial Eligibility Act''. SEC. 2. PERSONS ELIGIBLE FOR BURIAL IN ARLINGTON NATIONAL CEMETERY. (a) In General.--Chapter 24 of title 38, United States Code, is amended by adding at the end the following new section: ``Sec. 2412. Arlington National Cemetery: persons eligible for burial ``(a) Primary Eligibility.--The remains of the following individuals may be buried in Arlington National Cemetery: ``(1) Any member of the Armed Forces who dies while on active duty. ``(2) Any retired member of the Armed Forces and any person who served on active duty and at the time of death was entitled (or but for age would have been entitled) to retired pay under chapter 1223 of title 10, United States Code. ``(3) Any former member of the Armed Forces separated for physical disability before October 1, 1949, who-- ``(A) served on active duty; and ``(B) would have been eligible for retirement under the provisions of section 1201 of title 10 (relating to retirement for disability) had that section been in effect on the date of separation of the member. ``(4) Any former member of the Armed Forces whose last active duty military service terminated honorably and who has been awarded one of the following decorations: ``(A) Medal of Honor. ``(B) Distinguished Service Cross, Air Force Cross, or Navy Cross. ``(C) Distinguished Service Medal. ``(D) Silver Star. ``(E) Purple Heart. ``(5) Any former prisoner of war who dies on or after November 30, 1993. ``(6) The President or any former President. ``(b) Eligibility of Family Members.--The remains of the following individuals may be buried in Arlington National Cemetery: ``(1) The spouse, surviving spouse (which for purposes of this paragraph includes any remarried surviving spouse, section 2402(5) of this title notwithstanding), minor child, and, at the discretion of the Superintendent, unmarried adult child of a person listed in subsection (a), but only if buried in the same gravesite as that person. ``(2)(A) The spouse, minor child, and, at the discretion of the Superintendent, unmarried adult child of a member of the Armed Forces on active duty if such spouse, minor child, or unmarried adult child dies while such member is on active duty. ``(B) The individual whose spouse, minor child, and unmarried adult child is eligible under subparagraph (A), but only if buried in the same gravesite as the spouse, minor child, or unmarried adult child. ``(3) The parents of a minor child or unmarried adult child whose remains, based on the eligibility of a parent, are already buried in Arlington National Cemetery, but only if buried in the same gravesite as that minor child or unmarried adult child. ``(4)(A) Subject to subparagraph (B), the surviving spouse, minor child, and, at the discretion of the Superintendent, unmarried adult child of a member of the Armed Forces who was lost, buried at sea, or officially determined to be permanently absent in a status of missing or missing in action. ``(B) A person is not eligible under subparagraph (A) if a memorial to honor the memory of the member is placed in a cemetery in the national cemetery system, unless the memorial is removed. A memorial removed under this subparagraph may be placed, at the discretion of the Superintendent, in Arlington National Cemetery. 
``(5) The surviving spouse, minor child, and, at the discretion of the Superintendent, unmarried adult child of a member of the Armed Forces buried in a cemetery under the jurisdiction of the American Battle Monuments Commission. ``(c) Disabled Adult Unmarried Children.--In the case of an unmarried adult child who is incapable of self-support up to the time of death because of a physical or mental condition, the child may be buried under subsection (b) without requirement for approval by the Superintendent under that subsection if the burial is in the same gravesite as the gravesite in which the parent, who is eligible for burial under subsection (a), has been or will be buried. ``(d) Family Members of Persons Buried in a Group Gravesite.--In the case of a person eligible for burial under subsection (a) who is buried in Arlington National Cemetery as part of a group burial, the surviving spouse, minor child, or unmarried adult child of the member may not be buried in the group gravesite. ``(e) Exclusive Authority for Burial in Arlington National Cemetery.--Eligibility for burial of remains in Arlington National Cemetery prescribed under this section is the exclusive eligibility for such burial. ``(f) Application for Burial.--A request for burial of remains of an individual in Arlington National Cemetery made before the death of the individual may not be considered by the Secretary of the Army or any other responsible official. ``(g) Register of Buried Individuals.--(1) The Secretary of the Army shall maintain a register of each individual buried in Arlington National Cemetery and shall make such register available to the public. ``(2) With respect to each such individual buried on or after January 1, 1998, the register shall include a brief description of the basis of eligibility of the individual for burial in Arlington National Cemetery. ``(h) Definitions.--For purposes of this section: ``(1) The term `retired member of the Armed Forces' means-- ``(A) any member of the Armed Forces on a retired list who served on active duty and who is entitled to retired pay; ``(B) any member of the Fleet Reserve or Fleet Marine Corps Reserve who served on active duty and who is entitled to retainer pay; and ``(C) any member of a reserve component of the Armed Forces who has served on active duty and who has received notice from the Secretary concerned under section 12731(d) of title 10, of eligibility for retired pay under chapter 1223 of title 10, United States Code. ``(2) The term `former member of the Armed Forces' includes a person whose service is considered active duty service pursuant to a determination of the Secretary of Defense under section 401 of Public Law 95-202 (38 U.S.C. 106 note). ``(3) The term `Superintendent' means the Superintendent of Arlington National Cemetery.''. (b) Publication of Updated Pamphlet.--Not later than 180 days after the date of the enactment of this Act, the Secretary of the Army shall publish an updated pamphlet describing eligibility for burial in Arlington National Cemetery. The pamphlet shall reflect the provisions of section 2412 of title 38, United States Code, as added by subsection (a). (c) Clerical Amendment.--The table of sections at the beginning of chapter 24 of title 38, United States Code, is amended by adding at the end the following new item: ``2412. Arlington National Cemetery: persons eligible for burial.''. 
(d) Technical Amendments.--(1) Section 2402(5) of title 38, United States Code, is amended by inserting ``, except section 2412(b)(1) of this title,'' after ``which for purposes of this chapter''. (2) Section 2402(7) of such title is amended-- (A) by inserting ``(or but for age would have been entitled)'' after ``was entitled''; (B) by striking out ``chapter 67'' and inserting in lieu thereof ``chapter 1223''; and (C) by striking out ``or would have been entitled to'' and all that follows and inserting in lieu thereof a period. (e) Effective Date.--(1) Except as provided in paragraph (2), section 2412 of title 38, United States Code, as added by subsection (a), shall apply with respect to individuals dying on or after the date of the enactment of this Act. (2) In the case of an individual buried in Arlington National Cemetery before the date of the enactment of this Act, the surviving spouse of such individual is deemed to be eligible for burial in Arlington National Cemetery under subsection (b) of such section, but only in the same gravesite as such individual. SEC. 3. PERSONS ELIGIBLE FOR PLACEMENT IN THE COLUMBARIUM IN ARLINGTON NATIONAL CEMETERY. (a) In General.--Chapter 24 of title 38, United States Code, is amended by adding after section 2412, as added by section 2(a) of this Act, the following new section: ``Sec. 2413. Arlington National Cemetery: persons eligible for placement in columbarium ``The cremated remains of the following individuals may be placed in the columbarium in Arlington National Cemetery: ``(1) A person eligible for burial in Arlington National Cemetery under section 2412 of this title. ``(2)(A) A veteran whose last period of active duty service (other than active duty for training) ended honorably. ``(B) The spouse, surviving spouse, minor child, and, at the discretion of the Superintendent of Arlington National Cemetery, unmarried adult child of such a veteran.''. (b) Clerical Amendment.--The table of sections at the beginning of chapter 24 of title 38, United States Code, is amended by adding after section 2412, as added by section 2(c) of this Act, the following new item: ``2413. Arlington National Cemetery: persons eligible for placement in columbarium.''. (c) Conforming Amendment.--Section 11201(a)(1) of title 46, United States Code, is amended by inserting after subparagraph (B), the following new subparagraph: ``(C) Section 2413 (relating to placement in the columbarium in Arlington National Cemetery).''. (d) Effective Date.--Section 2413 of title 38, United States Code, as added by subsection (a), and section 11201(a)(1)(C), as added by subsection (c), shall apply with respect to individuals dying on or after the date of the enactment of this Act. SEC. 4. MONUMENTS IN ARLINGTON NATIONAL CEMETERY. (a) In General.--Chapter 24 of title 38, United States Code, is amended by adding after section 2413, as added by section 3(a) of this Act, the following new section: ``Sec. 2414. Arlington National Cemetery: authorized headstones, markers, and monuments ``(a) Gravesite Markers Provided by the Secretary.--A gravesite in Arlington National Cemetery shall be appropriately marked in accordance with section 2404 of this title. ``(b) Gravesite Markers Provided at Private Expense.--(1) The Secretary of the Army shall prescribe regulations for the provision of headstones or markers to mark a gravesite at private expense in lieu of headstones and markers provided by the Secretary of Veterans Affairs in Arlington National Cemetery. 
``(2) Such regulations shall ensure that-- ``(A) such headstones or markers are of simple design, dignified, and appropriate to a military cemetery; ``(B) the person providing such headstone or marker provides for the future maintenance of the headstone or marker in the event repairs are necessary; ``(C) the Secretary of the Army shall not be liable for maintenance of or damage to the headstone or marker; ``(D) such headstones or markers are aesthetically compatible with Arlington National Cemetery; and ``(E) such headstones or markers are permitted only in sections of Arlington National Cemetery authorized for such headstones or markers as of January 1, 1947. ``(c) Monuments.--(1) No monument (or similar structure as determined by the Secretary of the Army in regulations) may be placed in Arlington National Cemetery except pursuant to the provisions of this subsection. ``(2) A monument may be placed in Arlington National Cemetery if the monument commemorates-- ``(A) the service in the Armed Forces of the individual, or group of individuals, whose memory is to be honored by the monument; or ``(B) a particular military event. ``(3) No monument may be placed in Arlington National Cemetery until the end of the 25-year period beginning-- ``(A) in the case of commemoration of service under paragraph (1)(A), on the last day of the period of service so commemorated; and ``(B) in the case of commemoration of a particular military event under paragraph (1)(B), on the last day of the period of the event. ``(4) A monument may be placed only in those sections of Arlington National Cemetery designated by the Secretary of the Army for such placement.''. (b) Clerical Amendment.--The table of sections at the beginning of chapter 24 of title 38, United States Code, is amended by adding after section 2413, as added by section 3(b) of this Act, the following new item: ``2414. Arlington National Cemetery: authorized headstones, markers, and monuments.''. (c) Effective Date.--The amendment made by subsection (a) shall apply with respect to headstones, markers, or monuments placed in Arlington National Cemetery on or after the date of the enactment of this Act. SEC. 5. PUBLICATION OF REGULATIONS. Not later than one year after the date of the enactment of this Act, the Secretary of the Army shall publish in the Federal Register any regulation proposed by the Secretary under this Act. Passed the House of Representatives March 23, 1999. Attest: JEFF TRANDAHL, Clerk.
Arlington National Cemetery Burial Eligibility Act - Allows the remains of the following persons to be interred at Arlington National Cemetery: (1) any member of the armed forces who dies while on active duty; (2) any retired member and any person who served on active duty and at the time of death was entitled to retired pay (or would have been so entitled but for his or her age); (3) any former member who was separated for physical disability before October 1, 1949, who served on active duty, and who would have been eligible for disability retirement if such provisions had been in effect on such date; (4) any former member whose last active military service was terminated honorably and who has been awarded one of a number of specified military decorations; (5) any former prisoner of war who dies on or after November 30, 1993; (6) the President or any former President; (7) the spouse, surviving spouse, minor child, and, in the discretion of the Cemetery's Superintendent, unmarried adult child of an interred member (but only if buried in the same gravesite); (8) the spouse, minor child, and unmarried adult child (discretionary) of a member on active duty if such person dies while the member is on active duty; (9) the individual whose spouse, minor child, and unmarried adult child (discretionary) is eligible under (8), above, but only if buried in the same gravesite; (10) the parents of a minor child or unmarried adult child whose remains, based on the parent's eligibility, are already buried in the Cemetery, but only if buried in the same gravesite; (11) the surviving spouse, minor child, and unmarried adult child (discretionary) of a member who was lost, buried at sea, or officially determined to be permanently absent in a status of missing or missing in action; and (12) the surviving spouse, minor child, and unmarried adult child (discretionary) of a member buried in a cemetery under the jurisdiction of the American Battle Monuments Commission.
the demand for definitive management of end - stage organ disease among hiv - infected canadians is increasing . until recently , despite international data supporting positive clinical outcomes , hiv - infected canadians with end - stage liver disease were not eligible for transplantation except in british columbia ( bc ) , where the bc transplant program accepts them for referral , assessment , wait - listing and liver allografting . an analysis was needed to determine the issues surrounding liver transplantation in hiv - infected patients . the researchers reviewed the charts of the 28 hiv - infected patients who were referred to bc transplant for liver transplantation between 2004 and 2013 . data were collected on hiv and liver disease status , the initial transplant assessment and clinical outcomes . the majority had undetectable hiv viral loads , were taking antiretrovirals and were infected with hepatitis c virus ( n=16 ) . the most common comorbidities were anxiety and mood disorders ( n=4 ) , and hemophilia ( n=4 ) . among the patients eligible for transplantation , four underwent transplantation for autoimmune hepatitis ( 5.67 years post - transplant ) , nonalcoholic fatty liver disease ( 2.33 years ) , hepatitis c virus ( 2.25 years ) and hepatitis b / hepatitis delta virus coinfection ( recently transplanted ) . one patient died of acute renal failure while awaiting transplantation . ten died during the pre - assessment period and ten were not suitable candidates for transplantation . the main reason for unsuitability was stable disease not requiring transplantation ( n=4 ) . to date , interdisciplinary care and careful patient selection have yielded positive outcomes , including the longest - surviving hiv - infected liver transplant recipient in canada . the present study was a retrospective chart review of 28 hiv - infected candidates between 2004 and 2013 . all patients were seen by both the transplant social worker and the transplant psychologist , in addition to the transplant hepatology and surgical staff , and the transplant coordinators . although it may appear that the candidates misled the assessment team in certain respects , there is no proof of this and , therefore , from an ethical perspective , the patients ' disclosures were assumed to be truthful . the reasons for referral for transplantation were reviewed , in addition to the status of the patients and their clinical outcomes . the present study was approved by the university of british columbia clinical research ethics board ( vancouver , bc ) .
the liver transplant program 's policy with regard to placement of hiv - infected patients on the transplant waiting list after successful assessment includes the following :
- all candidates must meet the general criteria for transplantation that apply to all non - hiv - infected candidates ( eg , no active infection , no active malignancy , abstinence from substance abuse , etc ) .
- candidates should be on art supervised by an hiv specialist .
- candidates must be free from opportunistic infection .
- the hiv viral load should be undetectable .
- although there is no absolute threshold cd4 count , the minimum accepted cd4 count is approximately 150 cells / mm3 .
patients who did not meet these criteria were assessed on an individual basis ( these criteria are restated schematically after the results below ) . since 2004 , there have been 28 hiv - infected patients referred for liver transplant assessment . the majority ( 23 of 28 [ 82% ] ) of these patients were men and most were assessed as outpatients of the pre - assessment clinic , with a small minority referred to the transplant program as inpatients ( including hospital to hospital transfer for assessment ) . the mean age of these patients was 47 years and the mean follow - up time was 4.1 months ( table 1 ) . the majority of these patients were bc residents ; some were from other provinces and one was from the united states . all patients had chronic liver disease and there were no patients with acute liver failure assessed . one patient in an out - of - province hospital was referred on an urgent basis to the liver transplant program with acute drug hepatotoxicity secondary to highly active art ( haart ) medications . the program accepted the patient in transfer for the purposes of expedited transplant assessment but the patient died en route to bc . there have been no other referrals of hiv - infected patients with acute liver failure . five patients were coinfected with hbv and seven patients had nonviral causes of liver failure . of the non - liver and non - hiv - related medical comorbidities , the most common were hemophilia , anxiety and depression ( table 2 ) . of the patients who proceeded to transplantation , one had autoimmune hepatitis , one had nonalcoholic hepatosteatosis , one had hepatitis c , and one had hbv and hepatitis delta virus coinfection ; one further patient died while waiting ( table 3 ) . the latter patient was from out of province and was referred to our centre specifically because of hiv infection . the most common reason for transplant unsuitability was stable liver disease not requiring transplantation ( n=4 ) . the transplanted patients had undetectable hiv viral loads and were on haart at the time of transplantation . currently , three patients with hcv are being assessed to determine whether they are suitable transplant candidates . the first transplanted patient ( patient 1 ) underwent transplantation for autoimmune hepatitis and is now more than five years post - transplant . his immunosuppression induction consisted of low - dose tacrolimus ( ie , 0.5 mg every third day ) , mycophenolate mofetil and tapered corticosteroids because pharmacokinetic interactions with the haart medications occurred . this patient sustained acute renal injury while on tacrolimus ; however , renal function has recovered and he is now maintained on mycophenolate mofetil monotherapy .
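purely as an illustrative restatement of the listing criteria described above ( this is not part of the program's actual workflow ; the field names and the handling of the cd4 threshold are invented for the sketch , and borderline cases were in practice assessed individually ) :

```python
# Illustrative sketch of the BC listing criteria described above.
# Field names and the structure of `Candidate` are invented for clarity;
# patients not meeting the criteria were assessed on an individual basis.

from dataclasses import dataclass

@dataclass
class Candidate:
    meets_general_transplant_criteria: bool  # no active infection/malignancy, abstinence, etc
    on_art_with_hiv_specialist: bool
    free_of_opportunistic_infection: bool
    hiv_viral_load_undetectable: bool
    cd4_cells_per_mm3: int

def meets_listing_criteria(c: Candidate) -> bool:
    return (c.meets_general_transplant_criteria
            and c.on_art_with_hiv_specialist
            and c.free_of_opportunistic_infection
            and c.hiv_viral_load_undetectable
            and c.cd4_cells_per_mm3 >= 150)  # approximate minimum; not an absolute threshold
```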
this patient has not experienced any rejection episodes ; however , he has required regular endoscopic retrograde cholangiopancreatography to manage and drain biliary sludge due to recurrent biliary anastomotic stricture . as a result , he has a persistent increase in hepatobiliary liver biochemistry ( ie , alkaline phosphatase , gamma - glutamyl transferase ) . patient 2 was transplanted at 55 years of age for nonalcoholic steatohepatitis ( nash ) . this patient 's hiv is controlled with abacavir / lamivudine ( kivexa , viiv healthcare , canada ) and raltegravir , and because there were no drug interactions , the standard induction tacrolimus , mycophenolate mofetil and tapered corticosteroid dosing was used . this patient also has not experienced any graft rejection , but developed persistent increases in his serum alanine aminotransferase ( alt ) level , which was previously normal ( ie , alt 120 u / l to 150 u / l ) . this was initially attributed to recurrent nash at one year , four months post - transplant . he was subsequently discovered to have acquired acute hcv genotype 1b infection that is now chronic . recent elastography ( fibroscan , echosens , france ) revealed only mild fibrosis ( ie , metavir score f1 , where f4 is cirrhosis ) . patient 3 was transplanted at 49 years of age for hcv coinfection and is two years , six months post - transplant . this patient 's hiv infection is controlled with kivexa and raltegravir , and he also received standard induction dosing with tacrolimus , mycophenolate mofetil and tapered corticosteroids . at three months post - transplant , it was demonstrated biochemically and with a liver biopsy that there was evidence of graft hepatitis and metavir stage 2 fibrosis ( ie , f2 ) . two years later , this patient has experienced decreasing alt and aspartate aminotransferase ( ast ) levels , with an increased serum bilirubin level but no other evidence of decompensation . at two years , three months post - transplant , the patient 's ascites is being managed with diuretics with the hope that he will be suitable for a non - interferon - based antiviral regimen in the future ( table 3 ) . the final transplanted patient is 50 years of age and has hbv and hepatitis delta virus coinfection and hepatocellular carcinoma . the immediate post - transplant complications included postoperative bleeding , acute renal dysfunction and delayed surgical biliary anastomosis . pretransplant , the patient 's hiv antiviral agents included tenofovir df / emtricitabine and raltegravir . his hbv viral load was undetectable , and locoregional therapy of his hepatocellular carcinoma included transarterial chemoembolization . post - transplant , he has received hepatitis b immunoglobulin in addition to tenofovir . despite a prolonged hospitalization for convalescence and physical rehabilitation , his hepatitis b surface antigen and hepatitis d nucleic acid tests have remained negative ( table 3 ) . overall , the post - transplant quality of life in these transplant recipients has been excellent , although the recipient with pretransplant hcv infection has developed ascites more than two years post - transplant .
nearly a decade after the establishment of an hiv liver transplant program in bc , we demonstrated that successful transplantation in hiv - infected and hiv - hcv coinfected patients is possible . the success of these candidates is due to a multidisciplinary team that includes a transplant surgical team , transplant medicine team , hiv specialty team and psychosocial support . these patients were selected for transplantation based on their failing liver disease . there is absolutely no bias against the hiv - infected individual ; in fact , we have been their advocates . in the first year of the program , and in the time leading up to the establishment of the program , there was uncertainty as to what subgroup of hiv - infected patients would be suitable for transplantation . within a short time , however , the attitude of the liver transplant program became one of advocacy . with the realization that patients outside of bc did not have any option for transplantation within canada , it was decided that no referred patient with hiv would be declined without assessment , both inside bc and outside of the province . information about death during preassessment is often not accurate because if a patient dies during the preassessment , the liver transplant program is not informed immediately by the community physicians and hospitals . in general , death during the assessment is often a reflection of an untimely late referral because the patient should have been referred earlier , or is a reflection of a patient 's nonadherence to clinic appointments ; however , it is not the aim of the present article to criticize the referring physician or the patient .
we note that our three long - term transplant recipients are the longest - surviving hiv - infected transplant recipients in canada and were the second , third and fourth hiv - infected patients to receive liver transplants in this country . to date , the hiv - hcv coinfected patient has experienced graft hepatitis from hcv recurrence but has not yet developed graft loss . this patient may be a candidate for non - pegylated interferon - based direct antiviral therapy in the future . the patient who acquired hcv post - transplant remains stable and , if advanced fibrosis develops , he would be considered a candidate for antiviral therapy . these patients also did not experience more intraoperative or postoperative surgical complications compared with non - hiv - infected recipients . our experience , as well as the experience of others , confirms that the post - transplant management of hiv - infected recipients is complex . not only is management of coinfection necessary , one must also address the management and appropriate dosing of haart medications . immunosuppressive medications , specifically the calcineurin inhibitors tacrolimus and cyclosporine , must be dose reduced and drug trough levels followed extremely carefully because pharmacokinetic interactions may lead to toxic levels . our first patient was on nelfinavir and required a single , small dose of tacrolimus every third day , whereas the usual regimen is twice - daily dosing at higher doses . nelfinavir is metabolized by cytochrome p450 3a4 , which also metabolizes tacrolimus and is well known to cause significant interactions ; therefore , the dose of tacrolimus must be reduced by a factor of 70 compared with normal in the setting of nelfinavir therapy ( 10 ) . eventually , our first patient was able to have tacrolimus discontinued , with maintenance immunosuppression consisting of mycophenolate mofetil monotherapy , and he remains free from acute graft rejection . the other art medications used in our transplant recipients were abacavir / lamivudine , raltegravir and emtricitabine / tenofovir . none of these drugs are reported to have significant interactions with tacrolimus ( 11 - 13 ) and we have not encountered any . drug - drug interactions between art medications and immunosuppressive therapy may appear challenging a priori , but are easily overcome , and the special management issues regarding hiv and liver transplantation become routine thereafter . we consider the multidisciplinary post - transplant approach to these patients involving allied health care professionals , including post - transplant clinic nurses and transplant pharmacists , essential to the immunosuppressive management of these patients . currently , the management of hiv - hcv coinfection remains challenging and it is acknowledged that , overall , decreased post - transplant survival is observed ( 7 ) . nonetheless , there is clearly a subgroup within the cohort of post - transplant hiv - hcv infected patients who can achieve a substantial survival benefit ( 7 ) . for patients with decompensated allograft cirrhosis , there is the potential for hcv viral clearance with noninterferon - based protocols , although more post - transplant clinical trials will need to be conducted to determine whether this is feasible . therefore , we do not see any reason to restrict the liver transplant process to only hiv - infected individuals without hcv coinfection .
we note that our hiv - hbv - hdv tri - infected patient is very stable nine months post - transplant with appropriate anti - hbv prophylaxis . it should be noted that before 1996 , hbv was a contraindication in canada for transplant ; currently , however , hbv is considered to be a prime indication for transplantation . in the time period leading up to the decision to offer liver transplantation to hiv - infected patients , there was a great deal of concern , both within and outside of the transplant program , about the risk to operating room personnel of viral transmission of hiv during an occupational injury such as a needlestick accident . the risk for transmission of hiv is 0.3% ( 95% ci 0.2% to 0.5% ) and is lower than the risk for transmission of hcv ( 1.8% ; range 0% to 7% ) ( 14,15 ) . in the setting of hiv , as in all health care practices , universal precautions should be practiced routinely ; after the first liver transplant surgery was performed using standard precautions , the concerns regarding intraoperative viral transmission dissipated as the very minimal threat of inadvertent viral transmission became apparent . the life - span of an hiv - infected individual receiving haart is now near that of their noninfected counterparts ( 15 ) . hiv is recognized as a chronic disease controlled with haart ( 16 ) . despite increasing public awareness , the development of living - related - donor programs and the exploration of other potential donor sources , until better treatments for esld are developed and implemented the demand for liver transplantation as definitive management of esld will only increase , in both the hiv and non - hiv patient populations . although our single - centre experience in canada is noteworthy , we acknowledge that any single - centre experience is limited and we are hopeful that a canadian national database will be established as transplantation of hiv - infected patients becomes more common .
historically , hiv - positive individuals have not been considered to be candidates for liver transplantation due to the need for further immunosuppression of these patients post - transplant , as well as other factors such as pharmacokinetic interactions between the necessary antiretroviral and immunosuppressant drugs . however , hiv - positive individuals with end - stage liver disease are now eligible for liver transplantation in british columbia . the purpose of this study was to summarize the outcomes of hiv - positive individuals referred for liver transplantation in british columbia .
In a fiery speech delivered to 18,000 at Joe Louis Arena, Minister Louis Farrakhan blasted the U.S. judicial system as being biased against African Americans, calling upon the community to set up its own courts. “We want equal justice under the law,” Farrakhan said on the last day of the Nation of Islam’s annual convention, held in Detroit this year. “Our people can’t take much more. We have to have our own courts. You failed us.” With U.S. Rep. John Conyers, a Detroit Democrat, and Detroit City Council President Brenda Jones sitting behind him, Farrakhan spoke for nearly three hours. He urged unity among Muslim and Christian leaders, saying that “Jesus and Mohammed would be arm in arm,” and he reiterated the Nation of Islam’s view that the U.S. is a land headed for destruction unless it starts to obey the word of God. The crowd often clapped and roared in approval during his talk, which included a discussion of African-American civil rights leaders over the past century. Farrakhan suggested that African Americans rely on the Quran and Bible to help set up their own legal system that would be more fair to African Americans. “Has America been just to us?” he asked the crowd. “No,” the crowd responded. “So ... if we retaliate, you can bring out your soldiers. We got some, too.” Also on stage during Farrakhan’s talk Sunday were Christian pastors, including the Rev. Jim Holley of Little Rock Baptist Church in Detroit. Farrakhan railed against Christian pastors who endorse gay marriage. “God has never sanctioned that kind of behavior,” Farrakhan said. Farrakhan’s talk came on the last day of a four-day convention of the Nation of Islam, the 84-year-old black nationalist group based in Chicago that was started in Detroit by Fard Muhammad. The Nation believes that separation of the races is needed to better the lives of African Americans, a point stressed during the gathering’s workshops. On Thursday night, Farrakhan spokeswoman Ava Muhammad said that African Americans needed to separate because eventually, “planes are going to destroy every area that is not dominated by Islam.” She said Detroit might be the city Nation of Islam members choose to migrate to in order to form their own community. She was referring to planes that the Nation of Islam believes are in a wheel hovering in the sky. Farrakhan referred to the wheel in his talk Sunday. During his talk, Farrakhan denied he was anti-Semitic, saying: “Did Jesus have a problem with the Jews of his day? He’s not a hater. Neither am I. I don’t hate Jewish people ... what I hate is evil.” Farrakhan noted that both he and Henry Ford have been accused of being anti-Semitic: “I feel like I’m in good company.” He said “Satan is in control of Hollywood,” TV, media and money. Farrakhan also blasted Muslims for fighting each other in the Middle East. You’re “slaughtering your own people for America” and the “European infidel,” Farrakhan said. He also told the crowd that if the U.S. launched a war on Iran, “we ain’t fighting. We’re not killing no Muslims for these infidels.” Noting that the Nation of Islam started in Detroit in 1930, Farrakhan said: “I want Detroit to know we’re back to stay. This is a great city.” During the past year, Farrakhan has talked about reinvesting in Detroit. 
Farrakhan spoke about Detroit Mayor Mike Duggan, urging him to take care of neighborhoods, not just downtown. “First time in a long time you’ve had a white mayor. We hope he’ll be successful.” ||||| DETROIT — The black community must unite across Christian-Muslim lines and recognize the common goals among the diverse approaches of its past leaders, from Malcolm X to W.E.B. DuBois, because they all "wanted our liberation," Nation of Islam leader Louis Farrakhan told thousands of supporters Sunday in Detroit. Farrakhan spoke to a packed Detroit Joe Louis Arena during his keynote address at the annual four-day Saviours' Day convention. He touched on a range of topics, including problems facing the bankrupt host city, where the Nation of Islam started. He spoke of the common reverence for Jesus that Muslims and Christians share, and praised the work of Christian ministers in spreading the word of God. Farrakhan went through what he called his "Pantheon" of black leaders, describing how Martin Luther King, Booker T. Washington, DuBois and Malcolm X were part of a common struggle. "All of them wanted our liberation," Farrakhan told the crowd. "Can you hold onto the common thread that binds them all together as one?" Farrakhan, who exchanged bitter words with Malcolm X shortly before his 1965 assassination following a break with Nation of Islam founder Elijah Muhammad, said Malcolm X would have only positive things to say about other black leaders. During the two-hour speech, he addressed problems just outside the arena's walls. Acknowledging that the majority-black city recently elected a white mayor, Mike Duggan, Farrakhan said the mayor needs to help resurrect Detroit's blighted neighborhoods and not just promote its reviving downtown. "We hope he'll be successful," Farrakhan said. [Photo caption: The Honorable Minister Louis Farrakhan delivers the keynote address "How Strong is Our Foundation: Can We Survive?" at the Joe Louis Arena in Detroit, Mich., on Sunday, Feb. 23, 2014. (AP Photo/Detroit Free Press, Romain Blanquart)] As he has done in the past, he also lashed out at Jews, saying they fostered division among blacks as well as misrepresentations of black leaders through what he said was their control of the publishing industry. Farrakhan also compared himself to Henry Ford, founder of Ford Motor Company, who actively promoted the idea of a worldwide Jewish conspiracy through his local newspaper. Ford was "a great man who was called an anti-Semite," Farrakhan said, praising the auto pioneer's measures to improve the living conditions of his employees through higher pay. "I feel like I'm in good company." His comments Sunday drew quick criticism from Heidi Budaj, Michigan regional director of the Jewish rights advocacy group the Anti-Defamation League. "Expressing pride for being called anti-Semitic is shameful," she said. "A person in this day and age should be ashamed to say that." Budaj said religious bigotry and dividing people along racial or ethnic lines was the last thing a struggling area like Detroit needed. Whatever positive things Farrakhan may have to say about black solidarity, "those are negated by the hatred he spews from the pulpit," she said. The Nation of Islam is now based in Chicago.
– Controversial minister Louis Farrakhan offered up a controversial idea yesterday at the Nation of Islam's annual convention in Detroit: The black community should have its own court system, because the existing system is biased against African Americans. "We want equal justice under the law," Farrakhan said, according to the Free Press. He specifically called out controversial "Stand Your Ground" laws, the Detroit News reports. "How long must we let people stand their ground, shooting us and getting away with it? ... Our people can’t take much more. We have to have our own courts. You failed us." He suggested the new legal system be set up using the Koran and the Bible as guides. "Has America been just to us?" he asked, and the crowd replied, "No." "So," he continued, "if we retaliate, you can bring out your soldiers. We got some, too." Farrakhan, who was joined onstage by the Detroit City Council president, US Rep. John Conyers, and a few Christian pastors, also denied claims that he is anti-Semitic, saying, "Did Jesus have a problem with the Jews of his day? He’s not a hater. Neither am I. I don’t hate Jewish people ... what I hate is evil." Yet he also slammed Jews, claiming they control the publishing industry and use it to misrepresent black leaders, the AP reports.
a long - standing problem in astrophysics arises from our inability to determine the three - dimensional structure of distant objects . this limitation has often inhibited our understanding of the internal structure of even relatively well defined and isolated astronomical objects , such as molecular cloud cores . assuming that such an object is spherically symmetric , or has some other simple geometry , often permits us to describe the object 's internal structure using one or more radial profile functions . such radial profiles are frequently used to examine or model the physics and chemistry which govern such objects . it is relatively safe to assume that stars and planets are spherical or that a spiral galaxy has a disk and a bulge . these assumptions become problematic when studying objects without obvious symmetry . molecular cloud cores exhibit a wide variety of shapes that very rarely resemble any simple geometry . hence , determining their internal structure while using a geometric assumption will always yield some bias in any derived radial profile . in this paper we describe a technique which may be used to obtain limited , yet useful information about an object 's radial profile function without making any assumptions about the object 's shape , orientation , or the nature of the radial profile function . this is done using a single two - dimensional column density map as the entire available data on the source . a variety of such techniques have been used to determine the radial density distribution in molecular cloud cores ( also referred to as dense cores ) in studies over more than three decades . early work employed emission @xcite . optical extinction was utilized by @xcite to determine the density distribution within a number of dark clouds in taurus . these techniques were recognized to have weaknesses ( inability to trace high column density regions for optical extinction , and variable abundance due to e.g. freezeout for carbon monoxide ) . subsequent efforts have largely moved to measurement of stellar reddening in the near - infrared , allowing accurate probing of the extinction to much greater columns @xcite . @xcite employed measurement of infrared colors and stellar densities to obtain the density structure of 10 dense cores . the infrared color excess technique was utilized by @xcite to derive the density distribution in cloud cores in taurus using 2mass data . continuum emission may also be used to study the temperature and density distribution of dust in the ism . with herschel data , @xcite were able to probe the dust within 12 molecular cloud cores . @xcite used both dust extinction and emission to model the column density and temperature distribution of cb244 . to introduce this new technique , we will constrain ourselves to dust extinction as it is simpler and temperature - independent . in some of the previous work , a single power law radial density profile was fitted to the data @xcite , with exponents typically found between 1 and 2 . other studies used a bonnor - ebert sphere @xcite to model the density profile , which characteristically has a flat density profile in the central region transitioning to a @xmath0 radial dependence towards the edge of the core @xcite . a function with a similar form gave a good fit to the data of @xcite . our technique improves on previous methods by eliminating any geometric assumptions , as well as any a priori assumptions about the nature of the radial profile function . 
there are certain limitations to the technique as well as criteria which must be fulfilled . these are discussed in detail in section [ cassumptions ] . the most important limitation and constraints may be summarized as follows . * since a two dimensional projection can not uniquely define a three - dimensional object without additional information , it is impossible to obtain absolute values for the radial profile function without additional information or assumptions . it is however possible to obtain the form of the function which differs from the original profile by two unknown , geometry - dependent scalars . * the internal structure of the object in question must be describable using a radial profile function . in theory the technique may be used to study spectral line emission , absorption , continuum emission , extinction , etc . it may be applied to any object provided it is consistent with the assumptions described in section [ cassumptions ] . we will demonstrate that the technique is useful even in cases where only a portion of an object exhibits contour self - similarity . to illustrate and validate the technique we have chosen to apply it to maps of the dust extinction in molecular cloud core column density maps derived from 2mass data on stellar reddening . this paper is formulated so as to introduce a novel methodology by presenting an analytical derivation , testing it against simulated data , and finally applying it to real data . section [ cassumptions ] describes the initial assumptions which must be fulfilled in order for the technique to be applicable to a given object . the assumptions yield critical relationships which illustrate key aspects of this technique . section [ cderivation ] derives the technique analytically using two different methods . section [ cnumeric ] applies the technique to a set of simulated data designed to test its validity as well as to expose its performance under a variety of circumstances . section [ creal ] discusses the use of 2mass dust extinction maps and applies the technique to several clouds . we make a comparison with previous methods for measuring radial profiles in section [ ccomparison ] . we discuss the results and the performance of this new technique in section [ cdiscussion ] . the goal of this research is to extract the maximum available information regarding the internal volume density structure of an object using a single column density map observed from one line of sight direction , while making the fewest possible assumptions . we show how it is possible under certain conditions to obtain the form of an object s volume density profile function without assuming a specific geometry , or making any assumptions about the function that governs the radial density profile . to this end it is necessary to detail the assumptions used in this work . the method described here only relies on the three assumptions below which are made for all cases . assumption 1 : : : the object studied must be optically thin in whatever observable quantity is being measured in the sense that @xmath1 where @xmath2 is the measured column density at position @xmath3 and @xmath4 represents the volume density at position @xmath5 . throughout this paper , the x axis is arbitrarily chosen to represent the line of sight direction . assumption 2 : : : the volume density of the object can be entirely characterized using a single function that describes the volume density profile . the following can be considered to follow from assumption 2 . 
assumption 2a : : : any object which satisfies assumption 2 must be described using two functions ; one describes the cloud s geometry , while the second describes its radial volume density profile . we define the object s shape using a core function @xmath6 where @xmath7 has units of length and describes the size of the object s core along each direction originating from the object s center . @xmath8 is a constant with units of length , and @xmath9 is a dimensionless function which scales the core radius along each @xmath10 to produce a shape for the object . in the case of a sphere , @xmath11 while @xmath8 represents the radius . spherical coordinates are chosen here to emphasize the fact that the core function depends only on direction from the object s center , and not on distance . when working with arbitrary shapes it is convenient to define a new , dimensionless parameter @xmath12 which is equal to the ratio between the distance from some point @xmath5 to the object s center , and the core radius @xmath7 along the same direction . with the object s center located at the origin of the coordinate system @xmath13 , @xmath14 for a sphere , @xmath15 . we commonly refer to the surface described by @xmath16 as the core . in order to fully describe the geometry of an object which meets assumption 2 , @xmath9 must describe a closed surface such that a vector from the object s center along any direction will cross the surface exactly once . this permits the definition of a radial volume density function that is dependent on @xmath12 and governs the volume density distribution of the entire object . we define @xmath17 where @xmath4 represents the volume density at position @xmath5 , and @xmath18 is a dimensionless function that governs the radial volume density profile . @xmath19 is a constant representing the volume density where @xmath20 . @xmath21 and @xmath4 can fully characterize any object which satisfies assumption 2 . neither @xmath21 , nor @xmath4 can ever be fully determined from a single column density map using only one observable quantity without additional information , since a column density map in and of itself can not uniquely define a three - dimensional object . it is possible to determine the function @xmath22 as well as certain properties of @xmath7 by taking advantage of the self - similarity imposed on the object by assumption 2 . any object which satisfies assumption 2 satisfies the implied assumptions below . for an arbitrary object which satisfies assumption 2 . each surface shares the same shape and orientation , while differing only in scale . ( right ) projections of the three surfaces along the line of sight ( los ) . each surface is characterized by a specific value of @xmath12 , its projected area ( @xmath23 ) , and its volume density ( @xmath24 ) as in equation [ corin ] . projected areas have the same shape and orientation , while differing only in scale . since the object is assumed to be optically thin , the projected column densities from each surface add linearly to produce the total column density . [ cfig1 ] ] assumption 2b : : : specific values of @xmath12 describe three - dimensional surfaces of equal volume density . the left panel of figure [ cfig1 ] illustrates three such surfaces belonging to an arbitrary object , and having three distinct values of @xmath12 . all volume density surfaces share the same shape , orientation , and center position . the only differences between surfaces of different @xmath12 are in their sizes , and volume densities . 
assumption 2c : : : each volume density surface , when projected onto a plane perpendicular to the line of sight produces a two - dimensional boundary whose area ( @xmath23 ) is directly proportional to @xmath25 . the right panel of figure [ cfig1 ] describes such projections of three surfaces with independent values of @xmath12 . all such surface boundaries are identical except in their size and the corresponding volume densities they represent . self - similarity between different volume density surfaces is a critical aspect of assumptions 2b and 2c . aside from certain constants , the only parameters which differentiate the projected boundaries of different volume density surfaces are their areas ( expressable in terms of @xmath25 ) , and the corresponding volume density ( determined by @xmath18 ) . therefore there must be a relationship between the area of each projected boundary and its volume density which is dependent on @xmath18 , but is , aside from some constants , independent of @xmath9 . _ @xmath9 determines the shape and orientation which are identical for each surface , while the relationship between the projected area and volume density of each surface is governed by the radial density profile function ( @xmath18)_. the observable column density map is a superposition of all the volume density surfaces projected onto a plane perpendicular to our line of sight . the column density map should thus exhibit the same self - similarity seen among the individual volume density surfaces . if assumptions 1 and 2 hold for a given object , then the following must be valid as well assumption 3 : : : comparing the column densities and areas of different column density contours should yield a relationship which , aside from some constants , is independent of the object s geometry . assumption 3 is confirmed analytically by equation [ cnarea ] in section [ canalyticder ] . the following section shows how the function @xmath18 may be derived using that relationship . no truly general proof that applies to all possible shapes is evident at this time . therefore it is necessary to restrict this analytic derivation to those geometries which can be described by a quadratic definition of @xmath12 . geometries which do not conform to equation [ crrc ] are tested numerically in section [ cnumeric ] . a useful form for @xmath12 is @xmath26 where @xmath27 is a constant and @xmath28 and @xmath29 are any functions that conform to assumption 2 . the above quadratic representation , while not universal , can describe a wide variety of geometries encountered in nature including triaxial ellipsoids . no specific values for @xmath30 and @xmath29 are invoked in the following derivation except where noted for purposes of illustration . in such cases , a spheroid model with axial ratio @xmath31 , inclined by an angle @xmath32 through a rotation about the y axis will be used . a spheroid is chosen because it is mathematically tractable yet versatile enough to demonstrate changes in shape and orientation by varying @xmath31 and @xmath32 respectively . such a spheroid may be described by the relationships @xmath33 values of @xmath12 describe individual surfaces of fixed volume density . 
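as a concrete illustration of the quadratic form in equation [ crrc ] , the short sketch below evaluates @xmath12 for an inclined spheroid . the exact relationships [ cadef ] , [ cbdef ] and [ ccdef ] are not reproduced in this text , so the axial - ratio and rotation conventions used here ( and the names xi_spheroid , Z , inc_deg ) are illustrative assumptions rather than the authors' expressions .

```python
import numpy as np

def xi_spheroid(x, y, z, r0=1.0, Z=0.5, inc_deg=30.0):
    """Dimensionless radius xi = r/rc (equation [crrc]) for a spheroid with
    equatorial core radius r0, polar-to-equatorial axial ratio Z, and symmetry
    axis tilted by inc_deg about the y axis.  One consistent parameterization,
    not the paper's [cadef]-[ccdef]; xi = 1 marks the 'core' surface, and xi is
    constant on the nested, self-similar spheroidal surfaces of assumption 2b."""
    i = np.radians(inc_deg)
    # rotate into the frame aligned with the spheroid's principal axes
    xp = x * np.cos(i) - z * np.sin(i)
    yp = y
    zp = x * np.sin(i) + z * np.cos(i)
    return np.sqrt((xp**2 + yp**2) / r0**2 + zp**2 / (Z * r0)**2)
```

for Z = 1 the expression reduces to the spherical case @xmath15 , as required by the definition above .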
since the line of sight is chosen to be along the x axis it is useful to express the x positions of each surface with a specific @xmath12 as @xmath34 defining a new function @xmath35 yields @xmath36 @xmath37 denotes the two line of sight ( @xmath38 ) positions for a surface defined by a particular value of @xmath12 at sky position @xmath3 . functions @xmath27 , @xmath28 , and @xmath29 are defined by the geometry of the object in question . equations [ cadef ] , [ cbdef ] , and [ ccdef ] describe the appropriate functions representing a spheroid with axial ratio @xmath31 and inclination @xmath32 . @xmath39 is a function that is entirely dependent on the object s shape ; the following section discusses its conceptual meaning further . it is possible to define an object as a discrete series of shells , each of which is defined as the region between an inner ( @xmath40 ) and an outer ( @xmath41 ) surface with an average volume density ( @xmath42 ) within the shell . the depth along the x axis of each such shell at different ( y , z ) positions will vary according to @xmath43 where @xmath44 is the total depth along the line of sight at position @xmath3 for the shell made up of two surfaces defined by @xmath40 and @xmath41 . substituting equation [ cxpos ] into equation [ cdori ] yields @xmath45 equation [ cdinc ] describes the depth of each shell , however this representation is of limited use since @xmath12 is not an observable quantity . similarly , @xmath39 is a function that is directly dependent on the object s unknown shape . equation [ cdinc ] must be put in terms of observable quantities : the observed column density , and the area within each column density contour . each surface as described by equation [ cxpos ] , when projected onto the line of sight , produces a closed boundary composed of those @xmath3 positions where @xmath46 . in view of equation [ cxpos ] the projected boundaries of each shell are defined by @xmath47 solving for @xmath39 using the spheroid model above yields a familiar relation , @xmath48 which is a simple ellipse that results from projecting a three dimensional spheroid surface onto a two dimensional plane . equation [ ceboundary ] makes clear that each contour of equal @xmath39 corresponds to the boundary of a particular volume density surface , with a specific value of @xmath12 and possesses a unique projected area . in general , the projected area of a surface of a particular @xmath12 can be expressed as @xmath49 where @xmath50 is a geometry - dependent unknown constant(@xmath51 ) . all positions with equal @xmath39 correspond to the projected boundary of a surface with @xmath52 with corresponding area @xmath53 . the additional subscript c denotes a specific contour . therefore , equation [ cdinc ] can be reformed in terms of areas as @xmath54 where @xmath55 represents the depth along the line of sight of the shell between surfaces defined by @xmath56 and @xmath57 at all positions defined by the contour formed by the projected boundary of the surface defined by @xmath52 . the observed column density can then be defined as @xmath58 @xmath59 represents the column density at all positions @xmath3 defined by the projected boundary of the @xmath52 surface . @xmath60 represents the mean volume density within the shell whose surfaces are defined by @xmath61 and @xmath62 . the column density and area are observable quantities , however @xmath63 are unknowns . 
the relationship between column density ( @xmath64 ) and area ( @xmath23 ) can be obtained through contouring the observed map , yielding a discrete series of contour column densities and associated areas . using such data it should be possible to obtain information on the quantity @xmath63 . it is useful to define two new variables which will represent the derived volume density profile function . @xmath65 permitting equation [ cncori ] to be rewritten as @xmath66 the observed column density map thus yields a series of contours denoted by their column density ( @xmath59 ) and area ( @xmath67 ) . beginning with the outermost contour with the largest area , and moving recursively inward it is possible to derive a series of @xmath68 measurements for the object using equation [ cdiscfinal ] . equation [ crn ] yields a series of @xmath69 measurements derived from the contour areas ( @xmath53 ) , yielding @xmath70 . equation [ cn ] shows that @xmath70 is related to @xmath71 and @xmath18 through a series of constants ( @xmath72 ) that are all unknown . knowledge of the object s geometry would yield values for @xmath50 and @xmath27 allowing the determination of @xmath19 and the full definition of the object s radial volume density profile @xmath71 . conversely , knowledge of @xmath19 could yield information on the object s geometry . without such a priori knowledge there are limits to the information which may be obtained from a single column density map , however @xmath18 can be determined to within 2 unknown scalars so as to obtain the form of the volume density profile function . the nature of those two scalars ( g and @xmath73 ) is best elucidated through a non - discrete derivation as discussed in the following two sections . this is done without assuming a specific geometry for the object , or the nature of @xmath18 . this derivation is dependent on obtaining valid @xmath74 vs. @xmath53 measurements from the column density map which may be a non - trivial process when working with real data . methods for obtaining such measurements are discussed in section [ creal ] along with examples of the derivation applied to simulated data . equation [ cdiscfinal ] is useful for deriving @xmath70 from real data , and is used in all practical examples in this paper with both simulated and real data . however , it does not necessarily give the most insight into the problem . any practical application of this theorem requires a strict understanding of the relation between @xmath70 and @xmath71 with respect to the two scalars which separate them . to this end an analytic derivation is invoked in this section which is equivalent to that in section [ cdiscretesec ] , yet is qualitatively different in that it illustrates different aspects of the derived @xmath70 function . this derivation does not invoke discreteness , but instead uses integration . the integrals prohibit the use of a truly general form for @xmath18 , thus two radial density profiles are invoked for illustrative purposes along with the same spheroid geometry from section [ cdiscretesec ] . a gaussian and an attenuated power law are selected as mathematically tractable profiles that are frequently observed in nature . they may be described as @xmath75 where @xmath76 and @xmath77 represent the gaussian and attenuated power law functions respectively . @xmath78 is a constant greater than 1 . this attenuated power law function can be viewed as a form of the well - studied type iv pareto distribution . 
it is inspired by , and represents a more generalized form of the king profile @xcite . the king profile was also used by @xcite , and @xcite when addressing the problem of density distributions within molecular cloud cores , however they each utilized geometric assumptions which we do not invoke . if assumption 1 holds then the observed column density map for each profile can be written as @xmath79 since specific radial density profile functions are used , it is possible to directly perform each integral , yielding @xmath80 where @xmath81 is the binomial coefficient . similarly to section [ cdiscretesec ] , the preceding equation may be used to express the column density in terms of the area covered by each column density contour resulting in @xmath82 since no specific shape has yet been invoked , equation [ cnarea ] verifies assumption 3 by showing that the relationship between column density and the area of its contours is , aside from some constants ( @xmath83 ) , independent of the object s geometry . alternatively , using equations [ cndirect ] and [ cxpos ] along with the relation @xmath84 yields the following expression for the column density which is equivalent to the derivation in section [ cdiscretesec ] @xmath85 equation [ cnrrc ] specifies the observed column density at all positions @xmath3 described by the projected boundary of the shell defined by @xmath12 . @xmath86 is the integration variable , where the e subscript denotes that the integration is performed over all surfaces exterior to @xmath12 . solving equation [ cnrrc ] for the gaussian and attenuated power law profiles yields @xmath87 equation [ cnrrc2 ] , when converted to areas using equation [ careaeq ] , is identical to equation [ cnarea ] , thus confirming that the discrete derivation in section [ cdiscretesec ] is equivalent to integrating the volume density along the line of sight . deriving @xmath70 through the method defined above yields a function with the same form as the original @xmath71 . it is important to understand the relation between the derived and actual density profile functions . this relationship may be defined as @xmath88 where @xmath89 and @xmath73 are unknown constants . applying the above method to the spheroid model with gaussian and power - law profiles yields derived volume density functions described by @xmath90 equations [ crn ] and [ cgepsilon ] in conjunction with equation [ cnprimegnprimep ] show that for a spheroid model , @xmath91 @xmath89 is dimensionless , while @xmath73 has dimensions of length . it is important to note that @xmath89 and @xmath73 are completely geometry dependent and thus identical in both the power law and gaussian cases . neither parameter can be fully determined by the method described here without knowledge of the object s geometry , further data , or assumptions . as evidenced by equation [ cgchi ] , @xmath89 and @xmath73 are not independent quantities due to their dependence on @xmath92 . aside from scalars @xmath89 and @xmath73 , the derived @xmath70 , and the original @xmath71 are identical . for a sphere , @xmath93 and @xmath94 . these scalars contain all of the unknown geometric information about the observed object . we may derive @xmath70 from an observed column density map , however this function will differ from the object s volume density profile ( @xmath71 ) by the two unknown scalars @xmath89 and @xmath73 . the form of @xmath70 will however be identical to @xmath71 regardless of the two scalars . 
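the displayed forms of the two profile functions and of the relation between the derived and true profiles are not preserved in this extraction , so the sketch below assumes one reading that is consistent with the surrounding text : a gaussian @xmath18 , a king - like attenuated power law that falls as a power of @xmath12 at large radius , and a derived profile that equals the true one after rescaling the density by g ( @xmath89 ) and the radius by @xmath73 , both equal to 1 for a sphere . the exponent convention and function names are assumptions .

```python
import numpy as np

def f_gaussian(xi):
    """Gaussian radial profile: n(xi) = n0 * exp(-xi**2)."""
    return np.exp(-xi**2)

def f_power(xi, p=2.0):
    """Attenuated (King-like / Pareto-type) power law with p > 1: flat for
    xi << 1, falling roughly as xi**(-p) for xi >> 1.  The exact expression
    in equation [cngnp] is not reproduced here; this form is an assumption."""
    return (1.0 + xi**2) ** (-p / 2.0)

def rescaled_profile(xi_prime, n0, f, G=1.0, chi=1.0, **kwargs):
    """One reading of the derived-vs-true relation discussed in the text:
    n'(xi') = G * n0 * f(xi'/chi), where G and chi are the unknown
    geometry-dependent scalars (G = chi = 1 for a sphere).  Whatever their
    values, the derived profile shares the *form* of the true one."""
    return G * n0 * f(xi_prime / chi, **kwargs)
```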
if an object 's geometry is known , @xmath89 and @xmath73 may be calculated ( values for the spheroid are shown in equation [ cgchi ] ) . in the most general terms , @xmath89 may be viewed as the ratio between the depth and the width of an object along the line of sight , though this interpretation will rarely be strictly true . if @xmath89 is greater than 1 , then the object is deeper than it is wide , and the derived @xmath95 will be greater than the actual @xmath19 . numerical models using simulated data can validate the technique described in section [ cderivation ] , as well as illustrate the behaviour of @xmath89 and @xmath73 under various conditions . models of several objects are constructed using known geometries and volume density profiles in order to create simulated column density maps . these maps are then used to derive @xmath70 , which are finally compared to the original ( known ) volume density functions used to construct the model . ( caption of figure [ cfig2 ] : a model object with a spherical geometry and a radial volume density profile described by a gaussian ( @xmath96 ) . a ) a simulated column density map with sample contours . gaussian noise is added equivalent to 1% of the maximum column density . @xmath97 and @xmath98 coordinates are represented in units of @xmath8 . b ) column density ( @xmath64 ) and corresponding area ( @xmath23 ) for each contour ( not displayed ) used in the analysis . c ) a contour diagnostic plot for the object , as described in section [ cnumeric ] . d ) the derived volume density profile function ( @xmath70 ) . black points represent the values derived from each @xmath64 and @xmath23 contour pair . the red line represents the original function @xmath71 used by the model as scaled by @xmath89 and @xmath73 . [ cfig2 ] ) figure [ cfig2 ] illustrates how such a model is constructed and analyzed using the simplest case of a sphere with a gaussian volume density profile and minimal noise . to produce a column density map such as in figure [ cfig2]a it is necessary to first choose a geometry ( in this case a sphere ) and a volume density profile function ( in this case a gaussian ) and construct a three - dimensional array whose elements represent the object 's volume density . this array is then integrated along the line of sight to produce a column density map . normally distributed noise with mean zero and a certain standard deviation ( 1% of the maximum column density in the case of figure [ cfig2]a ) is then added to the column density map . selected contour levels are drawn for illustration purposes to produce a map as in figure [ cfig2]a . such column density maps are the only source of information for further model analysis , as knowledge of model scalars such as @xmath8 and @xmath19 is used only for the purposes of scaling the plots . many column density contours ( not drawn ) are measured on the map in order to produce a plot of column density ( @xmath64 ) versus contour area ( @xmath23 ) as in figure [ cfig2]b . it is impossible to properly sample the whole range of column densities without a priori knowledge of the volume density profile function . we found it most appropriate to measure the same number of contours as the number of pixels that span the object , and to space them equally in column density . this choice often results in oversampling , as discussed in section [ cuncertainty ] , but has been experimentally found to be the most useful . 
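the model construction just described amounts to sampling a known density law on a three - dimensional grid , integrating along one axis and adding noise . a minimal python sketch for the spherical , gaussian case of figure [ cfig2 ] follows ; the grid size , pixel units and the helper for choosing contour levels are illustrative choices , not the authors' code .

```python
import numpy as np

def simulate_column_density_map(npix=128, r0_pix=20.0, n0=1.0,
                                noise_frac=0.01, seed=0):
    """Toy version of the model behind figure [cfig2]a: a sphere with a
    gaussian radial volume density profile is sampled on a 3-d grid,
    integrated along the line of sight (the x axis), and normally distributed
    noise (here 1% of the peak column density) is added."""
    ax = np.arange(npix) - (npix - 1) / 2.0
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    xi = np.sqrt(x**2 + y**2 + z**2) / r0_pix      # xi = r/rc for a sphere
    density = n0 * np.exp(-xi**2)                  # gaussian radial profile
    column = density.sum(axis=0)                   # integrate along x
    rng = np.random.default_rng(seed)
    column += rng.normal(0.0, noise_frac * column.max(), column.shape)
    return column

def contour_levels(column, object_span_pix):
    """Contour levels spaced equally in column density, with as many levels
    as there are pixels spanning the object (the sampling choice described
    in the text); the lowest level here is an arbitrary small fraction of
    the peak."""
    return np.linspace(column.max() / object_span_pix, column.max(),
                       object_span_pix, endpoint=False)
```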
the technique described here requires implicitly that all column density contours exhibit self - similarity , sharing the same shape , orientation , and center position . the suitability of an object to this analysis technique may be verified by comparing the contours . similarly , it is necessary to remove from consideration any contours which are created by noise in the column density map . figure [ cfig2]c illustrates how these requirements are satisfied . each contour is scaled to the same size , translated to the same center position , and then plotted so as to overlap as in figure [ cfig2]c . the innermost third of the contours with the smallest area are colored red , while the outermost third are colored blue , and intermediate contours are colored green . in the case of figure [ cfig2]c the simulated noise is quite low and thus only the innermost ( smallest ) contours display any deviations from a circle . these variations are due to the small number of pixels within the smallest contours . this representation is useful in that any deviations from a single contour shape and orientation may be easily identified . a simple method for numerically filtering out noise - induced contours from consideration is to compare the geometric centers of each contour to the geometric center of mass for the object from the column density map . any contours whose centers exceed some small distance from the center of mass are excluded . figure [ cfig2]c represents the object s center of mass as the dashed cross in the center . the center positions of each contour are plotted in relation to the center of mass with the green , red , and blue colors representing the contours with the smallest , intermediate , and largest areas respectively . the solid black circle represents the radius used to filter out questionable contours . the square represents the relative position and scale of the pixel from the column density map which contains the object s center of mass . this diagnostic plot is useful in determining how well a given object complies with assumption 2 , as well as which contours are suitable for analysis . once all unsuitable column density contours are removed from consideration it is possible to apply equations [ crn ] and [ cdiscfinal ] recursively to the @xmath64 and @xmath23 pairs in order to derive @xmath70 as in figure [ cfig2]d . since modeled data is used here it is possible to directly determine the values of @xmath89 and @xmath73 as well as to scale @xmath99 and @xmath70 using the known values of @xmath8 and @xmath19 as in figure [ cfig2]d . figure [ cfig2 ] verifies that the derived @xmath70 has the same form as the original @xmath71 function to a very high degree for the low - noise sphere with a gaussian profile function . as expected , @xmath100 and @xmath94 . the technique described in section [ cderivation ] should apply to any profile function , as well as to any geometries which fulfill the requirements described in section [ cassumptions ] . to that end , figures [ cfig3]a - b , [ cfig3]c - d , and [ cfig4]a - b present three cases beyond the simple gaussian sphere in figure [ cfig2 ] . figures [ cfig3]a - b , and [ cfig3]c - d represent the spheroid defined in section [ cderivation ] with two different orientations , while figure [ cfig4]a - b represents a tri - axial ellipsoid . the derived and original volume density profile function forms agree to a great extent , verifying that the technique is valid for tri - axial ellipsoids of any shape and orientation . 
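a compact sketch of the practical pipeline described above is given below : closed contours are measured on the map , contours whose centres wander from the map 's centre of mass ( or that enclose fewer than roughly 25 pixels ) are rejected , and the surviving ( @xmath64 , @xmath23 ) pairs are peeled recursively from the outside in . because equations [ crn ] and [ cdiscfinal ] are not reproduced in this text , the shell depths below use a spherical kernel , which recovers the profile only up to the geometry - dependent scalars of section [ cderivation ] ; the scikit - image contour finder and all thresholds are implementation choices , not the authors' code .

```python
import numpy as np
from skimage import measure   # scikit-image, for find_contours

def measure_contours(column, levels, max_offset_pix=2.0, min_area_pix=25.0):
    """Return (N_c, A_c) pairs for closed contours, rejecting contours whose
    centres drift from the map's centre of mass (the filtering illustrated in
    figure [cfig2]c) and contours too small to have a well-defined shape.
    Assumes a background-subtracted map containing a single object."""
    yy, xx = np.indices(column.shape)
    com = np.array([(yy * column).sum(), (xx * column).sum()]) / column.sum()
    pairs = []
    for level in levels:
        for verts in measure.find_contours(column, level):
            if not np.allclose(verts[0], verts[-1]):       # open contour: skip
                continue
            y, x = verts[:, 0], verts[:, 1]
            area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
            if area < min_area_pix:                        # too few pixels
                continue
            if np.hypot(*(verts.mean(axis=0) - com)) > max_offset_pix:
                continue                                   # likely noise-induced
            pairs.append((level, area))
    return pairs

def peel_profile(pairs):
    """Recursive shell-peeling sketch of the derivation: starting from the
    outermost contour, subtract the column contributed by already-solved
    exterior shells and divide by the depth of the shell just exterior to the
    current contour.  A spherical kernel is assumed for the depth, so only the
    *form* of the profile is recovered (up to the unknown scalars G and chi);
    material outside the outermost contour is treated as a constant baseline."""
    pairs = sorted(pairs, key=lambda p: p[1], reverse=True)  # outermost first
    N = np.array([p[0] for p in pairs], float)
    A = np.array([p[1] for p in pairs], float)
    r = np.sqrt(A / np.pi)          # effective radius, proportional to r/rc
    N = N - N[0]                    # baseline from unmodelled exterior gas
    n_prime = np.full(len(r), np.nan)
    for c in range(1, len(r)):
        accounted = 0.0
        for k in range(1, c):       # shells exterior to contour c
            accounted += 2.0 * n_prime[k] * (np.sqrt(r[k - 1]**2 - r[c]**2)
                                             - np.sqrt(r[k]**2 - r[c]**2))
        depth = 2.0 * np.sqrt(r[c - 1]**2 - r[c]**2)
        if depth > 0:
            n_prime[c] = (N[c] - accounted) / depth
    return r, n_prime
```

applied to the simulated map from the previous sketch , peel_profile ( measure_contours ( column , contour_levels ( column , 40 ) ) ) returns an effective radius and a density that trace the input gaussian up to the two unknown scalars and the truncation at the outermost contour .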
several other geometries ( not shown ) which satisfy the assumptions in section [ cassumptions ] were tested and were all shown to produce valid results . ( caption of figure [ cfig3 ] : a ) simulated column density map of a spheroid defined by equations [ cadef ] , [ cbdef ] , and [ ccdef ] with @xmath101 , @xmath102 , and 3% noise added . b ) actual ( red line ) and derived ( black dots ) volume density profile for the object in a. an attenuated power - law as in equation [ cngnp ] with @xmath103 is used to construct the object in a and b. c ) simulated column density map of an object using the same geometry as in a , except that the object is rotated by @xmath104 about the y axis and 5% noise is added . d ) actual ( red line ) and derived ( black dots ) volume density profile for the object in c. the radial volume density profile used in c and d is given by @xmath105 with @xmath106 . [ cfig3 ] ) ( caption of figure [ cfig4 ] : a ) simulated column density map of a tri - axial ellipsoid with 10% noise added . b ) actual ( red line ) and derived ( black dots ) volume density profile for the object in a. a triple gaussian volume density profile function is used to construct the object in a and b. c ) simulated column density map of an object with nonuniform @xmath27 ( equation [ cadef ] ) . d ) actual ( red line ) and derived ( black dots ) volume density profile for the object in c. since contour self - similarity is not present throughout , the object is not expected to be adequately modelled by this technique . the radial volume density profile used in c and d is described by the same triple gaussian as in a and b. [ cfig4 ] ) the derivation in section [ cderivation ] requires that @xmath27 be a constant ( equation [ crrc ] ) . any geometry which involves a definition of @xmath27 that is dependent on spatial coordinates @xmath107 and @xmath108 results in an object which does not conform to the assumptions in section [ cassumptions ] , and thus is not suitable for the analysis presented here . this is due to the variation in the depth of each shell resulting from an inhomogeneous @xmath109 as in equation [ cd ] . if the object has a non - constant value of @xmath109 then the relationship between the area and depth of each shell is no longer a constant , and the technique described by equation [ cncori ] fails as @xmath27 would be dependent on the shell number @xmath110 . it is possible to determine from the column density map whether the object in question has a geometry which is dependent on a constant value of @xmath27 . equation [ careaeq ] shows that the projected area of each shell is dependent on @xmath50 . from the definition of the spheroid it can be shown that @xmath111 . if @xmath27 is nonuniform then so is @xmath50 , meaning that the relationship between a shell 's area and @xmath25 is no longer constant . this implies that the projected boundary of each shell , and thus each column density contour , has a different shape . _ inhomogeneities in contour shape invalidate this technique_. figures [ cfig4]c - d show such a geometric shape which utilizes the same quadratic definition for @xmath12 as in equation [ crrc ] where @xmath112 . in this case , @xmath27 is not a constant , the object does not produce self - similar contours , and our technique fails to reproduce the correct radial profile as seen in figure [ cfig4]d . it is important to note , however , that the inner - most and outer - most regions ( where the contours are self - similar ) do correctly reproduce the original profile function . fortunately , the influence of each individual contour on the overall density profile is limited . 
each contour only affects contours interior to itself , and has the greatest influence on adjacent contours . thus , in our derivation , as one moves from the outermost shells to the innermost , a change in the value of @xmath50 will only begin to have an effect at the contour where the change first occurs , and its influence will decline as we move further into the interior . we refer to this change in contour shape as an @xmath50 discontinuity . assuming only one such discontinuity occurs within a given map ( so that there are only two contour shapes present ) , the derived volume density profile should still be correct in the region outside of the discontinuity . the discontinuity will invalidate the derived profile interior to itself , but as its influence weakens the innermost region of the derived profile may still be accurate , as in figure [ cfig4]d . this phenomenon is frequently present when dealing with real data , and is illustrated further in section [ creal ] , but it only prohibits us from accurately obtaining @xmath70 in those portions of the profile where @xmath50 discontinuities occur . @xmath50 discontinuities are simple to identify visually through plots such as figure 2c , and numerically by calculating the values of @xmath50 for each contour . sections [ cderivation ] and [ cnumeric ] have proven that our technique can accurately derive the form of the radial volume density profile under idealized conditions . such circumstances rarely occur in nature so it is important to be able to distinguish real data from noise and systematic effects . ideally , one would assign an uncertainty to each measurement in the derived profile to determine which points are likely to represent real measurements . however , a serious shortcoming of our method here is that we are unable to assign such uncertainties . many methods were tried , but the root problem is that we are unable to properly assign an accurate uncertainty to the @xmath64 vs. @xmath23 measurements for the contours . even assuming that all contours ultimately have the same shape , systematic noise affects them in ways that are difficult to quantify . each of the above simulated data sets , are made with different levels of systematic noise in order to illustrate its effects on the solutions . figure [ cfig2 ] has minimal noise equivalent to @xmath113 of the peak intensity , resulting in a near perfect derived profile . figures [ cfig3]a - b contain @xmath114 noise which is sufficient to add some irregularity to the shapes of the observed contours . those irregularities are most evident in the smallest contours , and are seen as slight offsets in the innermost regions of the radial density profile . the @xmath115 noise map in figures [ cfig3]c - d shows a new phenomenon . here , the noise is such that the innermost contour is broken into two pieces which are unusable . as a result , the derived profile has no measurements interior to @xmath116 . further , there are larger irregularities in the derived profile . these are caused by individual pixels with a significantly different signal compared to their surroundings . contours will tend to bend around such pixels , until a certain column density threshold is reached and the contours snap onto the other side of the pixel . this phenomenon may be recognized in that all the irregularities will have roughly the same width in the derived profile , corresponding roughly to the width of each pixel in the map as evident in figure [ cfig3]d . 
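the text notes that @xmath50 discontinuities can be identified numerically from the contours themselves ; since the exact statistic is not given here , the sketch below uses a simple stand - in diagnostic , the axis ratio and position angle obtained from each contour's second moments , and flags contours whose shape departs from the median . this is a proxy of our own , not the authors' method .

```python
import numpy as np

def contour_shape(verts):
    """Axis ratio (minor/major) and position angle (degrees) of a contour,
    estimated from the second moments of its vertices -- a rough proxy for
    monitoring contour self-similarity.  verts is an (n, 2) array of (row,
    column) vertices such as those returned by skimage's find_contours."""
    centred = verts - verts.mean(axis=0)
    cov = np.cov(centred.T)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    axis_ratio = np.sqrt(evals[0] / evals[1])
    pa = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major-axis angle
    return axis_ratio, pa

def flag_shape_changes(contours, ratio_tol=0.05, pa_tol=10.0):
    """Flag contours whose shape departs from the median shape; derived
    densities immediately interior to a flagged contour should be treated
    with caution, as discussed in the text.  Position angles are compared
    modulo 180 degrees."""
    shapes = np.array([contour_shape(v) for v in contours])
    med_ratio, med_pa = np.median(shapes, axis=0)
    dpa = np.abs((shapes[:, 1] - med_pa + 90.0) % 180.0 - 90.0)
    return (np.abs(shapes[:, 0] - med_ratio) > ratio_tol) | (dpa > pa_tol)
```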
figures [ cfig4]a - b show a more difficult case in which the noise is @xmath117 of the peak signal . the object in the map can be difficult to discern under such conditions . the systematic noise will prevent the construction of the innermost contours , but may also prevent the formation of contours in other regions leading to gaps in the derived profile such as in figure [ cfig4]b . it is important to note that the gaps did not prevent the derivation of the correct profile in the regions where contours could be formed . changes in contour shape introduce a bias to the resulting profiles , as seen in figure [ cfig4]c - d . no reliable method for removing such bias is apparent . however , the bias seems to be localized to only those regions of the derived profile immediately interior to the @xmath50 discontinuity . as a result , the innermost portion of the derived density profile in figure [ cfig4]d agrees well with the original profile . how these @xmath50 discontinuities manifest themselves is best illustrated with real data as seen in the following section . star formation theory abounds with open questions , many of which are related to the process by which dense cores within molecular clouds collapse . of particular interest is the balance between forces which induce collapse , such as gravity and external pressure , and support mechanisms such as thermal pressure , turbulence , angular momentum , magnetic fields , etc . a cloud core s density distribution is central to understanding this balance . thus measuring both the gas and dust components to obtain the distribution of the total proton density is of critical importance , representing a long - standing and active field of study . the basic model of an equilibrium mass distribution @xcite assumes an isothermal sphere bounded by some external pressure . even such a simple model yields powerful insights , such as that radial volume density profiles in molecular cloud cores should resemble power laws with an approximate @xmath118 relationship between radius and volume density at the edge of the core , with a weaker dependence on radius towards the center of the core . several recent studies have utilized bonnor - ebert spheres , or some derivative thereof , such as @xcite , @xcite , and @xcite . as mentioned above , some studies ( e.g. * ? ? ? * ; * ? ? ? * ) have fitted a power law to the radial density profile . while often satisfactory , there is considerable variation in the value of the exponent among different cloud cores , with @xmath119 found in these studies . precise measurements of the exponents in cloud cores speak to the significance of the support mechanisms . studies employing a variety of techniques have convincingly demonstrated that clouds in different evolutionary states exhibit different density distributions . @xcite made a strong case for the urgent need of investigations of density distributions and support mechanisms in pre - stellar cores in light of new data from planck . we have chosen to demonstrate our technique using total proton counts in molecular clouds due to their well - known power - law nature which may be used to validate the technique . by not assuming a geometry , we remove a significant source of bias present in previous observations . the relatively small sample size here is used for demonstration purposes , while a more focused study will be presented in forthcoming publications . 
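for reference , the bonnor - ebert structure mentioned above follows from the isothermal lane - emden equation , which is straightforward to integrate numerically . the sketch below returns the dimensionless density contrast , which is flat near the centre and approaches an inverse - square fall - off at large radius , consistent with the behaviour described for the edge of the core ; the integrator settings are illustrative choices .

```python
import numpy as np
from scipy.integrate import solve_ivp

def bonnor_ebert_profile(xi_max=20.0, n=400):
    """Integrate the isothermal Lane-Emden equation,
        d/dxi (xi**2 dpsi/dxi) = xi**2 * exp(-psi),  psi(0) = psi'(0) = 0,
    whose solution gives the Bonnor-Ebert density contrast
    rho(xi)/rho_c = exp(-psi).  xi is the dimensionless radius."""
    def rhs(xi, y):
        psi, dpsi = y
        if xi == 0.0:
            return [dpsi, 1.0 / 3.0]   # series expansion at the origin
        return [dpsi, np.exp(-psi) - 2.0 * dpsi / xi]
    xi = np.linspace(1e-6, xi_max, n)
    sol = solve_ivp(rhs, (xi[0], xi[-1]), [0.0, 0.0],
                    t_eval=xi, rtol=1e-8, atol=1e-10)
    return sol.t, np.exp(-sol.y[0])    # (xi, rho/rho_c)
```

a pressure - bounded bonnor - ebert sphere is obtained by truncating this solution at some outer dimensionless radius ; the density contrast between centre and edge then determines its stability .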
it is assumed under most circumstances that gas and dust are fairly well mixed in the diffuse ism and in molecular clouds . some molecular species , such as @xmath120 or @xmath121co are generally good tracers of the total proton count in molecular clouds . however , @xmath120 can not be directly measured in clouds unless through absorption against a background source . @xmath121co requires us to determine its excitation temperature to obtain column densities . further , carbon monoxide has been shown to freeze onto dust grains at higher densities @xcite . while not without its limitations , dust provides a well - tested , proven alternative . estimating total proton column densities through stellar reddening avoids the need to determine temperatures and may be used ubiquitously throughout a cloud assuming sufficient background stars are visible . @xcite compared dust extinction , near - infrared emission , and @xmath121co emission as probes of the total proton content of dense clouds . after a detailed examination they concluded that dust extinction provided the simplest and most reliable probe . methods for deriving total proton column densities through stellar reddening data are well developed ( e.g. * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) and can be readily employed using infrared data from the 2mass sky survey @xcite . the 2mass data in conjunction with these methods provide a widely accessible , and comparatively uncontroversial method for obtaining total proton column density maps . the data for two of the sources , which might actually be described as either dense clouds or cloud cores , ( b133 , and l466 ) were derived by the authors from the raw 2mass stellar reddening catalog using an implementation of the nicer method @xcite . maps for the other clouds ( l1765 , l1709 , b5 , ngc1333 ) were obtained from the perseus and ophiuchus final extinction maps as part of the complete survey that also utilize the nicer method @xcite . l1765 , and l1709 are part of the ophiuchus complex , while b5 and ngc1333 are part of the perseus complex . each of these four clouds is in a region with substantial background extinction and is accompanied by neighboring features . for these , a 3 arcminute beam size was used . b133 and l466 are more isolated , with comparatively little background extinction or neighboring features . the maps for b133 and l466 employ a 1 arcminute beam . vs. @xmath23 plot derived from contours actually used in the derivation , color groups correspond to the coloring in a. the fitted line(s ) correspond to simple power - law fits for each color - group with the exponent(s ) printed in the legend , and are only drawn through segments which are believed to be trustworthy . c ) contour diagnostic plot similar to that in figure [ cfig2]a . d ) derived volume density profile . the fitted line(s ) correspond to simple power - law fits for each color - group with the exponent(s ) printed in the legend , and are only drawn through segments which are believed to be trustworthy . [ cfig5 ] ] the scheme used in figure [ cfig5 ] is used to describe each of the clouds in this study . figure [ cfig5]a represents the column density map for l1756 , with some sample contours added . column density contours are applied throughout the map and filtered as described in section [ cnumeric ] to produce the column density vs. area ( @xmath64 vs. @xmath23)plot in figure [ cfig5]b . it is apparent that there are three different behaviors in the @xmath64 vs. 
@xmath23 plot , therefore each measurement has been coded with a symbol and color . this color and symbol scheme is applied throughout the whole figure . the colors of the contours drawn on the column density map correspond to the same column density levels as the colors in the @xmath64 vs. @xmath23 plot . figure [ cfig5]c is a diagnostic plot similar to figure [ cfig2]c . the derived volume density profile is depicted in figure [ cfig5]d . in contrast to the simulated data in figure [ cfig2]d , @xmath19 and @xmath122 for the real cloud are not known , and thus the plot is scaled in terms of @xmath123 and @xmath99 . three regions are evident in l1765 with red crosses representing the highest column density contours , blue representing the outer - most contours , and green those in between . the changes in behavior between the three groups are in fact characterized by two @xmath50 discontinuities . in the case of l1765 , only the green data are believed to be trustworthy . the red group represents the inner core . in the column density map and the contour diagnostic plot , the red contours do not appear to be self - similar . the contours may in fact be self - similar , but there are too few pixels to properly define their shape , and thus they are not reliable . experimentation with real data reveals that the smallest contours must have an area greater than approximately 25 nyquist - sampled pixels to be sufficiently well - defined and thus be usable . with l1765 , these innermost contours are displayed as an example ; they are removed from consideration in the other clouds . the map shows green contours centered around the main cloud , as well as separate contours along the secondary clump . the blue contours however encircle the secondary clump as well . this technique can not correctly function where there are two cores . as a result the blue contours and the related data can not be trusted . only the green contours around the main cloud are trusted . the green group in l1765 seems to exhibit a very good power law with slope @xmath124 in the @xmath64 vs. @xmath23 plot , and the fitted line is drawn in green in figure [ cfig5]b . from section [ cderivation ] it is expected that if the @xmath64 vs. @xmath23 exhibits a power - law behavior , then so should the derived volume density profile . figure [ cfig5]d indeed shows that the green region s volume density profile function follows a power law with slope @xmath125 . we are thus able to determine the form of the volume density profile in the intermediate region of the cloud ( the green group ) , where the contours are well - defined , self - similar , and include only one core or clump . a reasonable concern is how can the green region be trusted when the red and blue are not . the red measurements are interior to the green ones , and thus have no effect at all on the green measurements . equation [ cd ] reveals that the depth of each shell is roughly constant in the region interior to the shell , and thus so is its contribution to all interior shells in the derived profile . the primary contribution of the blue measurements is to add an approximately constant volume density to the interior green and red measurements during the derivation process . however , that constant contribution is irrelevant unless we know the specific geometry of the cloud . only the changes in the shell depth near the edge of each shell can alter the form of the derived interior profile . 
therefore , only the few inner - most points in the derived profile interior to the @xmath50 discontinuity are affected . our derived profile yields values for @xmath123 and @xmath99 that are scaled by unknown geometry - dependent constants . based on figure [ cfig5]d , we can not say that at a radius of @xmath126 pc the volume density within l1765 is equal to @xmath127 , nor is that the goal of this research . we can , however , say with significant confidence that in those regions of the main cloud encompassed by the green contours in figure [ cfig5]a , or approximately the middle third of the cloud radially , the volume density profile is governed by a power law with exponent @xmath125 . if , and only if , the cloud is assumed to be approximately spherical , @xmath123 and @xmath99 may actually represent values similar to the real @xmath24 and @xmath128 . without geometry information , however , we can only be certain of the profile 's form within the trusted region . the fact that we observed a strict power law within a molecular cloud is in line with previous observations made through other methods . it is both encouraging and disconcerting that the derived profile follows a power law quite so well . it is encouraging to see that a real map , with real data , will produce an orderly volume density profile function ( in the green region ) and that a power law is observed , as it has been in previous studies . however , it is necessary to make certain that the power law is not a systematic effect of our technique or of the data itself . we therefore examine additional clouds to determine their behavior and verify the validity of our technique . figure [ cfig6 ] represents l1709 . in this case , the region interior to the red group is not shown , as those contours have too small an area to be useful . the green contours , however , are not trustworthy , as they are influenced by the secondary peak at the edge of the map . the contours for the red group in figure [ cfig6]c exhibit a remarkable self - similarity in shape and center position even though they vary in area from 0.3 to 0.7 square parsecs . similarly to l1765 , the derived profile exhibits a strong power law with a slope of @xmath129 . the analysis for b5 is depicted in figure [ cfig7 ] . the blue and green regions are not trustworthy in this case because they encompass two secondary clumps in the top and bottom - right regions of the map . the red contours do not exhibit quite the same level of self - similarity as found in l1765 and l1709 due to the distension in the bottom - right region . as a result , even the red contours may be somewhat suspect ; however , the distension corresponds to a variance of less than @xmath115 in the value of @xmath50 among the red contours . the red region corresponds to a power law with a slope of @xmath130 . the cloud cores examined so far have all belonged to the ophiuchus and perseus complexes and have been surrounded by neighboring clumps which prevented us from measuring the density profiles in the outermost regions of the clouds . furthermore , they have all exhibited a very similar power - law behavior , while originating from the same data source , which raises concerns that perhaps the way they were gridded , or the reduction method , may somehow be influencing the results . hence , we located two cloud cores ( l466 and b133 ) which are isolated , and employed an independent data reduction , gridding the maps to 1 arcminute beams .
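the same contour - area measurement and power - law fit underlie each of the slopes quoted above . purely as an illustration ( and not the authors ' actual pipeline ) , the following minimal python sketch measures the area enclosed above a set of column density levels and fits a single local power law ; the array and level choices are assumptions , and the sketch deliberately omits the clump - separation and @xmath50 - discontinuity filtering described in section [ cnumeric ] .

```python
import numpy as np

def column_density_vs_area(cmap, levels, pixel_area_pc2, min_pixels=25):
    """for each column density level, record the area enclosed by that contour.

    cmap           : 2-d array holding the column density (extinction) map
    levels         : 1-d array of contour levels to test
    pixel_area_pc2 : physical area of one map pixel in square parsecs
    min_pixels     : contours smaller than this are considered too poorly
                     sampled to define a shape (cf. the ~25 pixel limit above)
    """
    rows = []
    for n in levels:
        npix = np.count_nonzero(cmap >= n)           # pixels above this level
        if npix >= min_pixels:
            rows.append((n, npix * pixel_area_pc2))  # one (N, A(N)) pair
    return np.array(rows)

def local_power_law_slope(n, area):
    """least-squares slope of log N versus log A, i.e. the exponent k in a
    simple power law N proportional to A**k."""
    k, _ = np.polyfit(np.log10(area), np.log10(n), 1)
    return k
```

a real implementation would additionally separate contours belonging to different clumps and discard levels on either side of an @xmath50 discontinuity before fitting , exactly as done for the red , green , and blue groups discussed above .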
figure [ cfig8 ] represents l466 . in this case , there are no adjoining clumps or background extinction . thus , we were able to utilize a much wider range of contours in the exterior regions of the cloud . here , both the red and blue regions are trustworthy , while the green region corresponds to an @xmath50 discontinuity and is not trustworthy . the red region exhibits the same behavior as in the previous clouds , with a power - law slope of @xmath131 . the outermost , diffuse region of the cloud also follows a power law , but with a slope of @xmath132 . figure [ cfig9 ] reveals that in b133 we can measure the density profile in the outermost region of the cloud . similarly to the case of l466 , there seem to be two power laws present , in the red and blue regions , with slopes of @xmath133 and @xmath134 . the green region is again the site of an @xmath50 discontinuity ( significantly larger than in l466 ) and can not be trusted . what is the meaning of the two power laws in the two clouds ( l466 and b133 ) where we have been able to confidently measure the radial profile in the diffuse region of the cloud ? how can two distinct power laws exist within the same cloud ? this kind of discontinuity can be troubling . previous researchers have noted that attenuated power laws can describe such clouds well , while exhibiting different localized power laws in individual regions @xcite . an attenuated power law , such as that used in equation [ cngnp ] , may accurately represent these clouds . there is insufficient data to fit @xmath77 to the derived profiles from l466 and b133 , since the innermost region is still unmeasured and @xmath77 has three free parameters ( @xmath19 , @xmath78 , and @xmath8 ) . using three free parameters , it is not possible to derive a constrained fit for l466 and b133 . however , the total mass of the cloud may be used to reduce this to a two - parameter problem utilizing the relationship @xmath135 , which may be integrated using the @xmath136 hyper - geometric function to yield @xmath137 , where @xmath138 represents the gamma function ( @xmath139 ) . equation [ cmasseqn ] permits us to turn the attenuated power - law fit into a two - parameter problem using a cloud mass measured from the column density map . @xmath78 and @xmath95 seemed the most appropriate free parameters to use . it can be shown using the derivation in section [ cderivation ] that it is appropriate to use @xmath95 and @xmath140 along with the cloud mass , even though geometric information is entangled in those parameters . ( caption of figures [ cfig10 ] and [ cfig11 ] : a ) residual map over @xmath78 and @xmath95 ; the cross marks the position of the best fit with the lowest residual , the black contour the @xmath141 uncertainty for the fit , and the blue and red contours the uncertainties considering masses respectively @xmath117 lower and higher than the measured value ; the straight lines mark positions where @xmath140 equals 0.1 , 0.2 , and 0.3 parsecs , in clockwise order , and represent the likely size of the cloud 's core assuming the attenuated power law is a correct fit . b ) the derived density profile ( points ) along with the fitted attenuated power law ( solid line ) and the @xmath141 uncertainty ( dashed lines ) . ) figures [ cfig10 ] and [ cfig11 ] present the results of such fits . the left panel in each figure represents the residuals map . the best fit for values of @xmath95 and @xmath78 is represented by a cross .
the contours represent the @xmath141 uncertainties calculated from the residuals and are not necessarily symmetric . there may be some uncertainty in determining the masses of these clouds from the column density maps , due to background extinction , biases , uncertainty in the dust - to - gas ratio , distance estimates , and ambiguity in defining the edges of each cloud . these uncertainties may amount to several tens of percent in some cases . the blue and red contours correspond to alternate solutions with masses @xmath117 lower and higher than the measured value , to illustrate the effect . underestimating a cloud 's mass will result in a lower modelled value of @xmath78 . while the mass constrains the value of @xmath140 for any given @xmath95 and @xmath78 , lines where @xmath140 equals 0.1 , 0.2 , and 0.3 parsecs are drawn , with the 0.1 parsec line closest to the vertical axis . the best - fit curves ( solid lines ) are plotted in the right panels over the derived profiles , with the @xmath141 uncertainty bounds marked by dashed lines . the uncertainty bounds are not symmetric , as they are not necessarily gaussian and are calculated using all possible solutions , not just the best fit . in both clouds , the exterior ( blue ) and interior ( red ) regions can be very well fit by an attenuated power law . it is a peculiarity of the attenuated power - law model that there may be a great deal of uncertainty in the values of @xmath78 , which , in combination with @xmath140 and @xmath95 , can produce very similar results . the greatest uncertainty in the fit occurs in the innermost core of each cloud , as expected . there is nothing in the data which would suggest that the innermost regions of the clouds follow the fitted profile , as there is no data there . however , these fits do show that an attenuated power law could explain why two different regions within the same cloud could appear to follow very different localized power laws . this technique may be applied to many fields of study . in the previous section , we examined the total proton volume density radial profiles in molecular cloud cores . to our knowledge , no one has previously applied a geometry - independent method for measuring such volume density radial profiles . the imposition of geometric assumptions has been especially problematic , since molecular clouds rarely resemble simple geometries . in fact , most studies involving the internal structure of molecular clouds have limited themselves to studying the distribution of the observed column densities . due to the desire to have more direct information on the internal structure of these clouds , various methods have been used to get volume density estimates despite their inherent limitations . the simplest and most common method involves estimating the object 's shape from a column density map to arrive at an educated guess for its depth along one or more lines of sight @xcite . this method is usually used only along a single line of sight , typically the center , due to the uncertainties in estimating an object 's three - dimensional shape . it yields only the mean volume density along the line of sight , is directly influenced by geometric assumptions , and gives no information on how volume density varies within a cloud . understanding the internal structure of clouds allows us to determine the relevant physics and chemistry .
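as a brief aside , the mass - constrained , two - parameter fit of figures [ cfig10 ] and [ cfig11 ] can be sketched numerically . the sketch below is our own illustration and not the authors ' code : it assumes a spherical toy geometry , a specific attenuated form of the type n(r ) = n0 [ 1 + ( r / r_flat)^2 ]^( -p/2 ) , and it absorbs all physical constants into the mass units ; the variable names and grid ranges are likewise assumptions .

```python
import numpy as np
from scipy import integrate

def attenuated_profile(r, n0, r_flat, p):
    # assumed attenuated power law: flat inner core of size r_flat,
    # power-law exponent -p well outside the core
    return n0 * (1.0 + (r / r_flat) ** 2) ** (-p / 2.0)

def n0_from_mass(mass, r_flat, p, r_out):
    # fix the central density so that a *spherical* cloud of outer radius
    # r_out integrates to the measured mass; all physical constants are
    # absorbed into the units of `mass`
    shape, _ = integrate.quad(
        lambda r: (1.0 + (r / r_flat) ** 2) ** (-p / 2.0) * r ** 2, 0.0, r_out)
    return mass / (4.0 * np.pi * shape)

def grid_fit(radii, derived_n, mass, r_out, r_flat_grid, p_grid):
    # brute-force residual map over (r_flat, p); the mass constraint removes
    # the third free parameter, mirroring the two-parameter fits in the text
    resid = np.empty((len(r_flat_grid), len(p_grid)))
    for i, r_flat in enumerate(r_flat_grid):
        for j, p in enumerate(p_grid):
            n0 = n0_from_mass(mass, r_flat, p, r_out)
            model = attenuated_profile(radii, n0, r_flat, p)
            resid[i, j] = np.sum((np.log10(model) - np.log10(derived_n)) ** 2)
    i_best, j_best = np.unravel_index(np.argmin(resid), resid.shape)
    return resid, r_flat_grid[i_best], p_grid[j_best]
```

repeating such a fit with the mass shifted down and up by its estimated uncertainty reproduces the sense of the blue and red contours discussed above , and fits of this kind are one way of probing the internal structure of a cloud once a radial profile has been derived .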
many studies have sought to do this by comparing an assumed cloud geometry to an observed column density map in order to produce a best - fit radial profile function . spheres and ellipsoids are most commonly assumed . early examples are the bonnor - ebert sphere @xcite or the study by @xcite . more recent examples of this methodology are found in @xcite and @xcite . the primary advantage of this method is that it yields a volume density radial profile . however , it is often difficult to reconcile the idealized geometric assumptions with the actual objects studied . @xcite found that b68 seemed to provide an excellent fit to the bonnor - ebert sphere in their survey . any deviations from the assumed geometry manifest themselves in the derived radial profiles as biases in ways that are often unpredictable , and are thus frequently ignored or simply misinterpreted as uncertainties . we avoid such biases by discarding assumptions on the object 's shape . as a result , the afore - mentioned @xmath50 discontinuities become readily apparent and allow us to avoid regions where the self - similarity assumption fails . this paper has presented a novel method for determining the forms of the radial volume density profiles of objects such as molecular cloud cores without making assumptions about their geometry . while the method has been applied here only to dust extinction maps of molecular clouds , it is highly general and may be applied to any objects and any observable quantities that satisfy the assumptions in section [ cassumptions ] . those assumptions may be briefly summarized as requiring that the object can be described using a single radial profile function and that equation [ csimplen ] is valid . as such , this method may be widely useful in a number of fields . the method relies on using only a column density map , which necessarily can not uniquely define a three - dimensional object whose geometry is unknown . a fortunate mathematical peculiarity makes this method possible : all of the object 's geometric information , and none of the information about the form of the radial profile function , is embedded in two dependent scalars ( @xmath89 and @xmath73 ) . these constants scale the derived @xmath70 profile to the cloud 's original function @xmath71 . values for @xmath89 and @xmath73 can only be determined with knowledge of an object 's geometry . however , the form of the radial profile can be derived accurately , independent of geometry , within the bounds of the assumptions in section [ cassumptions ] . those methods which rely on geometric assumptions necessarily introduce an often significant , yet difficult - to - predict , bias due to deviations from an idealized geometry . our method yields the maximum amount of information attainable without the introduction of such a bias . our method is limited in several ways , the chief of which is the presence of @xmath50 discontinuities , which arise due to variations in contour shapes as a result of the failure of assumption 2 . figure [ cfig4]b shows a cloud core for which self - similarity is satisfied only in its outer - most and central regions . regions of the derived profile immediately interior to the @xmath50 discontinuities are affected , while exterior regions and those sufficiently far to the interior of the discontinuities still maintain their form .
if one were to assume the simulated object in figure [ cfig4]a were a sphere , the @xmath50 discontinuities would still be present , but manifested in a less obvious and more unpredictable manner . despite its limitations , our method presents the best option for discerning the form of the radial profile in those situations where it is suitable for use . section [ cderivation ] presents an analytic derivation from basic principles , while section [ cnumeric ] bolsters the derivation through tests using simulated data . all the clouds studied here exhibited similar behavior , with power laws of similar slope , ranging from @xmath142 to @xmath143 , in their middle regions . we believe that these power laws are not artifacts of our method , since the analytic derivation and numeric simulations show that the technique should be capable of properly deriving any kind of radial profile function . we chose to demonstrate the technique using 2mass data on molecular cloud cores . the total proton density is of particular interest , as it contains information on support mechanisms as well as the formation rates of molecules such as h@xmath144 or @xmath121co . while we did not display the data for all clouds studied in this paper , it seems that quite often there are regions where the cloud 's column density contours exhibit remarkable self - similarity , accompanied by sharp changes in contour shape ( @xmath50 discontinuities ) between regions . our method shows that there does not appear to be a gradual change in the derived local power laws , but rather sudden shifts where the interior profile may follow a power law of @xmath145 , followed by an @xmath50 discontinuity and a much steeper power law of @xmath146 in the exterior regions . that this sudden break accompanies a stark shift in contour shapes is intriguing . with only two isolated clouds , there is insufficient data with which to draw general conclusions , and that is beyond the scope of this demonstration of the technique . if it can be proven that this is a common characteristic of isolated molecular cloud cores and is not some kind of artifact of the column density maps or of the technique for deriving radial profiles , then there is a real effect which produces a sudden change in the density behavior of these clouds . one possibility may be that the properties of the dust particles change at lower densities , thus producing a sharper drop in observed extinction , or that one of the cloud 's support mechanisms ceases to be effective at a certain point , leading to a steeper drop in density in the exterior . it may also be possible that the interior of the cloud is undergoing gravitational collapse while the exterior is not . we can say at the present time that we have no reason to believe that this change in local power laws is due to biases within the data or our reduction technique , within the bounds of the limitations discussed throughout the paper , as we have tested the technique using a variety of geometries , profile forms , beam widths , and reduction techniques . there is nothing in the analytical derivation suggesting that such a phenomenon should be produced as a side - effect . furthermore , while previous studies have found that the localized power law seems to change within different regions of individual cloud cores , these studies may not have been able to discern how sharp the change is due to their use of geometric assumptions , which obscure the manifestation of @xmath50 discontinuities .
this publication makes use of data products from the two micron all sky survey , which is a joint project of the university of massachusetts and the infrared processing and analysis center / california institute of technology , funded by the national aeronautics and space administration and the national science foundation . we wish to thank jorge pineda for useful discussions . this research was carried out in part at the jet propulsion laboratory operated for nasa by the california institute of technology . we thank the hayden planetarium , and rebecca oppenheimer in particular , for generously providing a conducive environment in which a portion of this research was carried out . we appreciate the very insightful comments and suggestions from the anonymous reviewer that significantly improved this paper . alves , j.f . , lada , c.j . , & lada , e.a . 2001 , nature , 409 , 159 alves , j.f . , lada , c.j . , & lada , e.a . 2001 , the messenger , 103(1 ) , 15 arquilla , r. & goldsmith , p. f. 1985 , , 297 , 436 ballesteros - paredes , j. , vazquez - semadeni , e. , gazol , a. , et al . 2011 , mnras , 416 , 1436 bergin , e. a. , alves , j. , huard , t. , & lada , c. j. 2002 , , 570 , l101 bonnor , w.b . 1956 , mnras , 116 , 351 cernicharo , j. , bachiller , r. , & duvert , g. 1985 , , 149 , 273 chapman , n.l . , mundy , l.g . , lai , s .- , & evans , n. j. , ii 2009 , , 690(1 ) , 496 dapp , w.b . & basu , s. 2009 , mnras , 395 , 1092 dickman , r.l . & clemens , d.p . 1983 , , 271 , 143 dobashi , k. 2011 , , 63 , s1 ebert , r. 1955 , zeitschrift für astrophysik , 37 , 217 evans , n.j . , ii , rawlings , j.m.c . , shirley , y.l . , & mundy , l.g . 2001 , , 557 , 193 froebrich , d. , murphy , g. c. , smith , et al . 2007 , mnras , 378 , 1447 froebrich , d. & rowles , j. 2010 , mnras , 406 , 1350 goodman , a. a. , pineda , j. e. , & schnee , s. l. 2009 , , 692 , 91 harvey , d.w.a . , wilner , d.j . , lada , c.j . , & myers , p.c . 2001 , , 598 , 112 kainulainen , j. , lehtinen , k. , väisänen , p. , et al . 2007 , , 463 , 1029 kainulainen , j. , beuther , h. , henning , t. , & plume , r. 2009 , , 508(3 ) , l35 kandori , r. , nakajima , y. , tamura , m. , et al . 2005 , , 130 , 2166 king , i. 1962 , , 67 , 471 klessen , r.s . 2000 , , 535 , 869 kramer , c. , alves , j. , lada , c.j . , et al . 1999 , , 342 , 257 lada , c.j . , lada , e.a . , clemens , d. , & bally , j. 1994 , , 429 , 694 launhardt , r. , stutz , a.m. , schmiedeke , a. , et al . 2013 , , 551a , 98l li , p.s . , norman , m.l . , mac low , m. , & heitsch , f. 2004 , , 605 , 800 liu , t. , wu , y. , & zhang , h. 2012 , , 202 , 4 lombardi , m. & alves , j. 2001 , , 377 , 1023 nordlund , a.k . & padoan , p. 1999 , in interstellar turbulence , ed . j. franco & a. carramiñana ( cambridge : cambridge university press ) , 218 ostriker , e.c . , stone , j.m . , & gammie , c.f . 2001 , , 546 , 980 pineda , j. l. , goldsmith , p. f. , chapman , n.l . , et al . 2010 , , 721 , 686 ridge , n. a. , di francesco , j. , kirk , h. , et al . 2006 , , 131 , 2921 skrutskie , m.f . , cutri , r.m . , stiening , r. , et al . 2006 , , 131 , 1163 stutz , a. , launhardt , r. , linz , h. , et al . 2010 , , 518l , 87s tafalla , m. , myers , p. c. , caselli , p. , walmsley , c. m. , & comito , c. 2002 , , 569 , 815 tassis , k. , christie , d. a. , urban , a. , et al . 2010 , mnras , 408 , 1089 teixeira , p.s . , lada , c. j. , & alves , j.f . 2005 , , 629 , 276 ward - thompson , d. , scott , p.f . , hills , r.e . , & andré , p. 1994 , , 268 , 276 wong , t. , ladd , e. f. , brisbin , d. , et al . mnras , 386(2 ) , 1069 wu , y. , liu , t. , meng , f. , et al . 2012 , , 756 , 76
we present a geometry - independent method for determining the shapes of radial volume density profiles of astronomical objects whose geometries are unknown , based on a single column density map . such profiles are often critical to understanding the physics and chemistry of molecular cloud cores , in which star formation takes place . the method presented here does not assume any geometry for the object being studied , thus removing a significant source of bias . instead , it exploits contour self - similarity in column density maps , which appears to be common in data for astronomical objects . our method may be applied to many types of astronomical objects and observable quantities , so long as they satisfy a limited set of conditions which we describe in detail . we derive the method analytically , test it numerically , and illustrate its utility using 2mass - derived dust extinction in molecular cloud cores . while we have not made an extensive comparison of different density profiles , we find that the overall radial density distribution within molecular cloud cores is adequately described by an attenuated power law .
a key process in serial analysis of gene expression ( sage ) consists of accurately mapping short sequence tags to known genes . the use of complete genome information , instead of limited and biased transcriptome data , allows the identification and mapping of a larger number of experimental tags , thus facilitating the tasks of gene discovery and annotation . the use of genome information in the tag - to - gene assignment process overcomes the problem of being limited to only those genes for which an est has already been found . however , this strategy poses new challenges for unambiguous tag mapping , because the probability that a short tag sequence will be unique in the genome significantly decreases ( 1 ) . in a recent work ( 2 ) , we have presented a novel and improved method for the tag - to - gene assignment process in sage , called hierarchical gene assignment ( hga ) . the hga method provides a full annotation of the potential virtual sage tags within a genome , along with an estimation of their confidence for experimental observation . we applied this method to the saccharomyces cerevisiae genome , producing the most thorough and accurate annotation of virtual sage tags that is available today for this organism . in this work , we describe the implementation of a web server that can be used to map experimental sage tags from yeast against our previously generated annotation of potential genomic tags for this organism using the hga methodology ( 2 ) . the server is specifically designed to fully exploit the major benefits of the sage technique , which are to assist the processes of gene discovery and annotation ( 3 - 5 ) . the server contains three different modules ( figure 1 ) : ( i ) genome explore , ( ii ) genome mapping and ( iii ) library mapping . the first module can be used to explore a genome in the context of a future sage experiment , allowing the user to determine beforehand if some genes of interest will be accurately measured by sage ( i.e. , systems biology , study of gene regulatory networks or specific metabolic pathways , etc . ) . this is useful for the planning of sage experiments , and it can also be used for educational purposes when teaching about the sage technique . this module provides a friendly graphical interface and links to external servers and databases , and it is also invoked by the other two modules . the second module can be used to map experimental sage tags against the existing annotation of potential genomic tags . the results are clearly presented in a table that contains dynamic links to the graphical interface of the genome explore module and to external servers and databases . graphical expression maps of specific genes , specific genomic regions , full chromosomes or the complete genome can also be produced on the fly . the third module can be used to map experimental sage tags against all existing libraries of experimental sage tags produced by others . this allows users to quickly compare their own sage results against those from previous sage experiments performed under different conditions . this is useful to identify new sage tags and also to easily and simultaneously compare two or more full gene expression profiles . the core of the server is a mysql database of virtual genomic sage tags with confidence assignments that was generated by the hga methodology and has been recently described in detail ( 2 ) . the server has three modules : ( i ) genome explore , ( ii ) genome mapping and ( iii ) library mapping .
the module i is linked to the ncbi blast server and to the saccharomyces genome database ( sgd ) and does not require any input from the user . the modules ii and iii require experimental tag sequences and their counts ( optional ) as input , and are linked to the module i and , through this , to the external servers and databases . for more details about the functioning of these modules , see the text . this server has been programmed in the mysql and php languages and uses the jpgraph graphics library . though the different modules of the sagexplore server have distinct functionalities and independent forms for submitting a query , some of their input parameters are common . table 1 summarizes the complete list of user - specified input parameters and user - provided input data that each module requires . common parameters to the three modules include the specification of the organism name and the anchoring - tagging enzyme pair used in sage , as well as the selection of several options for displaying the results of a query .
table 1 . input options and requirements of the different modules of the sagexplore server
input description | genome explore | genome mapping | library mapping
user - specified input parameters :
organism specification | yes | yes | yes
anchoring - tagging enzyme pair | yes | yes | yes
odds ratio for tag confidence assignment | yes | yes | no
genomic mapping context and tag categories | yes | no | no
output display options | yes | yes | yes
user - provided input data :
list of genomic regions | yes | no | no
list of experimental sage tags | no | yes | yes
when exploring a genome , various tag features can be specified by the user , which include : frequency of occurrence of a given tag sequence in the genome , annotated elements in the genome where a tag maps , the tag confidence assignment ( high , low or undefined ) , location within a gene element ( orf , 3-utr , 5-utr , exon or intron ) , and whether the tag is near and downstream of an internal poly(a ) region within a gene ( figure 2a ; supplementary figure 2 ) . this allows the user to study in detail the reliability of any potential virtual tag in the genome . additionally , the user can also provide to the server input data specifying different types of genomic regions to explore , such as : one or more genes , one or more genome fragments , one or more chromosomes , or the complete genome ( figure 2a ; supplementary figure 2 ) . this allows the user to evaluate the expected reliability of sage results for a specific set of genes or genomic regions of interest . furthermore , the combination of these user - specified input parameters and user - provided input data provides a powerful tool to perform almost any possible query .
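to make the notion of a virtual genomic tag concrete , the short sketch below enumerates nlaiii / bsmfi - style virtual sage tags from a nucleotide sequence . it is our own simplified python illustration , not the hga code of ( 2 ) : the catg recognition site is that of nlaiii , the 10 - bp tag length downstream of the site is the conventional choice for this enzyme pair , and the example sequence and function names are hypothetical .

```python
from collections import Counter

def virtual_tags(sequence, anchor="CATG", tag_length=10):
    """enumerate virtual SAGE tags on one strand of a nucleotide sequence.

    for every occurrence of the anchoring enzyme site (CATG for NlaIII),
    take the tag_length bases immediately downstream, roughly what the
    tagging enzyme (BsmFI) would release in a conventional SAGE experiment.
    returns (position, tag) pairs with 0-based positions of the anchor site.
    """
    sequence = sequence.upper()
    tags = []
    start = sequence.find(anchor)
    while start != -1:
        tag_start = start + len(anchor)
        tag = sequence[tag_start:tag_start + tag_length]
        if len(tag) == tag_length:          # skip truncated tags at the 3' end
            tags.append((start, tag))
        start = sequence.find(anchor, start + 1)
    return tags

# hypothetical usage: tags occurring more than once in the genome are the
# ones that need a confidence call from the hga methodology
occurrences = Counter(tag for _, tag in
                      virtual_tags("ACATGGTTACCATGATTTCCGGAACATGTTGTCAAGGT"))
```

in the actual database , each virtual tag is stored together with its genomic coordinates , its position relative to annotated gene features and its hga confidence assignment , which is what the genome explore module queries .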
on - line help about the required format to submit input data is available on the server ; in addition to this , some simple examples of properly formatted input data are also provided . ( caption of figure 2 , in part : ( a ) input form of the genome explore module ; an example of the sequence details that are displayed for a particular gene where a given tag maps ; ( f ) an example of the genomic details available for each tag . ) when mapping experimental tags against the database of genomic virtual tags or against the known experimental sage libraries , a list of experimental tag sequences should be provided by the user . the observed counts for each tag from multiple experimental points can also be included . in the case of the genome mapping module , this allows the building of graphic genome expression maps . in the case of the library mapping module , this allows a simple and fast comparison against existing gene expression profiles under different experimental conditions previously reported by others ( 6 - 8 ) . the output of the server for the three modules is presented as a table in html format , which can also be exported as a compressed text file ( tab - delimited text ) . therefore , the output data can be easily imported into other software or database applications , such as excel or mysql , for further analysis . each column header in the output table is linked to a popup window that contains online help explaining its content and/or functionality . table 2 shows the complete description of the output data given by each module of the server . as an example , a typical output of the genome mapping module is shown ( figure 2b ; supplementary figure 3 ) . some columns in the output table contain a dynamic link that allows the user to retrieve more information about a particular tag by invoking additional queries to this server or to external servers and databases . one of these features consists of the analysis of the genomic context where a given tag maps , which can be graphically explored ( figure 2c ; supplementary figure 4 ) . this allows the user to see the mapping position of a tag in the genome , along with all the surrounding annotated elements such as genes and their structures ( i.e. , coding regions and utrs ) . the server can generate these maps for the complete genome , for a single chromosome ( figure 2d ; supplementary figure 5 ) or for a given genomic region , in case the user wants to analyze some specific regions in more detail . the graphic expression maps facilitate the analysis of sage data and allow the easy and fast identification of transcriptionally active regions or co - regulated gene clusters under certain experimental conditions .
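the lookup behind the genome mapping and library mapping modules is conceptually simple : each experimental tag , with its counts , is matched against the annotated virtual tags . the sketch below is our own python illustration rather than the server 's actual mysql / php implementation , and the field names , gene name and example tags are hypothetical ; table 2 below lists the fields the server actually reports .

```python
def map_experimental_tags(experimental_counts, virtual_tag_db):
    """match experimental tags against an annotated virtual-tag dictionary.

    experimental_counts : dict mapping tag sequence -> list of counts,
                          one count per experimental point / library
    virtual_tag_db      : dict mapping tag sequence -> annotation dict with
                          hypothetical keys such as 'gene', 'chromosome',
                          'position' and 'confidence'
    tags absent from the annotation are returned separately; these are the
    candidates for gene discovery discussed in the text.
    """
    mapped, unmapped = [], []
    for tag, counts in experimental_counts.items():
        annotation = virtual_tag_db.get(tag)
        if annotation is None:
            unmapped.append((tag, counts))
        else:
            mapped.append({"tag": tag, "counts": counts, **annotation})
    return mapped, unmapped

# hypothetical usage with two experimental points per tag
experimental = {"GTTACCGGTT": [12, 3], "AAAACCCCGG": [1, 0]}
database = {"GTTACCGGTT": {"gene": "YDR000W", "chromosome": "IV",
                           "position": 123456, "confidence": "high"}}
mapped, unmapped = map_experimental_tags(experimental, database)
```

tags that map onto intergenic regions rather than annotated genes are the ones for which the server extracts flanking genomic sequence for closer inspection , as described below .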
table 2 . output data given by the different modules of the sagexplore server
column description | genome explore | genome mapping | library mapping
sequential tag number | yes ( 1 ) | yes ( 1 ) | yes ( 1 )
tag confidence assignment | yes ( 2 ) | yes ( 2 ) | no
tag sequence | yes ( 3 ) | yes ( 3 ) | yes ( 2 )
tag frequency of occurrence in the genome | yes ( 4 ) | yes ( 4 ) | no
tag odds ratio | yes ( 5 ) | yes ( 5 ) | no
tag class | yes ( 6 ) | yes ( 6 ) | no
tag genomic location description | yes ( 7 ) | yes ( 7 ) | no
tag genomic location type | yes ( 8 ) | yes ( 8 ) | no
tag position within a transcript | yes ( 9 ) | yes ( 9 ) | no
chromosome number | yes ( 10 ) | yes ( 10 ) | no
initial position of the tag in the chromosome | yes ( 11 ) | yes ( 11 ) | no
chromosome strand | yes ( 12 ) | yes ( 12 ) | no
standard gene name | yes ( 13 ) | yes ( 13 ) | no
systematic gene name | yes ( 14 ) | yes ( 14 ) | no
genomic context | yes ( 15 ) | yes ( 15 ) | no
tag details | yes ( 16 ) | yes ( 16 ) | no
display sequence | yes ( 17 ) | yes ( 17 ) | no
blast | yes ( 18 ) | yes ( 18 ) | no
tag counts on each experimental library | no | no | yes ( 3 - 10 )
tag counts | no | yes ( 19 ) | yes ( 11 )
tag user - defined information | no | yes ( 20 ) | yes ( 12 )
notes to table 2 : some of these fields in the output table have additional information dynamically linked ; some of these links currently point to external servers and databases such as the blast server and the sgd database . the table specifies the description of the columns displayed for each sage tag by the output tables of the server as a result of a particular query issued to each of the independent modules . the numbers between parentheses represent the sequential column number of each output table displayed by the server on its three different modules as a result of an issued query .
the complete nucleotide sequence of a gene where a tag maps , along with the detailed representation of the annotated gene structure , can also be automatically extracted and analyzed ( figure 2e ; supplementary figure 6 ) . in the case that a tag maps onto an intergenic region , a flanking genomic sequence is extracted by the server ( 500 nts downstream and 500 nts upstream from the tag mapping position ) . in either case , the extracted sequences that contain the tag can be automatically aligned against the known sequence databases through the blast server . these server features are very powerful because they allow a fast and detailed analysis of those interesting tags that could be coming from currently unknown genes ( i.e. , assisting the processes of gene discovery and annotation ) . in addition to this , the rapid design of oligonucleotide primers for experimental validation of some sage results by rt - pcr is also greatly facilitated . the ranking of tags for experimental validation is likewise aided by access to all the annotated tag details ( figure 2f ; supplementary figure 7 ) . several other tools have been described for the analysis and mapping of experimental sage tags ( table 3 ) .
the current release of the sagexplore server presents several drawbacks as compared to some other tools , most of which are considered as future improvements of this server and are detailed in the next section . on the other hand , the sagexplore server has several advantages and some unique features as compared to the other tools , which include : ( i ) the database of virtual sage tags that the server uses has been built by the recently described hga methodology , which assigns a confidence level based on experimental data to those tags that present multiple matches in the genome ; ( ii ) its particular orientation towards facilitating the tasks of gene discovery and annotation ; ( iii ) its graphical interface and the genome explore module , which can also be used for educational purposes and not only for advanced research ; and ( iv ) its genomic tag context sequence extraction and tag details display capabilities , which are very useful to speed up the experimental validation of sage results .
table 3 . existing tools for the analysis and mapping of sage tags
name | database | type | tag counts | tag mapping | graphical interface | organism | web address
tagmapper ( 9 ) | refseq , ests | server | no | yes | no | several | http://tagmapper.ibioinformatics.org/
websage ( 10 ) | refseq , ests | server | no | yes | no | human | http://www2.mnhn.fr/websage/
sagemap ( 11 ) | sage libraries | server | yes | yes | yes | several | http://www.ncbi.nlm.nih.gov/projects/sage
sagenet ( n.a . ) | sage libraries | database | no | no | no | human , mouse , yeast | http://www.sagenet.org/
sage genie ( 12 ) | sage libraries | database | no | no | yes | human , mouse | http://cgap.nci.nih.gov/sage
actg ( 13 ) | refseq , ests | server | yes | yes | no | human , mouse | http://retina.med.harvard.edu/actg/
5sage ( 14 ) | genome , ests , sage libraries | server | no | yes | yes | human | http://5sage.gi.k.u-tokyo.ac.jp/
mouse sage site ( 15 ) | refseq , genome , ests , sage libraries | server | no | yes | no | mouse | http://mouse.biomed.cas.cz/sage/
discovery space ( 16 ) | refseq , genome , sage libraries | standalone | yes | yes | yes | human | http://www.bcgsc.ca/discoveryspace/
identitag ( 17 ) | ests , cdna , sage libraries ( user provided ) | standalone | no | yes | no | several | http://pbil.univ-lyon1.fr/software/identitag/
usage ( 18 ) | refseq , genome , sage libraries | server | yes | yes | no | several | http://www.cmbi.kun.nl/usage/
sagexplore ( this work ) | genome , sage libraries | server | yes | yes | yes | yeast | http://dna.bio.puc.cl/sagexplore.html
notes to table 3 : other organisms will soon be available for tag mapping on this server . n.a . , not available . the available publications describing the listed tools are cited between parentheses next to their names . refseq and ests stand for databases of reference sequences and expressed sequence tags , respectively . the tag counts column reflects whether the tool takes into consideration in some way the observed counts of experimental tags ; some tools perform statistical analysis based on those counts , while others only allow that information to be displayed along with the results of the query . the tag mapping column reflects whether the tool is able to map experimental tags against a specific built - in database .
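as mentioned above , when a tag maps onto an intergenic region the server extracts 500 nt of genomic sequence on each side of the tag for closer inspection and blast searching . a minimal sketch of such an extraction ( our own illustration , not the server 's code ; the coordinate and clipping conventions are assumptions ) :

```python
def flanking_sequence(chromosome_seq, tag_start, tag_end, flank=500):
    """return the region around an intergenic tag hit: `flank` bases upstream
    and downstream of the tag mapping position (500 nt each side in the text),
    clipped at the chromosome ends. coordinates are 0-based, half-open."""
    start = max(0, tag_start - flank)
    end = min(len(chromosome_seq), tag_end + flank)
    return chromosome_seq[start:end]
```

the extracted fragment , which still contains the tag , is what would then be aligned against the known sequence databases through blast .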
the current release of the sagexplore server only allows the exploration and mapping of sage tags to the yeast genome . also , the virtual tag database was built for a single combination of anchoring - tagging enzymes ( nlaiii - bsmfi ) . though this enzyme pair is the most frequently used in sage experiments , it is expected that other enzyme pairs could be useful to some experimentalists ( e.g. , long - sage uses a different tagging enzyme ) . therefore , obvious improvements to the server involve the incorporation of additional organisms and of enzyme pairs generally used in sage and long - sage .
in addition to this , the odds ratio used to assign the tag confidences for those cases where a tag is found multiple times in the genome has been specifically tuned for yeast , based on experimental data ( 2 ) . it is expected that other organisms will have a different optimal odds ratio threshold to define tag confidences according to the hga methodology . therefore , flexibility in this parameter will also need to be added when new organisms are included . in addition , genome annotation is very often improved by the experimental characterization of new genes . this information is key to the hga methodology , and the database of virtual sage tags should therefore also be updated frequently . finally , a large number of sage experiments are underway , and thus several experimental libraries are released every year . it will then be necessary to frequently update the database containing the experimental libraries , which are used by the library mapping module of this server . currently , we are building a database of virtual sage tags for the organism xenopus tropicalis , whose genome has recently been sequenced . this will constitute a larger challenge than that faced for yeast when implementing this server with the hga methodology ( 2 ) , since the 12 megabytes of the yeast genome are not comparable to the 1.5 gigabytes of the xenopus genome ( 150 times larger ) . this constitutes an intermediate step towards human , which should be the next genome to be incorporated into this server . we also plan in the near future to include the genomes of other model organisms such as mus musculus , drosophila melanogaster and arabidopsis thaliana . the order of priority will depend on user requests and feedback , but we are willing to help the sage community by providing useful tools to get the most information out of these expensive large - scale experiments .
we describe a web server for the accurate mapping of experimental tags in serial analysis of gene expression ( sage ) . the core of the server relies on a database of genomic virtual tags built by a recently described method that attempts to reduce the amount of ambiguous assignments for those tags that are not unique in the genome . the method provides a complete annotation of potential virtual sage tags within a genome , along with an estimation of their confidence for experimental observation that ranks tags presenting multiple matches in the genome . the output of the server consists of a table in html format that contains links to a graphic representation of the results and to some external servers and databases , facilitating the tasks of gene expression analysis and gene discovery . also , a table in tab - delimited text format is produced , allowing the user to export the results into custom databases and software for further analysis . the current server version provides the most accurate and complete sage tag mapping source available for yeast . in the near future , this server will also allow the accurate mapping of experimental sage - tags from other model organisms such as human , mouse , frog and fly . the server is freely available on the web at : http://dna.bio.puc.cl/sagexplore.html .
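because the server also writes its results as tab - delimited text , they can be pulled directly into custom analysis code . the snippet below is only a sketch under assumptions : the file name and the column names ( tag , gene , confidence , count ) are illustrative guesses , not the actual header of the sagexplore output , and would need to be replaced by the names in the downloaded file .

import csv

def load_tag_mapping(path: str) -> list:
    """Read a tab-delimited SAGE mapping file into a list of dicts keyed by the header row."""
    with open(path, newline="") as handle:
        return list(csv.DictReader(handle, delimiter="\t"))

# hypothetical file and column names, for illustration only
rows = load_tag_mapping("sagexplore_results.tsv")
confident = [r for r in rows
             if r.get("confidence") == "high" and int(r.get("count", "0")) > 1]
print(len(confident), "confidently mapped tags observed more than once")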
within the central nervous system ( cns ) , astrocytes are the most abundant cells . their main task is to maintain the physiological homeostasis of neurons by providing a stable microenvironment and growth factors . astrocytes form multicellular syncytia in vivo that ensure neuronal homeostasis by taking up excess neurotransmitters and buffering the ionic content of the extracellular medium in the brain . astrocyte membranes contain numerous neurotransmitter receptors and transporters and can therefore sense and regulate formation , stability , and efficacy of synapses . recently , they have been shown to play a role in synaptic activity and regulating neuronal circuitry [ 24 ] . astrocytes are dysfunctional in various neurological disorders such as epilepsy , amyotrophic lateral sclerosis , hepatic encephalopathy , stroke , and focal cerebral ischaemia ( reviewed in ) . dysfunction is often accompanied by astrocytic hypertrophy and an increased number of astrocytic processes , termed astrogliosis . astrocytes also show these signs of activation in alzheimer 's disease [ 7 , 8 ] and in parkinson 's disease as well as in its rat model ( figure 1 ) . massive astrogliosis has been observed in postmortem tissue of parkinsonian patients [ 9 , 1113 ] . these tissues demonstrated a lack of astrocyte - derived neurotrophins compared to control brains [ 14 , 15 ] . because astrocytes support and protect dopaminergic neurons in vitro , a functional failure of astrocytes may contribute to cns pathology . the potential for antigen presentation and production of proinflammatory cytokines by astrocytes has been studied in the neuroinflammatory disease multiple sclerosis ( ms ) and its animal model experimental autoimmune encephalomyelitis ( eae ) . thus , they contribute to the immune privilege of the cns . the privilege is not simply the absence of immune reactions but rather a complicated network of passive and active barriers and of brain tissue . it can modify immune reactions in the cns so as to minimize the danger of destructive side effects in a tissue with limited ability to regenerate . in this review , we focus on astrocyte functions in health and disease , particularly on their interaction with lymphocytes . the perivascular space is separated from the brain parenchyma by the basement membrane and the glia limitans , made up of astrocytic end - feet , reviewed in . notably , it is not the direct contact of astrocytic end - feet with endothelial cells that induces the tightness but soluble factors secreted by them . the presence of numerous astrocytic end - feet close to the bbb allows for a rapid regulation of bbb permeability . humoural agents that are able to increase bbb permeability and may be secreted by astrocytes include endothelin-1 , glutamate , interleukin- ( il- ) 1 , il-6 , tumour necrosis factor ( tnf ) , macrophage inflammatory protein- ( mip- ) 2 , and nitric oxide . soluble astrocytic factors that induce tight junction formation at the bbb are less well characterized . a recent study has shown that sonic hedgehog , a member of the hedgehog signalling pathway family , is produced by astrocytes . sonic hedgehog promotes bbb formation and integrity , and hedgehog - mediated signals induce immune quiescence in the cns . thus , inhibition of hedgehog signalling exacerbates eae by increasing demyelination , accumulation of leukocytes in the cns , and production of interferon- ( ifn- ) and il-17 by infiltrating t cells . 
astrocytes are capable of producing a range of proinflammatory cytokines that have been found in the brain of alzheimer 's disease patients , such as il-1α , il-1β , il-6 , and tnf . it has been shown that amyloid-β ( 25 - 35 ) in combination with bacterial cell wall lipopolysaccharide ( lps ) induced a strong astrocytic production of il-6 and tnf , while neither of the substances alone did . others found that lps induced the production of tnf , il-6 , and il-1 in microglia but not in astrocytes , while astrocytes responded neither to lps nor tnf but to il-1 by producing tnf and il-6 . this indicates that astrocytes may be regulated by microglial il-1 . microglial cells produce free radicals and proinflammatory cytokines such as tnf-α when exposed to amyloid-β ( 1 - 42 ) [ 25 , 26 ] . tnf and superoxide anion production by macrophages cocultured with amyloid-β ( 1 - 42 ) was strongly reduced in the presence of primary human astrocytes or astrocytoma cells . interestingly , astrocytes bound amyloid-β ( 1 - 42 ) and showed activation of the transcription factor nf-κb in that study , but unlike in macrophages this activation did not result in tnf production . this indicates that distinct signal transduction pathways are activated in macrophages and astrocytes by inflammation . indeed , astrocytes can also downregulate microglial activation by the secretion of anti - inflammatory substances such as transforming growth factor-β ( tgf-β ) and prostaglandin e2 ( pge2 ) [ 28 , 29 ] and may thereby limit inflammation - induced neurodegeneration . thus , the clinical relevance of both astrocytic and microglial activation has not yet been fully elucidated . glia maturation factor ( gmf ) is not only necessary for the growth and maturation of neurons and glia cells , but can also induce the production of proinflammatory cytokines . overexpression of gmf in astrocytes induces the production and secretion of granulocyte - macrophage colony - stimulating factor ( gm - csf ) , an activation of microglia and the expression of proinflammatory genes including major histocompatibility complex- ( mhc- ) ii , il-1 , and mip-1 . knockdown of gmf reduces the production of the proinflammatory cytokines and chemokines responsible for eae [ 32 , 33 ] . interestingly , it also inhibits growth of glioblastoma cells by inducing g0/g1 cell cycle arrest in vitro [ 34 , 35 ] . in the brain of alzheimer 's disease patients , however , what drives astrocytes to upregulate gmf to a level where it contributes to tissue damage is unknown . astrocytes produce or take up , store , and reexocytose a range of neurotrophins that are neuroprotective in eae [ 38 - 41 ] , dementia of the alzheimer type , and parkinson 's disease [ 43 , 44 ] . astrocytes are the major source of nerve growth factor ( ngf ) and glial cell line - derived neurotrophic factor ( gdnf ) in the cns [ 45 - 47 ] . in brain tissue of parkinson 's disease patients , gdnf , ngf , and brain - derived neurotrophic factor ( bdnf ) are deficient [ 14 , 15 ] , hence the clinical trials of therapeutic gdnf injection into the brain of parkinson 's patients . while intraputaminal infusion of gdnf was safe and improved motor functions in a small group of patients over one and two years , a randomized placebo - controlled trial found that motor function did not improve . notably , of all the 32 genes associated with astrocyte function described in this review , only gdnf was found to be associated with a disease : major depressive disorder . for this , see the ncbi catalog of genomewide association studies ( gwas ) ( http://www.genome.gov/gwastudies/ ) .
on the other hand , as mentioned above , astrocytes are a major source of the proinflammatory cytokines il-1 and il-6 in the brain [ 51 , 52 ] . transgenic mice that lack il-6 production are resistant to eae induction [ 53 , 54 ] . this is due to a blockade of activation and differentiation of autoreactive t cells in the periphery , with both t helper ( th ) 1 and th2 cell differentiation being affected . very recently , dendritic cells have been identified as a sufficient and probably the main source of il-6 for eae induction . whether astrocytic il-6 plays a decisive role in the etiogenesis of eae remains unclear : transgenic mice that overexpress il-6 in astrocytes but are otherwise deficient in il-6 develop a mild form of ataxia , but no symptoms of lymphocyte - driven eae . thus , the observed ataxia may be a result of a general inflammatory process in the brain . families with a high il-1 over il-1 receptor antagonist ( il-1ra ) production ratio have a higher risk of having a relative with ms than families with a low ratio . mice deficient in il-1 receptor type i ( il-1ri-/- ) are resistant to eae induction [ 58 , 59 ] . apparently , il-1 is necessary for the induction of il-17-producing t cells ( th17 ) . il-17 has been shown to be crucial for the development of eae [ 60 , 61 ] . however , both il-6 and il-1 do not necessarily have only detrimental effects . recently , il-6 has been demonstrated to induce il-10 in t cells and thus inhibit proinflammatory responses of th1 cells . the production of il-1 and il-6 does not necessarily lead to neuronal damage because these cytokines also induce upregulation of fas ligand ( fasl ) in astrocytes , which may induce t - cell apoptosis ( see below ) . in addition , il-1 and il-6 are messengers between the brain , particularly the hypothalamic - pituitary - adrenal axis , and the immune system . thus , il-1 produced during eae upregulates glucocorticoid production , which has a downregulatory effect on inflammation . activated t cells can cross the bbb not only in neuroinflammatory diseases but also in the healthy brain [ 65 , 66 ] . later , it was shown that in macrophage - depleted mice , activated t cells which extravasate are not able to enter the brain parenchyma via the basement membrane but accumulate in the perivascular spaces . matrix metalloproteinases ( mmp- ) 2 and -9 are necessary to cross the basement membrane after local digestion . these infiltrating t cells may combat infection , but damage to tissue needs to be avoided , in particular damage mediated by th1 and cytotoxic t cells and accompanied by inflammation . inflammatory cytokines such as tnf-α are neurotoxic . given that neurons have a very limited capacity to regenerate in the mature brain , one mechanism preventing damage is the elimination of t cells : astrocytes induce apoptosis in these cells [ 69 - 71 ] . this effect is mediated by the expression of fasl ( cd95l ) by astrocytes [ 63 , 72 , 73 ] . in eae , fasl - expressing astrocytes exist in close vicinity to apoptotic t cells [ 74 , 75 ] . the same mechanism of enforcing immune privilege has been observed in the placenta [ 76 - 79 ] , testes , and anterior chamber of the eye . a downside of this mechanism is that astrocytomas express fasl and thus escape immune attack [ 82 , 83 ] . in neuroinflammation , astrocytes can act as antigen - presenting cells ( apcs ) [ 84 , 85 ] .
while microglia express mhc - ii readily upon activation in vivo and in vitro , astrocyte mhc - ii expression occurs only during prolonged inflammation in vivo or in vitro under stimulation by interferon-γ ( ifn-γ ) . this mhc - ii induction may be suppressed by neurons via a mechanism that has not fully been elucidated . one study claims that cell - cell contact is required , while another one found that secreted glutamate and norepinephrine could inhibit ifn-γ - induced mhc - ii expression in astrocytes . in keeping with this , neuronal loss induces mhc - ii expression in astrocytes [ 88 , 90 ] , supporting the view that astrocytes can present antigen only during severe neuroinflammation . the expression of costimulatory b7 molecules by astrocytes both in vivo and in vitro has been controversially discussed . while some authors found b7 expression on astrocytes [ 91 - 94 ] , others did not [ 95 , 96 ] . functioning as apcs in vitro , astrocytes have been found to stimulate differentiated t cells ; and interestingly , they stimulate th2 cells more efficiently than th1 cells [ 87 , 97 ] . th2 cells may be less damaging than the cellular immune responses , and hence the preferred agents of protection against infection in the cns . thus , astrocytes from transgenic mice expressing the ms - associated mhc - ii human haplotypes hla - dr2 and hla - dr4 induced a mixed th1/th2 cytokine response in mog - specific t cells , whereas dendritic cells induced a th1 response . one can only speculate about the biological relevance of an astrocyte - mediated th2 bias . in eae , t cells typically enter the cns as activated , differentiated th1 cells . if astrocytes preferentially restimulate th2 cells [ 87 , 97 ] , the proportion of these cells could increase , thus favouring an anti - inflammatory microenvironment . also , memory t cells are recruited to the cns during eae . memory cells are heterogeneous , and part of the population is not yet biased towards a certain th subpopulation . thus , it is tempting to speculate that astrocytes may prevent induction of a th1 cytokine profile in memory cells in the cns . the astrocyte - mediated bias towards th2 responses cannot be explained by their cytokine secretion , as astrocytes do not produce il-4 , which is the main inductor of th2 responses , but might rather reflect the signal strength of the mhc - ii - t - cell receptor ( tcr ) interaction . lowering the signal strength has been found to favour th2 differentiation . for instance , the surface density of mhc - ii expression determines the cytokine profile of t cells , with low mhc - ii expression levels favouring th2 responses . astrocytes do not readily express mhc - ii molecules and are thus likely to deliver a weaker tcr signal than apcs with a higher density of mhc - ii molecules on their surface . in eae , infiltrating t cells do not proliferate in the target organ ; this has been ascribed to the influence of astrocytes . in vitro , astrocytes can either suppress [ 105 - 107 ] or stimulate [ 87 , 97 , 108 ] t - cell functions . in coculture studies , astrocytes induce hyporesponsiveness in t cells . this was interpreted as a result of downregulation of the tcr and insufficient stimulation by low levels of icam-1 on astrocytes ; this would limit adhesion of t cells to astrocytes , so that the two cells ignore each other . as this would not silence invading t cells in the cns , other mechanisms must be involved . t - cell activation is tightly regulated by surface molecules , providing scope for immunotherapy [ 109 - 111 ] .
while the primary costimulatory molecule cd28 and its homologue ctla-4 ( cytotoxic t - lymphocyte - associated antigen-4 , cd152 ) on t cells engage the same ligands b7 - 1 ( cd80 ) and b7 - 2 ( cd86 ) on apcs , ctla-4 binds with 10 - 100-fold higher affinity than cd28 [ 110 , 112 ] . cd28 signaling initiates , sustains , and enhances t - cell activation , while ctla-4 signaling inhibits t - cell activation and attenuates ongoing responses [ 110 , 113 , 114 ] . the relevance of this has been demonstrated by genetic inactivation of ctla-4 in mice , which leads to lymphoproliferative disease and early death [ 110 , 112 ] . t cells of this mouse strain proliferate spontaneously ex vivo and show an activated phenotype , stressing the central role of ctla-4 in attenuating unwanted t - cell responses . in contrast to cd28 , which is constitutively expressed on the surface of t cells , ctla-4 is not detectable on resting t cells . expression of ctla-4 mrna and ctla-4 protein on the t - cell surface is induced upon activation . ctla-4 is stored intracellularly , and its surface expression is strictly controlled , with a peak 48 - 72 h after t - cell stimulation [ 114 , 115 ] . blockade of ctla-4 in mouse models of autoimmune diseases increases the incidence of eae [ 111 , 116 ] . short blockade of ctla-4 during priming of the immune response has lasting effects , suggesting that failure in the regulation of ctla-4 would have long - lasting impact on immune responses , including autoimmunity . thus , giving agonistic ctla-4 signals might be a promising strategy for controlling inflammatory responses in the cns , particularly as ctla-4 is highly expressed on the t cells which accumulate there . our own study showed that astrocytes inhibit t - cell proliferation , production of il-2 and il-10 , and expression of the il-2 receptor α-chain ( cd25 ) . although inhibition did not require astrocyte contact with t cells , the mechanism was independent of the major inhibitory cytokine tgf-β . the study provided optimal stimulation for t cells by having professional apcs and antigen in the cultures when astrocytes were added . thus , astrocytic inhibitory or stimulatory effects could be discerned from baseline effects occurring during t cell - apc interaction . in this way , we also avoided differences in the stimulatory capacity of astrocytes towards th1 versus th2 cells [ 87 , 97 ] . the interpretation is supported by a recent study showing that astrocytes inhibited proliferation and ifn-γ , interleukin- ( il- ) 4 , il-17 , and tgf-β secretion levels of encephalitogenic t cells in vitro unless they were pretreated with ifn-γ . in the latter case , they even promoted t - cell proliferation , presumably by additional antigen presentation . il-27 has been shown to suppress th17 cells and thereby eae [ 120 , 121 ] . also , it negatively regulates th17 cells during chronic inflammation of the cns resulting from chronic infection with toxoplasma gondii . coculture of astrocytoma cell lines with cd3/cd28-activated t cells revealed suppression of t - cell proliferation . the effect was more pronounced when direct contact was allowed between astrocytes and t cells but remained strong when astrocytes and t cells were separated by cell culture inserts . the finding that t - cell proliferation was still inhibited by astrocytes when astrocytes and t cells were separated by a cell culture insert or a transwell membrane showed that a soluble factor produced by astrocytes is responsible for this inhibition [ 107 , 123 , 124 ] .
however , astrocytes might conceivably have protruded cellular nanotubes through the cell culture inserts so as to contact the t cells . the separating membranes had pore sizes of 200 nm or 400 nm [ 107 , 124 ] . an electron - microscopical study of astrocytes growing on engineered surfaces showed that astrocytes extend nanotubes with a diameter below 100 nm to make contact with other cells and may even exchange substances via these nanotubes . nevertheless , cell - cell contact did not bear sole responsibility for the control of t - cell proliferation , since astrocyte - conditioned supernatant also inhibited t - cell proliferation . despite being of interest for immunotherapy , blockade of tgf-β had no or only a minor effect on the inhibition of t - cell proliferation . inhibition of nitric oxide production also did not reverse the inhibitory effect [ 123 , 124 ] . furthermore , inhibition of indoleamine-2,3 dioxygenase ( ido ) by methyltryptophan did not affect astrocyte - mediated inhibition of t - cell proliferation . ido is a tryptophan - degrading enzyme and as such inhibits t - cell proliferation . it has been proposed as a major player in the immune privilege of the placenta . astrocytes and microglia are capable of expressing ido in vitro and in vivo upon activation with ifn-γ . ido blockade in eae mediates disease exacerbation , suggesting that ido induction by th1-derived ifn-γ may play a role in self - limiting autoimmune inflammation during eae and ms . systemic administration of cytosine - phosphate - guanine dinucleotide ( cpg ) , a frequent dinucleotide in bacterial dna and therefore detected by the pattern recognition receptor toll - like receptor-9 ( tlr-9 ) , upregulates ido in plasmacytoid dendritic cells , where it is required for activation of regulatory t cells ( tregs ) , and blocks their conversion into th17 cells . although likely , whether ido induction in astrocytes by pge2 or cpg plays a role in the cns , and whether astrocytes can induce treg activation , remains one of the open questions concerning astrocytes . ido - deficient mice develop exacerbated eae with enhanced th1 and th17 responses . in this model , not only tryptophan depletion was responsible for the effect on t cells but also a downstream tryptophan metabolite of the kynurenine pathway , 3-hydroxyanthranilic acid ( 3-haa ) . the kynurenine pathway starts with tryptophan degradation by ido or tryptophan-2,3 dioxygenase ( tdo ) , leading to 3-haa . 3-haa was shown to increase the percentage of tregs and to inhibit th1 and th17 cells , leading to eae amelioration . 3-haa has been shown to be neuroprotective in cytokine - mediated inflammation in vitro , while other metabolites of the kynurenine pathway such as 3-hydroxykynurenine and quinolinic acid ( quin ) appear to be neurotoxic . another metabolite of the ido - kynurenine pathway is kynurenic acid ( kyna ) , which has been shown to be neuroprotective . interestingly , activated human astrocytes have been shown to produce large amounts of kyna but almost no quin . thus , astrocytic ido activation may lead to various effects which are mostly beneficial . astrocytes in a rat eae model could induce the development of tregs , as has been shown in a study where t cells that had been cocultured with astrocytes not only lost the ability to proliferate but also inhibited the proliferation of antigen - stimulated t cells and markedly alleviated the disease . also in this study , a heat - sensitive soluble factor other than il-10 or tgf-β was implicated .
another surface molecule , b7-h1 ( pd - l1 ) , might downregulate t - cell responses in the cns ; it is a member of the b7 family known to downmodulate t - cell activity . in a model of fiber tract injury in the hippocampus of adult mice , it is strongly upregulated on astrocytes , while t - cell recruitment to the site of injury was not accompanied by autoimmune demyelination . astrocytes are efficiently activated by the ifn-γ produced by th1 cells ( see above ) . under the influence of ifn-γ , astrocytoma cells upregulate expression of chemokines including ccl3 , ccl5 , cxcl8 , and cxcl10 , as well as proinflammatory cytokines such as il-6 and il-1 ( but also an anti - inflammatory il-1 receptor antagonist ) . most of these chemokines attract th1 cells more than th2 cells , thus aggravating neuroinflammation . thus , astrocytes may inhibit and delay neuroinflammation , but in case of sustained inflammation accompanied by high ifn-γ levels , they may switch to become potent apcs and even promotors of inflammation . growth , differentiation , survival , and maintenance of peripheral and central neurons are facilitated by ngf . subsequent to induction of eae , mice treated with ngf by intraperitoneal injection exhibited a delayed onset of disease in combination with lower clinical disease scores . moreover , myelin basic protein- ( mbp- ) specific t cells retrovirally transduced to secrete high levels of ngf are unable to mediate clinical eae and suppress induction of eae by nontransduced mbp - specific t cells in rats . in cocultures , astrocytes upregulated their ngf production upon contact with mbp - specific t cells . this upregulation was found to be dependent on antigen recognition , as blockade of mhc - ii abrogated the effect , and resting astrocytes which were not able to present antigens did not show an upregulation of ngf production . neutralisation of the cytokines ifn-γ , il-4 , and il-10 produced in the cocultures did not affect ngf production . this finding suggests a neuroprotective role of astrocytes during t - cell - mediated inflammation in the cns . conversely , cells of the immune system carry ngf receptors , and ngf signalling modulates immune function . ngf inhibits the mhc - ii inducibility of microglia , thereby limiting antigen presentation in the cns . mechanisms by which astrocytes maintain immune privilege or limit inflammation - induced damage are summarised in figure 2 . the initial explanation of the immune privilege of the cns as the product of a strictly sealed bbb weakened , however , when activated t cells were found to cross the bbb in the healthy brain . clearly , various cells contribute to the phenomenon , including astrocytes , the most abundant cells of the cns . astrocytes mediate neuronal differentiation and homeostasis , and evidence is increasing that astrocytes interact with the immune system . the concept of immune privilege of the cns may be weakening , but it is clear that astrocytes dampen inflammation and have beneficial , neuroprotective effects on the healthy brain . astrocytes need activation by ifn-γ to unfold their anti - inflammatory potential , in forms such as il-27 production . even when unable to prevent t - cell responses in the brain after prolonged provocation ( e.g. , by ifn-γ ) , their function does not become purely detrimental . when activated , astrocytes harbour mechanisms of damage limitation , such as production of neuroprotective ngf and preferential restimulation of th2 over th1 cells . when this is not sufficient to prevent autoimmune damage to the cns , it may still control tissue damage to some extent .
overall , the picture is of astrocytes as cns - intrinsic cells that combat local inflammation and maintain immune privilege , thus minimising damage .
astrocytes have many functions in the central nervous system ( cns ) . they support differentiation and homeostasis of neurons and influence synaptic activity . they are responsible for formation of the blood - brain barrier ( bbb ) and make up the glia limitans . here , we review their contribution to neuroimmune interactions and in particular to those induced by the invasion of activated t cells . we discuss the mechanisms by which astrocytes regulate pro- and anti - inflammatory aspects of t - cell responses within the cns . depending on the microenvironment , they may become potent antigen - presenting cells for t cells and they may contribute to inflammatory processes . they are also able to abrogate or reprogram t - cell responses by inducing apoptosis or secreting inhibitory mediators . we consider apparently contradictory functions of astrocytes in health and disease , particularly in their interaction with lymphocytes , which may either aggravate or suppress neuroinflammation .
prolidases ( proline - specific dipeptidases ) are peptidases with specificity for x - pro dipeptides . x - pro substrates contain n - terminal residues that are hydrophobic / uncharged ( ala- , ile- , leu- , val- ) , basic ( his- ) , aromatic ( phe- , tyr- ) , or sulphur - containing ( met- ) . prolidases only cleave dipeptides with proline at the c terminus ( nh2-x-/-pro - cooh ) . this modification or truncation process can occur either cotranslationally or posttranslationally after the action of an endoproteinase . prolidase is widespread in nature and has been isolated from different mammalian tissues [ 2 - 4 ] as well as from bacteria , such as species of lactobacillus [ 1 , 5 ] and xanthomonas . while the physiological role of prolidase in bacteria is unclear , a deficiency of this enzyme in humans results in abnormalities of the skin and other proline - rich collagenous tissues . in contrast with other endopeptidases and exopeptidases , prolidase is thought to be involved in the terminal degradation of intracellular proteins , and may also function in the recycling of proline . prolidase also has biotechnological applications ; it has a potential use in the dairy industry as a cheese - ripening agent because the degradation of proline - containing peptides in cheese reduces bitterness . prolidases are also capable of detoxifying organophosphorus nerve agents such as sarin and soman . the crystal structure of prolidase has been solved only for the enzyme from pyrococcus furiosus , where the main subunit is a pita - bread fold containing a metal active center , as in aminopeptidase p from e. coli and methionine aminopeptidase from p. furiosus . two zn atoms were found in the active site of the solved crystal structure ; these were included as an impurity in the crystallization medium . however , the native prolidase from p. furiosus requires two co ions per molecule in the active center for full catalytic activity . when co ions are replaced by zn ions , the protein does not show any enzymatic activity . the structure of the prolidase containing co ions with full activity remains to be solved . recently , the structure of prolidase from pyrococcus horikoshii ot3 ( project id , ph1149 ) , which has 80% sequence identity with that from p. furiosus , has been deposited in the protein data bank ( 1wy2 ) . this protein also has zn ions in the active center , as observed in the p. furiosus enzyme . furthermore , when the structure of a protein annotated as a putative dipeptidase from p. horikoshii ( project id , ph0974 ) , having 36% sequence identity with ph1149 , was solved , no metal ions were found in the active center . the protein showed substrate - specific activity for the dipeptide met - pro , which is a feature of x - pro dipeptidase ( prolidase ) . in this paper , the structure of ph0974 ( phdpd ) is described in detail , and the differences from the structure of ph1149 with zn ions ( zn - phdpd ) will be discussed . in addition , the differences between the two proteins in their binding of co or zn ions and in their substrate - specific activities were examined in order to clarify the enzymatic function of this enzyme . the gene was amplified by a polymerase chain reaction ( pcr ) using p. horikoshii ot3 genomic dna as a template ( project id : ph0974 ) . recombinant plasmid was constructed by the super - rare - cutter system ( hayashizaki et al . , manuscript in preparation ) . e.
coli bl21-codonplus ( de3 )-ril cells were transformed with the recombinant plasmid and grown at 37°c in lb medium containing 50 μg/ml ampicillin for 20 hours . the cells were harvested by centrifugation at 6500 rpm for 5 minutes , suspended in 20 mm tris - hcl , ph 8.0 ( buffer a ) containing 0.5 m nacl and 5 mm 2-mercaptoethanol , and disrupted by sonication . the cell lysate was heated at 90°c for 13 minutes . after heat treatment , denatured proteins were removed by centrifugation ( 15,000 rpm , 30 minutes ) , and the supernatant solution was used as the crude extract for purification . the crude extract was desalted using a hiprep 26/10 desalting column ( amersham biosciences ) and applied onto a super q toyopearl 650 m column ( tosoh ) equilibrated with buffer a. the protein was eluted with a linear gradient of 0 - 0.3 m nacl in buffer a. the protein was desalted with a hiprep 26/10 desalting column with buffer a and subjected to a resource q column ( amersham biosciences ) equilibrated with buffer a. the protein was eluted with a linear gradient of 0 - 0.3 m nacl in buffer a. the buffer of the fractions containing the protein was exchanged using the hiprep 26/10 desalting column to 10 mm sodium phosphate , ph 7.0 , and applied onto a bio - scale cht-20-i column ( bio - rad ) equilibrated with the same buffer . the protein was eluted with a linear gradient of 10 - 200 mm sodium phosphate , ph 7.0 . the fractions containing protein were pooled , concentrated by ultrafiltration ( vivaspin , 5 k cut ) and loaded onto a hiload 16/60 superdex 75 pg column ( amersham biosciences ) equilibrated with buffer a containing 0.2 m nacl . the concentration of the protein was estimated from the absorbance at 280 nm assuming e1% = 10.14 . prolidase from p. horikoshii ot3 ( project id : ph1149 ) was expressed and purified using similar methods . the concentration of the protein was estimated from the absorbance at 280 nm assuming e1% = 7.81 . the protein concentration of phdpd subjected to crystallization was 20 mg/ml in 100 mm tris buffer at ph 8.0 containing 0.2 m sodium chloride . the reservoir contained 0.1 m buffer solution ( cacodylate - naoh , ph 6.5 ) , 40% ( w / v ) polyethylene glycol 400 , and 0.02 m magnesium acetate . the crystallization drop consisted of 1 μl of a 20 mg/ml protein solution and 1 μl of reservoir solution . the protein crystal used for data collection grew to a size of 0.1 × 0.1 × 0.1 mm after 8 - 10 days . diffraction data for phdpd were collected using a rigaku r - axis v imaging - plate detector at the bl26b1 beamline , spring-8 , japan . the crystals were flash - frozen in a nitrogen - gas stream at 100 k during data collection . the oscillation angle used was 1.0° and the crystal - to - detector distance was set to 350 mm . three data sets for the mad ( multiwavelength anomalous dispersion ) phasing were collected from a single selenomethionine - labelled crystal . three wavelengths , corresponding to the maximum f″ ( peak ) , the minimum f′ ( edge ) and a reference wavelength ( remote ) , were selected for the selenomethionine - labelled crystal , based on the fluorescence spectrum of the se atom in the crystals . se - atom positions were obtained with the program solve and the initial electron density map was calculated by solve and resolve . phase calculation resulted in an overall figure of merit of 0.45 for data in the resolution range of 20 - 2.6 å .
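as an aside on the protein concentration estimates above , the conversion from absorbance at 280 nm to concentration with a stated e1% value is simple arithmetic , since e1% is defined for a 1% ( w / v ) , i.e. 10 mg/ml , solution in a 1 cm cell . the sketch below applies the e1% values quoted in this section ; the absorbance reading itself is a made - up example .

def protein_conc_mg_per_ml(a280: float, e1_percent: float, path_cm: float = 1.0) -> float:
    """Concentration in mg/ml from A280, given E1% (absorbance of a 10 mg/ml solution, 1 cm path)."""
    return a280 * 10.0 / (e1_percent * path_cm)

# E1% = 10.14 for PhDPD and 7.81 for the ph1149 prolidase, as quoted in the text
print(protein_conc_mg_per_ml(0.51, 10.14))  # ~0.50 mg/ml
print(protein_conc_mg_per_ml(0.51, 7.81))   # ~0.65 mg/ml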
the program arp / warp was used to automatically build a partial model of the dimeric enzyme , based on the amino acid sequence , into the mad - phased electron density map at 2.6 å , and placed approximately 50% of all residues . combined solvent flattening and histogram matching , as implemented in dm , were used to improve the phases . unambiguous parts and side chains could be added during the refinement without noncrystallographic symmetry ( ncs ) restraints . the rest of the residues were built manually with quanta ( accelrys , san diego , calif , usa ) . solvent molecules were gradually included into the structure at stereochemically preferred positions and with difference densities higher than 2.8 σ ( fo - fc ) and 0.8 σ ( 2fo - fc ) . a summary of the statistics for structure determination of phdpd is given in table 1 and a ribbon diagram of the structure in figure 1(a ) . sedimentation equilibrium experiments were carried out using a beckman optima model xl - a at 20°c with an an-60 ti rotor at a speed of 13 k rpm . prior to the measurements , the protein solutions were dialyzed overnight against the respective buffer at 4°c . the experiments at three different protein concentrations between 0.93 and 0.31 mg/ml were performed in beckman 4-sector cells . the buffer used was 20 mm tris , ph 8.0 , including 100 mm nacl . the partial specific volume of 0.751 cm3/g used for phdpd was based on the amino acid composition of the protein . proline dipeptidase activity was measured by a modification of the colorimetric ninhydrin method using met - pro·hcl , val - pro·hcl , gly - pro·hcl , ala - pro·hcl , phe - pro·hcl , glu - pro·hcl , and lys - pro·hcl as substrates . aminopeptidase and endopeptidase activities were measured with met - mca·tosoh ( tosylate form of l - methionine 4-methyl - coumaryl-7-amide ) and frets-25xaa , respectively , as substrates . the frets-25xaa is a fluorescence resonance energy transfer substrate ( frets ) library for determining endopeptidase specificity ( peptide institute , inc . ) . all assays were carried out at 100°c in 50 mm mops ( 3-[n - morpholino]propanesulfonic acid ) buffer of ph 7.0 , containing 1.2 mm cocl2 . dsc ( differential scanning calorimetry ) was carried out using a vp - capillary dsc platform ( microcal , usa ) at a scan rate of 100 deg/h . the protein concentration in the measurements was fixed at 0.01 mm in 50 mm tris , ph 7.8 . the structure of phdpd was determined by the mad method at 2.3 å resolution . the asymmetric unit contains two molecules , which are related by a two - fold noncrystallographic symmetry ( ncs ) . the native structure was refined to an r - factor of 21% ( rfree = 26.5% ) at 2.4 å resolution . the root mean square deviations ( rmsds ) from ideal geometry for the bond lengths and bond angles were 0.008 å and 1.4° , respectively . all residues are within allowed regions in the ramachandran plot ( 93.8% in the most favored region ) . the program lsqkab from ccp4 was used to calculate rms deviations for the superposition of molecules . a summary of the data collections , the refined model and the relevant geometrical parameters is given in table 1 . the final refined model consists of two complete polypeptide chains from met1 to leu356 and 310 ordered water molecules . each of the monomer subunits has an n - terminal domain ( residues 1 - 120 ) , an α-helical linker ( residues 121 - 130 ) and a c - terminal domain ( residues 131 - 356 ) . the overall topology of phdpd and a view of the cα backbone are shown in figures 1(a ) and 4(a ) .
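stepping back to the refinement statistics above , the r - factor and rfree follow the standard crystallographic definition , r = sum( | |fo| - |fc| | ) / sum( |fo| ) , with rfree computed on a test set of reflections excluded from refinement . the sketch below only makes that definition explicit ; the structure - factor amplitudes are made - up numbers , not data from this study .

def r_factor(f_obs, f_calc):
    """Standard crystallographic R = sum(| |Fobs| - |Fcalc| |) / sum(|Fobs|)."""
    numerator = sum(abs(fo - fc) for fo, fc in zip(f_obs, f_calc))
    return numerator / sum(f_obs)

# toy amplitudes, for illustration only
f_obs = [120.0, 85.5, 300.2, 44.1, 97.3]
f_calc = [110.3, 90.0, 310.5, 40.0, 101.1]
print(round(r_factor(f_obs, f_calc), 3))  # ~0.05, i.e. 5%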
the n - terminal domain is composed of a central β-sheet with six β-strands ( strand order : β4 , β3 , β2 , β1 , β5 , and β6 ) and of five α-helices around the central β-sheet . the strands β1 to β3 are in an antiparallel relationship and the other strands run in parallel directions . the c - terminal domain is comprised of long mixed - stranded β-sheets ( β7 - β19 ) with four α-helices ( α6 - α9 ) lying on the outside of the surface . the α-helices α6 and α8 run parallel to the nearby β-sheet , while helices α7 and α9 are in an antiparallel relationship on the outside surface . this domain should be the catalytic domain , which is similar to the reported structures of the pita - bread fold [ 10 , 12 , 28 - 33 ] . the active center of phdpd can be assumed from the analogy with the structure of a pita - bread folded enzyme . the putative active site pocket is located between two 310 helices ( residues 191 - 195 and 281 - 284 ) ( the two red - colored helices in figure 1(a ) ) and in a deep groove of the inner surface , as shown in figure 1(c ) . the active site is strongly curved by the central β-sheet of the c - terminal domain and stabilized by the four helices ( α6 - α9 ) that cover the outside surface of the deep pocket . the n- and c - terminal domains are linked by the α5 helix ( residues 121 - 130 ) spanning between β6 and α6 . the sequence identity of phdpd and zn - phdpd from p. horikoshii is 36% ( 131/357 ) , but that of zn - phdpd and the prolidase ( zn - pfprol ) from p. furiosus is quite high , 80% ( 279/348 ) . therefore , zn - phdpd has been assigned as a prolidase , although phdpd is a putative dipeptidase . we then examined the enzyme functions of phdpd and zn - phdpd from p. horikoshii . the proline dipeptidase activities ( x - pro ) of both proteins were the highest for the dipeptide met - pro among the substrates examined ( table 2 ) . the specific activity of phdpd for the substrate met - pro was about 3 times that of zn - phdpd . in the case of phdpd , the catalytic efficiencies for peptides containing nonpolar amino acids were higher than those for peptides containing polar amino acids such as lys and glu . the substrate specificity of phdpd was broad compared with that of zn - phdpd , whose relative activities are 10% or less for substrates other than met - pro . the substrate specificities of zn - phdpd are quite similar to the reported results for zn - pfprol , as shown in table 2 . the effect of metal ions on the dipeptidase activities of phdpd and zn - phdpd was examined using met - pro as a substrate . as shown in table 3 , the relative activity of phdpd in the presence of 1.2 mm mncl2 was higher than that in 1.2 mm cocl2 , but that of zn - phdpd was about half . when the metal ions were not added , phdpd had 20% relative activity , but zn - phdpd had none . kinetic parameters for val - pro of phdpd in the presence of 1.2 mm cocl2 were determined to be 5.0 mm , 807 μmol min-1 mg-1 , 541 s-1 , and 108 mm-1 s-1 for km , vmax , kcat , and kcat / km , respectively . the value of km was similar to that reported for the prolidase from p. furiosus ( zn - pfprol ) , but the other kinetic parameters of phdpd were several times greater . aminopeptidase activity , measured with met - mca as a substrate , was detectable , but the vmax value was less than 0.1% of that for the dipeptide val - pro . furthermore , the endopeptidase activity was also examined with the substrate frets-25xaa ( peptide institute , inc . ) , which contains 475 combinations ( 25 × 19 = 475 ) of tripeptides excluding cysteine .
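before turning to the endopeptidase assay results , the kinetic parameters quoted above can be cross - checked against each other : kcat follows from vmax once the molar mass of the enzyme is fixed , and kcat / km follows from kcat and km . the sketch below does this , assuming a subunit mass of roughly 40 kda ( 356 residues ) ; this mass is an estimate introduced here for illustration , not a value reported in the text .

def kcat_from_vmax(vmax_umol_min_mg: float, subunit_mass_da: float) -> float:
    """kcat (s^-1) from Vmax in umol min^-1 mg^-1 and the enzyme subunit mass in Da."""
    mol_substrate_per_s_per_mg = vmax_umol_min_mg * 1e-6 / 60.0
    mol_enzyme_per_mg = 1e-3 / subunit_mass_da
    return mol_substrate_per_s_per_mg / mol_enzyme_per_mg

kcat = kcat_from_vmax(807.0, 40000.0)    # ~538 s^-1, close to the reported 541 s^-1
km_mM = 5.0
print(round(kcat), round(kcat / km_mM))  # kcat, and kcat/Km in mM^-1 s^-1 (~108)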
no endopeptidase activity was detected , even when 30 times the enzyme concentration was used compared with the assay for the dipeptide val - pro . these results indicate that phdpd can be called a prolidase . because a cacodylate ion had been found near the active site of zn - phdpd , the dipeptidase activity using met - pro as a substrate was measured in the presence of cacodylate ion from 0.4 μm to 40 mm . the results indicate that zn - phdpd is not inhibited by the cacodylate ion that was included in the crystallization buffer . using dsc , the binding constant between a protein and a ligand can be estimated from the shift in the denaturation temperature for thermal unfolding of the protein in the presence of the ligand relative to the denaturation temperature in its absence . figure 2(a ) shows the dsc curves of phdpd in the presence of metals at concentrations twice that of the protein , at which the two metal - binding sites in the protein are saturated . the peak temperature of the dsc curve in the absence of metal was 104.4°c and was lower than those in the presence of metals ( table 4 ) , indicating that these metals can bind tightly to phdpd . the differences in peak temperature between the protein in the absence and presence of metals indicate that the zn ion binds most strongly to the protein , followed by the co and mn ions ( figure 2(a ) , table 4 ) . in the case of zn - phdpd , as shown in figure 2(b ) , the order of binding strength of the three metals to the protein was similar to that of phdpd , but the strength seemed to be considerably higher than that of phdpd : the differences in peak temperature between the proteins in the absence and presence of 0.02 mm co were 5.9 and 13.2°c for phdpd and zn - phdpd , respectively ( table 4 ) . the dsc curves of phdpd and zn - phdpd , which were dialyzed in 50 mm tris buffer at ph 7.8 including 1 mm edta overnight , were similar to those of samples without the metal ion . this suggests that the original proteins hardly bound metal ions such as zn , co , and mn . reheating scans of phdpd and zn - phdpd did not show any excess heat capacity , indicating that the heat denaturation of both proteins is irreversible . therefore , it might be difficult to strictly analyze the binding constants from the shifts in peak temperature due to ligand binding . allowing for this margin of error , we calculated the binding constants using the estimated dsc parameters and the changes in denaturation temperature , because these are considerably more reliable . in the presence of 0.02 mm zn ions , the binding constants of phdpd and zn - phdpd were 1.2 10 and 1.6 10 m , respectively . the results suggest that the binding constant of zn - phdpd for zn ions was roughly two orders of magnitude higher than that of phdpd ( figures 3(a ) and 3(b ) ) . on the other hand , methionine aminopeptidase from e. coli , which has a pita - bread fold with two active metal sites , has been reported to be maximally stimulated by the addition of one equivalent of co or fe ; the first metal ion binds with a binding constant of 35 10 m , while the second one binds at 0.4 10 m , based on the changes in the absorption spectra during titration . the crystal structure and amino acid sequence of the prolidase from p. horikoshii ( zn - phdpd ) ( pdb id : 1wy2 ) are quite similar to those of the prolidase from p. furiosus ( zn - pfprol ) ( pdb id : 1pv9 ) . both proteins have two zn ions in the active sites , resulting in the absence of function .
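as an aside on the dsc analysis above : for a reversible two - state unfolding transition , the shift in denaturation temperature caused by a ligand can be converted into a binding constant at tm , and one commonly used form of this relation ( due to brandts and lin ) is sketched below with the heat - capacity term neglected . because the unfolding of phdpd and zn - phdpd is irreversible , such an estimate is only approximate , as noted in the text . the unfolding enthalpy used here is a made - up value ; the 104.4°c reference temperature , the 5.9°c shift and the 0.02 mm metal concentration are taken from the text merely as example inputs .

import math

R = 8.314  # gas constant, J mol^-1 K^-1

def binding_constant_at_tm(dh_unfold_j_mol: float, t0_k: float, tm_k: float,
                           free_ligand_m: float) -> float:
    """Idealized Brandts-Lin estimate of K at Tm, neglecting the heat-capacity term."""
    exponent = (-dh_unfold_j_mol / R) * (1.0 / tm_k - 1.0 / t0_k)
    return (math.exp(exponent) - 1.0) / free_ligand_m

# made-up unfolding enthalpy of 800 kJ/mol; T0 = 104.4 C, Tm shift of 5.9 C, 0.02 mM ligand
k = binding_constant_at_tm(8.0e5, 104.4 + 273.15, 110.3 + 273.15, 2.0e-5)
print(f"{k:.1e} M^-1")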
on the other hand , phdpd , with a sequence identity of 38% to zn - pfprol and zn - phdpd , showed prolidase activity in the absence of additional co ions ( table 3 ) . therefore , to elucidate why the proteins containing zn ions do not have prolidase activity without the addition of co ions , the structures of phdpd , zn - phdpd , and zn - pfprol were compared . figure 4(a ) shows a stereoview of the superposition of phdpd with the zn - phdpd and zn - pfprol structures . furthermore , a structure - based sequence alignment of the three proteins and the rms deviation of cα atoms between phdpd and zn - phdpd are shown in figures 5 and 6 , respectively . comparison of phdpd with zn - phdpd and zn - pfprol reveals major differences in folding , size , insertions and positioning of secondary structure elements in the n - terminal domain ( figure 5 ) . in particular , the 310 helix 1 is replaced by the α3 helix ( residues 57 to 67 ) in both zn - phdpd and zn - pfprol ( figure 5 ) . the rms deviation from cα superposition of the whole , n- and c - terminal domains was calculated separately as follows : 1.4 , 2.3 , and 1.0 å for the superposition between zn - pfprol and phdpd , respectively , and 1.5 , 2.0 , and 1.1 å for that between zn - phdpd and phdpd , respectively ( figure 4(a ) ) . the rms deviation values of five conserved residues ( asp215 , asp226 , his290 , glu319 , and glu333 ) and the two neighboring residues ( ile227 and thr228 ) belong to the lowest group of rms deviation values , as shown in figure 6 , suggesting that these seven residues are considerably important in the active center . figure 4(b ) shows a stereoview of the conserved active site residues superimposed between phdpd and zn - phdpd . a cacodylate ion was found close to the active site of zn - phdpd , but not in that of phdpd , although both proteins were crystallized in a buffer containing cacodylate ions . a structural comparison of phdpd with zn - phdpd reveals that the metal - coordination sphere and stereochemical organization of the active site are slightly altered due to zn binding , as shown in figure 4(b ) . five water molecules ( wat133 , wat268 , wat278 , wat279 and wat290 ) are located around the active site pocket in phdpd , all of which form a hydrogen - bonding network . the wat279 molecule nearly occupies the place of one of the zinc ions in zn - phdpd . the water molecule wat279 also makes similar coordination distances with the conserved active site residues of phdpd and zn - phdpd . the coordination distances of asp226 , glu333 , ile227 , and thr228 of phdpd are slightly different from those of zn - phdpd ( table 5 ) . these observations might be correlated with the differences in binding of zn and co ions in the active site pocket . it has been proposed that methionine aminopeptidase and aminopeptidase p , which involve the pita - bread fold containing co or mn ions in the active site , have a common reaction mechanism . the important active site residues interacting with substrates are conserved in the three proteins described above . one of them , his198 of phdpd , corresponds to his79 of methionine aminopeptidase from e. coli , which interacts with the nitrogen atoms of the scissile peptide bonds . mutation of this residue of methionine aminopeptidase and aminopeptidase p has been reported to lead to variants with negligible activities [ 39 , 40 ] .
as shown in figures 6 and 7 , the position of his198 of phdpd is remarkably different from that of the corresponding his195 of zn - phdpd : the rms deviation value of the cα atoms of both proteins was 2.59 å , as represented by an arrow to his198 in figure 7 . relocation of this his residue tends to decrease the volume of the active site pocket . these results indicate that the absence of activity of zn - phdpd containing zn ions might be caused by changes in the coordination geometry of the metal ions and/or relocation of an important active site residue . analytical centrifugation results showed that phdpd exists as a dimer in solution , and the association constant of monomer / dimer is 1.6 10 m . as shown in the ribbon diagram of figure 1(b ) , the dimer form of phdpd is an assembly of two monomers related by a noncrystallographic 2-fold axis . the asymmetric unit also contains two molecules in the crystal , as well as a dimer in solution . the dimeric enzyme has an overall globular shape of approximately 55 × 80 × 61 å with a depression at its center . the accessible surface area of the monomer subunit is 16,037 and 15,955 å2 for the respective subunits a and b. the area buried due to dimer formation was 2,310 å2 , or 7.2% of the total surface area . the buried surface area of phdpd was remarkably smaller than those of zn - phdpd and zn - pfprol ; in particular , the difference in the buried area of nonpolar atoms was remarkably great between them , as shown in table 6 . when the buried area is divided into nonpolar ( c / s ) and polar ( n / o ) atoms , the hydrophobic interaction of dimer formation ( δg_hp ) can be estimated using the following equation : ( 1 ) δg_hp = α δasa_nonpolar + β δasa_polar , where δasa_nonpolar and δasa_polar represent the differences in asa ( accessible surface area ) due to dimer formation of the nonpolar and polar atoms of all residues , respectively . the parameters α and β have been determined to be 0.154 and 0.026 kj mol-1 å-2 , respectively , using the stability / structure database of mutant human lysozymes . the great differences in the δasa values of nonpolar atoms between phdpd and zn - phdpd ( or zn - pfprol ) consequently indicate that the hydrophobic interaction ( δg_hp ) due to dimer formation of zn - phdpd and zn - pfprol is remarkably higher than that of phdpd ( table 6 ) . one of the two prolidases from p. horikoshii , zn - phdpd , has considerably high sequence identity with the prolidase ( zn - pfprol ) from p. furiosus , but a gene corresponding to phdpd is not found in the genome of p. furiosus . in the process of blast searches , we found that a hypothetical protein ( ph1902 ) from p. horikoshii can be annotated as an x - pro dipeptidase from its 91% sequence identity with the x - pro dipeptidase from pyrococcus abyssi , which has identities of 26% and 28% with phdpd and zn - phdpd , respectively . the sequence of ph1902 has 29% identity with zn - pfprol but has higher identity with the other two prolidases from p. furiosus ( 76 and 58% ) . furthermore , pepq-3 x - pro aminopeptidase and pepq-2 cobalt - dependent proline dipeptidase from pyrococcus abyssi have high sequence identities with phdpd ( 76% ) and zn - phdpd ( 84% ) , respectively , and two different x - pro aminopeptidases from thermococcus kodakarensis have 71 and 63% identities with phdpd and zn - phdpd , respectively . although the physiological role of prolidases in the cell remains to be solved , the substrate specificity of phdpd is broader and its function is more effective than that of zn - phdpd .
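returning to equation ( 1 ) above , it is straightforward to evaluate once the buried area is split into its nonpolar and polar contributions . in the sketch below the coefficients are the 0.154 and 0.026 values quoted in the text ( the kj mol-1 å-2 units are an inference from the fact that they multiply areas to give energies ) , and the split of the 2,310 å2 buried in the phdpd dimer into nonpolar and polar parts is invented for illustration , since table 6 is not reproduced here .

ALPHA = 0.154  # kJ mol^-1 A^-2, coefficient for nonpolar (C/S) buried area
BETA = 0.026   # kJ mol^-1 A^-2, coefficient for polar (N/O) buried area

def delta_g_hp(d_asa_nonpolar: float, d_asa_polar: float) -> float:
    """Hydrophobic contribution to dimer formation, equation (1)."""
    return ALPHA * d_asa_nonpolar + BETA * d_asa_polar

# invented split of the 2,310 A^2 buried on dimer formation of PhDPD, for illustration only
print(delta_g_hp(1500.0, 810.0), "kJ/mol")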
the active site structures of both proteins change to active or inactive forms depending on the bound metals . when both the active and inactive structures of each prolidase are solved in the future , the role of metal ions in the function of metalloaminopeptidases will become clearer . the enzyme assay of project id ph0974 ( phdpd ) of p. horikoshii indicated that phdpd has the function of an x - pro dipeptidase ( prolidase ) . the crystal structure of phdpd was solved at 2.4 å resolution , and there are no metal ions in the active site . furthermore , dsc experiments suggest that there are large differences in the binding constants with zn between phdpd and zn - phdpd . in order to elucidate why the proteins containing zn ions do not have prolidase activity without the addition of co ions , the three structures of phdpd , zn - phdpd , and zn - pfprol were compared . the conclusions were that ( 1 ) the coordination geometry in the active site of phdpd was slightly different from that of zn - phdpd and ( 2 ) the important his residue of zn - phdpd , which seems to interact with the nitrogen atoms of the scissile peptide bonds , moved considerably upon zn binding , resulting in a decrease in the volume of the active site pocket .
the crystal structure of a putative dipeptidase ( phdpd ) from pyrococcus horikoshii ot3 was solved using x - ray data at 2.4 å resolution . the protein is folded into two distinct entities . the n - terminal domain has the general topology of the α/β fold , and the c - terminal domain consists of five long mixed β-strands , four α-helices , and two 310 helices . the structure of phdpd is quite similar to the reported structures of prolidases from p. furiosus ( zn - pfprol ) and p. horikoshii ( zn - phdpd ) , where zn ions are observed in the active site , resulting in an inactive form . however , phdpd did not contain metals in the crystal structure and showed prolidase activity in the absence of additional co ions , whereas the specific activity increased by 5 times in the presence of a sufficient concentration ( 1.2 mm ) of co ions . the substrate specificities ( x - pro ) of phdpd were broad compared with those of zn - phdpd in the presence of co ions , whose relative activities are 10% or less for substrates other than met - pro , the most favorable substrate . the binding constants of zn - phdpd with three metals ( zn , co , and mn ) , determined by dsc experiments , were higher than those of phdpd , and that with zn was higher by more than 2 orders of magnitude . from the structural comparison of both forms and the above experimental results , it could be elucidated why the protein with zn2+ ions is inactive .
WASHINGTON (AP) — The Supreme Court rejected an appeal from Apple Inc. Monday and left in place a ruling that the company conspired with publishers to raise electronic book prices when it sought to challenge Amazon.com's dominance of the market. The justices' order on Monday lets stand an appeals court ruling that found Cupertino, California-based Apple violated antitrust laws in 2010. Apple wanted to raise prices to wrest some book sales away from Amazon, which controlled 90 percent of the market and sold most popular books online for $9.99. Amazon's share of the market dropped to 60 percent. The 2-1 ruling by the New York-based appeals court sustained a trial judge's finding that Apple orchestrated an illegal conspiracy to raise prices. A dissenting judge called Apple's actions legal, "gloves-off competition." The Justice Department and 33 states and territories originally sued Apple and five publishers. The publishers all settled and signed consent decrees prohibiting them from restricting e-book retailers' ability to set prices. In settlements of lawsuits brought by individual states, Apple has agreed to pay $400 million to be distributed to consumers and $50 million for attorney fees and payments to states. The case is Apple v. U.S., 15-565. ||||| WASHINGTON (AP) — The Supreme Court has reversed the 2002 murder conviction of a Louisiana death row inmate after ruling that prosecutors failed to disclose evidence that could have helped his defense. The ruling on Monday came in the case of Michael Wearry, who was convicted in the 1998 death of a 16-year-old pizza delivery driver near Baton Rouge. The justices said that prosecutors should have turned over evidence casting doubt on the credibility of a prison informant and another witness who testified against Wearry. The court also said the state failed to disclose medical records raising questions about a witness' description of the crime. Justices Samuel Alito filed a dissent joined by Justice Clarence Thomas. Alito said the jury might have convicted Wearry even with the additional evidence. ||||| WASHINGTON (AP) — The Supreme Court ruled Monday that Alabama's top court went too far when it tried to upend a lesbian mother's adoption of her partner's children. The justices threw out a ruling by the Alabama Supreme Court in a dispute between two women whose long-term relationship ended bitterly. Before their breakup, one partner bore three children; the other formally adopted them in Georgia. The Alabama residents went to Georgia because they had been told Atlanta-area courts would be more receptive than judges in Alabama. Alabama courts got involved when the birth mother tried to prevent her former partner from regular visits with the children. The Alabama Supreme Court sided with the birth mother in refusing to recognize the other woman as a parent and declaring the adoption invalid under Georgia law. In December, the U.S. Supreme Court temporarily set aside the Alabama decision as the justices decided whether to hear the woman's appeal. The issue was whether the actions of one state's courts must be respected by another's. On Monday, the justices said in an unsigned opinion that "the Alabama Supreme Court erred in refusing to grant that judgment full faith and credit." The case is V.L. v. E.L., 15-648. ||||| WASHINGTON (AP) — The Supreme Court is staying out of a copyright dispute involving a California man who produced replicas of the Batmobile for car-collecting fans of the caped crusader. 
The justices on Monday let stand a lower court ruling that said the Batmobile's bat-like appearance and high-tech gadgets make it a character that can't be duplicated without permission from DC Comics, the copyright holder. Mark Towle produced replicas of the car as it appeared in the 1966 television show featuring Adam West as Batman and the 1989 movie starring Michael Keaton. He sold them for about $90,000 each. The 9th U.S. Circuit Court of Appeals last year sided with DC Comics in finding that the Batmobile is entitled to copyright protection.
– The Supreme Court ruled Monday that Alabama's top court went too far when it tried to upend a lesbian mother's adoption of her longtime partner's children, the AP reports. Before their breakup, one partner bore three children; the other formally adopted them in Georgia, which they were told courts would be more receptive. Alabama courts got involved when the birth mother tried to prevent her ex from visits with the children; the Alabama Supreme Court sided with the birth mother in refusing to recognize the other woman as a parent and declaring the adoption invalid under Georgia law. But the high court ruled that "the Alabama Supreme Court erred in refusing to grant that judgment full faith and credit." Other rulings handed down Monday: The court reversed the 2002 murder conviction of Louisiana death row inmate Michael Wearry, who was convicted in the 1998 death of a 16-year-old pizza delivery driver near Baton Rouge. The justices said that prosecutors should have turned over evidence casting doubt on the credibility of a prison informant and another witness who testified against Wearry. The court also said the state failed to disclose medical records raising questions about a witness' description of the crime. The court rejected an appeal from Apple Inc. and left in place a ruling that the company conspired with publishers to raise electronic book prices when it sought to challenge Amazon's dominance. Apple wanted to raise prices to wrest book sales away from Amazon, which controlled 90% of the market and sold most popular books online for $9.99; Amazon's share of the market dropped to 60%. The 2-1 ruling by the New York-based appeals court had sustained a trial judge's finding that Apple orchestrated an illegal conspiracy to raise prices. Meanwhile, the Supreme Court is staying out of a copyright dispute involving Mark Towle, a Californian who produced replicas of the Batmobile for car-collecting fans. The justices let stand a lower-court ruling that said the Batmobile's bat-like appearance and high-tech gadgets make it a character that can't be duplicated without permission from DC Comics, the copyright holder. Towle produced replicas of the car as it appeared in the 1960s television show featuring Adam West and the 1989 movie starring Michael Keaton, selling them for about $90,000 each. (One recent Supreme Court case hinged on a comma.)
mathematical models are very useful and frequently used nowadays . for example , one study used a novel in vitro pharmacodynamic infection model of tuberculosis by exposing m. tuberculosis to moxifloxacin with a pharmacokinetic half - life of decline similar to that encountered in humans . another study introduced a mathematical model to quantify the contribution of antibiotic exposure and of other modifiable factors to the dissemination of vancomycin - resistant enterococci ( vre ) in the hospital setting and provided a framework to assist in targeting necessary interventions aimed at limiting the spread of vre . an extension of that model that incorporates an environmental reservoir for vre was developed subsequently . other authors developed a model and a pulse vaccination strategy , the repeated application of vaccine over a defined age range , which was revealed to be an effective strategy for the elimination of infectious diseases . in the classical epidemiological model [ 6 - 13 ] , a population of total size n is divided into s susceptibles , i infectives , and r recovered individuals . the relation between these three categories leads to the classical sir model : ( 1 ) $\dot{s}(t) = -\beta s(t) i(t)$ , $\dot{i}(t) = \beta s(t) i(t) - \gamma i(t)$ , $\dot{r}(t) = \gamma i(t)$ , where $\beta > 0$ is the infection parameter or transmission rate per contact , $\gamma > 0$ is the removal parameter giving the rate at which infectives become immune , and $1/\gamma$ is the mean infectious period . it is known that $s(t) + i(t) + r(t) = n$ is constant ; dividing s , i , and r by n , the variables can be regarded as population fractions . if death or isolation may occur , r(t) represents all removals from the population ( including immunes , deaths , and isolates ) . an important parameter is the relative removal rate ( 2 ) $c = \gamma/\beta$ . a major outbreak occurs only if the initial number of susceptibles $s(0) > c$ . some authors formulate an sir model with time delay by assuming that the force of infection at time t is given by ( 3 ) $\beta e^{-\mu\tau}\, s(t-\tau)\, i(t-\tau)$ , where $\mu > 0$ is the natural death rate and $\tau > 0$ is a fixed time during which the infectious agents develop in the vector , and it is only after that time that the infected vector can infect a susceptible human . others point out that it is more natural to assume that $\tau$ is a distributed parameter rather than a fixed time . hence , the force of infection ( 3 ) has to be substituted by ( 4 ) $\beta \int_0^h f(s)\, s(t-s)\, i(t-s)\, e^{-\mu s}\, ds$ , where f(s) , that is , the fraction of the vector population in which the time taken to become infectious is s , is assumed to be a nonnegative function on [ 0 , h ] . mathematically , $f : [0 , h] \to \mathbb{R}_{+0}$ is square integrable on [ 0 , h ] and satisfies ( 5 ) $\int_0^h f(s)\, ds = 1$ , $\int_0^h s f(s)\, ds < +\infty$ , where we assume that the parameter $\tau^* = \int_0^h s f(s)\, ds > 0$ is the average incubation time in the vector to become infectious . when natural birth , natural death , and the force of infection ( 4 ) are considered , we arrive at an sir model with distributed time delay : ( 6 ) $\dot{s}(t) = \mu - \beta \int_0^h f(s)\, s(t-s)\, i(t-s)\, e^{-\mu s}\, ds - \mu s(t)$ , $\dot{i}(t) = \beta \int_0^h f(s)\, s(t-s)\, i(t-s)\, e^{-\mu s}\, ds - \gamma i(t) - \mu i(t)$ , $\dot{r}(t) = \gamma i(t) - \mu r(t)$ . here , it is assumed that the natural birth rate is equal to the natural death rate and that all newborns are susceptible . $\mu > 0$ denotes the natural birth rate and death rate , and $1/\mu$ is the mean life expectancy . the total population size $n(t) = s(t) + i(t) + r(t)$ satisfies $\dot{n}(t) = \mu(1 - n(t))$ and $n(t) \to 1$ as $t \to \infty$ . hence , model ( 6 ) can be regarded as a model with a total constant population . obviously , $r_0 = \beta/(\mu + \gamma)$ is the reproduction number of the system ( 6 ) without time delay . that is , if $r_0 > 1$ , then on average , each infected individual infects more than one other member of the population and a self - sustaining group of infectious individuals will propagate .
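to make the classical model ( 1 ) and the outbreak condition $s(0) > c$ concrete , the following minimal sketch ( not part of the original paper ) integrates system ( 1 ) with a forward euler scheme ; the parameter values $\beta = 0.5$ , $\gamma = 0.2$ , the step size , and the initial state are illustrative assumptions .

```python
# minimal forward-euler sketch of the classical sir model (1);
# beta, gamma, step size, and the initial state are illustrative
# assumptions, not values taken from the paper.

def simulate_sir(beta=0.5, gamma=0.2, s0=0.99, i0=0.01, r0=0.0,
                 dt=0.01, t_end=100.0):
    s, i, r = s0, i0, r0
    t = 0.0
    trajectory = [(t, s, i, r)]
    while t < t_end:
        ds = -beta * s * i                # susceptibles lost to infection
        di = beta * s * i - gamma * i     # new infections minus removals
        dr = gamma * i                    # removals (recovered/immune)
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        t += dt
        trajectory.append((t, s, i, r))
    return trajectory

if __name__ == "__main__":
    beta, gamma = 0.5, 0.2
    c = gamma / beta                      # relative removal rate, eq. (2)
    print("relative removal rate c =", c)
    # a major outbreak requires s(0) > c; here s(0) = 0.99 > 0.4
    peak_i = max(i for _, _, i, _ in simulate_sir(beta, gamma))
    print("peak infective fraction :", round(peak_i, 3))
```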
for the sake of simplicity , we put the model equations ( 6 ) in dimensionless form by redefining a new nondimensional time $t' = (\mu + \gamma)t$ . this leads to the dimensionless equations ( 7 ) $\dot{s}(t) = \mu - r_0 \int_0^h f(s)\, s(t-s)\, i(t-s)\, e^{-\mu s}\, ds - \mu s(t)$ , $\dot{i}(t) = r_0 \int_0^h f(s)\, s(t-s)\, i(t-s)\, e^{-\mu s}\, ds - i(t)$ , $\dot{r}(t) = \gamma i(t) - \mu r(t)$ , where ( 8 ) $\mu := \mu/(\mu + \gamma)$ , $r_0 = \beta/(\mu + \gamma)$ , $h := (\mu + \gamma)h$ , and $\gamma := \gamma/(\mu + \gamma)$ are the dimensionless parameters ( with a slight abuse of notation , the rescaled quantities are denoted by the same symbols ) . for convenience , the rescaled model ( 7 ) is referred to below as system ( 9 ) . infectious diseases have a tremendous influence on human life . one can investigate under what conditions a given agent can invade a ( partially ) vaccinated population , that is , how large a fraction of the population we have to keep vaccinated in order to prevent the agent from establishing itself . however , in practical situations one usually has to start a vaccination campaign when the agent has become endemic . recently , a new strategy denominated pulse vaccination strategy ( pvs ) has been shown to be adequate against poliomyelitis and measles . a usual recommendation for measles immunization is to apply a first vaccination dose to all infants of 15 months of age and a second dose at six years . however , it was hypothesized that measles epidemics can be more efficiently controlled when the natural temporal process of the epidemics is antagonized by another temporal process , that is , by a vaccination effort that is pulsed in time rather than uniform and continuous . we call this policy pulse vaccination , and it was shown theoretically that if children aged one to seven years are immunized once every five years , that may suffice for preventing the epidemics . the strategy of pulse vaccination ( pvs ) consists of periodical repetitions of impulsive vaccinations in a population , on all the age cohorts [ 5 , 19 - 21 ] . at each vaccination time , a constant fraction of the susceptible population is vaccinated . some theoretical considerations , practical advantages , and examples of the pvs are presented in [ 5 , 21 - 23 ] . for example , some successes against poliomyelitis and measles have been attributed to repeated pvs . as indicated in the literature , models have clearly shown the advantages of a mass campaign approach in rapidly achieving high measles population immunity and interrupting measles virus circulation . further , we consider pvs in model ( 9 ) and assume that $\tau > 0$ denotes the period of pulsing and $\theta$ ( $0 < \theta < 1$ ) is the proportion of those vaccinated successfully . incorporating pulse vaccination , we propose an sir model with pulse vaccination and distributed time delay : ( 10 ) $\dot{s}(t) = \mu - r_0 \int_0^h f(s)\, s(t-s)\, i(t-s)\, e^{-\mu s}\, ds - \mu s(t)$ , $\dot{i}(t) = r_0 \int_0^h f(s)\, s(t-s)\, i(t-s)\, e^{-\mu s}\, ds - i(t)$ , $\dot{r}(t) = \gamma i(t) - \mu r(t)$ , $t \ne k\tau$ ; $s(t^+) = (1-\theta)s(t)$ , $i(t^+) = i(t)$ , $r(t^+) = r(t) + \theta s(t)$ , $t = k\tau$ . note that the variable r does not appear in the first and second equations of system ( 10 ) . this allows us to attack ( 10 ) by studying the subsystem ( 11 ) $\dot{s}(t) = \mu - r_0 \int_0^h f(s)\, s(t-s)\, i(t-s)\, e^{-\mu s}\, ds - \mu s(t)$ , $\dot{i}(t) = r_0 \int_0^h f(s)\, s(t-s)\, i(t-s)\, e^{-\mu s}\, ds - i(t)$ , $t \ne k\tau$ ; $s(t^+) = (1-\theta)s(t)$ , $i(t^+) = i(t)$ , $t = k\tau$ . the initial conditions for ( 11 ) are given in ( 12 ) as nonnegative continuous functions on $[-h , 0]$ . from biological considerations , we discuss system ( 11 ) in the closed set ( 13 ) $\Omega = \{ (s , i) \in \mathbb{R}_+^2 \mid 0 \le s , i \le 1 \}$ . it can be verified that $\Omega$ is positively invariant with respect to ( 11 ) , that is , any solution starting in $\Omega$ remains in $\Omega$ in the future . most of the research literature on epidemiologic models is established by odes , delayed odes , or impulsive odes [ 26 - 28 ] . however , the main purpose of this paper is to analyze the impulsive model with distributed time delay ( 11 ) and establish sufficient conditions so that the disease dies out .
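before the analysis , a short simulation sketch of the impulsive subsystem ( 11 ) may be helpful ; to stay compact it uses the no - delay limit $h \to 0$ , in which the force of infection reduces to $r_0 s(t) i(t)$ , and the values of $\mu$ , $r_0$ , $\theta$ , and $\tau$ are illustrative assumptions rather than values from the paper .

```python
# sketch of the impulsive system (11) in the no-delay limit (h -> 0);
# mu, r0, theta (pulse vaccination proportion), and tau (pulse period)
# are illustrative assumptions.

def simulate_pulsed_sir(mu=0.1, r0=3.0, theta=0.6, tau=2.0,
                        dt=1e-3, n_pulses=50, s_init=0.8, i_init=0.05):
    s, i = s_init, i_init
    t, next_pulse = 0.0, tau
    history = []
    while t < n_pulses * tau:
        ds = mu - r0 * s * i - mu * s     # birth, infection, natural death
        di = r0 * s * i - i               # in dimensionless time, gamma + mu = 1
        s, i = s + dt * ds, i + dt * di
        t += dt
        if t >= next_pulse:               # impulsive vaccination at t = k*tau
            s = (1.0 - theta) * s         # s(t+) = (1 - theta) s(t)
            next_pulse += tau
        history.append((t, s, i))
    return history

if __name__ == "__main__":
    final_t, final_s, final_i = simulate_pulsed_sir()[-1]
    print("infective fraction after 50 pulses:", round(final_i, 6))
```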
the second purpose of this paper is to investigate the role of distributed time delay in disease transmission and show that , under appropriate conditions , the disease is uniformly persistent , that is , there is a positive constant q ( independent of the choice of the solution ) such that $i(t) \ge q$ for sufficiently large t . in the following , we introduce some definitions and state two results which will be useful in subsequent sections . a solution $(\tilde{s}(t) , \tilde{i}(t))$ of system ( 11 ) is said to be globally attractive if every solution of system ( 11 ) tends to $(\tilde{s}(t) , \tilde{i}(t))$ as $t \to \infty$ . system ( 11 ) is said to be uniformly persistent if there is an $\varepsilon > 0$ ( independent of the initial conditions ) such that every solution $(s(t) , i(t))$ with initial conditions ( 12 ) of system ( 11 ) satisfies ( 14 ) $\liminf_{t \to \infty} s(t) \ge \varepsilon$ , $\liminf_{t \to \infty} i(t) \ge \varepsilon$ . system ( 11 ) is said to be permanent if there exists a compact region $\Omega_0 \subset \operatorname{int} \Omega$ such that every solution of system ( 11 ) with initial conditions ( 12 ) will eventually enter and remain in the region $\Omega_0$ . we now present a technical result ( lemma 1 ) . consider the following impulsive system : ( 15 ) $\dot{u}(t) = a - b u(t)$ , $t \ne k\tau$ ; $u(t^+) = (1-\theta)u(t)$ , $t = k\tau$ , where $a > 0$ , $b > 0$ , and $0 < \theta < 1$ . then there exists a unique positive periodic solution of system ( 15 ) , ( 16 ) $e(t) = a/b + (u^* - a/b)\, e^{-b(t-k\tau)}$ , $k\tau < t \le (k+1)\tau$ , which is globally asymptotically stable , where $u^* = (a/b)\,(1-\theta)(1-e^{-b\tau}) / (1-(1-\theta)e^{-b\tau})$ . to prove this , integrate and solve the first equation of system ( 15 ) between pulses : ( 17 ) $u(t) = a/b + (u(k\tau^+) - a/b)\, e^{-b(t-k\tau)}$ , $k\tau < t \le (k+1)\tau$ , where $u(k\tau^+)$ is the initial value at time $k\tau$ . using the second equation of system ( 15 ) , we deduce the stroboscopic map ( 18 ) $u((k+1)\tau^+) = (1-\theta)\,[\, a/b + (u(k\tau^+) - a/b)\, e^{-b\tau} \,] \equiv F(u(k\tau^+))$ , where $F(u) = (1-\theta)\,[\, a/b + (u - a/b)\, e^{-b\tau} \,]$ . it is easy to see that the map ( 18 ) has a unique positive equilibrium $u^* = (a/b)\,(1-\theta)(1-e^{-b\tau}) / (1-(1-\theta)e^{-b\tau})$ . since $F(u)$ is a straight line with slope less than 1 , we obtain that $u^*$ is globally asymptotically stable . this implies that the corresponding periodic solution of system ( 15 ) , ( 19 ) $e(t) = a/b + (u^* - a/b)\, e^{-b(t-k\tau)}$ , $k\tau < t \le (k+1)\tau$ , is globally asymptotically stable . a second result ( lemma 2 ) concerns the following equation : ( 20 ) $\dot{y}(t) = -a_1 y(t) + a_2 \int_0^h f(s)\, y(t-s)\, ds$ , where $a_1 , a_2 , h > 0$ and $f(s)$ satisfies ( 5 ) . then the trivial solution $y = 0$ of ( 20 ) is globally asymptotically stable if and only if $a_2 < a_1$ . in this section , we first demonstrate the existence of the infection - free periodic solution , in which infectious individuals are entirely absent from the population permanently , that is , $i(t) = 0$ for all $t \ge 0$ . under this condition , the growth of susceptible individuals must satisfy ( 21 ) $\dot{s}(t) = \mu - \mu s(t)$ , $t \ne k\tau$ ; $s(t^+) = (1-\theta)s(t)$ , $t = k\tau$ . we show below that the susceptible population s oscillates with period $\tau$ , in synchronization with the periodic pulse vaccination . according to lemma 1 , we know that the periodic solution of system ( 21 ) , ( 22 ) $s_e(t) = 1 - \theta e^{-\mu(t-k\tau)} / (1-(1-\theta)e^{-\mu\tau})$ , $k\tau < t \le (k+1)\tau$ , is globally asymptotically stable . theorem 1 : the infection - free periodic solution $(s_e(t) , 0)$ of system ( 11 ) is globally attractive provided that $r^* < 1$ , where ( 23 ) $r^* \equiv r_0\,(1-e^{-\mu\tau}) / (1-(1-\theta)e^{-\mu\tau})$ . the proof will be given in the appendix . according to theorem 1 , we can easily obtain the following results . if $r_0 \le 1$ , then the infection - free periodic solution $(s_e(t) , 0)$ is globally attractive . if $r_0 > 1$ , then the infection - free periodic solution $(s_e(t) , 0)$ is globally attractive provided that $\theta > \theta^*$ or $\tau < \tau^*$ . theorem 1 determines the global attractivity of ( 11 ) in $\Omega$ for the case $r^* < 1$ . its epidemiological implication is that the infectious population vanishes , so the disease will die out .
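the stroboscopic - map argument of lemma 1 can be checked numerically : iterating $F(u) = (1-\theta)[\, a/b + (u - a/b)e^{-b\tau} \,]$ should converge to the closed - form fixed point $u^*$ . in the sketch below the values of a , b , $\theta$ , and $\tau$ are illustrative ( for the disease - free equation ( 21 ) one would take $a = b = \mu$ ) .

```python
import math

# lemma 1 check: iterate the stroboscopic map F and compare with the
# closed-form fixed point u*; a, b, theta, tau are illustrative values.

def fixed_point(a, b, theta, tau):
    return (a / b) * (1 - theta) * (1 - math.exp(-b * tau)) / \
           (1 - (1 - theta) * math.exp(-b * tau))

def iterate_map(u0, a, b, theta, tau, n=50):
    u = u0
    for _ in range(n):                      # one application per pulse period
        u = (1 - theta) * (a / b + (u - a / b) * math.exp(-b * tau))
    return u

if __name__ == "__main__":
    a = b = 0.2                             # e.g. a = b = mu for equation (21)
    theta, tau = 0.5, 3.0
    print("closed-form u*      :", round(fixed_point(a, b, theta, tau), 6))
    print("map after 50 pulses :", round(iterate_map(0.9, a, b, theta, tau), 6))
```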
from corollaries 1 and 2 we know that , in order to successfully prevent the disease , the vaccination proportion $\theta$ should be large enough . this would lead to more difficulties and costs to implement vaccination for many people . in the following , we say the disease is endemic if the infectious population persists above a certain positive level for sufficiently large time . theorem 2 : suppose that $r_* > 1$ , where ( 24 ) $r_* \equiv r_0\, e^{-\mu h}\,(1-\theta)(1-e^{-\mu\tau}) / (1-(1-\theta)e^{-\mu\tau})$ . then there exists a positive constant q such that each positive solution $(s(t) , i(t))$ of system ( 11 ) satisfies ( 25 ) $i(t) \ge q$ for t large enough . the proof will be given in the appendix . from theorem 2 , we also easily obtain the following results . if $r_0 e^{-\mu h} > 1$ , the disease will be endemic provided that $\theta < \theta_*$ . if $r_0 e^{-\mu h}(1-\theta) > 1$ , then the disease will be endemic provided that $\tau > \tau_*$ . if $r_0 (1-\theta)(1-e^{-\mu\tau}) > 1-(1-\theta)e^{-\mu\tau}$ , then the disease will be endemic provided that $h < h^*$ . a further result states that if $r_* > 1$ , then system ( 11 ) is permanent ; the proof will be given in the appendix . in the following , we will study the influence of the pulse vaccination rate ( with $\theta$ ) , the period of pulsing ( with $\tau$ ) , and so on , on the system ( 11 ) by numerical analysis . from table 1 , we can observe that a large pulse vaccination rate or a short period of pulsing is a sufficient condition for the global attractivity of the infection - free periodic solution $(s_e(t) , 0)$ . from the last line of table 1 , we can also observe that when the pulse vaccination rate is very large , the epidemic disease cannot be permanent even though $h = 0$ . this implies that pulse vaccination has a determinant effect on the dynamical behavior of the model . two thresholds have been established , one for global stability of the infection - free solution and one for persistence of the endemic solution . from corollaries 2 and 3 , we obtain that if $r_0 e^{-\mu h} > 1$ , the disease dies out when $\theta > \theta^*$ whereas the disease persists when $\theta < \theta_*$ . there is a gap between $\theta_*$ and $\theta^*$ ( a numerical illustration of this gap is sketched after the observations below ) . the reason for this gap is that the thresholds are given in concrete terms in this paper . we think the sharp threshold condition exists , but it can presumably only be given in abstract terms . consider the linear dde ( 27 ) $\dot{i}(t) = r_0 \int_0^h f(s)\, s_e(t-s)\, i(t-s)\, e^{-\mu s}\, ds - i(t)$ , where $s_e(t)$ is the $\tau$-periodic disease - free state under the vaccination effort $\theta$ . the solutions of this linear equation are associated with a compact positive operator on $c([-h , 0])$ . the disease dies out if $r < 1$ and persists if $r > 1$ . in terms of the vaccination effort this means that , if $r_0 > 1$ , there is a $\hat{\theta} \in (0 , 1)$ such that the disease dies out if $\theta > \hat{\theta}$ and the disease persists if $\theta < \hat{\theta}$ . $\hat{\theta}$ is the unique vaccination proportion for which ( 27 ) has a $\tau$-periodic positive solution . the values $\theta_* < \theta^*$ given in the paper are lower and upper estimates of $\hat{\theta}$ . the spectral radius r of this operator and its threshold condition will be considered in our future research . moreover , according to theorems 1 and 2 , we can choose the vaccination period $\tau$ and increase the proportion $\theta$ of those vaccinated successfully such that $r^* < 1$ in order to prevent the epidemic disease from becoming endemic . ( i ) $r^*$ and $r_*$ are inversely proportional to the $\theta$ value and directly proportional to the $\tau$ value and the $r_0$ value , which implies that pulse vaccination measures the inhibition effect from the behavioral change of the susceptibles when they transfer to the infectious class ( i ) . ( ii ) $r^*$ is directly proportional to the $\mu$ value , which implies that the natural birth or death rate measures the inhibition effect from the behavioral change of the susceptible class ( with s ) when it moves into the infectious class ( i ) .
( iii ) $r_*$ is inversely proportional to the h value , which implies that the maximum infectious period of the disease measures the inhibition effect from the behavioral change of the susceptible class ( with s ) when it moves into the infectious class ( i ) . ( iv ) there is a value $\mu^*$ such that $r_*$ is directly proportional to $\mu$ when $\mu < \mu^*$ and is inversely proportional to $\mu$ when $\mu > \mu^*$ ; therefore a larger death rate is sufficient for the global attractivity of the infection - free periodic solution $(s_e(t) , 0)$ . in fact , we can calculate the derivative of $r_*$ with respect to $\mu$ : ( 28 ) $dr_*/d\mu = (1-\theta)\, e^{-\mu h}\, r_0\, g(\mu) / [\, 1-(1-\theta)e^{-\mu\tau} \,]^2$ , where $g(\mu) = \theta \tau e^{-\mu\tau} - h\,(1-e^{-\mu\tau})\,(1-(1-\theta)e^{-\mu\tau})$ . hence , there exists a $\mu^*$ such that $dr_*/d\mu > 0$ for $\mu \in (0 , \mu^*)$ , whereas $dr_*/d\mu < 0$ for $\mu \in (\mu^* , +\infty)$ . epidemic models with time delays have received much attention since delays can often cause complicated dynamical behaviors . delays in many models can destabilize an equilibrium and thus lead to periodic solutions by hopf bifurcation [ 30 - 32 ] . it is well known that periodic forcing can drive sir and seir models into a behavior which looks chaotic [ 33 , 34 ] . the impulsive model with distributed time delay ( 11 ) will be analyzed further , in particular paying attention to the following points : ( i ) the global asymptotic stability for the sir model with pulse vaccination and distributed time delay ; ( ii ) the behavior of the model when an insufficient proportion of people undergo vaccination : bifurcation and chaotic solutions ; ( iii ) whether periodic or pulse vaccination does a better job than constant vaccination at the same average value .
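as a rough numerical illustration of the gap $\theta_* < \theta^*$ mentioned above and of the non - monotone dependence of $r_*$ on $\mu$ in observation ( iv ) , the sketch below evaluates the closed forms of $r^*$ and $r_*$ exactly as reconstructed in this section ; all parameter values are illustrative assumptions , and the scan is not meant to reproduce table 1 .

```python
import math

# Thresholds as reconstructed in the text above (an editorial
# reconstruction of the garbled formulas, not a verified quote):
#   r^*  = r0 (1 - e^{-mu tau}) / (1 - (1 - theta) e^{-mu tau})
#   r_*  = r0 e^{-mu h} (1 - theta) (1 - e^{-mu tau}) / (1 - (1 - theta) e^{-mu tau})
# All parameter values below are illustrative assumptions.

def r_upper(r0, mu, tau, theta):
    return r0 * (1 - math.exp(-mu * tau)) / (1 - (1 - theta) * math.exp(-mu * tau))

def r_lower(r0, mu, tau, theta, h):
    return r_upper(r0, mu, tau, theta) * (1 - theta) * math.exp(-mu * h)

if __name__ == "__main__":
    r0, mu, tau, h = 3.0, 0.1, 2.0, 1.0

    # 1) scan the vaccination proportion theta: extinction threshold theta^*
    #    (smallest theta with r^* < 1) versus persistence threshold theta_*
    #    (largest theta with r_* > 1); a gap theta_* < theta^* should appear.
    thetas = [k / 1000 for k in range(1, 1000)]
    theta_upper = next(t for t in thetas if r_upper(r0, mu, tau, t) < 1)
    theta_lower = max(t for t in thetas if r_lower(r0, mu, tau, t, h) > 1)
    print("theta^* (die-out above) :", round(theta_upper, 3))
    print("theta_* (persist below) :", round(theta_lower, 3))

    # 2) scan the death rate mu: r_* first increases, then decreases,
    #    so the grid maximum approximates the turning point mu^*.
    theta = 0.5
    mus = [k / 1000 for k in range(1, 2001)]
    values = [r_lower(r0, m, tau, theta, h) for m in mus]
    k_best = values.index(max(values))
    print("approximate mu^*        :", mus[k_best])
    print("r_* at mu^*             :", round(values[k_best], 4))
```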
pulse vaccination , the repeated application of vaccine over a defined age range , is gaining prominence as an effective strategy for the elimination of infectious diseases . an sir epidemic model with pulse vaccination and distributed time delay is proposed in this paper . using the discrete dynamical system determined by the stroboscopic map , we obtain the exact infection - free periodic solution of the impulsive epidemic system and prove that the infection - free periodic solution is globally attractive if the vaccination rate is large enough . moreover , we show that the disease is uniformly persistent if the vaccination rate is less than some critical value . the permanence of the model is investigated analytically . our results indicate that a large pulse vaccination rate is sufficient for the eradication of the disease .
although subantral augmentation procedures ( sinus lifting ) can be considered an established and highly successful method to augment bone prior to implant insertion in the lateral maxilla , the biological mechanisms of subantral bone regeneration are still the focus of controversial scientific discussion . while in the eighties and nineties of the past century the discussion on graft material inserted subantrally focused on free autologous bone grafts , mainstream research has since turned to heterologous , allogenic , xenogenic , and synthetic bone graft materials . concerning free autologous bone grafts , most questions appear to have been answered : puranen proved that free autologous bone grafts stored in room air lose all biological activity within 90 minutes , and within 3 hours when kept in saline solution . bohr et al . investigated the osteogenic potency of freshly harvested autologous bone grafts in comparison to deproteinized cadaver bone : although they reported better reossification of the fresh free autologous transplants in the augmentation site in the first five days following surgery , the overall advantage of fresh autologous bone grafts was of no experimental or clinical significance after the standard healing period . the key role of the periosteum in bone healing and regeneration was proven in other disciplines of medicine quite some time ago [ 3 - 5 ] and was verified again only lately [ 6 , 7 ] , but it has been mostly neglected in dentistry and oral surgery . lundgren et al . in 2004 found sufficient bone regeneration after sinus lift surgery without the insertion of any bone graft material but with sufficient bleeding into the subantral space , yet left open the question of the regeneration mechanisms , which were then published by srouji et al . in 2009 [ 9 , 10 ] : the basal cell layer of the schneiderian membrane is periosteum , as is any other membrane covering vital bone such as the dura mater [ 5 , 6 ] , and it alone produces all necessary cellular and humoral factors for bone healing and bone regeneration such as bone morphogenetic protein 2 ( which has a key function in bone regeneration ) , osteonectin , osteocalcin , and osteopontin . vital periosteum alone initiates bone regeneration and production in the absence of any calcified structure or the presence of osteocytes , needing only a stable blood coagulum , as srouji et al . demonstrated . based on the knowledge of the superior atraumaticity of ultrasonic surgery [ 12 , 13 ] and of bone regeneration mechanisms under the schneiderian membrane and the mandatory atraumatic detachment of the sinus membrane from the antral bone , the authors ( tkw - research - group ) developed the minimally invasive transcrestal hydrodynamic ultrasonic cavitational sinus lift ( thucsl - intralift ) for piezotome i / ii / solo in cooperation with satelec - acteon / france to preserve the sinus membrane and its key function in the later bone regeneration [ 14 - 17 ] . the aim of the present study was to verify in vivo the postulated bone regeneration capabilities of the periosteum of the schneiderian membrane in patients treated with the thucsl - intralift by detecting the origins of the calcification process radiographically on a macroscopic level . within a multicenter study on the success rates of the thucsl - intralift using various radiopaque bone graft materials for subantral augmentation , 14 patients ( 8 female , 6 male ) at an average age of 52 yrs ( ± 16 yrs ) were selected with vastly pneumatized sinuses on the right side and remaining subantral alveolar crest heights of 4 mm or less .
instead of radiopaque bone graft material , only a radiolucent collagenous sponge of a stable and defined volume of approximately 2 ccm was inserted subantrally to radiographically follow up the origins of new bone growth and calcification processes in cbct scans and to indirectly verify the findings by lundgren et al . and srouji et al . sinus lift surgery on the right maxillary sinus was performed according to the strict thucsl - intralift protocol . the subantral alveolar crest was revealed by either a single or dual 6 mm diameter gingival punch or a 6 mm rectangular top crestal mucoperiosteal flap ( figure 1 ) . a pilot trepanation was performed with the diamond - coated tkw 1 ultrasonic tip for piezotome i / ii / solo ( satelec - acteon / france ) ( figure 2 ) . the sinus floor was opened with the diamond - coated atraumatic tkw 2 ultrasonic tip ( figure 3 ) , followed by the preparation of a receptacle for the elevation applicator tkw 5 with the flat diamond - coated tkw 4 ultrasonic tip ( figure 4 ) . the sinus membrane was then atraumatically separated from the antral bone with the hydrodynamic ultrasonic cavitational applicator tkw 5 ( figure 5 ) at a saline solution flow rate of 30 ml / min for 5 seconds , thus creating a subantral volume of 2,5 ccm under the elevated sinus membrane . ( although the differences in physics between a hydraulic and a hydrodynamic cavitational separation of the sinus membrane from the bone are significant , the basic process can be described as detaching and elevating the membrane with water pressure . ) once the elevated sinus membrane was verified to float free and unperforated / unruptured in the traditional unilateral valsalva check , a form - stable radiolucent collagenous sponge of approximately 2 ccm ( implante colageno / euro - klee / spain or parasorb - dentalkegel / resorba / germany ; figures 6(a ) - 6(e ) ) was inserted subantrally instead of radiopaque bone graft material to stabilize the elevated sinus membrane as well as the blood clot forming underneath and to maintain the elevation volume achieved with the thucsl - intralift procedure . patients were followed up for pain , swelling , and any sign of nightly bleeding out of the corresponding nostril and/or observation of blood - contaminated sputum and/or unusual sneezing attacks one , two , three , and seven days after surgery . implants were inserted into the augmented site 8 months after the thucsl - intralift , and prosthodontic treatment was completed at the latest 11 - 12 months after the initial intralift surgery . radiographic follow - up was performed 4 and 7 months following surgery with calibrated cbct scans , and the scans were modulated with sharpness , edge detection , and contrast filters as well as additive and subtractive grayscale enhancement filters for better distinction between soft and hard tissues . the calcification process was determined with grayscale match algorithms against the surrounding natural bone , in mm , in the augmentation area with the augmentation center as origin ( figure 7 , white arrow ) in transversal , sagittal , and horizontal scan slides with the calibrated cbct measurement tool . measurements were taken in mm , measuring the absolute height of the augmentation including the alveolar crest in transversal and sagittal slides ( figure 7 , yellow arrow ) and in the 3 , 6 , 9 , and 12 o'clock positions ( figure 7 , red reference cross ) centripetally from the outer line of the visible calcification to the center .
the maximum vertical height of the augmentation site was measured in the transversal and sagittal slides including the alveolar crest since a precise radiological separation of the newly formed bone from the remaining alveolar crest was not possible . all 14 thucsl intralift sinus lift procedures were conducted without perforation of the sinus membrane , and no postsurgical complications suspicious of sinus - membrane perforations occurred . the mean height of the alveolar crest in the 14 study patients was 3,2 mm ( st . dev . 0,8 mm ) at the entrance site of the intralift procedure measured intraoperatively . figure 8 shows a typical presurgical ( figure 8(a ) ) and immediate postsurgical ( figure 8(b ) ) panoramic x - ray of a female study patient . in most cases the inserted sponge was similar to a typical mucocele or was not detectable at all in panoramic x - rays . cbct scans after 4 months revealed an average achieved augmentation height of 16,3 mm in the transversal slides ( st . dev . 2,2 mm ) and 16,8 mm in the sagittal slide ( st.dev . 2,6 mm ) which was reduced to an average of 14,6 mm in the transversal slides and 14,7 mm in the sagittal slides after 7 months ( table 1 ) . the calcification process under the sinus - membrane radiologically showed an even centripetal circular distribution under the sinus - membrane and on the antral bone base with calcified tissue thicknesses of 3,6 mm to 4,3 mm ( excluding all measurements in 6 o'clock position since these measurements include the original alveolar crest height ) ( table 2 , figure 9 ) . after a healing period of 7 months all cbct scans showed a completion of the calcification process in the augmented subantral volume except some randomly distributed minor radiolucent spots / areas thus not allowing a precise distinction for measurement between noncalcified areas and calcified tissue ( table 2 , figure 10 ) . the mean loss of absolute augmentation height of calcified tissue in the cbct scans between 4 months and 7 months after surgery was 1,9 mm resulting in a final mean overall height of calcified tissue for implant insertion of 14,65 mm ( table 3 ) . after 4 month approximately a third of the subantral augmented volume in each measurement position ( 3 , 6 , 9 , 12 o'clock ) related to the total width / height / depth of the augmentation was presented as calcified tissue in the cbct scans ( table 3 , figure 9 ) . no precise distinction between calcified and noncalcified tissue could be made in the cbct scans after 7 months . all patients were successfully treated with two - stage dental implants from various manufacturers ( mostly q2-implant / trinon karlsruhe gmbh / germany , bego ri / bego / germany , sicace / sic - group / germany and others ) after 8 months and prosthetic suprastructure after 11 - 12 months ( figure 11 ) . figures 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , and 20 show two more typical cases of the present study . the radiological results of the present study confirm the experimental results published by ortak et al . [ 9 , 10 ] in vivo and suggest the schneiderian membrane to be the primary carrier of bone reformation in sinus lift procedures providing the necessary osteoprogenitor cells and humoral factors for bone regeneration [ 9 , 11 ] . 
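as a quick back - of - the - envelope check of the pooled mean values quoted above , the absolute and relative loss of augmentation height between the 4 - month and 7 - month cbct scans can be recomputed as follows ( the paper's own per - patient calculation may yield a slightly different percentage ) :

```python
# consistency check of the pooled means reported above; values in mm are
# taken from the text (transversal / sagittal means at 4 and 7 months).

mean_4m = (16.3 + 16.8) / 2     # mean augmentation height at 4 months
mean_7m = (14.6 + 14.7) / 2     # mean augmentation height at 7 months

absolute_loss = mean_4m - mean_7m
relative_loss = absolute_loss / mean_4m

print(f"mean height at 4 months : {mean_4m:.2f} mm")
print(f"mean height at 7 months : {mean_7m:.2f} mm")
print(f"absolute height loss    : {absolute_loss:.2f} mm")   # about 1.9 mm
print(f"relative height loss    : {relative_loss:.1%}")      # roughly 11-12 %
```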
nevertheless , a volume - stable subantral filling material is needed to stabilize the detached sinus membrane and the formation of a blood coagulum in the uppermost position to achieve sufficient augmentation heights and widths for implant insertion , but the success of sinus lift procedures does not seem to depend on the type of augmentation material ( autologous , heterologous , xenogenic , or synthetic calcified bone grafts ) used . the results of this study proved a form - stable collagenous sponge to be sufficient for stabilizing the sinus membrane above the achieved subantral augmentation volume as well as the resulting stable blood clot forming in the collagenous sponge . a general forensic drawback of using collagenous sponges in subantral augmentation procedures might be the inability to prove the successful sinus lift immediately after surgery , since in an opg a radiolucent sponge can hardly be detected ( figures 8(b ) and 17(b ) ) and can only be verified by the bone formation and calcification process after 3 - 4 months ( figures 9 and 18 ) or at the time of implant insertion . establishing such a subantral augmentation procedure would call for mandatory radiopaque collagenous sponges to enable radiographic verification , but it would possibly decrease expenses for augmentation materials . whether the reduction of absolute augmentation height by an average of 2 mm between the 4th and the 7th month after surgery could be prevented by the use of a calcified bone graft instead of a collagenous sponge still has to be investigated with a similar study protocol , but it has to be taken into consideration in daily routine to prevent ultimately insufficient augmentation heights when using radiopaque collagenous sponges . compared to the results of the surgical technique reported by lundgren et al . , the insertion of a collagenous sponge seems to have advantages concerning greater final augmentation heights . furthermore , the results of this study suggest that after an overall period of 7 months following minimally invasive transcrestal sinus lift , the calcification process of the augmented subantral site seems to be completed in all cases , even at augmentation volumes of 2 ccm . nevertheless , this healing duration might not be applicable to lateral - approach sinus lift procedures or cases of iatrogenic puncture or minor ruptures of the sinus membrane , due to a vaster traumatization of the sinus membrane and surgical site . this might result in a longer bone formation and calcification duration due to healing processes and primary repair of the traumatized tissue before the bone formation and calcification process starts . finally , the authors generally suggest relying more on the osteogenic potential of the periosteum [ 4 - 7 ] and on minimally invasive surgical techniques , not only in sinus lift procedures , than on grafting materials of various kinds .
introduction . sinus lift procedures are a commonly accepted method of bone augmentation in the lateral maxilla with clinically good results . nevertheless , the role of the schneiderian membrane in the bone - reformation process is discussed controversially . the aim of this study was to prove the key role of the sinus membrane in bone reformation in vivo . material and methods . 14 patients were treated with the minimally invasive thucsl - intralift , 2 ccm collagenous sponges were inserted subantrally , and the calcification process was followed up with cbct scans 4 and 7 months after surgery . results . an even and circular centripetal calcification under the sinus membrane and the antral floor was detected 4 months after surgery , covering 30% of the entire augmentation width / height / depth at each wall . the calcification process was completed in the entire augmentation volume after 7 months . a loss of approximately 13% of absolute augmentation height was detected between the 4th and 7th month . discussion . the results of this paper prove the key role of the sinus membrane as the main carrier of bone reformation after sinus lift procedures , as multiple experimental studies suggested . thus the importance of minimally invasive and rupture - free sinus lift procedures is underlined ; the success of the augmentation does not depend on the type of grafting material used .
Only 106,000 people signed up for coverage in the new Obamacare exchanges as of Nov. 2, administration officials announced in the first official update of enrollment since the law’s disastrous Oct. 1 launch. One-quarter of those people came through the flawed HealthCare.gov site, which is used by 36 states. The rest selected plans in the 14 states and the District of Columbia that are running their own health insurance exchanges, most of which are operating much better than the federal site. The numbers are the first official glimpse of the damage caused by the tech failures with the law’s enrollment portal HealthCare.gov, and they fall well short of the administration’s early goal of having about a half-million sign up in the first month. The administration is hoping to get 7 million people signed up in the exchanges and at least 8 million in Medicaid by the time the open enrollment season ends March 31. Even those first-year goals are arguably modest: There are about 47 million uninsured people in the country, although that includes undocumented immigrants ineligible for Obamacare coverage. The White House has been tamping down expectations for weeks, warning that they have always expected the first month of enrollment to be low, even before the gravity of the website problems became clear. Health and Human Services Secretary Kathleen Sebelius maintained that the system is beginning to work, albeit slowly. “Even with the issues we’ve had, the marketplace is working and people are enrolling,” she told reporters on a call. “We can reasonably expect that these numbers will grow substantially over the next five months,” she added. Republicans pointed out that the sign-up numbers are puny compared with the millions of Americans who are receiving cancellation notices and losing the health plans that the president promised they could keep. “Pretty stunning” is how House Majority Leader Eric Cantor (R-Va.) summed it up on CNN. “Just another day in a series of mess-ups in Obamacare.” Sen. Lamar Alexander (R-Tenn.) said everyone who had signed up nationwide could fit into the University of Tennessee stadium “and still have room for the ‘Pride of the Southland’ marching band.” He added, “That’s bad news for the 5 million Americans who’ve had their policies canceled by Obamacare.” The statistics HHS released Wednesday afternoon included people who had selected their health plan — but not necessarily paid for it, the final step in enrollment. People have until Dec. 15 to pay for coverage starting in January. An additional 392,000 people were deemed eligible for either Medicaid or the Children’s Health Insurance Program under the law. And in what advocates of the law pointed to as an encouraging sign, close to 1 million — 975,407 people — had made it through the process of applying and confirming their eligibility, although they had not yet selected plans. Anne Filipic, president of Enroll America, said in a statement the larger numbers of people in the pipeline “confirm what we have been expecting — the problems with the website have indeed slowed enrollment, but Americans are hungry for the affordable, comprehensive coverage.” The administration did not release details on the demographics of those who enrolled, what kind of insurance plan they chose or whether they qualified for subsidies.
A well-functioning insurance market needs healthy, younger participants to balance the costs of older and sicker ones, but the numbers released Wednesday gave no hint of early trends or eventual sustainability. Sebelius said such information would be included in future reports. Almost a third of all the exchange sign-ups came from California, a state the White House indicated will play a large role in shaping the law’s trajectory. Shortly after the HHS announcement, Covered California, the state exchange, updated its own tally, 60,000 as of Nov. 12. The federal count went through Nov. 2. Executive Director Peter Lee said the pace had doubled from October, with 2,000 now selecting a plan each day. The administration is placing a heavy emphasis on enrollments in California, Florida and Texas, which are home to one-third of the nation’s uninsured. Sign-ups in Florida and Texas, which are relying on the federal exchange, were much lower — 3,571 and 2,991, respectively. Jonathan Gruber, an MIT economics professor who advised policymakers crafting the Massachusetts health law and Obamacare, said he doesn’t think the enrollment numbers are “disappointing.” He noted that the Massachusetts exchange enrolled only 123 people in the first month. By that measure, he said, the federal government is doing well. “It’s just too early to say anything useful,” Gruber said on CNN. In Massachusetts, enrollment surged at the end of the enrollment period, which is probably what people will do on the Obamacare exchanges, Gruber said. “There’s no need to panic,” he said. Paige Winfield Cunningham contributed to this report. ||||| The administration says fewer than 27,000 people managed to enroll for health insurance last month in the 36 states relying on the problem-filled federal website for President Barack Obama's overhaul. The dismal numbers released Wednesday by federal health officials were even lower than estimates recently circulated. There was one bright spot: States running their own websites did better than the feds, reporting more than 79,000 sign-ups. Even so, total private insurance enrollment after the first month of the health care rollout was only about one-fifth what the administration had expected during that time period. Enrollment numbers totaled 106,185. A Sept. 5 administration estimate had projected that 494,620 people would enroll in the first month. Health and Human Services Secretary Kathleen Sebelius says she expects things to improve. THIS IS A BREAKING NEWS UPDATE. Check back soon for further information. AP's earlier story is below.
After weeks of criticism over the balky rollout of the health care sign-up website, the Obama administration is releasing figures on how many people have successfully enrolled through the new federal insurance exchanges. In advance, officials are lowering expectations for the numbers, given the widespread technical issues that have hampered the website since its Oct. 1 launch. The tightly held numbers being released Wednesday are believed to amount to only a small fraction of the nearly 500,000 initial sign-ups that administration officials had projected before the healthcare.gov site went live. The figures are expected to cover sign-ups that occurred in October, the first month of the six-month enrollment window. Officials say they expect enrollment to be heavier toward the end of that period. The announcement was coming as congressional investigators held hearings into the technical issues behind the dysfunctional rollout of the website. Rep. Darrell Issa, R-Calif., chairman of the House Oversight and Government Reform Committee, had a long list of issues: insufficient testing, possible security flaws, design shortcomings _ even allegations of political meddling. But there didn't seem to be a "smoking gun" behind the technical failure that has mortified supporters of President Barack Obama's health care law and cheered its opponents. The technology's cost to taxpayers: north of $600 million and climbing. It was the sixth major congressional hearing since computerized insurance markets went live Oct. 1 and millions of consumers encountered frozen screens. The oversight committee was sharply divided along partisan lines. "Established best practices of our government were not used in this case," said Issa. As a result, the law's promise of affordable health insurance "does not exist today in a meaningful way." Like other Republicans, Issa wants the law repealed, not fixed. Ranking Democrat Elijah Cummings of Maryland questioned Issa's fairness. Addressing Issa directly, Cummings said: "Over the past month, instead of working in a bipartisan manner to improve the website, you've politicized this issue by repeatedly making unfounded allegations." A key issue for Issa is why the administration required consumers to first create online accounts at HealthCare.gov before they could shop for health plans. That runs counter to the common e-commerce practice of allowing anonymous window-shopping. Outside experts say it increased the workload on a wobbly system. Issa and other Republicans suspect a political motive; Democrats say the explanation has to do with technical issues. The shopping feature had its own glitches and would have compounded system problems. The hearing featured Henry Chao, a little-known Medicare official, who had presented an overview of the enrollment system back in the spring, and commented then, "Let's just make sure it's not a third-world experience." Chao is deputy chief information officer for the Centers for Medicare and Medicaid Services, which also is leading the implementation of the Affordable Care Act. A career official who earlier helped implement the Medicare prescription drug benefit, he is widely seen as the operational official most knowledgeable about the health care law's online system. 
Chao's public comment in March at an insurance industry forum was taken as an edgy joke, and he later joined the parade of administration officials who assured lawmakers that everything was on track for a smooth launch, even as nonpartisan experts from the congressional Government Accountability Office warned that could not be taken for granted. Issa's investigators previously grilled Chao in a private session that lasted nine hours. Chao's name appears on a key Sept. 27 document authorizing the launch of the website despite incomplete security testing. But Issa's staff has released materials indicating that Chao was unaware of a memo earlier that month detailing unresolved security issues. On Wednesday, Chao testified that he is confident that the system is secure. In fact, he said he had recommended to his sister that she try it. Chao was also involved in the decision not to allow anonymous window-shopping, which is available on most e-commerce sites, including Medicare.gov. He testified Wednesday that the shopping feature "miserably" failed testing and would not have been a help to consumers. Chao said that shortly before the launch he directed a contractor to turn off the shopping feature, and instead apply resources to a more critical function. Issa has suggested a political calculation: The administration wanted to avoid consumers experiencing "sticker shock" over premiums, so it first required them to compute tax credits that work like a discount. The committee also heard from Todd Park, the White House chief technology officer. He testified that the website is getting better day by day, and week by week. It can now handle about 17,000 account registrations an hour. Page response times are under one second. But Park balked when Rep. Scott DesJarlais, R-Tenn., asked what letter grade he would give to the website rollout. "Obviously it's been really, really rocky," said Park. "It's what nobody wanted." Separately, the House Homeland Security Committee held its own hearing Wednesday. It gave Republicans a chance to criticize the health care law and the botched online rollout. But it resulted in few answers on the security of the website because officials testifying from Homeland Security said that wasn't their responsibility. While that department helps federal agencies like Health and Human Services comply with federal security standards, the law leaves many of the technical decisions up to the agencies themselves, the officials said. ___ Associated Press writer Anne Flaherty contributed to this report.
– The White House has been trying to dampen expectations about first-month enrollment in ObamaCare, and here's why: The administration said today that 26,794 people signed up in October via the federal HealthCare.gov website, reports AP. About 80,000 more signed up on state-run exchanges. Prior to the launch of ObamaCare, White House officials predicted about 500,000 would be on board by now—on both the federal and state exchanges—but the final tally came in at 106,185. That's a decent snapshot of just how badly the federal HealthCare.gov website has performed: It covers people in 36 states, while the 14 states running their own marketplaces managed to about triple its total. The number of enrollees reflects those who have selected a plan, not necessarily paid for it, notes Politico. The federal number is actually lower than what the Wall Street Journal reported previously, but at least it's up from the whopping first-day total of ... six.
psychotic disorders and schizophrenia are disabling conditions characterized by positive symptoms , negative symptoms , and cognitive impairments . most individuals with schizophrenia have a poor long - term outcome resulting in personal suffering and psychosocial disabilities including impaired interpersonal and vocational skills . antipsychotic drugs have been shown to be effective during the acute phase and for preventing relapse ( kennedy et al 2000 ; quraishi and david 2000 ) ; kissling ( 1994 ) argued that if patients complied fully with their medication , relapse rates would fall to about 15% ( almost 50% of patients relapse within a year of achieving remission ) . noncompliance is common throughout medicine , but some aspects of schizophrenia may make it particularly difficult for patients to accept their treatment . although antipsychotic medication decreases symptoms , other issues may temper these beneficial effects , resulting in poor compliance and high rates of relapse . these issues include the side effects of antipsychotic drugs ( such as weight gain and parkinsonism ) and poor functional recovery following psychotic episodes . the rate of noncompliance is difficult to assess , but it has been estimated at 25% - 41% ( jeste et al 2003 ) . hogarty and colleagues ( 1997 ) demonstrated that the relapse rate increases from 40% to 65% after one year and to 80% after two years if medication is discontinued . however , if psychosocial treatment is given in addition to maintenance drug treatment , the relapse rate may be up to 50% lower than that for drug treatment and standard care . factors associated with noncompliance include poor insight , negative attitude to medication , a history of noncompliance , substance abuse , short duration of illness , and a poor therapeutic alliance ( jeste et al 2003 ) . individuals suffering from psychosis tend to have impaired social functioning ( erickson et al 1989 ; grant et al 2001 ) , quality of life ( priebe et al 2000 ; addington et al 2003a ; addington 2003b ) , and cognitive and occupational functioning , even if they display clinical recovery ( penn et al 2005 ) . these rate - limiting factors should be considered as therapeutic targets for improving psychosocial outcome and increasing the readiness of people with schizophrenia to undergo rehabilitation . baseline attitudes to treatment and motivational and training variables also affect remediation ( fiszdon et al 2005 ) . several programs dealing with these aspects have been developed and the term compliance therapy is sometimes used ( kemp et al 1996 , 1998 ) . compliance therapy includes cognitive behavioural therapy , psychoeducation and remediation , with the aim of providing information about the illness and side effects , and improving cognitive and psychosocial functioning . the articles were selected from the medline and pubmed databases , using the following terms : ( 1 ) remediation , ( 2 ) rehabilitation , ( 3 ) psychosis , ( 4 ) antipsychotics , and ( 5 ) psychosocial treatment . relapse rates have been shown to be up to five times higher in noncompliant than in compliant subjects , resulting in significantly higher costs for these patients and for society ( robinson et al 1999 ) . several studies have investigated the possible relationship between compliance and type of antipsychotic medication ( kane et al 1985 ; lacro et al 2002 ) .
some have suggested that the use of atypical antipsychotic drugs may be associated with fewer side effects , better compliance , and a lower rate of relapse . the newer antipsychotic drugs efficiently attenuate the symptoms of schizophrenia without causing dysphoria and motor side effects . this higher tolerability and efficacy may lead to more positive attitudes to drug treatment in schizophrenic patients taking second - generation antipsychotic drugs than in patients taking first - generation antipsychotic drugs ( day et al 2005 ) . marder and colleagues ( 1996 ) showed in their review that periodic visits for blood monitoring , which are obligatory for patients on clozapine , improved the therapeutic alliance , making it easier for the clinician to monitor compliance . the question of the relationship between adverse effects and compliance with medication is highly complex . some studies have reported a significant relationship between various adverse effects and noncompliance , whereas others have not . according to kampman and colleagues ( 2002 ) , extra - pyramidal effects are those most frequently considered in patients ' predictions concerning their compliance ; anticholinergic and other adverse effects are also linked to compliance . according to freudenreich and colleagues ( 2004 ) , extra - pyramidal symptoms are not the primary factor determining attitudes to treatment . these authors studied the relationship between drug attitude inventory ( dai scale ) score and psychopathology , insight , extra - pyramidal symptoms , level of functioning , and type of antipsychotic drug in 81 schizophrenic outpatients . their results suggest that patients who recognize adverse effects of therapeutic drugs may actually have a more positive attitude . it has also been suggested that personal characteristics , such as attitude to health and illness , may be critical in determining attitudes to medication ( jeste et al 2003 ) . these factors might reduce the importance of medication - related side effects in determining treatment compliance . indeed , it has been shown that distress due to side effects is not necessarily linked to noncompliance in outpatients with schizophrenia ( weiden et al 1991 ) . in addition , no significant difference in compliance was found between depot , first - , and second - generation antipsychotics ( rittmannsberger et al 2004 ) . schizophrenic subjects do not perceive side effects and symptoms as independent ( carrick et al 2004 ) . in a review of the literature concerning side effects and compliance , lacro and colleagues ( 2002 ) reported that subjective response to medication affects both compliance and the risk of relapse . these results were confirmed by rettenbacher and colleagues ( 2004 ) , in both inpatients and outpatients . these authors demonstrated a positive correlation between compliance and the patients ' impression that the drug had a positive effect on the illness . they therefore stressed the need to include patients and their relatives in the treatment decision process , to increase treatment compliance . perkins and colleagues ( 1990 ) proposed a model in which compliance with treatment is determined by the patient 's assessment of the perceived benefits of treatment and the risks of illness versus the costs of treatment , including adverse effects . patients who believe that the risks of treatment outweigh its benefits are unlikely to comply , whereas patients who recognize the therapeutic effects of their medication may have a more positive attitude to treatment .
thus , side effects do not seem to be the main predictor of compliance : attention should therefore be paid to the patients subjective feelings about treatment , including the recognition of positive and negative therapeutic effects in particular . pharmacological treatments are the first - line treatment for schizophrenia , but adjuvant treatments are also required to achieve functional recovery or to prevent relapse because antipsychotic drugs may not be sufficiently effective and noncompliance is a common problem ( ratakonda et al 1997 ) . psychotic disorders and schizophrenia should therefore be treated with a combination of drugs , psychological treatment and the rehabilitation of cognitive disorders and social skills . several approaches have been developed including supportive therapy , integrated psychological treatment and social skills training . interest in psychoeducation and remediation has recently increased ( spaulding et al 1999 ; wikes et al 1999 ; penads et al 2002 ; addington et al 2004 ; byerly et al 2005 ) . educational interventions aim to provide patients with information about their illness and medication , with a view to increasing their understanding and promoting compliance . penads and colleagues ( 2002 ) showed that clinically orientated cognitive rehabilitation treatments seem to improve not only cognitive functioning and other functional aspects related to the illness . they compared 24 schizophrenia patients with cognitive impairment and 10 schizophrenia patients without cognitive impairment on integrated psychological treatment . some studies have reported a positive correlation between executive functions , as evaluated with the wisconsin card sorting test and social competence ( spaulding et al 1999 ) , or between verbal memory performance and psychosocial skill acquisition ( spaulding et al 1999 ; wikes et al 1999 ) . in 1996 , marder and colleagues randomly assigned eighty patients with schizophrenia to two groups , receiving either social skills training or supportive group therapy . significant main effects were identified , showing that social skills training was significantly more effective than supportive group therapy and significant interactions between psychosocial treatment and drug treatment were identified . however , the improvements observed were modest and confined to certain subgroups of patients . mcpherson and colleagues ( 1996 ) compared one educational session with three educational sessions . they found that both regimes improved the patients knowledge about their medication but that three sessions of education gave significantly better results than one educational session during follow - up . four studies by addington and colleagues ( 2001 , 2003a , 2003b , 2004 ) examined the results of the calgary early psychosis program ( offering a wide range of psychosocial interventions targeting the family , drug therapy , social skills ) in patients with nonaffective first - episode psychosis ( examining social functioning and quality of life over the course of one year ) . however , only one of these studies reported a better quality of life and social functioning in patients receiving such interventions ( addington et al 2003b ) . one study reported a decrease in the use hallucinogens , cannabis and alcohol in heavy users ( addington et al 2001 ) and another reported improvements in depression ( addington et al 2003a ) . 
computer - assisted cognitive enhancement therapy has been shown to modify cognitive style and social cognition in 121 schizophrenia patients ( hogarty et al 2004 ) . the observed relapse rate was low in this study ( 10% after two years ) , and was significantly lower in the subgroup of patients with an iq of 80 or higher . finally , byerly and colleagues ( 2005 ) examined the effect of a cognitive and psycho - educational approach in an open trial including 30 subjects with schizophrenia and schizoaffective disorders . symptoms , insight , and attitude to medication did not change significantly during the study . most studies have demonstrated that cognitive deficits and related behavior are improved in patients suffering from schizophrenia when sufficient rehabilitation is provided ( spaulding et al 1999 ; wikes et al 1999 ; penades et al 2002 ; addington et al 2004 ) . recommendations for a specific psychosocial intervention in schizophrenia are probably best made on the basis of patient characteristics : intelligence , duration of illness , and phase of illness ( hogarty et al 2004 ) . a meta - analysis of randomised controlled trials of social skills training and cognitive remediation provided no clear evidence of any benefits of social skills training on global adjustment , relapse rate , social functioning , quality of life or treatment compliance ( pilling et al 2002 ) . most studies on cbt in schizophrenia have assessed the efficacy of this approach and its effects on the symptoms of schizophrenia ( see turkington et al 2006 ) . however , few studies have tested whether cbt is more beneficial than treatment as usual ( tau ) in terms of relapse and rehospitalization rates . some studies have shown cbt to be of benefit in the treatment of positive ( tarrier et al 1998 ) and negative schizophrenia symptoms ( sensky et al 2000 ) . a prospective , multicentre , randomised controlled trial , with rater blinding and an 18-month follow - up period , was conducted by tarrier and colleagues ( 1998 ) . in this study , cbt was found to be significantly more effective than tau for attenuating symptoms and reducing relapse and rehospitalization rates . other studies have also reported cbt to be significantly more effective than tau in psychotic subjects suffering from an acute episode ( lewis et al 2002 ; tarrier et al 2004 ; startup et al 2004 ) or chronic illness ( turkington et al 2002 ; durham et al 2003 ; rector et al 2003 ; trower et al 2004 ) . similar improvements were also observed when the patients insight was assessed ( rathod et al 2005 ; valmaggia et al 2005 ) . the patients insight into compliance and its implications was significantly better in the cbt group than in the tau group , but this difference was not maintained at follow - up ( rathod et al 2005 ; valmaggia et al 2005 ) . functional cbt ( fcbt ) has recently been developed as a novel approach for the treatment of psychotic symptoms . this technique was developed to extend the effects of cbt beyond symptom reduction by linking symptom - focused interventions to functional goals . the therapeutic alliance and the patients motivation are thought to be improved by linking interventions to life goals . in a pilot study , carter and colleagues ( 2005 ) compared fcbt with psychoeducation ( pe ) in 30 outpatients with schizophrenia . both treatments consisted of weekly one - hour individual sessions for a total of 16 weeks .
fcbt was significantly associated with greater attenuation of positive symptoms and improvements in functioning ( 60% for fcbt versus 31% for pe ) . the cbt group had significantly lower rehospitalization rates and higher levels of compliance with medication , persisting for more than two years . finally , zimmermann and colleagues ( 2005 ) performed a meta - analysis on the efficacy of cbt in schizophrenia . this meta - analysis supported the general conclusion that cbt is a promising approach for the adjuvant treatment of positive symptoms in schizophrenia . moreover , the therapeutic effects of cbt persist during follow - up , suggesting that cbt has long - term effects . similar conclusions were drawn by butler and colleagues ( 2006 ) in a review of meta - analyses . a large effect on the decrease in psychotic symptoms has been found , and long - term follow - up has shown the maintenance of gains and even an increase in their magnitude . however , both groups of authors highlighted a number of variables that have not been specifically examined , such as therapeutic alliance and neuropsychological deficits . personal therapy has a pervasive effect on social adjustment , which continues to improve for three years after discharge ( hogarty et al 1997b ) . however , personal therapy increases the rate of psychotic relapse for independent patients living away from their families ( hogarty et al 1997a ) . the intervention of the family has also been identified as important for preventing relapse and readmission ( pilling et al 2002 ) . according to kemp and colleagues ( 1996 , 1998 ) , three dimensions define insight : acknowledgment of the psychiatric disease , ability to recognize psychiatric symptoms , and compliance with treatment . the lack of insight , or unawareness of illness , in people with schizophrenia has been recognized as a medical condition : anosognosia . a dutch study has indicated that 80% of schizophrenic patients are aware of their diagnosis ( van meer et al 1997 ) . only 20% of these subjects sought this information from their psychiatrist ; the others received this information from their doctor without asking for it . in a french study , more than 60% of patients declared that they knew the name of their illness and were able to talk about schizophrenic or psychotic disorders ( ferreri et al 2000 ) . wirshing and colleagues ( 2002 ) showed that only 10% of people suffering from schizophrenia were able to understand this information from their first interview with the psychiatrist . a second explanation from the doctor was required in 53% of the cases . this study showed that the level of comprehension is correlated with the conceptual disorganization item of the bprs scale . several specialists regard the question of insight as a major factor enabling schizophrenia patients to take an active role in managing their symptoms and problems . previous studies focusing on insight or self - awareness in schizophrenia have suggested that this cognitive dimension may have nosological value ( rittmannsberger et al 2004 ) . the results obtained suggest that severe self - awareness deficits are a prevalent feature of schizophrenia ( smith et al 2004 ) . the lack of insight of schizophrenia patients is an important clinical issue . in a reference study of about 35 male forensic patients suffering from chronic schizophrenia , only 51% believed that they were suffering from a mental disorder ( goodman et al 2005 ) .
in this study , a similar proportion reported awareness of a need for medication and correctly attributed symptoms to illness . this study also showed that poorer insight was correlated with a higher frequency of violent events . amador and colleagues ( 1994 ) suggested that the lack of insight has two components : unawareness of illness and incorrect attributions of the causes of illness . insight into illness and greater recognition of symptoms , severity of illness and functioning are predictors of a more favorable outcome in schizophrenia . symptom awareness deficits are common in schizophrenia and have been associated with poor treatment compliance ( davis et al 2004 ) . several studies have reported that individuals with severe negative symptoms tend to have the poorest insight ( amador et al 1994 ; collins et al 1997 ; schwartz et al 1998 ; carroll et al 1999 ) . in all these studies , impaired insight was considered to be an important factor contributing to poor treatment response and outcome in schizophrenia . droulout and colleagues ( 2003 ) studied the relationships between insight and compliance with medication in subjects with psychosis . they demonstrated that compliance with medication is associated with the level of insight , independently of the patients other demographic and clinical characteristics . this association between low - level insight and poor compliance with medication has been confirmed in several studies . the question of the quality of life of individuals suffering from schizophrenia remains a little - studied issue . awad and colleagues ( 2004 ) developed a conceptual model , suggesting that the major determinants of quality of life in schizophrenia are symptom severity , level of psychosocial functioning , and the presence of medication side effects . this model also suggests that quality of life may be influenced by the individual s subjective response to neuroleptic medication . some previous reports have suggested that insight has a major impact on quality of life scores ( atkinson et al 1997 ) . the evaluation of quality of life made by individuals with schizophrenia may be influenced by the presence of psychotic symptoms and by adaptation to the adverse social circumstances that they frequently experience . in several epidemiological studies , schizophrenic patients with poor insight , particularly those who displayed a lack of awareness of the consequences of the illness , were found to be more socially isolated and to have poorer psychosocial functioning ( amador et al 1994 ) . according to browne and colleagues ( 2005 ) , there is no significant relationship between quality of life and the level of insight . these authors reported a direct link between the development of treatment strategies to alleviate neuroleptic - induced dysphoria and the benefits of rehabilitation programmes for improving quality of life . in everyday life , the question of quality of life is associated with the problem of comorbidity . for example , nicotine problems are very frequently diagnosed by psychiatrists in people suffering from schizophrenia ( montoya et al 2005 ) . patients with such nicotine problems display poorer treatment compliance than their counterparts without such problems ( hudson et al 2004 ) . in a sample of 1843 patients followed by psychiatrists , it was found that psychiatric patients who smoke have more clinical and psychosocial stressors and more severe psychiatric problems than those who do not smoke ( montoya et al 2005 ) .
little is known about the extent to which patients suffering from schizophrenia are preoccupied by their health and how often they request assistance to give up smoking . in many studies , the therapeutic alliance has been associated with compliance ( fenton et al 1997 ; kampman et al 1999 ; lacro et al 2002 ; day et al 2005 ) . more and more studies are now examining the subjective reasons why patients are willing or reluctant to take medication . according to loffler et al ( 2003 ) , the quality of relationships with clinicians during acute admissions appears to be a major determinant of patients attitudes to treatment and compliance with medication . they assessed attitudes to treatment and self - reported compliance with medication in 28 inpatients and showed that a poor relationship with the prescriber , a feeling of being coerced during admission and a low level of insight were predictive of a negative attitude to treatment . similarly , two other studies have shown that feelings of coercion were associated with a tendency to reject psychiatric services ( rogers et al 1993 ; lidz et al 2000 ) . in a study of compliance in outpatients , rittmannsberger et al ( 2004 ) found that regular visits to a psychiatrist were correlated with good compliance . according to these authors , not visiting a psychiatrist could be seen as just another aspect of noncompliance . they also suggested that visiting a psychiatrist may protect against noncompliance . davis et al ( 2004 ) showed , in a sample of 24 patients with schizophrenia spectrum disorders , that poorer performance in verbal memory tests was significantly related to client reporting of a strong alliance , whereas better performance in visual spatial reasoning tests was significantly related to therapist reporting of a strong alliance . published studies on compliance have highlighted the importance of assessing the factors influencing compliance at an early stage of the disease process . side effects of medication and the patient s awareness of their illness are major issues in the treatment of psychotic disorders because of the high rate of relapse . the use of atypical antipsychotic drugs , in addition to reducing schizophrenic symptoms , may also be associated with fewer side effects . the higher tolerability and efficacy of these drugs may lead to more positive attitudes to drug treatment in schizophrenic patients taking second - generation antipsychotic drugs . psychosocial interventions , cognitive remediation and psychotherapy have all been proposed as adjuvant treatments for increasing compliance , but the most robust results have been achieved with cognitive behavioral therapy . thus , reducing the side effects of antipsychotic medication , combined with psychological interventions , seems to be a major challenge in efforts to improve compliance .
compliance and relapse are major issues in the treatment of psychotic disorders . about 50% of subjects with schizophrenia do not comply with treatment and relapse rates of 65% are reported after one year and 80% after two years . drug treatments are effective against psychotic symptoms , but can not promote functional recovery or prevent relapses when prescribed alone . the factors influencing compliance include side effects and the patients awareness of their illness . psychosocial interventions , cognitive remediation and psychotherapy have been proposed as adjuvant treatments to increase compliance and to decrease the rate of relapse . most of these interventions have been shown to increase compliance and to decrease the rate of relapse , but the most robust results have been achieved with cognitive behavioral therapy .
adolescent pregnancy is a major public health problem in many developing countries . in a recent multicountry survey conducted by the world health organization , the rate of thai adolescent births was 117 per 1,000 deliveries.1 previous studies among pregnant adolescents have reported an increased risk of adverse maternal and perinatal outcomes , including maternal anemia , preterm delivery , low birth weight , and rate of neonatal admission to an intensive care unit.25 additionally , a substantial number of pregnancies occurring in adolescents are unintended and might be terminated ; these terminations may result in serious morbidity and mortality.6 thus , adolescent pregnancy not only increases the risk of adverse pregnancy outcomes , but also negatively impacts the quality of life of the parents and infants . designing and implementing effective contraception services for reducing adolescent pregnancy is therefore of the utmost importance . the pattern of contraceptive practice can be situation - specific , depending on the quality of medical services , the level of functional health literacy , and women s social and cultural backgrounds.79 it is therefore important to obtain area - specific data regarding contraceptive practices and pregnancy intendedness . accordingly , this study was conducted to assess pregnancy intendedness and previous contraceptive practice among thai pregnant adolescents . this descriptive study was approved by the research ethics committee of the faculty of medicine , khon kaen university , khon kaen , thailand . pregnant women between 15 and 19 years old attending the antenatal clinic at srinagarind hospital and the khon kaen branch of the planned parenthood association of thailand were invited for study participation . as no study has been conducted to determine the prevalence of unintended pregnancy among thai pregnant adolescents , we calculated a sample size at 50% prevalence of unintended pregnancy , which represents the largest sample size required . at a precision level of 10% and a 95% confidence interval ( ci ) , the estimated number of pregnant adolescents required was at least 96 . face - to - face interviews by trained , female nurses using standardized questionnaires were carried out . detailed information elicited from participants included baseline characteristics , previous contraceptive use , source of contraceptive information , and their retrospective intention to be pregnant . pregnancy in this study was considered to be unintended when it was reported to have been either mistimed ( reported having wanted the pregnancy later ) or unwanted ( had not wanted to become pregnant then or later ) . all participants were asked whether they were using any type of contraceptive method prior to this pregnancy and , if so , which method / methods . extremely young maternal age was defined as 16 years of age or younger.4 statistical analysis was carried out with spss software ( ibm corporation , armonk , ny , usa ) . data were summarized as number ( percentage ) or mean ± standard deviation when appropriate . on the basis of univariate analysis , variables potentially associated with unintended pregnancy and non - use of contraception including age , extremely young age versus older , gravidity ( primigravidas versus multigravidas ) , and educational status ( being in school versus having completed their education ) were assessed .
these variables were further included ( if p<0.20 ) in a logistic regression analysis to determine which , if any , were jointly important in predicting unintended pregnancy and no contraceptive use prior to conception . independent variables were considered significant if their effects on unintended pregnancy and non - use of contraception were statistically significant at the 95% level of significance . during the study period , 200 participants were enrolled . mean age ( ± standard deviation ) was 17.2 ± 1.2 years . median age at first sexual intercourse was 16.0 years ( interquartile range , 15 - 17 years ) . eighteen ( 9.0% ) had first sexual intercourse at an age of 13 years or younger . the levels of educational attainment among the remaining 71 participants were primary school ( 20 ) , and high school or college ( 51 ) . table 1 displays the baseline characteristics , previous contraceptive practices , and sources of contraceptive information . seventy - five ( 37.5% ; 95% ci , 30.8 - 44.6 ) of the participants had never used any contraceptive methods . of the 125 participants who had ever used contraception , regular use of contraceptives was reported in only 21 ( 16.8% ) . only two participants ( 1.0% ) had ever used an intrauterine device or subdermal implant . the participants age was a significant independent predictor of never using contraceptives . extremely young age was an independent factor predicting never using contraception ( odds ratio [ or ] , 5.57 ; 95% ci , 2.95 - 10.53 ) ( table 2 ) . approximately two - thirds ( 69.0% ; 95% ci , 62.1 - 75.3 ) of participants did not use contraception before getting this pregnancy . extremely young adolescents were 6.4-times ( 95% ci , 2.94 - 14.04 ) more likely than older participants to have not used contraception prior to getting this pregnancy ( table 3 ) . one hundred and thirty - one ( 65.5% ; 95% ci , 58.5 - 72.1 ) participants did not think they were likely to become pregnant . extremely young age was a significant independent predictor of not perceiving risk of getting this pregnancy ( or , 2.37 ; 95% ci , 1.26 - 4.44 ) ( table 4 ) . one hundred and thirty - two ( 66.0% ; 95% ci , 58.9 - 72.5 ) participants declared that they had not intended to become pregnant . significant independent factors predicting unintended pregnancy were educational status and participants age ( table 5 ) . the results of this study indicate an underuse and inappropriate use of contraception among this study population . approximately one - third ( 37.5% ) of participants never used any contraceptive methods . additionally , among participants who reported ever using contraception , effective methods such as long - acting reversible contraception ( larc ) were rarely used . therefore , non - use , inconsistent use , and use of methods with high typical use failure rates were fundamental reasons leading to adolescent pregnancies in this study . in this study , the only significant independent factor associated with whether pregnant adolescents had ever used contraceptives was participants age . when adjusted for gravidity and educational status , extremely young pregnant adolescents were approximately six - times as likely as adolescents of 17 years or older to report having never used any type of contraception ( 95% ci , 2.95 - 10.53 ) . in addition , a significantly higher proportion of pregnant adolescents in the extremely young age group reported not using any contraception prior to becoming pregnant , compared with older participants ( adjusted or , 6.42 ; 95% ci , 2.94 - 14.04 ) .
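the sample - size estimate and the odds ratios quoted above follow from standard closed - form expressions . the sketch below is a minimal illustration , not the study 's actual analysis : the 2x2 counts are hypothetical ( the real cross - tabulations sit in tables 2 - 5 , which are not reproduced here ) , and the adjusted odds ratios in the text come from logistic regression rather than from a single 2x2 table .

```python
# minimal sketch: closed-form sample size for a proportion, and a crude odds
# ratio with a Woolf (log) 95% confidence interval; the 2x2 counts below are
# hypothetical, used only to show the arithmetic.
import math

def sample_size_proportion(p=0.5, precision=0.10, z=1.96):
    """n = z^2 * p * (1 - p) / d^2 ; p = 0.5 gives the largest required n."""
    return z ** 2 * p * (1.0 - p) / precision ** 2

def odds_ratio_ci(a, b, c, d, z=1.96):
    """odds ratio and Woolf 95% CI for a 2x2 table:
                 exposed  unexposed
    outcome         a         b
    no outcome      c         d
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

print(round(sample_size_proportion(), 1))  # ~96.0, matching "at least 96" above

# hypothetical counts, not taken from the study's tables
or_, (lo, hi) = odds_ratio_ci(a=40, b=35, c=20, d=105)
print(f"crude OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

an adjusted analysis of the kind reported above would instead regress the outcome on age group , gravidity and educational status simultaneously and exponentiate the fitted coefficients to obtain adjusted odds ratios .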
previous studies reported that one of the common reasons leading to inappropriate use of contraception was unawareness of the risk of pregnancy.10,11 in this study , a remarkably high proportion of participants perceived themselves at low risk of becoming pregnant , particularly among extremely young adolescents ( 76.5% ) . thus , a high rate of non - use , inconsistent use , and use of contraceptives with high failure rates noted in the study may be anticipated , particularly among extremely young adolescents . the results of this analysis , together with the previously reported findings , indicated that the majority of pregnancies occurring among adolescents were unintended.1214 in this study , the educational status and age of extremely young adolescents were significant independent contributors to a high rate of unintended pregnancy . becoming pregnant while still in school can lead to a participant being expelled or dropping out . unsurprisingly , participants who were still in school were approximately seven - times more likely than those who had completed their education to have an unintended pregnancy ( 85.6% versus 37.8% ; adjusted or , 6.17 ; 95% ci , 3.27 - 13.75 ) . in the present study , extremely young pregnant adolescents carried a significantly higher risk of unintended pregnancy than older adolescents ( 90.1% versus 49.6% ; adjusted or , 5.76 ; 95% ci , 2.42 - 13.70 ) . the high rate of unintended pregnancy among extremely young adolescents might be due to the fact that younger participants were less likely to be supported and cared for by their families than older adolescents . larc was rarely used among adolescents and young women.15 only 1% of participants in this study had ever used intrauterine devices or subdermal implants . in a large prospective cohort study promoting the use of larc as a means of reducing unintended pregnancy , participants younger than 21 years who used oral combined pills , patches , or rings were almost twice as likely as older participants to experience unintended pregnancy.16 the effectiveness of larc is high among adolescents.16,17 additionally , rates of early discontinuation of larc among adolescents were notably low.18 thus , it is evident that larc is the first - line contraceptive option for adolescents and young women . some previous studies showed that exposure to health providers had no significant impact on women s knowledge of contraceptive effectiveness and patterns of practice.7,9 communication failure , poor contraceptive counseling skills , and the incomplete knowledge of counselors about updated contraceptive capabilities have been highlighted as potential contributors to the use of a less effective contraceptive method or to inappropriate use.7,9 in this study , the majority of participants ( 58.0% ) received contraceptive information from their teachers and health care providers , which might generally be considered reliable sources of contraceptive information . the underuse of effective contraceptive methods among the current participants , however , still existed . the exact nature of this unfavorable finding requires further evaluation in research if effective prevention of adolescent pregnancy is to be established . this study has several limitations . firstly , baseline knowledge , attitudes , beliefs about contraceptive use , and underlying reasons for underuse of larc among the participants were not evaluated in this study .
secondly , some information that might impact contraceptive practice , ie , male partner s characteristics , socioeconomic characteristics of the couple , detailed characteristics of counselors , and accessibility to contraceptive and abortion services , was not available . finally , this study was conducted among pregnant adolescents at antenatal clinics , and thus generalizability of the study results to other population settings should be viewed with caution . however , the results of this study highlight the magnitude of unintended pregnancy and its associated factors among pregnant adolescents in thailand , a region with a high prevalence of unintended pregnancy . in conclusion , non - use and use of contraceptive methods with high failure rates ( eg , combined oral pills and condoms ) were major reasons leading to adolescent pregnancies in this study .
background : adolescent pregnancy is a major health problem in many developing countries . objective : to assess contraceptive practices and pregnancy intendedness in pregnant adolescents . materials and methods : this study was prospectively conducted from september 2013 to june 2014 . all consecutive pregnant women between 15 and 19 years old attending the antenatal clinic at srinagarind hospital and the khon kaen branch of the planned parenthood association of thailand were invited for participation . face - to - face interviews by trained interviewers using standardized questionnaires were carried out . logistic regression was used to determine an adjusted odds ratio ( aor ) and 95% confidence interval ( ci ) of independent predictors . results : two hundred participants were enrolled . mean age was 17.2 years . one hundred and eighteen ( 59.0% ) were currently in school . seventy - five ( 37.5% ) participants had never used any contraceptive methods . of the 125 participants who had ever used contraception , regular use of contraceptives was reported in only 21 participants ( 16.8% ) . only two participants ( 1.0% ) had ever used an intrauterine device or implant . participants age was a significant independent factor associated with non - use of contraceptives ( aor , 6.42 ; 95% ci , 2.94 - 14.04 ) . of the 200 participants , 132 ( 66.0% ) declared that the pregnancy was unintended . significant independent factors predicting unintended pregnancy were educational status ( aor , 6.17 ; 95% ci , 3.27 - 13.75 ) and participants age ( aor , 5.76 ; 95% ci , 2.42 - 13.70 ) . conclusion : non - use and use of contraceptive methods with high failure rates were major reasons leading to adolescent pregnancies . participants age was an independent factor predicting non - use of contraceptives . educational status and age of the participants were significant factors predicting unintended pregnancy .
SECTION 1. SHORT TITLE. This Act may be cited as the ``International Arbitration Enforcement Act of 1999''. SEC. 2. FINDINGS. The Congress makes the following findings: (1) Arbitration is an efficient and flexible dispute resolution mechanism of great benefit to United States persons doing business internationally. (2) In some countries, particularly those with undeveloped or inconsistent judicial systems, international arbitration may be the only fair and reliable dispute resolution mechanism available to United States persons. (3) The usefulness of international arbitration depends in large measure on the commitment of foreign states to enforce foreign arbitral awards pursuant to their accession to, and observance of, the Convention on the Recognition and Enforcement of Foreign Arbitral Awards. (4) United States persons are often without remedies when foreign states violate the Convention by refusing to enforce foreign arbitral awards or by otherwise impairing the ability to collect the awards by improperly delaying their enforcement. (5) It is in the interest of the United States to maintain the reliability of international arbitration, to promote the observance of the Convention, and to protect United States persons from economic injury resulting from violations of the Convention by foreign states. (6) Similarly, it would be unjust to permit a foreign state to be shielded from liability in the United States for the damages suffered by a United States person abroad resulting from a violation of the Convention by the foreign state. (7) It is therefore in the national interest to create a judicial remedy in favor of United States persons injured as a result of a violation of the Convention by a foreign state and to facilitate the execution of any judgment entered in such an action. SEC. 3. PURPOSE. The purpose of this Act is to create a civil remedy against foreign states whose violation of the Convention injures United States persons by prohibiting the enforcement of foreign arbitral awards entered in favor of such United States persons or by impairing the ability of such United States persons to collect such awards. SEC. 4. DEFINITIONS. As used in this Act-- (1) Convention.--The term ``Convention'' means the Convention on the Recognition and Enforcement of Foreign Arbitral Awards, done at New York on June 10, 1958. (2) United states person.--The term ``United States person'' means-- (A) any United States citizen or alien admitted for permanent residence into the United States; or (B) any corporation, trust, partnership, or other judicial entity established pursuant to the laws of the United States or its several States and territories. (3) Foreign arbitral award.--The term ``foreign arbitral award'' means any arbitral award to which the Convention applies. SEC. 5. LIABILITY FOR VIOLATION OF THE CONVENTION. (a) Civil Remedy.--(1) Any foreign state that is certified by the President under subsection (b) to have injured a United States person through the state's violation of the Convention with respect to a foreign arbitral award shall be liable to the United States person for money damages consisting of-- (A) the amount of the foreign arbitral award, plus any interest provided for by the award; and (B) the attorney's fees and costs incurred by the United States person in bringing an action under this Act with respect to such certification. (2) Actions may be brought under paragraph (1) with respect to arbitral awards entered before, on, or after the date of the enactment of this Act. 
(b) Presidential Certification.--The President may certify an injury to a United States person through a violation of the Convention if-- (1)(A) a foreign state has failed to enforce a foreign arbitral award entered in favor of that United States person in violation of the state's obligations under the Convention; or (B) a foreign state has impeded, in violation of its obligations under the Convention, the enforcement of a foreign arbitral award entered in favor of that United States person such that the ability of the United States person to collect the award may reasonably be presumed to have been impaired or reduced; and (2) the United States person has exhausted all judicial and administrative remedies in the foreign state in which the arbitral award is sought to be enforced, or the further pursuit of such remedies would reasonably be considered to be futile. (c) Effect of Presidential Certification.--A Presidential certification that a United States person has been injured by a foreign state's violation of the Convention shall, in any action brought under this Act, establish an evidentiary presumption that-- (1) the foreign state certified to have violated the Convention has done so; and (2) the damages suffered by the United States person are equivalent to the amount of the award plus interest, if any. (d) Jurisdiction.--(1) Chapter 85 of title 28, United States Code, is amended by inserting after section 1331 the following new section: ``Sec. 1331a. Civil actions involving violations of the Convention on the Recognition and Enforcement of Foreign Arbitral Awards ``The district courts shall have exclusive jurisdiction, without regard to the amount in controversy, of any action brought under section 5 of the International Arbitration Enforcement Act of 1999.''. (2) The table of sections for chapter 85 of title 28, United States Code, is amended by inserting after the item relating to section 1331 the following: ``1331a. Civil actions involving violations of the Convention on the Recognition and Enforcement of Foreign Arbitral Awards.''. (e) Waiver of Sovereign Immunity.--Section 1605 of title 28, United States Code, is amended-- (1) by striking ``or'' at the end of paragraph (6); (2) by striking the period at the end of paragraph (7) and inserting ``; or''; and (3) by adding at the end the following: ``(8) in which the action is brought with respect to violations of the Convention on the Recognition and Enforcement of Foreign Arbitral Awards under section 5 of the International Arbitration Enforcement Act of 1999.''. (f) No Immunity From Attachment or Execution.--(1) Section 1610(a) of title 28, United States Code, is amended-- (A) by striking the period at the end of paragraph (7) and inserting ``, or''; and (B) by adding at the end the following: ``(8) the judgment or attachment relates to a claim for which the foreign state is not immune under section 1605(a)(8), regardless of whether the property is or was involved in or related to the act giving rise to or upon which the claim is based.''. (2) Section 1610(b) of such title is amended-- (A) by striking ``or'' at the end of paragraph (1); (B) by striking the period at the end of paragraph (2) and inserting ``, or''; and (C) by adding at the end the following: ``(3) the judgment or attachment relates to a claim for which the foreign state is not immune under section 1605(a)(8), regardless of whether the property is or was involved in or related to the act giving rise to or upon which the claim is based.''. 
(g) Limitations Period.--An action under this Act may be brought within one year after the President makes the certification under subsection (b) on which the action is based.
Amends Federal law to grant district courts exclusive jurisdiction over violations of the Convention. Waives a foreign state's sovereign immunity in any action brought against it for violations of the Convention, including the enforcement of such actions.
the initial conditions for cosmic chemical evolution are of fundamental importance to our understanding of galaxy formation and the process of galactic chemical evolution . these conditions , set by the yields of the first few generations of stars , depend on various ( largely unknown ) factors including the form of the primordial stellar initial mass function and the uniformity of the enrichment of the intergalactic medium ( igm ; @xcite ) . in order to pin down the initial conditions of cosmic chemical evolution , one should seek to understand the origin and relative abundances of the metals in the least chemically evolved systems . the most metal - poor damped ly@xmath0 systems ( dlas ) , for example , are usually interpreted as distant protogalaxies at an early stage of chemical evolution @xcite . whilst the origin of their metals is still largely unknown , recent hydrodynamical simulations suggest that such systems might have been enriched by just a few supernova events @xcite . if this is indeed the case , the most metal - poor dlas provide a simple route to study the first stages of chemical enrichment in our universe . by definition , dlas have a neutral hydrogen column density in excess of @xmath6 i atoms @xmath7 ( @xcite ; see also the review by @xcite ) , which acts to self - shield the gas from the ultraviolet background radiation of quasars ( qsos ) and galaxies @xcite . this results in the gas having a simple ionization structure subject to negligible corrections for unseen ion stages @xcite , quite unlike the ly@xmath0 forest clouds that trace the low density regions of the igm ( e.g. @xcite ) . the main concerns that limit abundance studies in dlas are line saturation and the possibility that dust may hide some fraction of the metals @xcite . these concerns are alleviated when the metallicity of the dla is below @xmath8 z@xmath9 , which is also the regime where we expect to uncover the enrichment signature of the earliest generations of stars . the recent interest in the most metal - poor dlas @xcite complements the ongoing local studies of metal - poor stars in the halo of the milky way @xcite . these stars are believed to have condensed out of near - pristine gas ( perhaps a metal - poor dla itself ? ) , that was enriched by only a few earlier generations of stars . thus , the first generation of stars can also be studied through the signature retained in the stellar atmospheres of the most metal - poor stars in the halo of our galaxy . however , unlike the relative ease with which one can measure the abundances of metal - poor dlas , deriving element abundances from the stellar atmospheres of metal - poor stars is not straightforward @xcite . systematic uncertainties in the derived abundances are introduced by assuming that the spectral line being examined forms in a region that is in local thermodynamic equilibrium ( lte ) , as well as the need to account for three - dimensional ( 3d ) effects in the 1d stellar atmosphere models . these effects are particularly acute for oxygen , where several different abundance indicators are known to produce contradictory estimates in the low - metallicity regime @xcite . despite the efforts of many authors , our uncertainty in the derived oxygen abundances has sparked an ongoing debate as to the trend of [ o / fe ] @xmath10 , where @xmath11 refers to the number of atoms in element a and b. ] in the milky way when [ fe / h ] @xmath12 . 
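as a concrete illustration of the bracket notation just defined , the short sketch below converts logarithmic column densities into [ o / h ] , [ fe / h ] and [ o / fe ] . both the column densities and the solar reference abundances are assumed values quoted purely for illustration ; they are not measurements from this survey .

```python
# minimal sketch of the bracket notation:
#   [X/Y] = log10(N_X / N_Y) - log10(N_X / N_Y)_sun
# the solar reference abundances below are typical photospheric values on the
# scale A = 12 + log10(X/H), adopted here as an assumption; the column
# densities are invented for illustration.
SOLAR_A = {"H": 12.00, "O": 8.69, "Fe": 7.50}

def bracket(x, y, logN_x, logN_y):
    """[X/Y] from log10 column densities (atoms cm^-2) of ions tracing X and Y."""
    return (logN_x - logN_y) - (SOLAR_A[x] - SOLAR_A[y])

# hypothetical values for a metal-poor DLA
logN_HI, logN_OI, logN_FeII = 20.8, 15.0, 13.6
print("[O/H]  =", round(bracket("O", "H", logN_OI, logN_HI), 2))
print("[Fe/H] =", round(bracket("Fe", "H", logN_FeII, logN_HI), 2))
print("[O/Fe] =", round(bracket("O", "Fe", logN_OI, logN_FeII), 2))
```

because dlas are dominated by a single ion stage for each of these elements , the ratio of ion column densities can stand in for the ratio of element abundances without large ionization corrections , which is what makes this simple arithmetic meaningful in the first place .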
a history of the relevant discussion on [ o / fe ] is provided by @xcite , with further details given in section [ sec : ofe ] . in brief , at low metallicity , both o and fe are produced exclusively by type - ii supernovae ( sne ii ) and the winds from their progenitors . when [ fe / h ] @xmath13 , there is a drop in [ o / fe ] due to the _ delayed _ contribution of fe from type - ia supernovae ( sne ia ) . thus , the [ o / fe ] ratio is most commonly used to measure the time delay between sne ii and the onset of sne ia . at the lowest metallicity , however , one can use the [ o / fe ] ratio as a measure of the relative production of @xmath0- to fe - peak elements by the first few generations of massive stars . another key diagnostic ratio at low metallicity that may shed light on the nature of the early generations of stars was uncovered by @xcite who reported a rather surprising evolution of [ c / o ] with decreasing o abundance in their sample of 34 halo stars ( see also @xcite ) . in disc and halo stars when the oxygen abundance is @xmath13 , [ c / o ] steadily rises from [ c / o ] @xmath14 to solar . when [ o / h ] @xmath15 , galactic chemical evolution models that _ only _ consider the nucleosynthetic products of population ii stars predict [ c / o ] to decrease or plateau , contrary to the observed trend . the increase in [ c / o ] with decreasing metallicity has thus been interpreted as evidence for an increased carbon yield from either population iii stars @xcite or rapidly - rotating low - metallicity population ii stars @xcite . at first , concerns were raised regarding the accuracy of the derived c and o abundances , since the lines used are subject to large non - lte corrections . @xcite , however , performed a non - lte analysis of the same lines , with further constraints from additional ci lines , to confirm the reality of the stellar [ c / o ] trend . these results depend somewhat on the adopted cross sections for collisions of ci and oi atoms with electrons and hydrogen atoms , but for all probable values , [ c / o ] increases with decreasing metallicity when [ o / h ] @xmath16 . to summarize , at present there are still some remaining concerns that prevent us from accurately measuring c and o abundances in the atmospheres of metal - poor halo stars . these difficulties have prompted a few teams to focus on very metal - poor @xmath17 , in line with the classification scheme for stars proposed by @xcite . ] ( vmp ) dlas where the absorption lines of cii and oi may be unsaturated and the abundances of c and o can be measured with confidence . unfortunately , these near - pristine dlas are rare , falling in the tail of the metallicity distribution function of dlas @xcite . thus , only a handful of confirmed vmp dlas are known at present . the first high spectral resolution survey ( @xmath18 , full width at half maximum , fwhm@xmath19 km s@xmath20 ) for vmp dlas was conducted by @xcite , whose specific goal was to study the relative abundances of the cno group of elements as a probe of early nucleosynthesis . indeed , this was the first study to independently confirm the increased [ c / o ] abundance at low metallicity , suggesting that near - solar values of [ c / o ] are commonplace in this metallicity regime . the [ c / o ] trend reported by @xcite has also been independently noted by @xcite in a medium spectral resolution ( @xmath21 , fwhm@xmath22 km s@xmath20 ) survey of 35 dlas ( a preliminary report of this study can be found in @xcite ) .
in many of their systems , the cii and oi lines were thought to be affected by line saturation , leaving only five dlas to test the trend in c / o . interestingly , this sample of dlas suggests that [ c / o ] continues to rise to _ supersolar _ values when [ o / h]@xmath23 .
we present a high spectral resolution survey of the most metal - poor damped ly@xmath0 absorption systems ( dlas ) aimed at probing the nature and nucleosynthesis of the earliest generations of stars . our survey comprises 22 systems with iron abundance less than 1/100 solar ; observations of seven of these are reported here for the first time . together with recent measures of the abundances of c and o in galactic metal - poor stars , we reinvestigate the trend of c / o in the very metal - poor regime and we compare , for the first time , the o / fe ratios in the most metal - poor dlas and in halo stars . we confirm the near - solar values of c / o in dlas at the lowest metallicities probed , and find that their distribution is in agreement with that seen in galactic halo stars . we find that the o / fe ratio in very metal - poor ( vmp ) dlas is essentially constant , and shows very little dispersion , with a mean [ @xmath1o / fe@xmath2@xmath3 , in good agreement with the values measured in galactic halo stars when the oxygen abundance is measured from the [ oi]@xmath4 line . we speculate that such good agreement in the observed abundance trends points to a universal origin for these metals . in view of this agreement , we construct the abundance pattern for a typical very metal - poor dla and compare it to model calculations of population ii and population iii nucleosynthesis to determine the origin of the metals in vmp dlas . our results suggest that the most metal - poor dlas may have been enriched by a generation of metal - free stars ; however , given that abundance measurements are currently available for only a few elements , we can not yet rule out an additional contribution from population ii stars . [ firstpage ] galaxies : abundances @xmath5 galaxies : evolution @xmath5 quasars : absorption lines
figure [ rho_t ] shows the temperature dependence of @xmath4-axis resistivity at zero and 45 t fields . at zero field , @xmath11 is metallic all the way down to @xmath0 . this represents a clear contrast with the semiconductinglike upturn in @xmath14 observed at lower dopings of bi@xmath2sr@xmath2cacu@xmath2o@xmath15 @xcite in the pseudogap state @xcite . we can examine our data within the overall temperature dependence @xmath16 which reproduces the temperature dependence of @xmath13 @xcite . also , it can be as easily fitted by a power law with the exponent 1.3 ( @xmath17 ) ( inset in fig . [ rho_t ] ) . regardless of the choice , the temperature dependence is not @xmath7-quadratic as in a conventional fl ; it marks an n - fl state even in the heavily overdoped region . when we apply 45 t along the @xmath4 axis , the superconductivity is destroyed and the entire temperature dependence up to 100 k can now be fitted with the simple fl form @xmath18 . this clearly demonstrates that sufficiently high magnetic fields destroy all remnants of the n - fl behavior , recovering the all familiar fermi - liquid metal ; _ i.e. _ in this overdoped cuprate there exists _ a field - induced transformation from the n - fl to fl state_. to follow the temperature dependence of @xmath14 at different fields we plot it against @xmath1 in fig . [ rho_t2 ] . it is evident that the @xmath19 dependence is observed below a field - dependent temperature @xmath20 indicated by the arrows . at higher temperatures the @xmath11 data deviate from the @xmath1 behavior as can be seen more clearly by subtracting @xmath21 in the upper panel . we note that although the change is gradual , the power in the temperature dependence unmistakably changes from 2 at low temperatures ( @xmath22 ) to less than 2 at high temperatures ( @xmath23 ) . the field dependence of the @xmath20 is depicted in the @xmath7-@xmath24 diagram in fig . [ t_h ] . at 45 t , fl state extends up to 100 k , and at lower fields the fermi liquid breaks down crossing to an n - fl behavior above @xmath20 . with decreasing field @xmath25 decreases linearly and extrapolates to zero in the vicinity of the upper critical field @xmath26 [ see below ] , terminating at a putative qcp . we conclude then that in zero temperature limit , the normal state above @xmath27 in tl@xmath2ba@xmath2cuo@xmath3 is a fermi liquid , in agreement with the recent observation in this system of the wiedemann - franz law @xcite . next we examine the field dependence of @xmath14 at constant temperatures , plotted in fig . [ rho_h](a ) . at low temperatures , the resistivity is zero below the so - called irreversibility field @xmath28 @xcite in the vortex solid state . we note that above @xmath28 the magnetoresistance is always _ positive_. we recall that in less - doped pseudogapped bi@xmath2sr@xmath2cacu@xmath2o@xmath15 the observed magnetoresistance is _ negative _ over a large field range @xcite , consistent with filling of the low - energy states within the pseudogap in the applied magnetic field . we surmise then , that at this doping the pseudogap is either way below the superconducting energy scale , or perhaps entirely absent . the superconducting coherence can survive up to a characteristic field @xmath29 , above which the quasiparticle conductivity overcomes the vortex contribution @xcite . this often underestimates the upper critical field @xmath27 near @xmath0 in high-@xmath0 cuprates ; it is notoriously difficult to obtain from transport owing to large thermal fluctuations . 
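the two descriptions of the zero - field resistivity used above ( the fermi - liquid form and a power law with a free exponent ) are ordinary nonlinear least - squares fits . the sketch below repeats that procedure on synthetic data , since the measured resistivity values are not tabulated here ; every number in it is invented for illustration .

```python
# minimal sketch on synthetic data: compare a Fermi-liquid fit rho0 + A*T**2
# with a free power law rho0 + a*T**n, the two functional forms considered
# above; none of the numbers below come from the actual measurements.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
T = np.linspace(10.0, 100.0, 60)              # temperature grid in K
rho = 20.0 + 4e-3 * T ** 2                    # pretend Fermi-liquid data (arb. units)
rho = rho + rng.normal(0.0, 0.3, T.size)      # add measurement noise

def fl_form(T, rho0, A):
    return rho0 + A * T ** 2

def power_form(T, rho0, a, n):
    return rho0 + a * T ** n

p_fl, _ = curve_fit(fl_form, T, rho, p0=[10.0, 1e-3])
p_pw, _ = curve_fit(power_form, T, rho, p0=[10.0, 1e-3, 1.5])

print("FL fit:    rho0 = %.2f, A = %.2e" % tuple(p_fl))
print("power fit: rho0 = %.2f, a = %.2e, n = %.2f" % tuple(p_pw))
```

in practice the discrimination between the two forms rests on how the fitted exponent and residuals behave over the full temperature range , which is why the analysis above tracks the deviation from the quadratic dependence as a function of field .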
however , previous studies of @xmath4-axis magnetotransport @xcite revealed that in the overdoped regime in the low-@xmath7 limit , @xmath27 is very near @xmath30 . in our sample we evaluate @xmath31 t. above this limiting field @xmath32 at low-@xmath7 is strictly @xmath24-linear in the normal state over the entire field range . to take a closer look at higher @xmath7 , we subtract the high - field linear term from @xmath32 and obtain @xmath33 , which quantifies the deviation from the @xmath24-linear dependence . this analysis highlights a noticeable deviation from the field - linearity below a temperature - dependent characteristic field @xmath34 , see fig . [ rho_h](b ) . the obtained @xmath35 is also plotted in fig . [ t_h ] . remarkably and _ consistently _ it follows the @xmath25 line within the experimental error bars . we surmise then that , while the @xmath24-linear and large magnetoresistance is a non - trivial finding in its own right that needs to be further understood , here it is clearly a phenomenon of the fermi liquid . indeed , several theoretical accounts within the fermi - liquid picture derive large @xmath24-linear @xmath32 @xcite . we remark that at low temperatures below 5 k the standard fl state is confirmed by the classical kohler s rule for magnetoresistance , see fig . [ rho_h](c ) . at higher temperatures , where the low field data below @xmath34 [ including @xmath36 ] no longer follow what is expected in the simple fl state , the scaling is clearly violated . and while the violation of kohler s rule at high temperatures can be caused by other mechanisms , the low temperature data are consistently in correspondence with the field - induced fl state . the temperature - dependent violation further indicates that here the magnetoresistance is not simply governed by @xmath37 ( a product of the cyclotron frequency and scattering time ) . from this we conclude that the observed field - induced @xmath19 behavior is an intrinsic effect and not an artifact due to @xmath37 . at finite temperatures the observed field - induced transformation appears to be crossover - like . so now we will ask whether the @xmath38 k terminus of @xmath35 indicates a true phase transition at a qcp . we note the conspicuously strong field dependence of the fl coefficient @xmath39 : it increases with decreasing field and decreasing @xmath20 , see inset of fig . [ t_h ] . indeed we find that the field dependence can be fitted to @xmath40 , where @xmath41 and @xmath42 are constants and @xmath43 and @xmath44 are the relevant parameters of the fit . as we discussed earlier , in zero field @xmath11 can be analyzed either by a power law or by the @xmath45 dependence . in the analysis of the field dependence the two different forms would require different values of @xmath41 in eq . ( 1 ) . in the former case , we take @xmath46 @xmath47cm / k@xmath48 and we can fit the @xmath49 by @xmath50 and @xmath51 t. in the latter case , we use the finite coefficient @xmath52 @xmath47cm / k@xmath48 ( see fig . [ rho_t ] ) and the fit gives @xmath53 and @xmath54 t. thus , experimental @xmath49 algebraically diverges at @xmath44 . within fermi liquid theory , as @xmath55 k the energy dependence of the total scattering rate near the fermi surface takes the form @xmath56 ( @xmath57 comes from the impurity scattering , @xmath58 is a constant in energy @xmath59 , and @xmath60 is the fermi energy ) @xcite .
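the fermi - liquid scattering rate referred to at the end of the paragraph above is hidden behind a placeholder ; written out in a standard textbook form ( an assumption here , not necessarily the exact expression of the original ) it reads :

```latex
% generic Fermi-liquid quasiparticle scattering rate (textbook form, assumed):
% an impurity term plus a contribution quadratic in energy and temperature
\begin{equation}
  \frac{1}{\tau(\varepsilon,T)} \simeq \frac{1}{\tau_{0}}
  + a\,\frac{\varepsilon^{2} + (\pi k_{\mathrm{B}} T)^{2}}{\varepsilon_{\mathrm{F}}}
\end{equation}
```

in this reading , the impurity term plays the role of @xmath57 , the constant prefactor that of @xmath58 , the quasiparticle energy that of @xmath59 , and the fermi energy that of @xmath60 .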
at the qcp the singularity of @xmath61 will mirror that of @xmath49 ; the two coefficients are related through the quasiparticle - quasiparticle scattering cross section . the observed divergence of @xmath39 thus gives us confidence in assigning @xmath44 as the qcp field , and @xmath43 as the exponent characterizing quantum criticality . we remark that strongly correlated electron systems commonly obey the kadowaki - woods relation @xmath62 @xcite , where @xmath63 is the electronic coefficient of specific heat and a measure of the effective mass @xmath64 of a landau quasiparticle . while this relation is complex ( and sometimes violated @xcite ) , we note that with large ( @xmath65 @xcite ) resistivity anisotropy in tl@xmath2ba@xmath2cuo@xmath3 , the obtained @xmath39 values near @xmath44 imply enhanced @xmath66 mj / mol - k@xmath48 , comparable to that e.g. in superconducting sr@xmath2ruo@xmath67 @xcite , where similarly anisotropic @xmath39 values between the @xmath4 - axis and in - plane resistivities have been observed . this enhancement of @xmath39 and a lack of saturation may also be related to the enhanced susceptibility @xmath68 in the overdoped tl@xmath2ba@xmath2cuo@xmath3 @xcite . we surmise then that at finite temperatures the system is governed by the quantum fluctuations , generating the n - fl state which crosses over to the conventional fl above @xmath34 . the n - fl state with non-@xmath1 dependence of resistivity @xcite and a violation of kohler s rule @xcite has also been observed in heavy - fermion superconductors having strong antiferromagnetic fluctuations . notably , in cecoin@xmath69 with quasi - two dimensional electronic structure , a quite similar field - induced qcp has been identified by the transport and specific heat measurements @xcite . in high magnetic fields , the resistivity recovers the @xmath19 dependence at low temperatures in a similar manner near the upper critical field @xmath26 ( @xmath70 t ) . it has been pointed out @xcite that the underlying antiferromagnetic fluctuations @xcite become critical in the immediate vicinity of the superconductivity , preventing the development of magnetic order . we note that anisotropic violation of the wiedemann - franz law in cecoin@xmath69 was recently found near the qcp @xcite , where the fl renormalization parameter @xmath71 @xmath72 tends to zero in the @xmath4 direction but remains finite in the @xmath12 plane . this suggests that the @xmath4 direction is more susceptible to instabilities related to the qcp . an intriguing question to ask is whether the field - induced @xmath73 in a highly overdoped cuprate is a sheer coincidence or whether the two are inherently linked . in particular , one may ask whether an extended regime of superconducting fluctuations can promote the observed n - fl state . in the heavy - fermion superconductor cecoin@xmath69 , the fl coefficient @xmath39 also diverges at the qcp located very near @xmath26 , with @xmath43 close to unity @xcite . by applying pressure , @xmath44 is strongly suppressed and is no longer coincident with @xmath26 @xcite . this rather compellingly points to a qcp controlled by a competing order , most likely related to antiferromagnetism @xcite . in cuprates , neutron scattering experiments @xcite show that magnetic field can _ induce _ a distinct static magnetic order , and surprisingly enhanced spin fluctuations at low @xmath7 within the vortex cores , also detected by spatially resolved nmr @xcite . thus , spin correlations in cuprates seem to experience a field - induced boost .
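eq . ( 1 ) is quoted above only through placeholders , so the sketch below assumes one plausible reading of it : a fermi - liquid coefficient that diverges algebraically at a critical field . the data , the assumed functional form and all parameter values are synthetic and illustrative only .

```python
# minimal sketch (assumed functional form, synthetic data): fit a diverging
# Fermi-liquid coefficient A(H) = A0 + C / (H - Hqcp)**gamma and extract the
# critical field Hqcp and exponent gamma, in the spirit of eq. (1) above.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
H = np.linspace(22.0, 45.0, 20)                        # field in tesla
A_obs = 0.02 + 0.8 / (H - 19.0) ** 1.0                 # made-up "measured" A(H)
A_obs = A_obs * (1.0 + rng.normal(0.0, 0.02, H.size))  # 2% noise

def A_of_H(H, A0, C, Hqcp, gamma):
    return A0 + C / (H - Hqcp) ** gamma

# keep Hqcp below the lowest measured field so (H - Hqcp) stays positive
popt, _ = curve_fit(A_of_H, H, A_obs,
                    p0=[0.01, 1.0, 15.0, 1.0],
                    bounds=([0.0, 0.0, 0.0, 0.2], [1.0, 10.0, 21.0, 3.0]))
A0, C, Hqcp, gamma = popt
print(f"Hqcp ~ {Hqcp:.1f} T, gamma ~ {gamma:.2f}")
```

applied to real data , the same fit is what pins down the critical field and the exponent discussed in the text , with the constant term fixed to one of the two zero - field choices described above .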
our work , in a departure from previous studies , probes the high - field regime at very high hole doping much distanced from the antiferromagnetic ` mother order ' . that the antiferromagnetic fluctuations @xcite could have such long reach @xcite and play a role in the uncovered field - induced qcp is quite extraordinary . we expect that the true nature of the quantum critical fluctuations that produce the n - fl state in the highly overdoped tl@xmath2ba@xmath2cuo@xmath3 is complex , since here we are not far from the superconductivity s charge - doping end point @xcite . from our experiments , with salient similarities found between a cuprate and a heavy - fermion compound , all evidence here points to a spin - controlled qcp universal to these strongly correlated electron systems . single crystals of tl@xmath2ba@xmath2cuo@xmath3 were grown by a flux method @xcite . in this system , the doping can be tuned by oxygen content covering a range from somewhat overdoped ( @xmath74 k ) up to heavily overdoped ( @xmath75 k ) @xcite . in our study , we used a homogeneous highly - overdoped crystal with a sharp transition at @xmath9 k ( see fig . [ rho_t ] ) . the @xmath4-axis resistivity @xmath76 was measured in the 45-t hybrid magnet at nhmfl ( comprising an 11.5 t superconducting outsert and 33.5 t resistive insert magnets ) by the standard four - probe method using an ac resistance bridge @xcite . the temperature at high fields was controlled to @xmath77 mk by using a capacitance sensor at low temperatures , where the magnetoresistance of cernox resistive sensors is not negligible . * acknowledgments . * we thank a. i. buzdin , s. chakravarty , s. fujimoto , n. e. hussey , h. kontani , and c. m. varma for discussions , and b. brandt for technical assistance at nhmfl . this work was supported in part by grants - in - aid for scientific research from jsps , and for the 21st century coe " center for diversity and universality in physics " from mext , japan . nakajima y _ et al . _ ( 2007 ) non - fermi liquid behavior in the magnetotransport of ce@xmath78in5 ( @xmath78 : co and rh ) : striking similarity between quasi two - dimensional heavy fermion and high-@xmath0 cuprates . _ j phys soc jpn _ 76:024703 . grigera sa _ ( 2004 ) disorder - sensitive phase formation linked to metamagnetic quantum criticality . _ science _ 306:1154 - 1157 . ( 1999 ) evidence for quantum critical behavior in the optimally doped cuprate bi@xmath2sr@xmath2cacu@xmath2o@xmath79 . _ science _ 285:2110 - 2113 . kubo y , shimakawa y , manako t , igarashi h ( 1991 ) transport and magnetic properties of tl@xmath2ba@xmath2cuo@xmath80 showing a @xmath81-dependent gradual transition from an 85-k superconductor to a nonsuperconducting metal . _ phys rev b _ 43:7875 - 7882 . wilson sd _ et al . _ ( 2007 ) quantum spin correlations through the superconducting - to - normal phase transition in electron - doped superconducting pr@xmath86lace@xmath87cuo@xmath88 . _ proc natl acad sci usa _ 104:15259 - 15263 . kakuyanagi k , kumagai k , matsuda y , hasegawa m ( 2003 ) antiferromagnetic vortex core in tl@xmath2ba@xmath2cuo@xmath80 studied by nuclear magnetic resonance . _ phys rev lett _ scalapino dj ( 1995 ) the case for @xmath89 pairing in the cuprate superconductors . _ phys rep _ 250:330 - 365 . kawakami t , shibauchi t , terao y , suzuki m , krusin - elbaum l ( 2005 ) evidence for universal signatures of zeeman - splitting - limited pseudogaps in superconducting electron- and hole - doped cuprates . _ phys rev lett _ 95:017001 .
figure caption : @xmath4-axis resistivity @xmath14 in an overdoped crystal of tl@xmath2ba@xmath2cuo@xmath3 under zero field ( black solid line ) and a 45 t field ( squares ) . red dashed and solid curves are the fits to @xmath90 and @xmath91 , respectively . inset : @xmath14 vs @xmath92 at zero field ; the solid line is a linear fit .
figure caption : @xmath4-axis resistivity @xmath14 as a function of @xmath1 at fixed fields . upper panel : @xmath14 with the fermi - liquid contribution subtracted highlights the non - fermi - liquid behavior for @xmath23 ( marked by arrows ) . lower panel : @xmath14 fitted to the @xmath19 dependence ( dashed lines ) for @xmath22 ; onsets of the deviation from @xmath19 have error bars indicated in fig . [ t_h ] . inset : an expanded view of the low - temperature region .
figure caption ( beginning truncated ) : ... , and open squares , @xmath35 , separate fermi - liquid ( fl ) and non - fermi - liquid ( n - fl ) states . red squares are the onset of superconductivity ( sc ) . the thick red line represents @xmath93 , which in cuprates varies exponentially with @xmath7 @xcite . the red hatched area outlines @xmath94 . inset : the fermi - liquid coefficient @xmath39 as a function of @xmath24 ; the data can be fitted to eq . ( 1 ) as shown by blue - solid and red - dashed lines corresponding to two choices of @xmath41 , see text .
figure caption : @xmath4-axis resistivity @xmath14 . ( a ) @xmath14 vs field @xmath24 at fixed temperatures . the dashed line is a linear fit to the 1.5 k data . below @xmath29 the downward rounding of @xmath14 signifies the onset of superconductivity , and @xmath14 is zero below the irreversibility field @xmath28 . ( b ) @xmath95 obtained by subtracting the @xmath24-linear part from @xmath32 at fixed @xmath7 ; each curve is shifted vertically for clarity . @xmath35 , marked by arrows [ also in ( c ) ] , are the deviation points from @xmath24-linear magnetoresistance ( mr ) . ( c ) kohler plot of the normal - state mr against @xmath96 , where @xmath97 is the normal - state zero - field @xmath11 [ dashed line in fig . [ rho_t ] ] .
in high transition temperature ( @xmath0 ) superconductivity , charge doping is a natural tuning parameter that takes the copper oxides from the antiferromagnetic to the superconducting region . in the metallic state above @xmath0 the standard landau's fermi - liquid theory of metals , as typified by the temperature - squared ( @xmath1 ) dependence of resistivity , appears to break down . whether the origin of the non - fermi - liquid behavior is related to physics specific to the cuprates is a fundamental question still under debate . we uncover a new transformation from the non - fermi- to a standard fermi - liquid state driven not by doping but by magnetic field in the overdoped high-@xmath0 superconductor tl@xmath2ba@xmath2cuo@xmath3 . from the @xmath4-axis resistivity measured up to 45 t , we show that fermi - liquid features appear above a sufficiently high field , which decreases linearly with temperature and lands at a quantum critical point near the superconductivity's upper critical field , with the fermi - liquid coefficient of the @xmath1 dependence showing power - law diverging behavior on the approach to the critical point . this field - induced quantum criticality bears a striking resemblance to that in quasi - two dimensional heavy - fermion superconductors , suggesting common underlying spin - related physics in these superconductors with strong electron correlations . quantum criticality refers to a phase transition process between competing states of matter governed not by thermal but by quantum fluctuations , as demanded by the heisenberg uncertainty principle @xcite . it has emerged at the front and center of the physics of strongly correlated electron systems known to host competing quantum orders , and is witnessed by a proliferation of reports on heavy fermions @xcite , itinerant ( quantum ) magnets @xcite , and high - transition - temperature ( high-@xmath0 ) superconductors @xcite , with quantum matter tuned ( at times arguably ) through a transition by pressure , magnetic field , or doping . arguably , since one has to rely on the long shadows cast by quantum criticality far above zero temperature @xcite , for , obviously , @xmath5 k can never be attained . the often - invoked hallmark of quantum criticality is an unconventional behavior of resistivity . for the resistivity contribution , the standard fermi liquid ( fl ) theory of metals predicts a quadratic temperature dependence @xmath6 at low temperatures . in high-@xmath0 cuprates , however , a baffling @xmath7-linear resistivity over a huge temperature range near optimal ( hole ) doping has been observed @xcite , flagging , in this sense , non - fermi - liquid ( n - fl ) behavior in the metallic state above @xmath8 . this has led to new theoretical concepts , some related ( e.g. , the phenomenology of the " marginal fermi liquid " @xcite ) and some unrelated ( e.g. , the " strange metal " state @xcite ) to quantum criticality . in most considerations of cuprates near quantum critical points ( qcps ) the tuning parameter is charge doping @xcite , and while there is some experimental support @xcite for a doping - driven qcp , it has yet to be broadly confirmed . thus , it is of primary import to probe experimentally how the n - fl state transforms into the conventional fl state , and whether and how charge or spin degrees of freedom are involved . here we report on the transformation from such a ` strange ' n - fl state to the conventional fl metallic state in high-@xmath0 superconductors in high magnetic fields .
our experiments measuring charge transport in overdoped tl@xmath2ba@xmath2cuo@xmath3 reveal an unanticipated quantum criticality in a cuprate that is not doping- but field - induced . the results are in close correspondence with the quantum criticality in quasi - two dimensional heavy - fermion superconductors having strong antiferromagnetic fluctuations , suggesting that common fundamental physics of magnetic origin is responsible for the observed qcp . to have access to large regions of the metallic regime at low temperatures , we use magnetic fields to destroy superconductivity in heavily doped tl@xmath2ba@xmath2cuo@xmath3 ( @xmath9 k ) . this material has a single cuo@xmath2 layer per unit cell and is relatively clean among cuprates , as evidenced by the high @xmath0 ( up to 93 k ) , which can be controlled with the oxygen content . we focus here on the @xmath4-axis longitudinal magnetotransport ( @xmath10 ) , since it should be less affected by orbital contributions than the transverse geometry , and since in our overdoped system the fermi surface is expected to be three - dimensional - like and coherent @xcite , as revealed by the fact that the temperature dependence of the @xmath4-axis resistivity @xmath11 can be well scaled by that of the @xmath12-plane resistivity @xmath13 [ see below ] .
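the low - temperature fermi - liquid analysis described above amounts to fitting the resistivity to rho(t) = rho0 + a t^2 and locating the temperature at which the data depart from that form . the sketch below shows one way such an extraction might be done ; the synthetic data , the fitting window and the 2% deviation threshold are illustrative choices of ours , not the published procedure or values .

```python
# illustrative sketch: extract the Fermi-liquid coefficient A from rho(T) = rho0 + A*T^2
# at low temperatures, and flag where the data start to deviate from the T^2 law.
# the arrays and the deviation threshold are illustrative assumptions, not the actual data.
import numpy as np

def t2_fit(temperature, resistivity, t_max):
    """Least-squares fit of rho = rho0 + A*T^2 using points with T <= t_max."""
    mask = temperature <= t_max
    slope, intercept = np.polyfit(temperature[mask] ** 2, resistivity[mask], 1)
    return slope, intercept  # A, rho0

def deviation_onset(temperature, resistivity, a_coeff, rho0, tolerance=0.02):
    """Lowest temperature above which the data deviate from the T^2 fit by more than `tolerance`."""
    model = rho0 + a_coeff * temperature ** 2
    relative_dev = np.abs(resistivity - model) / model
    above = temperature[relative_dev > tolerance]
    return above.min() if above.size else None

# synthetic example: T^2 behavior below ~8 K crossing over to T-linear above it
temps = np.linspace(1.0, 20.0, 100)
rho = 10.0 + 0.05 * temps ** 2
rho[temps > 8.0] = rho[temps <= 8.0][-1] + 0.8 * (temps[temps > 8.0] - 8.0)

a_fit, rho0_fit = t2_fit(temps, rho, t_max=6.0)
print("A =", a_fit, "rho0 =", rho0_fit, "T_FL ~", deviation_onset(temps, rho, a_fit, rho0_fit))
```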
quantum control of molecular processes@xcite has proved , over the past two decades , to be viable both theoretically and experimentally . an examination of the coherent control literature , wherein scenarios are expressly designed to take advantage of quantum interference phenomena , shows that the vast majority of applications have been to processes occurring in the continuum energy regime . recently we proposed a new approach to controlling bound state dynamics in large polyatomic molecules@xcite that exploits interference between overlapping resonances . we have demonstrated the viability of this scenario in controlling internal conversion in pyrazine.@xcite in the present paper we further develop this method , applying it to the control of intramolecular vibrational redistribution ( ivr ) . as an example , we study the control of the flow of energy between bonds in a model of ocs . this molecule , though small , is of particular interest at high energies , where , classically , it displays predominantly chaotic dynamics . in spite of the classical chaos , quantum control via the present scenario is shown to be excellent . this paper is organized as follows : section ii provides an overview of the theory , with a discussion of the feshbach partitioning technique which , as we have shown,@xcite provides a highly efficient method for dealing with bound state problems . section iii describes the collinear ocs model and its classical dynamical characteristics . in section iv we discuss the application of the method to the control of ivr in ocs . an appendix describes our use of the feshbach partitioning technique for the numerical solution of the bound state problem for small systems such as ocs . a more ambitious method for addressing considerably larger systems , the `` qp algorithm '' , is described elsewhere.@xcite we consider a system described by a hamiltonian @xmath0 which can be partitioned physically into the sum of two components @xmath1 and @xmath2 , plus the interaction @xmath3 between them : @xmath4 the eigenstates and eigenvalues of the full hamiltonian are defined by : @xmath5 the ( `` zeroth - order '' ) eigenstates and eigenvalues of the sum of the decoupled hamiltonians are defined as @xmath6 below , we are interested in the time evolution of the system , initially prepared in a superposition of zeroth order states , |\psi(0)\rangle = \sum_{\kappa} c_{\kappa} | \kappa \rangle , [ psi0 ] where the @xmath7 are `` preparation '' coefficients . all sums over @xmath8 , here and below , are assumed to be confined to a subspace @xmath9 . for example , the selected initial states might consist of a set with population heavily concentrated in one bond of a molecule , in which case , energy flow out of such superposition states is examined . the time - evolution of eq . ( [ psi0 ] ) at any subsequent time can then be obtained by expanding the ( zeroth order ) eigenstates , @xmath10 , in terms of the exact eigenstates @xmath11 to give : |\psi(t)\rangle = \sum_{\kappa , \gamma} c_{\kappa} a_{\kappa,\gamma}^{*} e^{-i e_{\gamma} t/\hbar} | \gamma \rangle , [ td ] with @xmath12 . the structure of @xmath13 as a function of @xmath14 defines a resonance shape that provides insight , in the frequency domain , into the population flow out of and into the zeroth order @xmath8 states .
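for a small model , the exact eigenvalues e_gamma and the overlaps a_{kappa,gamma} = < kappa | gamma > that enter eq . ( [ td ] ) can be obtained by simply diagonalizing the full hamiltonian in the zeroth - order basis . the sketch below does this for an assumed hamiltonian matrix ; the 3-state example and all names are illustrative placeholders , not the ocs model discussed later .

```python
# illustrative sketch: diagonalize H = H1 + H2 + V in the orthonormal zeroth-order
# basis {|kappa>} to obtain the exact eigenvalues E_gamma and the overlaps
# a_{kappa,gamma} = <kappa|gamma> used in the time evolution above.
# `h_matrix` is an assumed input (the Hamiltonian in the zeroth-order basis).
import numpy as np

def exact_spectrum(h_matrix):
    """Return eigenvalues E_gamma and overlaps a[k, g] = <kappa_k | gamma_g>."""
    h_matrix = np.asarray(h_matrix)
    energies, vectors = np.linalg.eigh(h_matrix)  # columns of `vectors` are |gamma> in the |kappa> basis
    return energies, vectors                      # vectors[kappa, gamma] = <kappa|gamma>

# tiny 3-state example with a weak coupling between zeroth-order states
h0 = np.diag([1.00, 1.02, 1.05])
v = 0.01 * (np.ones((3, 3)) - np.eye(3))
energies, a_overlaps = exact_spectrum(h0 + v)
print(energies)
print(a_overlaps[0])  # projections of the first zeroth-order state onto each exact eigenstate
```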
given this time evolution , the amplitude for finding the system in a state @xmath8 at time @xmath15 is c_{\kappa}(t) = \sum_{\kappa'} m_{\kappa,\kappa'}(t)\, c_{\kappa'} , [ time1 ] where m_{\kappa,\kappa'}(t) \equiv \sum_{\gamma} a_{\kappa,\gamma} a_{\kappa',\gamma}^{*} e^{-i e_{\gamma} t/\hbar} = \langle \kappa | \left( \sum_{\gamma} e^{-i e_{\gamma} t/\hbar} | \gamma \rangle \langle \gamma | \right) | \kappa' \rangle [ mkappa ] is the ( @xmath16 ) element of the overlap matrix @xmath17 defined by the term in brackets in eq . ( [ mkappa ] ) . note that , for @xmath18 , if the states @xmath8 and @xmath19 do not overlap with a common @xmath20 , i.e. , there are no _ overlapping resonances _ , then @xmath21 . our previous studies@xcite have demonstrated the significance of such overlapping resonances to the control of radiationless transitions , such as internal conversion . from eq . ( [ time1 ] ) , the probability of finding the system in a collection of states @xmath8 contained in the initial set @xmath9 at time @xmath15 is given by @xmath22 where @xmath23 is a @xmath24-dimensional vector whose components are the @xmath25 coefficients , and @xmath26 . the generalization to the question of finding population in an alternative collection of states , other than @xmath9 , is straightforward . however , it is unnecessary for the study below , as will become evident . equation ( [ totalpop2 ] ) allows us to address the question of enhancing or restricting the flow of probability out of @xmath9 by finding the optimal combination of @xmath25 that achieves this goal at a specified time @xmath27 . experimentally , the resultant required superposition state can be prepared using modern pulse shaping techniques . our interest is to control the flow of population out of some generic molecular subspace into the entire molecular hilbert space . in order to do so we make use of the bound state version of the feshbach partitioning technique.@xcite here , since the control approach is being tested on a small system , we solve the resulting equations in a straightforward way , as described in appendix a. larger systems can take advantage of the `` qp algorithm''.@xcite the feshbach partitioning technique is based on defining two projection operators , q \equiv \sum_{\kappa} | \kappa \rangle \langle \kappa | and p \equiv \sum_{\beta} | \beta \rangle \langle \beta | , [ qpoperators ] which satisfy the following properties : @xmath28 = 0 , [ equal2 ] and p + q = \mathbb{i} , [ equal3 ] [ equal ] where @xmath29 is the identity operator . in what follows , the flow of probability of interest is from the @xmath30 space to the @xmath31 space . using eqs . ( [ equal3 ] ) and ( [ qpoperators ] ) , the eigenstates of the full hamiltonian can be written as | \gamma \rangle = p | \gamma \rangle + q | \gamma \rangle . similarly , the schrödinger equation can be expressed as [ e_{\gamma} - h ] [ p + q ] | \gamma \rangle = 0 , whereby multiplying it by @xmath31 and then by @xmath30 , and using eq . ( [ equal ] ) , one obtains the following set of coupled equations : \left[ e_{\gamma} - php \right] p | \gamma \rangle = phq | \gamma \rangle , [ p1 ] and \left[ e_{\gamma} - qhq \right] q | \gamma \rangle = qhp | \gamma \rangle . [ q1 ] [ coupledset ] the states @xmath8 and @xmath33 are solutions to the decoupled ( homogeneous ) equations arising from eqs . ( [ p1 ] ) and ( [ q1 ] ) , respectively . that is , \left[ e_{\beta} - php \right] p | \beta \rangle = 0 , [ homobeta ] and \left[ e_{\kappa} - qhq \right] q | \kappa \rangle = 0 . [ homokappa ] [ homos ]
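before detailing the partitioned equations further , we note that the optimization posed by eq . ( [ totalpop2 ] ) has a compact numerical form : with the probability of remaining in the initial subspace written as a quadratic form in the coefficient vector , the optimal normalized superposition at the target time is an eigenvector of the hermitian matrix m^{\dagger} m . the sketch below implements this ; the overlaps , energies and all names are assumed inputs ( e.g. from a diagonalization like the one sketched earlier ) rather than the actual ocs quantities , and units with \hbar = 1 are used .

```python
# illustrative sketch: choose the initial superposition coefficients c_kappa that
# suppress (or enhance) the flow of population out of the Q space at a target time.
# with P_Q(t) = c^dag M(t)^dag M(t) c and a normalized c, the optimum is the
# eigenvector of A = M(t*)^dag M(t*) with the largest (suppression) or smallest
# (enhancement of outflow) eigenvalue. `a_overlaps[k, g]` and `energies[g]` are
# assumed inputs; hbar = 1.
import numpy as np

def overlap_matrix(a_overlaps, energies, t):
    """M_{k,k'}(t) = sum_g a[k, g] * conj(a[k', g]) * exp(-i * E_g * t)."""
    phases = np.exp(-1j * energies * t)
    return (a_overlaps * phases) @ a_overlaps.conj().T

def optimal_coefficients(a_overlaps, energies, t_target, suppress=True):
    m = overlap_matrix(a_overlaps, energies, t_target)
    a_matrix = m.conj().T @ m                    # hermitian, positive semidefinite
    eigvals, eigvecs = np.linalg.eigh(a_matrix)  # eigenvalues in ascending order
    return eigvecs[:, -1] if suppress else eigvecs[:, 0]

def q_population(coeffs, a_overlaps, energies, times):
    """P_Q(t) for a chosen (normalized) superposition on a grid of times."""
    return np.array([np.linalg.norm(overlap_matrix(a_overlaps, energies, t) @ coeffs) ** 2
                     for t in times])
```

feeding the optimized coefficients back into q_population yields , for the assumed inputs , suppressed or enhanced decay curves of the kind discussed in the results section below .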
contrary to continuum problems , in general @xmath35 and it is possible to express @xmath36 in terms of the particular solution of the ( inhomogeneous ) eq . ( [ p1 ] ) , p | \gamma \rangle = \left[ e_{\gamma} - php \right]^{-1} phq | \gamma \rangle . [ psolve ] substituting eq . ( [ psolve ] ) into eq . ( [ q1 ] ) results in \left[ e_{\gamma} - qhq \right] q | \gamma \rangle = qhp \left[ e_{\gamma} - php \right]^{-1} phq | \gamma \rangle . by rearranging terms in this equation , one obtains \left[ e_{\gamma} - \bar{h} \right] q | \gamma \rangle = 0 , [ qket ] where @xmath38 is an effective hamiltonian , defined as \bar{h} = qhq + qhp \left[ e_{\gamma} - php \right]^{-1} phq . [ hbar ] the term between square brackets can be written as \left[ e_{\gamma} - php \right]^{-1} = \sum_{\beta} \frac{ | \beta \rangle \langle \beta | }{ e_{\gamma} - e_{\beta} } [ specres ] by using the spectral resolution of an operator . the matrix elements of @xmath38 are given by @xmath39 [ parameters ] with @xmath40 being the coupling term . equations ( [ delta ] ) and ( [ gamma ] ) represent the energy shift and the decay rate , respectively . by diagonalizing eq . ( [ qket ] ) in a self - consistent manner , one obtains the energy eigenvalues , @xmath41 , and the values for the overlap integrals , @xmath42 . [ table caption : parameters defining the potential energy surface given by eq . ( [ eq04 ] ) ; all magnitudes are given in a.u . ] we now consider the suppression ( and enhancement ) of ivr in the above model of ocs . our intent is to assess the extent of control in such a system , and to establish the relationship between control and overlapping resonances . the coupling terms @xmath43 and , subsequently , the overlap integrals @xmath44 and the energy eigenvalues @xmath41 are calculated by expanding the ocs wave functions in products of the zeroth order states , where @xmath45 and @xmath46 are eigenstates of the uncoupled cs and co bond potentials , respectively , with quantum numbers @xmath47 and @xmath48 . our interest is in the flow , for example , out of the cs bond . hence , the @xmath30 subspace is chosen to represent all wave functions containing only excitation in the cs bond , i.e. , the @xmath8 are @xmath49 , for all @xmath47 , whereas the @xmath31 subspace spans the space represented by all other zeroth order excitations , i.e. , the @xmath33 are @xmath50 , describing states with excitation in the co bond . as seen in sec . [ sec3a ] , the coupling term , @xmath51 , necessary to obtain the energy shifts and decay rates , consists of a static term ( @xmath52 ) , and a dynamic term [ proportional to @xmath53 in eq . ( [ eq03 ] ) ] . the overlap integrals and energy eigenvalues are obtained by self - consistent diagonalization of eq . ( [ qket ] ) . all vibrational states , @xmath45 and @xmath46 , are numerically calculated using a discrete variable representation ( dvr ) technique,@xcite obtaining a total of 45 eigenvectors for the cs bond , and 59 for the co bond . the number of eigenvectors is larger in the second case because the dissociation threshold of the co bond is higher in energy . from all the vibrational states obtained , we have observed that control is best when considering a superposition of states , i.e. , eq . ( [ psi0 ] ) , that is near the dissociation onset . the energy differences between these states are relatively small ( @xmath54 a.u . , whose inverse corresponds to a timescale of @xmath55 fs ) , thus giving rise to a high density of states with time scales comparable to vibrational relaxation .
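the dvr step mentioned above can be illustrated with a minimal one - dimensional sine - dvr ( colbert - miller ) calculation of the bond eigenstates ; the morse parameters , reduced mass and grid below are placeholders of ours , not the ocs surface parameters of the ( unrecovered ) table for eq . ( [ eq04 ] ) .

```python
# minimal sketch of a 1D sine-DVR (Colbert-Miller) calculation of bond eigenstates,
# standing in for the DVR step described above; the Morse parameters are placeholder
# values, not the OCS surface parameters. atomic units with hbar = 1 throughout.
import numpy as np

def dvr_levels(v_func, mass, x_min, x_max, n_points):
    """Return grid, eigenvalues and grid eigenvectors of -1/(2m) d2/dx2 + V(x)."""
    grid = np.linspace(x_min, x_max, n_points)
    dx = grid[1] - grid[0]
    i, j = np.meshgrid(np.arange(n_points), np.arange(n_points), indexing="ij")
    # Colbert-Miller kinetic-energy matrix for an equally spaced (infinite-range) grid
    off_diag = 2.0 * np.power(-1.0, np.abs(i - j)) / ((i - j) ** 2 + np.eye(n_points, dtype=int))
    kinetic = np.where(i == j, np.pi ** 2 / 3.0, off_diag) / (2.0 * mass * dx ** 2)
    hamiltonian = kinetic + np.diag(v_func(grid))
    energies, vectors = np.linalg.eigh(hamiltonian)
    return grid, energies, vectors

# placeholder Morse potential for a single bond (depth, width, equilibrium distance in a.u.)
def morse(x, d_e=0.1, a=1.0, r_e=3.0):
    return d_e * (1.0 - np.exp(-a * (x - r_e))) ** 2 - d_e

grid, energies, states = dvr_levels(morse, mass=12000.0, x_min=1.5, x_max=12.0, n_points=300)
bound = energies[energies < 0.0]   # bound levels below the dissociation limit at E = 0
print(bound.size, "bound states; lowest three:", bound[:3])
```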
the result is a greater opportunity for overlapping resonances which , as will be seen below , enhances the ability to control energy flow . in our case , the states used are the last nine bound eigenvectors ( before the dissociation onset ) of the cs bond , whose corresponding eigenvalues are given in table [ tab2 ] . note , however , that dense eigenstate manifolds will occur at far lower energies in larger molecules . hence , the initial @xmath56 comprises a superposition of the nine cs states in table [ tab2 ] , with the co bond in its ground vibrational state . figure [ fig3 ] shows the time - evolution of the population , @xmath57 , for an initial wave function constructed from the nine zeroth order @xmath30 space states noted above , and optimized for maximal or minimal energy flow at @xmath58 fs . the optimal coefficients were found using the method described in sec . [ sec2 ] ; the @xmath59 coefficients and their probabilities are given in table [ tab2 ] . results in panel ( a ) correspond to an initial superposition optimized to minimize the population flow from the @xmath30 to the @xmath31 space , while panel ( b ) shows results optimized to enhance the flow of population . as is clearly seen , the initial falloff in panel ( a ) is much slower than that in ( b ) . to quantify this decay , the initial @xmath57 falloff was fit to an exponentially decreasing function , @xmath60 where @xmath61 is the decay time , and @xmath62 is the average around which @xmath57 fluctuates for the first 1.0 ps . note that the @xmath61 values can only be regarded as approximate since the falloff is , in general , not exponential , and @xmath61 depends on the time scale over which the exponential is fit ( here the fit is over 400 fs ) . in case ( a ) , the decay time is @xmath63 fs , while in case ( b ) it is @xmath64 fs , about seven times smaller . furthermore , we note that in panel ( a ) only about @xmath65% of the population has been transferred from @xmath30 to @xmath31 during the first 50 fs , while , in contrast , approximately 82% of the population has been transferred to the @xmath31 space in panel ( b ) during the same time . moreover , the population that asymptotically remains localized along the cs bond is also larger in the case of ivr suppression ( @xmath66 ) than in that of enhancement ( @xmath67 ) . the controlled results should be compared to the natural ivr behavior of the individual levels participating in the superposition . to this end , the @xmath57 for each of the participating levels is shown in fig . although the energy difference between these states is relatively small , the populations , @xmath68 , evolve with a range of initial falloff values , as can be seen in the corresponding values of @xmath61 , given in table [ tab2 ] . note also , from this table , that the control seen in fig . [ fig3 ] is not due to the identification of a particular @xmath10 that independently maximizes or minimizes the decay . indeed , by inspecting the values of the @xmath59 coefficients , we find , in the case of ivr suppression , participation of most of the nine levels , with @xmath69 60% of the total initial population concentrated in the two states with @xmath70 and @xmath71 . neither of these two states independently has the longest decay time , but their interference is crucial to control . similar observations result from considering the data for optimized ivr enhancement , despite the fact that @xmath72 has a relatively small @xmath61 .
in this case the optimized superposition also gives a significantly smaller @xmath73 than does the individual @xmath72 state . a qualitative measure @xmath74 of the contribution from the interference of overlapping resonances , and @xmath75 of the direct contribution , was provided in eq . ( [ poverlap ] ) . results for @xmath74 and @xmath75 for the maximization and minimization cases above are provided in fig . [ fig5 ] , where the contribution from overlapping resonances ( dashed line ) becomes dominant after the first 10 fs , thus demonstrating the important role played by these resonances in the ivr control scenario . this is seen to be the case for both the maximization and the minimization of the flow from the cs bond . [ figure caption , beginning truncated : ... @xmath74 ( dashed line ) and @xmath75 ( dotted line ) for ( a ) ivr suppression , and ( b ) ivr enhancement ; the solid line represents the corresponding @xmath57 function from fig . [ fig3 ] . ] [ figure caption : wave packet evolution corresponding to ivr suppression ; dashed lines represent equipotential energy contours , with the innermost corresponding to the wave packet energy , @xmath76 a.u . ] a pictorial , and enlightening , view of the results is provided in figs . [ fig7 ] and [ fig8 ] , where the wave packets associated with ivr suppression and enhancement are shown . as can be seen in fig . [ fig7 ] , for the case of ivr suppression , the wave packet remains highly localized along the @xmath77 mode , with minimal spreading along the @xmath78 mode . in particular , it undergoes a slight oscillation along the @xmath77 mode , concentrating most of the probability around the region where the cs dissociation takes place , in clear correspondence with the behavior of its classical counterpart . for the case of ivr enhancement , the effect is the opposite . as can be seen in fig . [ fig8 ] , the spreading of the wave packet along the @xmath78 mode coordinate is relatively fast . the method described above is , of course , applicable at any time during the dynamics . for example , we tried , and successfully attained , control for times as long as 1.5 ps ( corresponding to over 50 cs vibrational periods ) , resulting in about 55% of the population localized in the cs bond for ivr suppression , and about 22% for ivr enhancement . in this paper , a method for controlling intramolecular vibrational redistribution has been developed and applied to ocs , where extensive control over ivr is attained . of particular interest is that the control is achieved even though the associated classical dynamics is chaotic . the method , wherein the coefficients of an initial superposition of zeroth order states are optimized , is shown to rely upon the presence of overlapping resonances , a feature which is expected to be ubiquitous in realistic molecular systems . we have assumed throughout this paper that the initial state that optimizes the intramolecular vibrational redistribution can be prepared , for a real molecule , using modern pulse shaping techniques . computations displaying the resultant field were not , however , carried out on this ocs model since they are best done using more realistic molecular potentials in higher dimensions , yielding realistic optimizing fields . work of this kind is in progress . we thank the natural sciences and engineering research council of canada for support of this research . here , we provide a route to compute the eigenvalues and overlap integrals via eq . ( [ qket ] ) .
we start by defining @xmath79 and @xmath80 to be the basis - set dimensions in the @xmath30 and @xmath31 space , respectively , and @xmath81 . the probability of being in the @xmath30 space , @xmath57 , is given by eq . ( [ totalpop2 ] ) . in order to find @xmath57 , two sets of values are needed : the set of eigenvalues @xmath82 , and the overlap integrals @xmath44 between the zeroth - order states in @xmath30 and the exact eigenstates @xmath20 . the partitioning algorithm described below is ingenious in the sense that it allows one to concentrate specifically on obtaining these two sets of values . the method is well suited to small systems . 1 . choose a starting energy @xmath83 , with @xmath84 corresponding to the @xmath84th iteration . in particular , one may choose an energy close to the zeroth - order energies . 2 . take @xmath85 from the last iteration , and compute @xmath86 . 3 . diagonalize @xmath86 , and select one of its eigenvalues to be the next trial energy , @xmath87 . 4 . if @xmath88 , go back to step 2 . 5 . if @xmath89 , @xmath87 becomes the eigenvalue @xmath41 , and its corresponding eigenvector , @xmath90 , is proportional to @xmath91 . repeat steps 1 - 5 until all @xmath92 unique eigenvalues @xmath41 are obtained . in the process of diagonalizing the effective hamiltonian , @xmath38 , each eigenvector @xmath90 has been normalized to 1 . therefore , the use of the algorithm leads to a loss of information about @xmath91 . this makes it necessary to also compute the constant of proportionality between @xmath91 and @xmath90 . this is done by requiring that @xmath93 for the full eigenvectors . thus , one can assert that q | \gamma \rangle = c_{\gamma} | d_{\gamma} \rangle , [ qd ] with @xmath94 being the proportionality constant . the problem then reduces to finding the @xmath94 associated with each @xmath41 . this is accomplished by expressing @xmath95 as @xmath96 where \langle \gamma | q^2 | \gamma \rangle = | c_{\gamma} |^2 \langle d_{\gamma} | d_{\gamma} \rangle = | c_{\gamma} |^2 , [ qsquared ] and , using eq . ( [ psolve ] ) , \langle \gamma | p^2 | \gamma \rangle = \langle \gamma | qhp \left[ e_{\gamma} - php \right]^{-1} \left[ e_{\gamma} - php \right]^{-1} phq | \gamma \rangle . [ psquared ] the application of the spectral resolution of an operator , eq . ( [ specres ] ) , to eq . ( [ psquared ] ) leads to @xmath98 whereupon , by making use of eq . ( [ qd ] ) , one obtains @xmath99 . now @xmath100 is easily computed by realizing that @xmath101 . according to the procedure previously described , we can determine @xmath104 , given @xmath105 , with the exception of a constant phase factor . note that , in general , each proportionality factor , @xmath94 , can be written as @xmath106 , where @xmath107 is a random phase . however , this is not a problem since the results are independent of any constant phase factor ; as seen from eq . ( [ mkappa ] ) , all overlap integrals appear in pairs , @xmath108 , which can be expressed as @xmath109 . the surface of section has been computed in the standard way , i.e. , by following each trajectory , and noting @xmath110 and @xmath111 each time that @xmath112 crosses the surface @xmath113 with @xmath114 . j.v . lill , g.a . parker , and j.c . light , chem . phys . lett . * 89 * , 483 ( 1982 ) ; j.c . light , i.p . hamilton , and j.v . lill , j. chem . phys . * 82 * , 1400 ( 1985 ) ; s.e . choi and j.c . light , j. chem . phys . * 92 * , 2129 ( 1990 ) .
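a compact numerical transcription of steps 1 - 5 of the appendix above might look as follows ; the q- and p - block hamiltonian matrices are assumed inputs ( the hamiltonian expressed in the zeroth - order basis and split between the two subspaces ) , and the tolerance , starting energies and names are our own choices rather than anything specified in the text .

```python
# illustrative transcription of the self-consistent search in steps 1-5 above:
# build the energy-dependent effective Hamiltonian
#   Hbar(E) = QHQ + QHP [E - PHP]^{-1} PHQ
# and iterate one of its eigenvalues to self-consistency. `hqq`, `hqp`, `hpp` are
# assumed inputs (Hermitian Q/P blocks of H in the zeroth-order basis).
import numpy as np

def effective_hamiltonian(energy, hqq, hqp, hpp):
    green_p = np.linalg.inv(energy * np.eye(hpp.shape[0]) - hpp)   # [E - PHP]^{-1}
    return hqq + hqp @ green_p @ hqp.conj().T

def self_consistent_level(e_start, hqq, hqp, hpp, tol=1e-10, max_iter=200):
    """Iterate steps 1-5: return a converged eigenvalue E_gamma and the normalized Q-space eigenvector."""
    energy = e_start
    for _ in range(max_iter):
        eigvals, eigvecs = np.linalg.eigh(effective_hamiltonian(energy, hqq, hqp, hpp))
        idx = np.argmin(np.abs(eigvals - energy))   # follow the eigenvalue closest to the trial energy
        new_energy = eigvals[idx]
        if abs(new_energy - energy) < tol:
            return new_energy, eigvecs[:, idx]
        energy = new_energy
    raise RuntimeError("self-consistent iteration did not converge")

# one would seed the search with each zeroth-order Q-space energy in turn, collect the
# unique converged roots E_gamma, and then rescale each Q-space eigenvector by the
# normalization constant c_gamma as described in the appendix.
```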
coherent control of bound state processes via the interfering overlapping resonances scenario [ christopher _ et al . _ , j. chem . phys . * 123 * , 064313 ( 2006 ) ] is developed to control intramolecular vibrational redistribution ( ivr ) . the approach is applied to the flow of population between bonds in a model of chaotic ocs vibrational dynamics , showing the ability to significantly alter the extent and rate of ivr by varying quantum interference contributions .
SECTION 1. SHORT TITLE; AMENDMENT OF 1986 CODE. (a) Short Title.--This Act may be cited as the ``Middle Class Tax Relief Act of 1993''. (b) Amendment of 1986 Code.--Except as otherwise expressly provided, whenever in this Act an amendment or repeal is expressed in terms of an amendment to, or repeal of, a section or other provision, the reference shall be considered to be made to a section or other provision of the Internal Revenue Code of 1986. TITLE I--TAX RELIEF FOR MIDDLE-INCOME TAXPAYERS SEC. 101. INCREASE IN PERSONAL EXEMPTION AMOUNT. (a) In General.--Paragraph (1) of section 151(d) (defining exemption amount) is amended to read as follows: ``(1) In general.--Except as otherwise provided in this subsection, the term `exemption amount' means the sum of-- ``(A) a regular exemption amount equal to $2,000, and ``(B) an additional exemption amount equal to $1,000, in the case of a middle-income taxpayer.'' (b) Phaseout of Additional Exemption Amount.--Subsection (d) of section 151 is amended by redesignating paragraph (4) as paragraph (5), and by inserting after paragraph (3) the following new paragraph: ``(4) Special rules relating to middle-income taxpayer additional exemption amount.-- ``(A) Definition of middle-income taxpayer.-- ``(i) In general.--For purposes of this subsection, the term `middle-income taxpayer' means a taxpayer whose adjusted gross income for the taxable year does not exceed the applicable maximum dollar amount. ``(ii) Applicable maximum dollar amount.-- For purposes of this paragraph, the term `applicable maximum dollar amount' means-- ``(I) $102,000 in the case of a joint return or a surviving spouse (as defined in section 2(a)), ``(II) $87,300 in the case of a head of household (as defined in section 2(b)), ``(III) $61,000 in the case of an individual who is not married and is not a surviving spouse or head of household, and ``(IV) $51,000 in the case of a married individual filing a separate return. ``(B) Phaseout of additional exemption amount.-- ``(i) In general.--In the case of any middle-income taxpayer whose adjusted gross income exceeds the applicable transition dollar amount, the additional exemption amount shall be reduced by the amount determined under clause (ii). ``(ii) Amount of reduction.--The amount determined under this clause with respect to the additional exemption amount shall be the amount which bears the same ratio to the additional exemption amount as-- ``(I) the excess of the taxpayer's adjusted gross income for the taxable year over the applicable transition dollar amount, bears to ``(II) the excess of the applicable maximum dollar amount over the applicable transition dollar amount. 
``(iii) Applicable transition dollar amount.--For purposes of this subparagraph, the term `applicable transition dollar amount' means-- ``(I) $47,000 in the case of a joint return or a surviving spouse (as defined in section 2(a)), ``(II) $37,000 in the case of a head of household (as defined in section 2(b)), ``(III) $28,000 in the case of an individual who is not married and is not a surviving spouse or head of household, and ``(IV) $23,500 in the case of a married individual filing a separate return.'' (c) Inflation Adjustments.--Paragraph (5) of section 151(d) (as so redesignated by subsection (b) of this section) is amended by adding at the end the following new subparagraph: ``(C) Adjustments relating to additional exemption amount.--In the case of any taxable year beginning in a calendar year after 1994, the dollar amount contained in paragraph (1)(B) and each dollar amount contained in paragraph (4) shall be increased by an amount equal to-- ``(i) such dollar amount, multiplied by ``(ii) the cost-of-living adjustment under section 1(f)(3) for the calendar year in which the taxable year begins, determined by substituting `calendar year 1993' for `calendar year 1989' in subparagraph (B) of such section.'' (d) Conforming Amendments.-- (1) Paragraph (6) of section 1(f) is amended by striking ``section 151(d)(4)'' each place it appears and inserting ``section 151(d)(5)''. (2) Paragraph (3) of section 151(d) is amended-- (A) in the paragraph caption, by inserting ``of regular exemption amount'' after ``Phaseout'', and (B) in subparagraph (A), by inserting ``regular'' before ``exemption amount''. (3) Subparagraph (A) of section 151(d)(5) (as so redesignated by subsection (b) of this section) is amended-- (A) in the matter preceding clause (i) by striking ``paragraph (1)'' and inserting ``paragraph (1)(A)'', and (B) by striking ``basic'' in the heading and inserting ``regular''. (e) Effective Date.--The amendments made by this section shall apply to taxable years beginning after December 31, 1993. TITLE II--REVENUE PROVISIONS SEC. 201. INCREASE IN RATE OF INDIVIDUAL INCOME TAX FOR HIGH-INCOME TAXPAYERS. (a) In General.-- (1) Married individuals filing joint returns and surviving spouses.--Subsection (a) of section 1 (relating to tax imposed on married individuals filing joint returns and surviving spouses) is amended by striking the item beginning ``Over $78,400'' and inserting the following new items: ``Over $78,400 but not over $100,000. $17,733.50, plus 31% of the excess over $78,400. ``Over $100,000................ $24,429.50, plus 36% of the excess over $100,000.'' (2) Heads of households.--Subsection (b) of section 1 (relating to tax imposed on heads of households) is amended by striking the item beginning ``Over $67,200'' and inserting the following new items: ``Over $67,200 but not over $85,000. $15,429.50, plus 31% of the excess over $67,200. ``Over $85,000................. $20,947.50, plus 36% of the excess over $85,000.'' (3) Unmarried individuals (other than surviving spouses and heads of households).--Subsection (c) of section 1 (relating to tax imposed on unmarried individuals, other than surviving spouses and heads of households) is amended by striking the item beginning ``Over $47,050'' and inserting the following new items: ``Over $47,050 but not over $70,000. $10,645.50, plus 31% of the excess over $47,050. ``Over $70,000................. 
$17,760, plus 36% of the excess over $70,000.'' (4) Married individuals filing separate returns.-- Subsection (d) of section 1 (relating to tax imposed on married individuals filing separate returns) is amended by striking the item beginning ``Over $39,200'' and inserting the following new items: ``Over $39,200 but not over $50,000. $8,866.75, plus 31% of the excess over $39,200. ``Over $50,000................. $12,214.75, plus 36% of the excess over $50,000.'' (5) Estates and trusts.--Subsection (e) of section 1 (relating to tax imposed on estates and trusts) is amended by striking the item beginning ``Over $9,900'' and inserting the following new items: ``Over $9,900 but not over $12,600. $2,343, plus 31% of the excess over $9,900. ``Over $12,600................. $3,180, plus 36% of the excess over $12,600.'' (b) Effective Date.--The amendments made by this section shall apply to taxable years beginning after December 31, 1993. SEC. 202. SURTAX ON INDIVIDUALS WITH INCOMES OVER $225,000. (a) General Rule.--Subchapter A of chapter 1 (relating to determination of tax liability) is amended by adding at the end the following new part: ``PART VIII--SURTAX ON INDIVIDUALS WITH INCOMES OVER $225,000 ``Sec. 59B. Surtax on section 1 tax. ``Sec. 59C. Surtax on minimum tax. ``Sec. 59D. Special rules. ``SEC. 59B. SURTAX ON SECTION 1 TAX. ``In the case of an individual who has taxable income for the taxable year in excess of $225,000, the amount of the tax imposed under section 1 for such taxable year shall be increased by 15 percent of the amount which bears the same ratio to the tax imposed under section 1 (determined without regard to this section) as-- ``(1) the amount by which the taxable income of such individual for such taxable year exceeds $225,000, bears to ``(2) the total amount of such individual's taxable income for such taxable year. ``SEC. 59C. SURTAX ON MINIMUM TAX. ``In the case of an individual who has alternative minimum taxable income for the taxable year in excess of $225,000, the amount of the tentative minimum tax determined under section 55 for such taxable year shall be increased by 2.5 percent of the amount by which the alternative minimum taxable income of such taxpayer for the taxable year exceeds $225,000. ``SEC. 59D. SPECIAL RULES. ``(a) Surtax to Apply to Estates and Trusts.--For purposes of this part, the term `individual' includes any estate or trust taxable under section 1. ``(b) Treatment of Married Individuals Filing Separate Returns.--In the case of a married individual (within the meaning of section 7703) filing a separate return for the taxable year, sections 59B and 59C shall be applied by substituting `$112,500' for `$225,000'. ``(c) Coordination With Other Provisions.--The provisions of this part shall be applied-- ``(1) after the application of section 1(h), but ``(2) before the application of any other provision of this title which refers to the amount of tax imposed by section 1 or 55, as the case may be.'' (b) Clerical Amendment.--The table of parts for such subchapter A is amended by adding at the end the following new item: ``Part VIII. Surtax on individuals with incomes over $225,000.'' (c) Effective Date.--The amendments made by this section shall apply to taxable years beginning after December 31, 1993. SEC. 203. INCREASE IN RATE OF CORPORATE INCOME TAX. (a) In General.--Subparagraph (C) of section 11(b)(1) (relating to amount of tax) is amended by striking ``34 percent'' and inserting ``36 percent''. 
(b) Conforming Amendment.--Paragraph (2) of section 11(b) (relating to ineligibility of personal service corporations for graduated rate) is amended by striking ``34 percent'' and inserting ``36 percent''. (c) Effective Date.--The amendments made by this section shall apply to taxable years beginning after December 31, 1993. SEC. 204. INCREASE IN RATE OF INDIVIDUAL ALTERNATIVE MINIMUM TAX. (a) In General.--Subparagraph (A) of section 55(b)(1) (relating to tentative minimum tax) is amended by striking ``24 percent'' and inserting ``27 percent''. (b) Effective Date.--The amendment made by this section shall apply to taxable years beginning after December 31, 1993.
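For reference, the exemption arithmetic prescribed by section 101 of the bill above can be expressed compactly. In the sketch below, the dollar thresholds are the 1993 figures from the bill text; the function names and the worked example are illustrative only and ignore the inflation adjustments of section 101(c), the phaseout of the regular exemption under existing section 151(d)(3), and all other interactions with the Code.

```python
# illustrative sketch of the exemption arithmetic in section 101 of the bill above.
# threshold amounts are the 1993 figures from the bill; the example ignores the
# inflation adjustments of section 101(c) and other Code interactions.
REGULAR_EXEMPTION = 2000.0
ADDITIONAL_EXEMPTION = 1000.0

# (applicable maximum, applicable transition) dollar amounts by filing status
THRESHOLDS = {
    "joint": (102_000.0, 47_000.0),
    "head_of_household": (87_300.0, 37_000.0),
    "single": (61_000.0, 28_000.0),
    "married_separate": (51_000.0, 23_500.0),
}

def exemption_amount(agi, status):
    """Regular exemption plus the phased-out additional exemption of section 101."""
    maximum, transition = THRESHOLDS[status]
    if agi > maximum:                 # not a "middle-income taxpayer"
        return REGULAR_EXEMPTION
    additional = ADDITIONAL_EXEMPTION
    if agi > transition:              # section 101(b) phaseout
        additional -= ADDITIONAL_EXEMPTION * (agi - transition) / (maximum - transition)
    return REGULAR_EXEMPTION + additional

# example: joint filers with $74,500 AGI sit halfway through the phaseout range,
# so the additional exemption is reduced to $500 and the total is $2,500.
print(exemption_amount(74_500.0, "joint"))   # -> 2500.0
```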
TABLE OF CONTENTS: Title I: Tax Relief for Middle-Income Taxpayers Title II: Revenue Provisions Middle Class Tax Relief Act of 1993 - Title I: Tax Relief for Middle-Income Taxpayers - Amends the Internal Revenue Code to provide an additional exemption amount ($1,000) to the regular personal exemption ($2,000) for middle-income taxpayers. Specifies the maximum gross income amounts for such taxpayers. Provides a formula for reducing the additional exemption amount for middle-income taxpayers whose incomes exceed certain transitional dollar amounts. Provides for inflation adjustments of amounts under this title. Title II: Revenue Provisions - Increases the individual income tax rates for certain high-income taxpayers. Imposes a surtax on the individual tax rate or the alternative minimum tax rate for individuals whose incomes exceed $225,000. Increases the rates of corporate income tax and of alternative minimum tax.
we demonstrate that , after focal demyelination of adult mice corpus callosum , demyelinated axons form functional glutamatergic synapses onto adult - born ng2 + oligodendrocyte progenitor cells ( opcs ) migrating from the subventricular zone ( svz ) . one week after lesion , this glutamatergic input is significantly reduced compared to endogenous callosal opcs , and is lost upon differentiation into oligodendrocytes . therefore , axon - op synapse formation is a transient and regulated step that occurs during remyelination of callosal axons .
(CBS/KYW/AP) Pennsylvania mother Caira Ferguson is being held without bail after what police call a "disgusting and deplorable" picture led them to file child endangerment and other charges. The picture shows a toddler, wearing only a diaper, bound to a chair with her mouth, hands and feet duct taped. A woman, who police say is the one-year-old's mother, is also seen in the picture sitting next to the girl, according to CBS affiliate KYW. Authorities say multiple agencies are investigating the incident, including police in Chester Township where the 21-year-old Ferguson lives. When questioned by police, sources say Ferguson admitted she is the woman in the picture, stating the picture was not taken as a joke, but failed to explain why or who took the picture, KYW reported. Ferguson went to police earlier this month to complain that her identity had been stolen and someone had posted a photo online of her young daughter bound to a chair with duct tape covering her mouth. "No, I did not duct tape my child," Ferguson said as she was led away to jail in handcuffs Wednesday afternoon. Ferguson's mother says she thinks another child was responsible. The toddler is in custody of child welfare workers.
– A Pennsylvania mom has been arrested after showing a photo of herself posing with her toddler daughter gagged and bound to a chair with duct tape ... to police. The photo—which shows the mom smiling and her baby clad only in a diaper—first surfaced on the Mediatakeout website. The 21-year-old mom approached Chester County investigators with the original photo, arguing that the site had "stolen" her identity, reports WSBT TV. A search of the young woman's home turned up a child's plastic chair with bits of tape still attached, according to police, who called the photo "disgusting." She later admitted binding the girl with tape, investigators said. She faces charges of child endangerment, unlawful restraint and false imprisonment. The child is being placed with relatives.
The controversial regulation that prohibits students from bringing cellphones into schools is finally being scrapped, sources told The Post. The policy reversal was sought for years by students, parents and advocates who saw the devices as a lifeline between parents and their kids — particularly on the commute to and from school. “Finally, someone saw the light!” said civil rights lawyer Norman Siegel, who filed an unsuccessful lawsuit challenging the legality of the ban in 2006. While parents and students have long opposed the ban, teachers have been less resolute about the matter, with many concerned about cheating, cyber-bullying and their own roles of policing cell use. Former Mayor Mike Bloomberg also vigorously supported the ban for many of the same reasons. Mayor Bill de Blasio — who admitted that his own son, Dante, violated the regulation as a Brooklyn Tech High School student — vowed during his campaign last year to undo Bloomberg’s policy. De Blasio is expected to announce revised regulations Wednesday at Brooklyn’s High School of Telecommunications Arts and Technology. Sources said the new policy would leave much of the decision-making in the hands of principals, including whether to collect the devices or simply require students to keep them out of sight. “The question the Department of Education has to ask themselves is how do we enforce it?” said George Anthony, a history teacher at Wagner HS on Staten Island. “What are we going to do if illicit content is found on the phones?” he added. “Hopefully strong leadership can enforce it.” Several principals contacted by The Post also expressed concerns about the difficulties that lifting the ban will create. “It’s definitely going to cause problems,” predicted one school leader. “I catch teachers all the time using their phones even though they’re not supposed to.” While electronic devices like cellphones have been banned from schools for more than a decade, the prohibition wasn’t aggressively enforced until April 2006, when the city started using portable metal detectors at randomly selected schools to reduce violence. The lawsuit challenging the ban as irrational and unsafe was filed a few months later, but ultimately failed. City Hall and DOE officials didn’t respond to multiple requests for comment regarding lifting the ban. ||||| Can you hear me now? Cell, yes! More than 1.1 million kids in New York City schools will soon be permitted to bring cellphones and other portable devices to class — a dramatic reversal of the ban. The city’s existing rules bar cellphones and other electronic devices such as iPads from school property. Students are required to leave them at home or check them with businesses outside the building, often incurring daily charges. But in a sharp departure, the new rules will allow each school’s principal to work with teachers and parents to develop a cellphone policy tailored to their needs.
Mayor de Blasio said the change will enable parents to stay in better touch with children, especially in case of an emergency. De Blasio also said it would end the inequity and unfairness of the current ban, which is most strictly enforced at schools with metal detectors in low-income communities. “Parents should be able to call or text their kids — that’s what this comes down to,” said de Blasio. “It’s something Chirlane and I felt ourselves when Chiara took the subway to high school in another borough each day.” Under the new regulations, principals and teachers will decide on a range of options for handling cellphone use in their schools. Their choices will include: * Store mobile devices in backpacks or a designated location during the school day. * Allow mobile devices to be used during lunch or in designated areas only. * Allow mobile devices for instructional purposes in some or all classrooms. For schools that do not promptly create their own policy, a default rule will allow students to bring cellphones but require them to be kept hidden. As part of the change, schools will also increase education and training to identify and prevent cyberbullying, and to foster responsible digital citizenship among students. De Blasio has said for months that he intended to lift the ban and that his son, Dante, brings a cellphone to school at Brooklyn Tech. Parents, students and educators said they supported the change. “It’s a good idea,” said Timothy Everett, 17, a senior at the Urban Assembly School for the Performing Arts in Harlem. “You need your phone for emergency purposes.”
– New York City's school teachers might want to check themselves into detention because the city is ending its ban on cell-phone-wielding schoolkids, clearing the way for some 1.1 million students to skip into class with iPhones in tow. Well, sort of: As the New York Daily News reports, while Mayor Bill de Blasio today announced he's killing the ban effective March 2, he's leaving the specifics of cell-phone policy up to individual schools to work out themselves. "Parents should be able to call or text their kids—that’s what this comes down to," says de Blasio, who says the ban, long championed by predecessor Michael Bloomberg, was unfairly implemented anyway. "It’s something Chirlane and I felt ourselves when Chiara took the subway to high school in another borough each day." A school's default policy will be to allow kids to bring devices to school, but stash them during the day. Other options on the table include whether to let kids use them at designated places or times (like lunch), or whether to use devices for texting, er, learning purposes in the classroom. Predictably, de Blasio's move is eliciting some strong support ("Finally, someone saw the light!" crows one civil rights lawyer to the New York Post), mostly from kids and parents, as well as some real concern—mostly from educators. "What are we going to do if illicit content is found on the phones?" asks one teacher, while another principal is more blunt: "It's definitely going to cause problems." (One student used her phone to cheat—with dire results.)
Dylann Roof Asks To Fire Legal Team Of 'Biological Enemies' Dylann Roof, on federal death row for gunning down nine people two years ago at a historically black church in Charleston, S.C., wants his legal team dismissed because of the lawyers' ethnicity as he seeks to have his conviction and death sentence overturned. "My two currently appointed attorneys, Alexandra Yates and Sapna Mirchandani, are Jewish and Indian respectively," Roof wrote in a letter filed Monday with the 4th U.S. Circuit Court of Appeals. "It is therefore quite literally impossible that they and I could have the same interests relating to my case." The handwritten note goes on to state, "Because of my political views, which are arguably religious, it will be impossible for me to trust two attorneys that are my political and biological enemies." Roof, who represented himself at the sentencing portion of his trial, targeted African-Americans in what federal prosecutors said was a bid to start a race war. On the evening of June 17, 2015, worshippers welcomed Roof at a prayer meeting at Emanuel African Methodist Episcopal Church. After sitting among them for nearly an hour, Roof opened fire as people closed their eyes in prayer. Roof confessed to the crime during a taped interrogation. But at Roof's federal trial, defense attorney David Bruck argued that his client was influenced by hateful online rhetoric. The Post and Courier reported that Bruck wanted to present evidence of Roof's mental illness in a bid to spare his life, but Roof opposed it. In his letter, Roof says that Bruck is Jewish and that it "was a constant source of conflict even with my constant efforts to look past it." "My intentions are to have the appeals process for my case go as smoothly as possible," Roof writes, "the appeals should be worked on and written by lawyers with my best interests in mind." The Post and Courier reports that Yates and Mirchandani were appointed to represent Roof after he was sentenced to death in January. A federal jury found him guilty of hate crimes and murder late last year, and he later pleaded guilty to murder charges in South Carolina state court. ||||| A federal court rejected a request by Dylann Roof, the unabashed white supremacist who killed nine black parishioners at a South Carolina church two years ago, to dismiss his attorneys because they’re Jewish and Indian. Roof, who was sent to death row for the June 2015 massacre at a historically black church in Charleston, requested that the two public defenders appointed to handle his appeal be removed from his case, saying their ethnicities are “a barrier to effective communication.” “Because of my political views, which are arguably religious, it will be impossible for me to trust two attorneys that are my political and biological enemies,” the 23-year-old said in a handwritten, three-page motion filed Monday in the U.S. Court of Appeals for the 4th Circuit. The court denied the request in a one-sentence ruling Tuesday. His attorneys, Alexandra Yates and Sapna Mirchandani, did not respond to requests for comment.
Rishi Bagga, president of the South Asian Bar Association of North America, said that requesting an attorney’s removal should be based on legal abilities. He said Roof’s comments highlight a challenge among public defenders, who often have to represent clients who don’t reflect their own views. “It’s really part of a lawyer’s oath to represent someone to the best of their ability regardless of their own beliefs, religion or background or origin,” Bagga said. Roof has been on death row since a jury convicted him of dozens of charges, including federal hate crimes, for the deaths of nine parishioners who had invited him into their Bible study at Charleston’s Emanuel African Methodist Episcopal Church. Federal prosecutors said Roof committed the massacre to try to start a race war, and they presented as evidence his videotaped confession, in which Roof made no effort to deny the killings. The two-hour video, played during the third day of Roof’s trial in December, showed him calm — laughing at times — as he confessed to the deadly shooting. He was nonchalant when he explained to FBI agents why he chose to gun down six women and three men. With a few swift motions of his right arm, he demonstrated how he pulled out his .45-caliber Glock and opened fire — taking 77 total shots. “Well yeah, I mean, I just went to that church in Charleston and, uh, I did it,” Roof told agents when they asked him to explain what happened. Roof wavered briefly when the agents asked him to describe exactly what he had done. “Well, I killed them, I guess,” he said. He also tried to justify the killings, saying what he did was “so minuscule” to what black people are “doing to white people every day all the time.” “I had to do it because somebody had to do something,” he told the agents. “Black people are killing white people every day on the street, and they are raping white women.” Prosecutors also introduced Roof’s jailhouse journal, in which he wrote that he does not regret what he did. “I have not shed a tear for the innocent people I killed,” he said. Roof’s new court filing isn’t the first time he has complained about his attorneys. During his trial, he sought to drop his defense attorney, David Isaac Bruck, whom Roof threatened to kill if he got out of jail. Bruck is also Jewish. Roof sought to argue on his own behalf during the trial’s sentencing phase, a portion of a capital murder case during which defense attorneys argue for a more lenient sentence. A judge later determined that Roof was competent to represent himself as long as his legal team was on standby. In the handwritten motion filed Monday, Roof said Bruck’s Jewish heritage “was a constant source of conflict” despite Roof’s efforts to “look past it.” The “difficulties” at his trial, Roof argued, should justify removal of his public defenders serving as his appellate attorneys. He said his appeal “should be worked on and written by lawyers with my best interests in mind.” Roof was also charged at the state level. He avoided a second death penalty trial after pleading guilty in March to nine counts of murder, three counts of attempted murder and a related weapons charge. He was given nine consecutive life sentences in April.
Court records unsealed in May provided a glimpse into Roof’s mind. Experts who examined him said he was less concerned over his own fate and worried more about whether certain family members were eating together, how his cats were doing without him, what was written on his Wikipedia page and what he was going to wear in court. He also resisted the autism diagnosis from a psychologist hired by his defense team, saying autism was for “nerds” and “losers,” according to court records. The “state psychiatrist told me there is nothing wrong with me,” according to court records paraphrasing Roof’s statements. “I don’t have autism. I’m just a sociopath.” As Roof sits in a federal prison in Terre Haute, Ind., a mural in Ann Arbor, Mich., was vandalized with racist graffiti supporting Roof. “Free Dylann Roof, I Hate N——,” it said, according to the Michigan Daily. Lindsey Bever contributed to this report.
– Dylann Roof, the convicted white supremacist murderer of nine black churchgoers in Charleston, SC, wants to dismiss and replace his legal team because they are his "political and biological enemies," he says in a handwritten letter filed with the 4th US Circuit Court of Appeals Monday. Roof, who's appealing his conviction and death sentence, says in the letter that his attorneys, Alexandra Yates and Sapna Mirchandani, "are Jewish and Indian respectively. It is therefore quite literally impossible that they and I could have the same interests relating to my case," the Post and Courier reports. "Because of my political views, which are arguably religious, it will be impossible for me to trust two attorneys that are my political and biological enemies," he continues. He also notes that his defense attorney at his federal trial was Jewish, and "his ethnicity was a constant source of conflict even with my constant efforts to look past it." (In fact, the defense team alleged Roof threatened to kill the lawyer if he was ever freed.) Roof represented himself during the sentencing phase of his trial. Federal prosecutors say his 2015 rampage at Emanuel African Methodist Episcopal Church was an attempt to start a race war. Per the Washington Post, the court will review Roof's request and make a decision. The attorneys haven't commented. NPR has Roof's letter in full.
due to prominent anatomical location , maxillofacial injuries and fractures are nearly always associated with moderate - to - severe road traffic accidents ( rta ) . panfacial or maxillofacial injuries may lead to derangement of the architecture and disruption of different components ( soft tissue , bony , and cartilaginous ) of the upper airway , often with little external evidence of the deformity . in many such situations , recent advancements in the discipline of oral and maxillofacial surgery and availability of new techniques and technologies have made rigid fixation with mini and microplate osteosynthesis possible in almost all facial fractures . delivery of anesthesia for maxillofacial surgeries is a challenge because the anesthesiologist has to share the upper airway field with the surgeon . temporary intraoperative occlusion of teeth ( intramaxillary and maxillomandibular fixation ) is needed to check the alignment of the fracture fragments , making orotracheal intubation unsuitable . most importantly , presence of altered airway anatomy and airway edema may compromise the airway . options in airway management in these patients include tracheostomy , nasotracheal intubation , retromolar intubation , and submentotracheal intubation . nasotracheal intubation is not recommended in presence of panfacial fracture , cervical spine injury , skull base fracture with or without cerebrospinal fluid rhinorrhea , systemic coagulation disorders , distorted nasal anatomy and when nasal packing is indicated . fiberoptic - guided nasotracheal intubation , though controversial , can be tried in selected patients . nasotracheal intubation may be impossible as deformity or fracture of nasal bones , cribriform plate of ethmoid or nasoorbital ethmoid complex are often associated . moreover , even a small bleed in presence of altered anatomy may lead to complete loss of vision through a fiberscope and may lead to an emergent situation . potential complications of nasotracheal intubation are mucosal dissection , injury to adenoids , meningitis , sepsis , sinusitis , epistaxis , dislodgement of bony fragments , and obstruction of the tube by the distorted airway anatomy or rarely intracranial intubation . in patients requiring simultaneous nasal or nasoorbital ethmoid reconstruction after the rigid fixation of mandible and/or maxilla , intraoperative switching over of the endotracheal tube ( ett ) from nasal to oral route is required which may compromise the surgical field sterility and may increase the possibility of pulmonary aspiration . possible difficulty in airway management , disruption of surgical repair , and obstruction to the operative field are additional limitations . elective short - term tracheostomy is the conventional and time - tested method for airway access in these patients . the procedure is difficult in obese patients , children , and patients with thyroid swelling . the incidence of immediate complications is 68% and they include hemorrhage , surgical emphysema , pneumothorax , pneumomediastinum , and recurrent laryngeal nerve palsy . the incidence of delayed complications is 60% and they include stomal and respiratory tract infections , blockage of the tube , dysphagia , difficulty with decannulation , tracheal stenosis , tracheoesophageal fistula , and suboptimal visible scar . compared to tracheostomy , submental intubation is associated with lesser postoperative complications and requires minimal postoperative care resulting in shorter duration of hospitalization . 
this procedure can be carried out even in a setup with limited resources . in 1983 , bonfils first documented the use of the retromolar space for endotracheal intubation . the retromolar space is the space between the back of the last molar and the anterior component of the ascending ramus of the mandible , where it crosses the alveolar margin . the space may not be identical bilaterally due to dissimilarities in the development of the third molars , and either side may provide space to rest the ett . patients with le fort ii fracture , having both occlusive change and disruption of the nasal architecture , are potential candidates for retromolar intubation . facial trauma - related trismus is not uncommon , and even with complete dental occlusion this space can be used for suctioning the oropharynx or repositioning the ett . both the bonfils intubating fiberscope and the flexible fiberscope have been used through this space in patients with a difficult airway , such as limited mouth opening , limited neck mobility , or cervical spine injury . martinez - lage et al . reported the retromolar route for endotracheal intubation in 39 patients requiring craniofacial and orthognathic surgeries , as a substitute for surgical airways , in 1998 . with this simple , atraumatic , and rapid technique , intraoperative dental occlusion is possible and monitoring the patency of the ett is also not difficult . successful maxillofacial surgeries have been reported in a good number of patients using this technique.[1012 ] tooth loss is not infrequent in le fort ii fractures , and this gap can also be utilized to safely harbor the ett secured on the maxillary side . the primary requirement for a successful retromolar intubation is the availability of adequate space in the retromolar area . the adequacy of the retromolar space is judged by placing the patient 's index finger in the space and instructing him to occlude the teeth slowly . after confirmation of adequate dental occlusion preoperatively and successful orotracheal intubation , the ett is moved to the contralateral maxillary side of the retromolar space or to a missing tooth space and secured with silk sutures . this space is usually spacious enough to anchor an ett of 7.0-mm internal diameter in adult males . retromolar intubation , a comparatively less invasive and time - efficient technique , may be an acceptable approach for the patient as it essentially serves the very same purpose as submental intubation . the space is usually adequate to accommodate the ett in children . as a child grows , a larger tube size is needed and this space tends to become smaller with age and with eruption of the molars . there are large individual variations in the retromolar space in adults , especially when the third molars are impacted or completely erupted . if the retromolar space is not adequate , extraction of the third molar with a semilunar osteotomy in the area was suggested by martinez - lage et al . to create enough space for resting the tube . the creation of retromolar space with an osteotomy used to take almost 25 min on average . the technique was however more invasive , destructive , and time consuming , and is thus no longer practiced . osteotomy - related permanent bone loss , just to create a space for the temporary accommodation of an ett , seems impractical . retromolar intubation without osteotomy took 4 min 33 s on average from induction to ventilator switch - on in 84 patients ( retromolar intubation with tooth fixation ) , compared with a mean procedural time of 9.9 min in 746 patients for submental intubation . 
in a series of 15 patients kruger et al . evaluated 2857 third molars of patients between 18 and 26 years of age and reported that nearly 42% third molars of maxillary side remained unerupted at 26 years of age and that approximately one - third of fully erupted impacted third molars had been uprooted . preoperative submental intubation in craniofacial injuries was first proposed by a spanish faciomaxillary surgeon , francisco hernandez altemir in 1986 . he proposed it as an alternative to short - term elective tracheostomy , where both oral and nasal route for endotracheal intubation were not feasible . this technique is applicable where anatomy is likely to become normal after the surgery and long - term postoperative ventilation or protection of airway is not anticipated . indications for submental intubation are maxillofacial injuries with associated fractures of nasal bone and skull base or use of temporary intermaxillary fixation in patients where nasotracheal intubation is not possible . the scope of this technique has extended far beyond the realm of faciomaxillary surgeries and it has been successfully used in orthognathic surgeries and elective aesthetic face surgeries as there is minimal distortion of the nasolabial soft tissue . it is also used in surgeries where both nasal and oral passages are used by the surgeons ( e.g. , repair of postcancrum oris defects [ figure 1 ] , oronasal fistula , selected cleft lip , and palate surgeries ) . repair of congenital malformations , skull base surgery , multiple or complex facial osteotomies , transfacial oncologic procedures of the cranial base , and pediculated craniofacial surgeries are current indications for submental intubation . postcancrum oris defect the contraindications of submental intubation are patients refusal , bleeding diathesis , laryngotracheal disruption , infection at the proposed site , gunshot injuries in the maxillofacial region , long - term airway maintenance , tumor ablation in maxillofacial region , and history of keloid formation . a comparison of different techniques of airway access in complex maxillofacial injury is tabulated as table 1 . the submental area , situated just below the chin , is demarcated by the anterior bellies of digastric muscles of both sides , chin at the apex , body of the hyoid bone at the base with the mylohyoid muscles flooring it [ figures 2 and 3 ] . usually superficial to mylohyoid , it contains a few anatomical structures like lymph nodes and thin vessels . surface anatomy of submental and submandibular region submental and submandibular area after dissection the submandibular region , just below the body and the ramus of the mandible , is bounded anteroinferiorly by the anterior belly of digastric , posteroinferiorly by the posterior belly of digastric muscles , and superiorly the inferior border of the mandible including the imaginary line projected to the mastoid process , being floored by mylohyoid and hyoglossus . superficial to mylohyoid and hyoglossus there lie the submandibular gland , hypoglossal nerve , and a few arteries [ figure 4 ] . submental and submandibular area after dissection both the routes ( anterior - submandibular and posterior - submandibular , depending on the incision chosen ) approach the floor of the mouth laterally to hyoglossus , an extrinsic quadrilateral muscle of tongue having origin at hyoid bone and insertion on the sides of the tongue . 
laterally , it relates to the lingual nerve with submandibular ganglion coursing downwards and forwards , submandibular gland ( deep part ) with its duct hooked by the lingual nerve , and the hypoglossal nerve coursing upwards and forwards to supply the tongue . in the posterior part of the submandibular area , not only the lingual and hypoglossal nerves are wide apart but also the submandibular gland exists instead of its duct . in the anterior part , those two nerves get closely approximated in convergence manner with sharing some fibers of hypoglossal and lingual nerve . in the anterior submandibular approach , there is immense chance of facing the two main nerves of tongue and the submandibular duct along with the sublingual gland is in close proximity to one another , which can get damaged unintentionally . if we approach at a posterior plane at the level of third molar ( posterior submandibular approach ) , there is less chance of such damage . only possible hindrance could be by the submandibular gland itself , which can be easily retracted without any anatomical or functional damage [ figure 5 ] . intubations through anterior part of digastric triangle ( right red arrow ) may damage the closely approximated lingual nerve , hypoglossal nerve and sublingual duct , whereas in the retromolar space ( left red arrow ) there are lesser chances of damaging them as they are far apart . the conventional submental intubation technique essentially involves creation of an orocutaneous tunnel and diverting the proximal end of the armoured ett through anterior floor of the mouth . appropriate broad spectrum antibiotic , preferably amoxicillin and clavulanic acid , is given intravenously 1 h prior to the procedure . in patients allergic to penicillin , tight seal of the appropriate - sized flexometallic etts is made easily detachable from the universal connector . measures to tackle any emergent situation ( e.g. , cricothyrotomy and transtracheal jet ventilation ) are kept ready , especially for those patients who can not withstand brief period of cessation of ventilation . refinements such as double tube method or airway exchanger technique reduce the time between detachment of the circuit and reattachment . general anesthesia and orotracheal intubation with appropriate - sized armoured ett area around proposed site of incision is prepped with 10% povidone iodine solution and is draped with sterile dressings . after local infiltration of skin and soft tissue of the proposed site with 2% lignocaine with adrenaline , a skin incision is made in the right submental region parallel to the inferior border of the corresponding mandible [ figure 6 ] . incision made in the midline in submental region approximately 1.5 cm incision is good enough for easy passage of ett up to 7.5 mm size and 2 cm incision is required for ett of larger diameter . right - sided incision is always advantageous as it permits better visualization of the intraoral part of the ett by left handed laryngoscopy . however , selection of the side is usually done so as to avoid the site of injury and mandibular fracture . blunt dissection is carried out with a medium - sized curved artery forceps or kelly clamp along the lingual surface of the mandible through subcutaneous tissue , platysma , investing layer of deep cervical fascia , mylohyoid muscle in between the two heads of digastrics muscles to sublingual mucosa [ figure 7 ] . 
blunt dissection using artery forceps a paramedian oral incision is made over the tented mucosa created by the tip of the artery forceps . the patient is ventilated with 100% oxygen and 1% isoflurane for 5 min , before replacing the tube through the incision , to tolerate the period of ventilatory pause . after denitrogenation , breathing circuit is disconnected and universal connector is detached from the tube . the tip of the pilot balloon cuff is pulled through the submental incision first [ figure 8 ] . pilot balloon exteriorized through the orocutaneous tunnel then the tip of the artery forceps is again introduced through the incision to take out the distal end of the ett in similar fashion and the ett is placed in the sulcus between the tongue and the mandible in the floor of the mouth . while pulling , the intraoral part of the flexometallic ett is kept steady with a magill 's forceps or the index finger of an assistant . the connector is reattached and the ett reconnected with the breathing circuit . the position of the tube is confirmed by direct laryngoscopy , chest auscultation , and capnography . the skin exit point of the ett is marked with permanent ink . the ett is secured to the skin using stay suture with 2 - 0 heavy silk . additional fixation by placing a tie suture between the ett and the universal connector can be applied for further safety . transparent adhesive dressing is applied over the skin to avoid displacement while manipulating the mandible as well as to visualize the ink mark [ figures 9 and 10 ] . endotracheal tube rerouted through submental space and fixed to the skin transparent dressing applied to observe tube dislodgement a throat pack is applied if necessary . capnography has specific importance for alerting the anesthesiologist about tube compression , accidental extubation , or endobronchial intubation with jaw movement during surgery . if regular tube is used , assisted manual ventilation during surgery can pick up early evidence of tube compression or kinking . at the end of the surgery , tracheal extubation is done in the operation theatre or in the postoperative recovery room through the submental route , when the standard criteria of extubation are fulfilled and patient is awake and maintaining airway reflexes . soft - tissue edema can compromise airway in the immediate postoperative period , all patients should be dealt as difficult extubation cases . in situation when there is need for long - term airway maintenance or ventilatory support , the submentotracheal intubation is converted back to orotracheal intubation by pulling the proximal end of the ett and pilot balloon through the incision . after local infiltration of the wound with 2% lignocaine and adrenaline , the platysma is closed with an absorbable suture and skin closed with monofilament suture . the skin suture applied should be not too tight to allow drainage of tissue fluid . the mucosal wound is left to heal by secondary intention . during the postoperative period , care of the intraoral wound is done by maintenance of oral hygiene using 0.12 % chlorhexidine mouth wash six hourly . the broad spectrum antibiotics are continued postoperatively . in case of wound infection or suppuration , cutting one or two stitches usually resolves the complication . as some of the complications of the submental intubation may appear late , the patients are followed up till discharge , and after 1 , 3 , and 6 months postoperatively . 
since first - published report of altemir , several modifications of submental / submandibular endotracheal intubation have been tried with an expectation of improved outcome . gadre and waknis considered transmylohyoid as more appropriate terminology , as in this technique the ett can pass through the mylohyoid muscle anywhere between the first mandibular molars of either side anterior to the massetor muscle , instead of limiting to the submental triangle . in patients with compound comminuted fracture of symphysis and parasymphysis , the conventional submental technique may result in significant stripping of lingual periosteum that can jeopardize blood supply . most authors are of the opinion that subperiosteal tube placement , as proposed by altemir in his first report , is not essential . the placement of ett in the anterior part of the submandibular region and submento - submandibular intubation seem more appropriate nomenclatures . in strict midline approach , ( both mylohyoid muscles meet in the midline in an avascular plane ) the chance of bleeding is less . moreover , transfer of ett is easier as structures are less cramped . use of two ett ( one anterograde and the other retrograde ) is claimed to be superior because there is less chance of hypoxia if there is difficulty in retrieval and no need of detaching the connector . after conventional orotracheal intubation with a regular ett , another reinforced ett is introduced through the submental incision from exterior to the oral cavity and negotiated in the oropharynx with a mcgill forceps . hanamoto et al . used sterile polypropylene cylinder of 10-ml syringe , through the submental incision into the oral cavity . the distal end of the orotracheal tube was connected with the proximal end of the second tube passed through the submental tunnel . one serious drawback of retrograde technique is that it may result in introduction of infection to lower airways . risk of sepsis can be due to the contaminated pilot balloon passing through the incision wound during extubation . cutting the end of the ett and replacement with another universal connector has also been advocated . conversion of the orotracheal to submento - tracheal intubation can be done faster with use of a tube exchanger . drolet et al . suggested the use of a tube exchanger to exchange a damaged ett through the submental route . it facilitates exchanging the tube without loss of airway in difficult airway situations even with a steeper angle than oral or nasal route . during the procedure the patient can be ventilated through the port into the airway exchanger minimizing the chances of desaturation . injury to the pilot balloon while retrieving through the tunnel can be averted by inserting the deflated pilot balloon into the ett . the ett is further cleared of any blood or secretion after it is taken out . rule of 2 - 2 - 2 2 cm long incision , 2 cm away from the midline , 2 cm medial and parallel to the mandibular margin has been suggested by nyarady et al . nylon guiding tube for exteriorizing the ett has less chance of injury to the associated structures . dilators of percutaneous dilatation sets can be used to create mucocutaneous fistula as an alternative to blunt dissection and are claimed to produce minimal scar and minimal bleeding . a 100% silicon wire reinforced tube primarily intended for intubation through intubating laryngeal mask airway ( ilma ) is a better option as it has an easily removable universal connector . a 1.5-cm skin incision , 1 in . 
below and 0.5 in . anterior to the angle of the mandible is found to be more advantageous as posterior placement of the tube assures unobstructed surgical field . the preformed curvature helps in positioning of the tube as it conforms to the anatomy of the region . utilized surgical glove finger to cover the proximal end of the ett , which helped in preventing the entry of blood and soft tissue during its passage through orocutaneous tunnel . similarly , adeyemo et al . used nylon tube sac to cover the open end of the tube during transfer through the tunnel . the submental route has also been used for the lma with reinforced tube in specific situations such as laryngotracheal disruption , voice professionals refusing endotracheal intubation , and patients with unstable cervical fractures posted for faciomaxillary surgery . however , movement of patient 's head should be very gentle and great care is needed to prevent dislodgement of the lma . the use of combitube sa ( tyco - kendall , mansfield , ma ) through the wide submental incision or the external injury site makes provision for adequate dental occlusion , unimpeded surgical access , and ease of ventilation in severe maxillofacial injuries . moreover , the inflated proximal balloon helps to allay pain and minimizes bleeding by spontaneous reduction of fracture fragments . there is difficulty in passing the tube through submental incision and chance of hypoxia if retrieval of tube and reestablishing connection is delayed . there are incidences of superficial infection , abscess , sepsis , damage to the lingual nerve , hypertrophic scar , orocutaneous fistula , and mucocele . superficial infection of the submental incision site is documented as the most common complication , whereas airway compromise is stated to be most important potential complication . fortunately , most of the reported superficial infections responded to local measures and minimal intervention like partial removal of stitches . trauma to submandibular and sublingual gland and ducts and salivary fistula have also been described . suctioning is difficult through the tube and can be done after extension of the neck with well - lubricated suction catheter . the postoperative period may be complicated by airway edema , obstruction , hematoma , and reexploration . yoon et al . reported of accidental detachment of a deflated pilot balloon while manipulating the balloon during converting the submental intubation back to orotracheal . it was managed by cutting a pilot balloon from a new ett and by connecting it to the first tube by a 20 g needle connector . submento - tracheal intubation is generally better tolerated by the awake patient than orotracheal intubation [ figure 11 ] . chance of biting the ett by the patient with disruption of the surgical repair is always a threat in oral extubation . awake patient tolerating the tube well ( permission obtained from the patient ) though adequate mouth opening is an essential part of conventional submental intubation , successful retrograde submental intubation ( combination of retrograde intubation and submental intubation ) is also reported in a patient with faciomaxillary trauma with restricted mouth opening due to bilateral temporomandibular joint dislocation . adequate retromolar space is an essential prerequisite to introduce the suction catheter and necessary arrangements must be ready to cope with any emergency situation . 
attention to this useful but underutilized mode of airway access has been rising over the last 25 years . nearly 90% of the publications on this topic indexed in pubmed have appeared in the last decade . a brief summary of the literature on submental intubation in pubmed is tabulated as table 2 . a recent literature review comprising 842 patients and 41 articles cites a success rate of 100% for this technique , with the duration of the procedure varying between less than 4 min and 30 min . this review also reports the development of a hypertrophic scar in 3 of the 842 patients . as aesthetic acceptance of the scar is highly unpredictable , the nasal route of airway access , if possible , is always a better choice . the utility of submental intubation is guarded in elective midfacial osteotomy , although there are reports of successful submental intubation in no less than 100 such patients . submental intubation seems to be an attractive and adaptable option for intraoperative airway control in selected complex craniofacial injuries . though it demands some surgical skill , the technique is simple , rapid , and easy to learn . the overall increase in operating time by about 20 min is one of its limitations . there is still no consensus regarding the superiority of one technique over another as a mode of securing the airway in complex craniofacial injury repair . the paucity of published literature ( case reports and case series ) and the quality of evidence limit any definite recommendation on its use . the patient 's ability to cooperate with the procedure , liaison between the surgeons and the anesthesiologists , the experience of the airway managers , and the benefits of single versus multiple surgical interventions are important considerations . a prolonged period of time is required for adequate planning and preparation of the patient , personnel , and procedure , which limits the utility of this technique in emergency situations .
airway management in patients with faciomaxillary injuries is challenging due to disruption of the components of the upper airway . the anesthesiologist has to share the airway with the surgeons . oral and nasal routes for intubation are often not feasible . most patients have associated nasal fractures , which preclude use of the nasal route of intubation . intermittent intraoperative dental occlusion is needed to check alignment of the fracture fragments , which contraindicates orotracheal intubation . tracheostomy in such situations is conventional and time - tested ; however , it has life - threatening complications , needs special postoperative care , lengthens hospital stay , and adds to expenses . retromolar intubation may be an option , but the retromolar space may not be adequate in all adult patients . submental intubation provides intraoperative airway control and avoids use of the oral and nasal routes , with minimal complications . it allows intraoperative dental occlusion and is an acceptable option , especially when long - term postoperative ventilation is not planned . the technique has minimal complications and better patient and surgeon acceptability . there have been several modifications of the technique with an expectation of an improved outcome . its limitations are the longer time needed for preparation , the inability to maintain long - term postoperative ventilation , and unfamiliarity with the technique itself . overall , it is an acceptable alternative to tracheostomy for good perioperative airway access .
suppose that @xmath4 $ ] is a polynomial of degree @xmath0 , and let @xmath5 be the set of critical points of @xmath6 . define the _ post - critical set _ @xmath7 , where @xmath8 denotes the @xmath9th iterate of @xmath6 . note that @xmath10 consists of all points over which the map @xmath11 is ramified for at least one @xmath9 . when @xmath10 is finite , we call @xmath6 _ post - critically finite_. choose a point @xmath12 , and denote by @xmath13 the set of all preimages of @xmath14 under some iterate of @xmath6 . then @xmath13 is a complete rooted @xmath1-ary tree whose @xmath9th level is given by @xmath15 . the fundamental group @xmath16 acts on each set @xmath15 by monodromy , and thus gives a subgroup of @xmath17 that we call the _ iterated monodromy group _ of @xmath6 , and write @xmath18 . in the twenty years since their introduction , iterated monodromy groups have become a powerful tool used in a variety of settings . they are both computable and have deep connections to the dynamics of the underlying polynomial . indeed , the action of a set of generators of @xmath18 on @xmath13 can be given by a simple finite automaton that depends largely on the structure of the set @xmath10 ( see section [ background2 ] for more details ) . to illustrate the connections to dynamics , one can associate to @xmath18 a _ limit dynamical system _ whose points are equivalence classes of left - infinite paths in @xmath13 , and whose map is the shift map . this dynamical system is topologically conjugate to the action of @xmath6 on its julia set ( * ? ? ? * section 3.6 and theorem 6.4.4 ) . for this reason the group @xmath19 has become known as the _ basilica group _ , [ basilica ] since the top half of its julia set bears a striking resemblance to the profile of the basilica di san marco in venice . applications of iterated monodromy groups abound . defined in a more general setting , they have been used by bartholdi and nekrashevych to resolve the well - known twisted rabbit " problem of j. hubbard @xcite . they have also attracted interest for their purely group - theoretic properties ; for instance , the basilica group is the first known example separating the classes of amenable groups and sub - exponentially amenable groups @xcite . the more general class of groups generated by finite automata includes the renowned grigorchuk group @xcite , the first example of a group of intermediate growth . the monograph @xcite gives an overview of iterated monodromy groups and their applications , as well as an extensive bibliography . our interest in iterated monodromy groups comes from arithmetic , where properties of the action of @xmath18 on the boundary of @xmath13 yield information about interesting sets of prime ideals ( see section [ connections1 ] for more details ) . in particular , we are interested in elements of @xmath18 that fix at least one point on the boundary of @xmath13 , or equivalently , at least one infinite branch of @xmath13 . our main result is that such elements are rare for a large class of @xmath6 . let us introduce some notation . we may identify @xmath13 with the set @xmath20 of all finite words ( including the empty word ) over an alphabet @xmath21 containing @xmath1 letters . the root of @xmath13 corresponds to the empty word , and @xmath15 corresponds to @xmath22 , the set of all words of length @xmath9 . the boundary of @xmath20 is the set @xmath23 of _ ends _ of @xmath20 . 
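as a concrete illustration of the post-critically finite condition defined above, here is a minimal numerical sketch (mine, not code from the paper): it iterates each critical point of a polynomial in exact rational arithmetic and reports whether every critical orbit closes up into a cycle. the two test maps are z^2 - 1, whose iterated monodromy group is the basilica group mentioned above, and z^2 + 1, whose critical orbit escapes; the step budget and escape bound are arbitrary cutoffs, not part of any definition.

```python
from fractions import Fraction

def post_critical_orbit(f, critical_points, max_steps=200, escape=10**9):
    """Union of the forward orbits of the critical values of f.
    Returns (orbit_so_far, True) if every critical orbit closes into a
    cycle, and (orbit_so_far, False) if an orbit escapes or the step
    budget runs out, i.e. f does not look post-critically finite."""
    orbit = set()
    for c in critical_points:
        seen, z = set(), f(c)          # the post-critical set starts at f(c)
        while z not in seen:
            if len(seen) > max_steps or abs(z) > escape:
                return orbit | seen, False
            seen.add(z)
            z = f(z)
        orbit |= seen                  # this critical orbit is eventually periodic
    return orbit, True

# z^2 - 1: the critical point 0 has orbit 0 -> -1 -> 0, so the map is
# post-critically finite with post-critical set {-1, 0}.
print(post_critical_orbit(lambda z: z * z - 1, [Fraction(0)]))

# z^2 + 1: the critical orbit 1, 2, 5, 26, 677, ... never cycles, so it is not.
print(post_critical_orbit(lambda z: z * z + 1, [Fraction(0)]))
```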
let @xmath24 be a group of automorphisms of @xmath20 , and denote by @xmath25 the image of @xmath24 under the restriction map @xmath26 . define @xmath27 note that the fraction above is non - increasing , since all lifts to @xmath28 of an element of @xmath25 with no fixed points again have no fixed points . thus the limit in exists . we may also describe @xmath29 by considering the closure @xmath30 of @xmath24 in @xmath31 , which is a compact topological group and thus comes with a natural probability measure @xmath32 ( the normalized haar measure ) . it is straightforward to show that @xmath33 note that @xmath29 is determined by @xmath24 rather than @xmath30 , even though we have made reference to @xmath30 in . we use the notation @xmath30 for the closure of @xmath24 because the latter coincides with the inverse limit of the groups @xmath25 . following the terminology of @xcite , define @xmath34 $ ] to be _ exceptional _ if there exists a finite , non - empty set @xmath35 with @xmath36 . our main result is the following . [ monodromy ] let @xmath37 $ ] be a post - critically finite polynomial of degree at least two , with iterated monodromy group @xmath24 . if @xmath6 is not exceptional , then @xmath38 . exceptional polynomials have appeared as a distinguished class in a variety of settings ; for instance , the affine orbifold lamination attached to certain exceptional polynomials has an isolated leaf ( * ? ? ? * section 2 ) ( see also@xcite for special properties of such polynomials ) . it is not difficult to show that if @xmath6 is exceptional , then @xmath39 ( see p. ) . if @xmath40 , then @xmath6 is linearly conjugate to @xmath41 , where @xmath42 is the chebyshev polynomial of degree @xmath1 ( see proposition [ chebclass ] ) . note that linear conjugacy preserves the conjugacy class in @xmath31 of @xmath18 . if @xmath43 , then @xmath6 has a fixed point @xmath44 all of whose preimages are critical except @xmath44 itself , so @xmath6 is conjugate to a polynomial of the form @xmath45 where @xmath46 and @xmath47 . we also compute @xmath29 for exceptional polynomials with @xmath40 . note that @xmath42 is conjugate to @xmath48 when @xmath1 is even . [ exceptprop ] if @xmath49 $ ] is conjugate to @xmath42 for @xmath1 even , then @xmath50 . if @xmath6 is conjugate to @xmath41 for @xmath1 odd , then @xmath51 . thus the only post - critically finite polynomials @xmath6 for which @xmath52 remains unknown are non - chebyshev maps conjugate to a map of the form . we remark that the power maps @xmath53 are not exceptional , and hence have @xmath54 , in contrast to chebyshev polynomials . when @xmath6 is quadratic , it must be conjugate to @xmath55 for @xmath56 . the only exceptional polynomial of this form is @xmath57 , since those of the form have degree at least 3 . theorem [ monodromy ] and proposition [ exceptprop ] thus give the result that furnished the original motivation for this project : [ quadpoly ] let @xmath58 be post - critically finite , and let @xmath24 be its iterated monodromy group . then @xmath38 unless @xmath59 is the chebyshev polynomial @xmath60 , in which case @xmath61 . to prove theorem [ monodromy ] , we study groups of automorphisms of rooted trees , and draw heavily on a characterization due to v. nekrashevych ( * ? ? ? * theorem 6.10.8 ) of which such groups are @xmath18 for some post - critically finite @xmath6 . along the way we derive some results that apply more generally . 
for instance , define @xmath62 to be _ spherically transitive _ if it acts transitively on @xmath22 for each @xmath63 . every iterated monodromy group of a polynomial contains a spherically transitive element ( see theorem [ char ] and lemma [ trans ] ) , which is furnished by monodromy at infinity . this element plays crucial role in our analysis . [ gen ] suppose that @xmath64 has a spherically transitive element . then @xmath65 . we prove theorem [ gen ] in section [ fpprocess ] , where we define a stochastic process encoding information about fixed - point - free elements of @xmath25 . the presence of a spherically transitive element implies this process is a martingale ( theorem [ mart ] ) , and we establish theorem [ gen ] using a basic martingale convergence theorem . we give two other results that lead up to the proof of theorem [ monodromy ] . a salient feature of @xmath20 is its self - similarity , and we use this to describe elements of @xmath31 recursively . let @xmath62 , and for a vertex @xmath66 consider the subtrees @xmath67 and @xmath68 with root @xmath69 and @xmath70 , respectively . both are naturally isomorphic to @xmath20 , and identifying them gives an automorphism @xmath71 , called the _ restriction _ of @xmath72 at @xmath69 . see section [ background1 ] for examples and further definitions . we call @xmath24 _ contracting _ if there is a finite set @xmath73 such that for each @xmath74 , there is @xmath75 such that all restrictions of @xmath72 at words of length at least @xmath76 belong to @xmath77 . roughly , this property means that the action of @xmath72 is relatively restrained , at least close to the boundary of @xmath20 . in particular , many computations in @xmath24 can be reduced to finite considerations ; see section [ background1 ] for more details . it is known that iterated monodromy groups of post - critically finite polynomials are contracting ( * ? ? ? * theorems 3.9.12 and 6.10.8 ) . let [ ndefs ] @xmath78 when @xmath24 is contracting , @xmath79 is finite ; see proposition [ contracting ] . [ crystal ] suppose that @xmath64 is contracting and has a spherically transitive element . if every @xmath80 fixes infinitely many ends of @xmath20 , then @xmath38 . it is not hard to show that when @xmath24 is contracting , @xmath79 is torsion ( see the end of section [ fpprocess ] ) , and this gives [ torfree ] suppose that @xmath64 is contracting and has a spherically transitive element . if @xmath24 is torsion - free , then @xmath38 . the basilica group @xmath81 is known to be torsion - free @xcite , and so corollary [ torfree ] proves that @xmath82 . theorem [ crystal ] and corollary [ torfree ] are proven using only the tools from section [ fpprocess ] , which do not use specific facts about iterated monodromy groups . on the other hand , in order to use theorem [ crystal ] to prove theorem [ monodromy ] , we apply a characterization of iterated monodromy groups of post - critically finite polynomials due to nekrashevych ( * ? ? ? * theorem 6.10.8 ) ( we give a restatement in theorem [ char ] ) . this gives a natural finite generating set @xmath83 for @xmath18 . we introduce a _ kneading graph _ associated to @xmath83 , and use it to show that every element of @xmath79 is conjugate to a power of an element of @xmath84 ( theorem [ n1 ] ) . 
thus we reduce questions about fixed points of the action of elements of @xmath79 on @xmath20 to the study of the action of elements of @xmath83 on @xmath20 , and these are directly related to the orbits of the critical points of @xmath6 ( theorem [ compmon ] ) . in section [ sec : last ] we use a strong property of @xmath83 given in theorem [ char ] to show that only very special configurations of @xmath83 allow for elements of @xmath79 to fix a finite number of ends of @xmath20 . while most of the proofs here are group - theoretic , the consequences are of interest to number theorists , and hence we have made an effort to make the exposition relatively self - contained . after giving more details on links between our results and number theory ( section [ connections1 ] ) , we give in section [ background1 ] just the background necessary to prove theorem [ gen ] , theorem [ crystal ] , and corollary [ torfree ] . these proofs are in section [ fpprocess ] . for the remainder of the paper , more background is required , which we describe in section [ background2 ] . sections [ sec : treelike ] , [ sec : kneadgraph ] , and [ sec : last ] contain the rest of the proofs . we give here some links between iterated monodromy groups and density questions for sets of dynamical interest in arithmetic contexts . work is ongoing to exploit these connections to produce density results . let @xmath34 $ ] be post - critically finite . the action of @xmath85 on the set @xmath15 , where @xmath14 is outside the post - critical set , is given by monodromy , and we refer to this quotient as @xmath25 . on the other hand , the galois group @xmath86 , where @xmath87 is the splitting field of the polynomial @xmath88 $ ] , has a natural action on the @xmath89 roots of @xmath90 over @xmath91 . it is well - known ( see e.g. ( * ? ? ? * theorem 8.12 ) ) that @xmath92 , and the corresponding actions on @xmath93 and @xmath94 are conjugate subgroups of @xmath95 . we summarize this in the following proposition , which is essentially ( * ? ? ? * proposition 6.4.2 ) . [ cxgal ] the profinite iterated mondromy group @xmath96 is isomorphic to the galois group of @xmath97 over @xmath91 , where @xmath98 . moreover , the corresponding actions on the preimage trees @xmath99 and @xmath100 are conjguate . proposition [ cxgal ] prompted the introduction of iterated monodromy groups , as a tool for computing the group @xmath101 @xcite . in the remainder of this section , we give connections of iterated monodromy groups to arithmetic probelms , which proceed via the link to galois theory in proposition [ cxgal ] . let @xmath102 be a perfect field of characteristic @xmath103 , let @xmath104 be an algebraically closed field containing @xmath102 , and suppose that @xmath105 $ ] has degree @xmath0 that is prime to @xmath106 . for the moment we do not assume that @xmath6 is post - critically finite . let @xmath87 be the splitting field of @xmath107 over @xmath108 , and put @xmath109 . for @xmath110 , define the _ arithmetic monodromy group _ @xmath111 to be the galois group of @xmath87 over @xmath108 . the field of constants of @xmath87 is @xmath112 , which we denote by @xmath113 . the _ geometric monodromy group _ @xmath25 is the normal subgroup of @xmath111 whose elements restrict to the identity on @xmath114 . clearly @xmath115 is isomorphic to the galois group of @xmath116 , and hence we have an exact sequence @xmath117 for each @xmath110 . 
we may also take a specialization @xmath118 , thereby obtaining a specialized form of : @xmath119 note that the extension @xmath116 of constants is independent of specialization . for @xmath110 , the groups @xmath111 and @xmath25 both act naturally on the set @xmath120 of roots of @xmath107 over @xmath108 , while @xmath121 and @xmath122 act on the set @xmath123 of roots of @xmath124 over @xmath102 . we thus define @xmath125 with similar definitions for @xmath126 and @xmath127 we remark that the set @xmath128 has a natural structure as a complete @xmath129-ary rooted tree , while the same is true of @xmath130 provided that @xmath124 is separable for all @xmath63 ( or equivalently , there are no critical points of @xmath6 mapping to @xmath131 under any iterate of @xmath6 ) . we can thus identify @xmath128 with @xmath20 , and @xmath132 with subgroups of @xmath31 . with this identification , is the same as . if @xmath4 $ ] is a post - critically finite polynomial , its coefficients must satisfy algebraic relations imposed by the self - intersections of the orbits of the critical points , and hence @xmath6 is defined over a finite extension @xmath102 of @xmath133 . we may take @xmath134 , and then with @xmath135 becomes @xmath136 let @xmath102 be a global field , that is , a finite extension of @xmath137 or a finite extension of the function field @xmath138 of @xmath139 over the finite field with @xmath140 elements , and we take the ring of integers @xmath141 to be the integral closure in @xmath102 of @xmath137 or @xmath142 $ ] . we wish to have a notion of size for a set of prime ideals in @xmath141 . [ dirichlet ] let @xmath102 be a global field and @xmath143 be a set of primes in @xmath141 . the _ dirichlet density _ of @xmath143 is @xmath144 where @xmath145 is the number of elements in the field @xmath146 . the chebotarev density theorem allows one to relate the dirichlet density of various naturally - occuring sets of primes in @xmath141 to group - theoretic properties of the galois groups of certain extensions of @xmath102 . the following theorem is an instance of this . [ numfieldden ] let @xmath102 a number field with ring of integers @xmath141 , and let @xmath147 $ ] with @xmath148 . let @xmath149 be the set of primes dividing at least one element of the sequence @xmath150 . suppose that @xmath124 is separable for all @xmath63 , and let @xmath151 be as in . then @xmath152 note that the conclusion is independent of the choice of @xmath153 . in the case of number fields as above , we may also replace @xmath154 with natural density , namely @xmath155 and part of the conclusion of the theorem is that this limit exists . theorem [ numfieldden ] says that @xmath156 gives the generic " value of the density of prime divisors of an orbit of @xmath6 translated by a constant @xmath131 . indeed , if @xmath157 , then one can use the hilbert irreducibility theorem to show that for any @xmath158 , @xmath159 for all but a thin set of @xmath131 . in the case where @xmath6 is post - critically finite , we have the exact sequence , and in light of theorem [ monodromy ] , one needs to study the extension of constant fields @xmath160 and understand how it interacts with @xmath18 . indeed , if @xmath161 is a finite extension of @xmath102 , one could replace the ground field @xmath102 by @xmath161 and obtain the desired result . however , it seems unlikely that this is the case in most circumstances . for instance , when @xmath162 and @xmath163 , we have that @xmath164 . 
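the particular sequence and ground field in theorem [ numfieldden ] are hidden behind the @xmath placeholders, so the following is only an assumed concrete instance meant to show the flavor of the density being measured: take the rationals, f(x) = x^2 + 1 and the orbit of 0, and estimate the proportion of primes p up to a bound that divide at least one orbit element. a prime p divides some f^n(0) with n >= 1 exactly when the residue 0 reappears while iterating f modulo p, and since the orbit modulo p is eventually periodic this can be decided by iterating until a residue repeats (sympy is used only to list primes).

```python
from sympy import primerange

def divides_orbit(p, c=1, start=0):
    """True if p divides f^n(start) for some n >= 1, where f(x) = x^2 + c.
    Equivalently: the residue 0 appears in the orbit of start under f mod p.
    The orbit mod p is eventually periodic, so iterate until a repeat."""
    seen, x = set(), start % p
    while x not in seen:
        seen.add(x)
        x = (x * x + c) % p
        if x == 0:
            return True
    return False

primes = list(primerange(3, 10**4))
hits = sum(divides_orbit(p) for p in primes)
print(hits, "of", len(primes), "primes divide some element of the orbit")
print("empirical proportion:", hits / len(primes))
```

for quadratic maps of this kind the density of prime divisors of an orbit is known in several cases to be zero, so the empirical proportion is expected to drift downward as the bound on the primes grows.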
let @xmath165 be the finite field with @xmath140 elements , let @xmath166 $ ] , and let @xmath167 . clearly the forward orbit @xmath168 of any such @xmath169 is contained in a finite extension of @xmath165 , whence it must be finite . we thus have two fundamental behaviors : if there is a @xmath170 with @xmath171 we call @xmath169 _ purely periodic _ under @xmath6 , while if there is no such @xmath172 then we call @xmath169 _ pre - periodic _ under @xmath6 . let @xmath173 be the purely periodic points . note that by construction @xmath6 must be post - critically finite , since all its orbits are finite . define the dirichlet density of a set @xmath174 to be @xmath175 where @xmath176 $ ] , and @xmath177 . this is essentially identical to definition [ dirichlet ] ; the @xmath178 term is necessary because there are @xmath179 conjugates of @xmath169 corresponding to the prime of @xmath142 $ ] with root @xmath169 . we sketch an argument showing how @xmath180 is given by statistics of an arithmetic monodromy group as in , where @xmath181 . note that @xmath182 if and only if some branch of the tree of preimages @xmath183 is contained in the base field @xmath184 . let @xmath185 be the prime ideal generated by the minimal polynomial of @xmath169 over @xmath184 . then a branch of @xmath183 is contained in @xmath184 if and only if @xmath186 fixes a root of @xmath107 for each @xmath63 ( denote by @xmath143 the set of such @xmath185 ) . here @xmath187 is the conjugacy class of elements of @xmath111 that act on the residue class field @xmath188 as @xmath189 . the chebotarev density theorem for function fields ( * ? ? ? * theorem 9.13a ) then gives that the dirichlet density of @xmath143 is bounded above , for each @xmath63 , by the proportion of @xmath190 that fix at least one root of @xmath107 . thus this density is bounded above by @xmath156 . it is then straightforward to show that this implies @xmath191 . in this section we give the background required to prove the resutls in section [ fpprocess ] . we draw on the exposition in ( * ? ? ? * chapter 1 ) , including following the convention there of writing group actions on the left . from now on we suppose that our alphabet @xmath21 is given by @xmath192 , and we let @xmath193 denote the symmetric group on @xmath1 letters . then there is a natural isomorphism @xmath194 where @xmath195 denotes the wreath product , that takes @xmath72 to @xmath196 , where @xmath197 is the action of @xmath72 on @xmath21 ( i.e. , on the first level of @xmath20 ) . in other words , we may describe @xmath72 by specifying its restriction at each element of @xmath21 and its action on @xmath21 . we call this the _ wreath recursion _ describing @xmath72 . we generally drop the parentheses and equate @xmath72 with its image under @xmath198 , writing @xmath199 we write the identity element as @xmath200 , and when the permutation @xmath201 is the identity , we omit it . hence the identity element of @xmath31 is given in wreath recursion by @xmath202 . note that the element @xmath203 is also the identity , since by induction it acts trivially on @xmath22 for all @xmath9 , and thus acts trivially on @xmath20 . given @xmath204 , we can make explicit its action on any @xmath22 thanks to the following formulas , which are straightforward to prove : @xmath205 for any @xmath206 . one can multiply elements in wreath recursion form using the normal multiplication in a semi - direct product : @xmath207 where @xmath208 and @xmath209 . 
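the two displayed formulas above are lost to the placeholder markup, but for a left action they presumably amount to g(xw) = g(x) g|_x(w), together with the product rule that the root permutation of gh is the composite of the root permutations and (gh)|_x = g|_{h(x)} h|_x; conventions vary between sources, so the choice below is an assumption rather than a transcription of the paper. the sketch stores an automorphism of a finite-depth d-ary tree as a nested (permutation, restrictions) pair and checks that composition really matches "apply h, then g".

```python
import itertools
import random

# an automorphism of the depth-n d-ary tree is stored as a nested pair
# (perm, restrictions): perm is a tuple giving the action on the first
# letter, restrictions is a d-tuple of depth-(n-1) automorphisms.
# None stands for the identity (used at depth 0).

def apply(g, word):
    """left action: g sends x.w to perm[x] followed by g|_x applied to w."""
    if g is None or not word:
        return tuple(word)
    perm, rest = g
    x, w = word[0], word[1:]
    return (perm[x],) + apply(rest[x], w)

def compose(g, h):
    """the product g*h acting as w -> g(h(w)), via (g*h)|_x = g|_{h(x)} * h|_x."""
    if g is None or h is None:
        return h if g is None else g
    (gperm, grest), (hperm, hrest) = g, h
    d = len(hperm)
    perm = tuple(gperm[hperm[x]] for x in range(d))
    rest = tuple(compose(grest[hperm[x]], hrest[x]) for x in range(d))
    return (perm, rest)

def random_aut(depth, d=2):
    if depth == 0:
        return None
    return (tuple(random.sample(range(d), d)),
            tuple(random_aut(depth - 1, d) for _ in range(d)))

g, h = random_aut(4), random_aut(4)
for w in itertools.product(range(2), repeat=4):
    assert apply(compose(g, h), w) == apply(g, apply(h, w))
print("composition rule consistent with the left action on all level-4 words")
```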
if we take @xmath66 of length @xmath9 , we may consider as giving the wreath recursion of @xmath210 acting on @xmath22 . this gives @xmath211 [ counter1 ] let @xmath212 and take @xmath201 to be the non - trivial element of @xmath213 . let @xmath214 , @xmath215 and @xmath216 . from , we have @xmath217 and @xmath218 . by induction this gives @xmath219 . however , the element @xmath220 is spherically transitive , i.e. , acts on each @xmath22 as a @xmath221-cycle , and in particular has infinite order . this is a consequence of proposition [ sphertrans ] . in section [ background2 ] we show that @xmath24 is isomorphic to the iterated monodromy group of the chebyshev polynomial @xmath60 . [ basilica1 ] let @xmath212 and take @xmath201 to be the non - trivial element of @xmath213 . let @xmath222 , @xmath223 and @xmath216 . this is the basilica group , mentioned on page . if we write @xmath224 , then from , the wreath recursion for @xmath225 acting on @xmath226 is @xmath227 where @xmath228 hence from , @xmath229 acts on @xmath226 as @xmath230 . it follows that the restrictions of @xmath231 to words of length 2 are all of the form @xmath232 for @xmath233 . if @xmath225 is torsion of order @xmath9 , then all restrictions of @xmath231 are trivial , and so @xmath234 for some @xmath233 , a contradiction . hence @xmath225 has infinite order , though it is not spherically transitive . as an illustration of the preceding ideas , we give a characterization of spherically transitive elements of @xmath31 . the proof is left as an exercise . [ sphertrans ] let @xmath21 have @xmath1 elements and @xmath62 . for each @xmath235 , let @xmath236 denote the action of @xmath237 on @xmath21 , and let @xmath238 . then @xmath72 is spherically transitive if and only if @xmath239 is a @xmath1-cycle for every @xmath63 . note that by convention @xmath240 , and @xmath241 is the action of @xmath72 on @xmath21 . in the case @xmath212 , @xmath239 is the identity precisely when the number of @xmath235 with @xmath242 is even . thus the lemma says that @xmath72 is spherically transitive when @xmath242 for an odd number of @xmath235 , for all @xmath63 . for the element @xmath220 in example [ counter1 ] , it is easy to see that @xmath242 for only one @xmath69 in each @xmath243 . a group @xmath64 is _ self - similar _ if @xmath244 for all @xmath74 and @xmath245 . we call @xmath24 _ contracting _ if there is a finite set @xmath73 such that for every @xmath74 , @xmath246 for all @xmath66 sufficiently long . the smallest set satisfying this condition is called the _ nucleus _ of the group . in contracting groups , one can reduce many computations in @xmath24 to considerations involving only a finite set . for instance , as pointed out in @xcite , solving the so - called word problem ( determining whether a given product of @xmath9 generators is trivial ) can be done in polynomial time in a contracting group . we now consider the set of _ stable _ elements of @xmath24 , @xmath247 [ contracting ] if @xmath64 is contracting , then @xmath248 is finite and the nucleus of @xmath24 is equal to @xmath249 by definition , the nucleus of @xmath24 consists of the elements of @xmath72 for which there exists @xmath250 with @xmath251 for arbitrarily long words @xmath252 . if @xmath253 for some non - empty @xmath69 and @xmath254 is the @xmath9-fold concatenation of @xmath69 with itself , then from we have @xmath255 for all @xmath63 . 
moreover , any @xmath256 with @xmath257 for some @xmath258 must also occur as the restriction of @xmath72 at arbitrarily long words . hence the set in is contained in the nucleus , and in particular @xmath248 is finite . on the other hand , if @xmath256 is in the nucleus , let @xmath250 with @xmath259 for arbitrarily long words @xmath252 . let @xmath260 be the size of the nucleus and @xmath261 be such that @xmath262 is in the nucleus when @xmath263 has length at least @xmath261 . we may take the length of @xmath252 to exceed @xmath264 . hence if @xmath265 is the length-@xmath102 initial word of @xmath252 , then @xmath266 is in the nucleus for more than @xmath260 values of @xmath102 , and hence @xmath267 for some @xmath268 . therefore @xmath269 and there is a word @xmath270 with @xmath271 . it is known that standard actions on @xmath20 of iterated monodromy groups of post - critically finite polynomials are always contracting ( * ? ? ? * theorem 6.4.4 ) , and proposition [ kneadstab ] gives a method for computing @xmath248 for a class of groups including iterated monodromy groups . for the group from example [ counter1 ] , we have that @xmath272 , and hence @xmath24 has nucleus @xmath273 . for the basilica group ( example [ basilica1 ] ) , we have @xmath274 ( see the remark following proposition [ kneadstab ] ) , and in this case @xmath248 coincides with the nucleus . as noted in the introduction , the profinite completion @xmath30 of @xmath24 with respect to the @xmath25 comes equipped with a natural probability measure that projects to the discrete measure on each @xmath25 . in this section we define a stochastic process that is , an infinite collection of random variables defined on a common probability space that encodes information about the number of fixed points in @xmath22 of elements of @xmath25 . we then adapt techniques of @xcite to show that this process is a martingale provided that @xmath24 contains a spherically transitive element . finally , we apply a martingale convergence theorem that leads to the proofs of theorem [ gen ] , theorem [ crystal ] , and corollary [ torfree ] . given @xmath74 where the group @xmath24 acts naturally on a set @xmath275 , we denote by @xmath276 the number of elements of @xmath277 with @xmath278 . define a stochastic process @xmath279 on @xmath30 by taking @xmath280 , where @xmath281 is the natural projection @xmath282 and @xmath25 acts on @xmath22 . we call this the _ fixed point process _ of @xmath24 , and write it @xmath283 . because @xmath284 for any @xmath285 , we have that @xmath286 is given by @xmath287 we denote by @xmath288 the expected value of the random variable @xmath289 . a stochastic process with probability measure @xmath32 and random variables @xmath279 taking values in @xmath290 is a _ martingale _ if for all @xmath291 and any @xmath292 , @xmath293 provided @xmath294 . [ mart ] let @xmath64 have a spherically transitive element . then @xmath283 is a martingale . we must show that @xmath295 where @xmath296 satisfy @xmath297 because the @xmath298 take integer values , each @xmath299 must be an integer . by definition , the left - hand side of is @xmath300 put @xmath301 by , the expression in is equal to @xmath302 . this in turn may be rewritten @xmath303 let @xmath304 be the image under @xmath305 of the spherically transitive element of @xmath30 assumed to exist . 
then @xmath306 acts trivially on @xmath243 , and hence @xmath275 is invariant under multiplication by powers of @xmath307 , and therefore is a disjoint union of cosets of @xmath308 . note that because @xmath309 acts transitively on @xmath22 , @xmath308 must act transitively on each set @xmath310 for @xmath235 . now take @xmath311 , and let @xmath312 be the set of elements of @xmath22 lying above elements of @xmath243 fixed by @xmath72 . note that because @xmath313 , we have @xmath314 . if @xmath315 , then @xmath316 for some unique @xmath317 . there is a unique @xmath318 such that @xmath319 , and thus @xmath320 . if @xmath321 is the function that takes the value @xmath200 when @xmath278 and @xmath3 otherwise , we have shown that @xmath322 and hence @xmath323 inverting the order of summation and using that @xmath324 for @xmath325 , we have @xmath326 but @xmath275 is the disjoint union of cosets of @xmath308 , and hence @xmath327 therefore the expression in equals @xmath328 . martingales are useful tools because they often converge in the following sense : let @xmath279 be a stochastic process defined on the probability space @xmath329 with probability measure @xmath32 . the process _ converges _ if @xmath330 we give one standard martingale convergence theorem ( see e.g. ( * ? ? * section 12.3 ) for a proof ) . [ martconv ] let @xmath331 be a martingale whose random variables take nonnegative real values . then @xmath332 converges . since the random variables in @xmath283 take nonnegative integer values , we immediately have the following : [ evconstcor ] let @xmath64 contain a spherically transitive element . then @xmath333 in particular , any @xmath334 fixing infinitely many ends of @xmath20 must have @xmath335 , and hence lie in a set of measure zero . this proves theorem [ gen ] . we may now give a short proof of theorem [ crystal ] . assume the hypotheses of that theorem , and let @xmath77 be the nucleus of @xmath24 . suppose that @xmath336 fixes some end @xmath337 of @xmath20 . let @xmath338 for each @xmath63 , and consider the sequence of restrictions @xmath339 . for @xmath9 large enough , we have @xmath340 , and @xmath341 fixes the end @xmath342 since @xmath72 fixes @xmath252 . because @xmath77 is finite , there must be @xmath343 with @xmath344 . let @xmath345 , and note that for @xmath346 we have @xmath347 and @xmath348 . hence @xmath349 , and by hypothesis fixes infinitely many ends of @xmath20 . inserting @xmath350 on the beginning of each of these ends , we obtain infinitely many ends of @xmath20 fixed by @xmath72 . hence by corollary [ evconstcor ] , @xmath72 lies in a set of measure zero , proving the theorem . to derive corollary [ torfree ] , note that if @xmath80 , then @xmath351 and @xmath253 for some non - empty @xmath66 . from it follows that @xmath352 and @xmath353 for all @xmath63 , and hence @xmath354 for all @xmath63 . because @xmath24 is contracting , @xmath248 is finite by proposition [ contracting ] , and thus two distinct powers of @xmath72 are equal , implying that @xmath72 is torsion . therefore if @xmath24 is torsion - free then @xmath79 is trivial , and corollary [ torfree ] follows from theorem [ crystal ] . recall from section [ mono1 ] that if @xmath6 is a post - critically finite polynomial with post - critical set @xmath10 , then @xmath18 acts naturally on the tree @xmath355 of preimages of any @xmath12 . if @xmath6 has degree @xmath1 , then we may take @xmath356 , and choose a bijection @xmath357 . this extends to an isomorphism @xmath358 ( ( * ? ? ? 
* proposition 5.2.1 ) ) that conjugates the action of @xmath18 to that of some @xmath64 on @xmath20 . we call this a _ standard action _ of @xmath18 on @xmath20 , and it gives an explicit way to compute a recursive formula for elements of @xmath18 in the form of a wreath recursion ( * ? ? ? * proposition 5.2.2 ) ( see also ( * ? ? ? * proposition 2.2 ) ) . the action of @xmath18 on @xmath2 is generated by the action of the generators of @xmath16 on @xmath2 . for each @xmath359 there is a generator of @xmath16 , and under a standard action there is a corresponding @xmath360 . the next result follows from ( * ? ? ? * theorem 6.8.3 ) . for @xmath34 $ ] and @xmath361 , denote by @xmath362 the order of vanishing of @xmath363 . clearly @xmath364 , with @xmath365 if and only if @xmath366 is a critical point of @xmath6 . [ compmon ] let @xmath34 $ ] be a post - critically finite polynomial , with post - critical set @xmath10 . let @xmath64 be a standard action of @xmath18 on @xmath20 , and for @xmath359 let @xmath367 be the element corresponding to @xmath44 . then the action of @xmath72 on @xmath21 contains one @xmath368-cycle for each @xmath369 with @xmath370 . let @xmath371 be the cycle corresponding to @xmath372 . if @xmath373 , then @xmath374 for each @xmath375 . if @xmath376 , then there is a unique @xmath377 such that @xmath378 is the element of @xmath24 corresponding to @xmath372 , and @xmath374 otherwise . although we do nt regard @xmath379 as being in @xmath10 , theorem [ compmon ] nonetheless applies to it . because @xmath380 , it is a point of multiplicity @xmath1 , and we have that @xmath381 acts as a @xmath1-cycle on @xmath21 , with restriction to some @xmath245 giving @xmath382 and the other restrictions being trivial . it follows from proposition [ sphertrans ] that @xmath381 is spherically transitive . the fact that @xmath24 contains a spherically transitive element is also a consequence of theorem [ char ] and lemma [ trans ] . as an illustration of this result , we show that the group in example [ counter1 ] is a standard action of the iterated monodromy group of @xmath57 on @xmath20 , where @xmath383 . we have @xmath384 , so that @xmath385 . now @xmath386 and @xmath387 , implying that @xmath388 acts on @xmath21 as a 2-cycle . because @xmath389 , the restrictions of @xmath388 are trivial . on the other hand @xmath390 , so @xmath391 acts trivially on @xmath21 . because @xmath392 but @xmath393 , the restriction of @xmath391 to one element of @xmath21 is trivial , while the other one is @xmath388 . either choice gives the same group up to conjugacy in @xmath20 ( indeed , up to conjugacy in @xmath24 , since conjugating by @xmath388 exchanges the restrictions of @xmath391 ) . a very useful description of @xmath62 in terms of its wreath recursion comes via automata theory . the set @xmath394 of all restrictions of @xmath72 may be viewed as the set of states of an automaton . being in a state @xmath395 for some @xmath396 and receiving an input letter @xmath245 , the automaton types on the output tape @xmath397 and proceeds to the state @xmath398 , which by is just @xmath399 . in this way the action of @xmath72 on any @xmath66 may be determined . we formalize this in the following definition : an _ automaton _ @xmath83 over the set @xmath21 is given by * the set of states , which we denote also by @xmath83 ; * a map @xmath400 . if @xmath401 , then @xmath366 and @xmath402 as functions of @xmath403 are called the _ output _ and _ transition function _ , respectively . 
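a minimal python sketch of this formalism , with assumed states chosen only for illustration ( they are not the states of any figure in the paper , and the names are mine ) : each state stores , for every input letter , the output letter and the next state , and a word is processed one letter at a time , exactly as in the description above .

    # assumed toy automaton (not a figure from the paper): state -> {letter: (output, next state)}
    A = {
        "e": {0: (0, "e"), 1: (1, "e")},   # trivial state
        "p": {0: (1, "q"), 1: (0, "e")},   # acts on {0,1} as the transposition
        "q": {0: (0, "p"), 1: (1, "q")},   # acts on {0,1} trivially
    }

    def run(state, word):
        # feed `word` to the automaton started in `state`; return the output word
        out = []
        for x in word:
            y, state = A[state][x]   # write the output letter, move to the next state
            out.append(y)
        return out

    print(run("p", [0, 0, 1]))   # -> [1, 0, 0]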
we say that @xmath83 is _ invertible _ if each @xmath404 acts on @xmath21 as a permutation . the _ moore diagram _ of an automaton @xmath83 provides a good method of visualization . it is a directed labeled graph whose vertex set is the set @xmath83 of states of the automaton . if @xmath401 , then there is an arrow from @xmath225 to @xmath402 labeled by @xmath405 . if @xmath83 is invertible , the moore diagram of the inverse automaton is given by formally replacing each state @xmath225 by @xmath406 and changing each arrow labeling from @xmath405 to @xmath407 . given an automaton @xmath83 over a set @xmath21 , it is easy to see that the states of @xmath83 define elements of @xmath31 . indeed , we can recover the wreath recursion for @xmath404 by noting that if @xmath401 then @xmath408 and @xmath409 . in this case we say that @xmath410 is generated by the automaton @xmath83 . by theorem [ compmon ] , a standard action of the iterated monodromy group of a post - critically finite polynomial is generated by a set that is closed under restrictions . hence the automaton generating such a group is finite . see figure [ fig : bas ] for an example . [ bounded ] we say that @xmath62 is _ finite - state _ if it is defined by a finite automaton , or equivalently if @xmath411 is a finite set . we call @xmath72 _ bounded _ if it is finite - state and the sequence @xmath412 is bounded . we call @xmath72 _ finitary _ if @xmath413 for all @xmath9 sufficiently large , or equivalently if there exists @xmath414 such that @xmath237 is trivial for all words of length at least @xmath414 . finitary automorphisms will play a major role in sections [ sec : kneadgraph ] and [ sec : last ] . the main fact we will use about the more general notion of bounded automorphisms is the following special case of a theorem of nekrashevych and bondarenko : @xcite , ( * ? ? ? * theorem 3.9.12 ) [ bndedthm ] let @xmath64 be generated by a finite automaton whose states define bounded automorphisms of @xmath20 . then @xmath24 is contracting . we require a strong result of nekrashevych that characterizes the @xmath64 that are isomorphic to a standard action of the iterated monodromy group of a post - critically finite polynomial . this characterization is purely in terms of a finite automaton that generates @xmath24 . to state this result , we require the notion of a _ tree - like multi - set of permutations_. recall that a multi - set of permutations of a set @xmath21 is a map @xmath415 from a set @xmath416 of indices to the set @xmath417 of permutations of @xmath21 . thus for instance distinct indices may give the same permutation . we denote the set @xmath418 by @xmath2 . the _ cycle diagram _ associated to @xmath2 is an oriented 2-dimensional cw - complex whose set of @xmath3-cells is @xmath21 . for each cycle @xmath419 of each @xmath420 , there is a 2-cell whose boundary passes through @xmath421 and no other elements of @xmath21 , and whose order on the boundary corresponds to the order in the cycle . two different 2-cells can only intersect at 0-cells . we call the _ reduced cycle diagram _ of @xmath2 the diagram obtained by deleting the 2-cells corresponding to fixed points of the @xmath281 . a multi - set @xmath2 of permutations of a set @xmath21 is said to be _ tree - like _ if the cycle diagram of @xmath2 is contractible . for an example of a tree - like multi - set , see figure [ fig : cycdiag ] . note that we could add the identity to this multi - set any number of times and it would still be tree - like . 
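the tree - like condition itself is easy to test mechanically : build the bipartite cycle graph , with white vertices for the points and one black vertex for each non - trivial cycle , and check that the graph is connected with exactly one fewer edge than vertices . the python sketch below does this for an assumed toy multi - set ; the permutations are illustrative only .

    # a sketch (assumed example, not the paper's figure) of checking tree-likeness:
    # build the bipartite cycle graph and test that it is a tree.
    def cycles(perm):
        # non-trivial cycles of a permutation given as a dict x -> perm(x)
        seen, out = set(), []
        for x in perm:
            if x in seen:
                continue
            cyc, y = [], x
            while y not in seen:
                seen.add(y)
                cyc.append(y)
                y = perm[y]
            if len(cyc) > 1:
                out.append(tuple(cyc))
        return out

    def is_tree_like(perms, points):
        whites = set(points)
        blacks, edges = [], []
        for i, p in enumerate(perms):
            for c in cycles(p):
                b = ("cell", i, c)          # one black vertex per non-trivial cycle
                blacks.append(b)
                edges += [(b, x) for x in c]
        verts = whites | set(blacks)
        parent = {v: v for v in verts}      # connectivity via union-find
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in edges:
            parent[find(u)] = find(v)
        connected = len({find(v) for v in verts}) == 1
        return connected and len(edges) == len(verts) - 1

    s = {0: 1, 1: 0, 2: 2}                  # transposition (0 1)
    t = {0: 0, 1: 2, 2: 1}                  # transposition (1 2)
    print(is_tree_like([s, t], [0, 1, 2]))      # True
    print(is_tree_like([s, t, s], [0, 1, 2]))   # False

the second call fails precisely because the extra non - trivial cycle closes a loop in the cycle graph , in line with the remark that follows .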
however , adding any non - trivial element of @xmath422 would yield a non - tree - like multi - set . another way to visualize the action of a multi - set of permutations @xmath2 on a set @xmath21 is via its _ cycle graph_. we define it to be a bipartite graph obtained from the reduced cycle diagram by coloring each vertex of the former white , and replacing each 2-cell by a black vertex connected to the white vertices on the boundary of the 2-cell . see figure [ fig : cycgraph ] for the cycle graph corresponding to the multi - set from figure [ fig : cycdiag ] . note that our definition differs slightly from that of @xcite , where the cycle graph is not defined to be bipartite , but is otherwise identical . the cycle graph and cycle diagram are clearly homotopically equivalent , and thus a multi - set of permutations is tree - like if and only if its cycle graph is a tree . in section [ sec : treelike ] we give several results on tree - like sets of permutations . we may now state the characterization of iterated monodromy groups : ( * ? ? ? * theorem 6.10.8 ) [ char ] a subgroup @xmath64 is isomorphic to a standard action of the iterated monodromy group of a post - critically finite polynomial if and only if @xmath24 is the group generated by a finite invertible automaton @xmath83 with the following properties : 1 . for each non - trivial @xmath404 , there is a unique arrow into the state @xmath225 . in other words , there is a unique @xmath423 and @xmath245 with @xmath424 . 2 . for each @xmath404 and each cycle @xmath419 of the action of @xmath225 on @xmath21 , the restriction @xmath425 is non - trivial for at most one @xmath426 . 3 . the multi - set of permutations defined by the set of states of @xmath83 acting on @xmath21 is tree - like . 4 . let @xmath427 be non - trivial states of @xmath83 with @xmath428 satisfying @xmath429 and @xmath430 for @xmath431 . then there is no @xmath432 with @xmath433 and @xmath434 . for example , the automaton given in figure [ fig : bas ] satisfies all the conditions of theorem [ char ] . we do not use even close to the full strength of theorem [ char ] . indeed , we require only the far easier direction , which is that if @xmath24 is isomorphic to a standard action of an iterated monodromy group , then @xmath410 , where @xmath83 satisfies conditions ( 1)-(4 ) . moreover , we do not use condition ( 2 ) . we introduce a definition following the terminology of @xcite : [ kneadingdef ] a _ kneading automaton _ is a finite invertible automaton satisfying conditions ( 1)-(3 ) of theorem [ char ] . in this section we present several results that will play roles in the proofs of our main theorems . the first two appear in @xcite . * proposition 6.7.5 ) [ neklem2 ] let @xmath83 be a kneading automaton . then for any @xmath63 , the multi - set of permutations defined by the states of @xmath83 acting on @xmath22 is tree - like . * corollary 6.7.7 ) [ trans ] if @xmath83 is a kneading automaton , then the product of the states of @xmath83 ( taken in any order ) is a spherically transitive element of @xmath31 . [ treefixed ] let @xmath435 be a tree - like multi - set of permutations of a set @xmath21 , and let @xmath436 for some non - empty @xmath437 . suppose that @xmath438 for some @xmath245 . then @xmath439 for all @xmath440 . induct on @xmath441 . when @xmath442 the statement is trivial . suppose that @xmath443 and @xmath444 , and let @xmath445 and @xmath446 . if @xmath447 , then necessarily @xmath448 .
thus in the cycle graph of @xmath2 there is a path from the white vertex corresponding to @xmath449 to the white vertex corresponding to @xmath366 , given by the action of @xmath450 . there is a distinct path from the vertex corresponding to @xmath366 back to the vertex corresponding to @xmath449 , given by the action of @xmath451 . this contradicts the hypothesis that the cycle graph is a tree . therefore @xmath452 , and hence @xmath453 . applying the inductive hypothesis to @xmath450 gives that @xmath439 for all @xmath440 . [ treecyc ] let @xmath2 be a tree - like multi - set of permutations acting on a set @xmath21 with @xmath454 . then the reduced cycle diagram of @xmath2 has at most @xmath455 2-cells , with equality if and only if every element of @xmath2 acts on @xmath21 as a ( possibly empty ) disjoint product of transpositions . we induct on @xmath1 . if @xmath212 , then the reduced cycle diagram of @xmath2 has a single 2-cell , and the unique element of @xmath2 acting non - trivially on @xmath21 acts as a transposition . hence the lemma holds . assume that @xmath456 , and consider the cycle graph of @xmath2 . because it is a tree , there must exist a vertex @xmath69 of degree 1 ( a _ leaf _ of the tree ) . this vertex must be white , since the black vertices by definition correspond to cycles and so have degree greater than one . note that @xmath69 is connected to a unique black vertex @xmath402 , and hence is fixed by all but one element @xmath281 of @xmath2 . consider the element @xmath457 obtained by deleting from @xmath281 the cycle containing @xmath69 . replacing @xmath281 by @xmath457 gives a new multi - set @xmath458 whose cycle graph is the same as that of @xmath2 , except that @xmath402 and all leaves connected to @xmath402 have been deleted . note this results in deleting at least one white vertex , namely @xmath69 , and this is the only white vertex deleted if and only if the deleted cycle of @xmath281 was a 2-cycle . thus @xmath458 is tree - like and acts on a set @xmath459 with @xmath460 ; moreover we have equality if and only if the only cycle in an element of @xmath2 that is not in an element of @xmath458 is a 2-cycle . we may apply the inductive hypothesis to get that there are at most @xmath461 black vertices in the cycle graph of @xmath458 , with equality if and only if all elements of @xmath458 are ( possibly empty ) disjoint products of transpositions . but this cycle graph contains exactly one fewer black vertex than the cycle graph of @xmath2 , and hence the latter has at most @xmath462 black vertices , with equality if and only if all elements of @xmath2 are ( possibly empty ) products of disjoint 2-cycles . the number of black vertices in the cycle graph of @xmath2 is by definition the same as the number of 2-cells in the reduced cycle diagram of @xmath2 . [ treeperms ] let @xmath435 be a tree - like multi - set of permutations of a set @xmath21 with @xmath463 . 1 . for any @xmath464 , we have @xmath465 . 2 . if @xmath466 for some @xmath464 , then @xmath451 is the identity for all @xmath467 . we begin by noting that by definition the cycle graph ( and thus the reduced cycle diagram ) of @xmath2 is a contractible tree , and hence connected . if the cycle graph ( equivalently , reduced cycle diagram ) of some subset @xmath275 of @xmath2 is also connected , then the two cycle graphs must coincide , and hence all elements of the multi - set @xmath468 are the identity . consider the reduced cycle diagram of the multi - set @xmath469 , where @xmath464 .
it is a ( possibly disconnected ) planar graph , and hence by euler 's formula satisfies @xmath470 where @xmath471 and @xmath472 denote the numbers of vertices , edges , and faces ( counting the face at infinity ) , respectively , and @xmath372 denotes the number of connected components of the graph . now the vertex set is just @xmath21 , so @xmath473 . there are @xmath474 edges for each @xmath474-cycle of @xmath281 or @xmath475 , where @xmath476 ( recall that fixed points do not appear in the reduced cycle diagram ) . thus @xmath477 . finally , there is one face for each cycle of @xmath281 or @xmath475 , plus the face at infinity . because the reduced cycle diagram of @xmath469 is a subset of the reduced cycle diagram for @xmath2 , we have from lemma [ treecyc ] that @xmath281 and @xmath475 have at most @xmath478 cycles between them , and hence @xmath479 . therefore gives @xmath480 and assertion ( 1 ) follows . note that in we have equality if and only if @xmath481 , which occurs precisely when the cycle diagram of @xmath469 has @xmath478 2-cells . when this happens , we have by lemma [ treecyc ] that the number of 2-cells of the cycle diagram of @xmath469 is the same as the number of 2-cells of the cycle diagram of @xmath2 , and hence the two diagrams coincide . it follows that @xmath482 . we have thus shown that either @xmath482 or @xmath483 now implies that either @xmath482 or @xmath484 . in particular , either @xmath482 or @xmath485 . this together with the remarks at the beginning of the proof establishes assertion ( 2 ) . in this section we exploit condition ( 1 ) of theorem [ char ] and the results of section [ sec : treelike ] to study the set @xmath486 first defined on p. . condition ( 1 ) of theorem [ char ] implies that if we delete the trivial state from the moore diagram of a kneading automaton @xmath83 ( along with all the arrows originating at the trivial state ) then the resulting graph is a disjoint union of cycles with trees attached to them . we call such a diagram the _ reduced moore diagram _ of @xmath83 . see figure [ fig : kneadingaut ] . in particular , the states not in cycles have the property that all restrictions to sufficiently long words are the identity , and hence they define finitary automorphisms of @xmath20 . to each state @xmath225 in a cycle of the moore diagram we can associate its _ kneading sequence _ @xmath487 , which is the unique infinite word such that for each @xmath488 , @xmath489 belongs to the cycle containing @xmath225 . we refer to @xmath490 as the _ length-@xmath9 kneading sequence _ of @xmath225 . the ( infinite ) kneading sequence of any given state is periodic , with period dividing the length of the cycle in which the element lies . for instance , for the automaton in figure [ fig : kneadingaut ] , the kneading sequences of @xmath225 , @xmath402 , and @xmath372 are @xmath491 and @xmath492 , respectively , where the bars denote repeating . by hypothesis @xmath83 is invertible , and recall that the moore diagram of the inverse automaton is given by replacing each state @xmath225 by @xmath406 and changing each arrow labeling from @xmath405 to @xmath407 . hence @xmath406 is in a cycle of the moore diagram of the inverse automaton of @xmath83 if and only if @xmath225 is in a cycle of the moore diagram of @xmath83 . each such @xmath406 has a kneading sequence as before . let @xmath493 denote the collection of states of @xmath83 that are in cycles of the moore diagram , together with their inverses .
let @xmath368 be the least common multiple of the periods of the kneading sequences of the elements of @xmath493 . the _ kneading graph _ of the automaton @xmath83 is the directed graph whose vertex set is the set of length-@xmath368 kneading sequences of the states belonging to @xmath493 . there is a directed edge from @xmath494 to @xmath495 if @xmath494 is the length-@xmath368 kneading sequence for some @xmath496 and @xmath497 . we label such an edge with the element @xmath225 . two kneading graphs are pictured in figure [ fig : kneadinggraph ] . recall that the set of _ stable _ elements of @xmath64 is @xmath247 when @xmath24 is generated by an automaton @xmath83 satisfying the hypotheses of theorem [ char ] , the kneading graph of @xmath83 provides an algorithm for determining @xmath248 and @xmath79 . this idea first appeared in ( * ? ? ? * lemma 3.2 ) , which deals with certain automata in the case @xmath212 . we require some terminology relating to the kneading graph . by a _ path _ we mean any sequence @xmath498 of vertices such that there is a directed edge from @xmath499 to @xmath500 or a directed edge from @xmath500 to @xmath499 , for all @xmath501 . note that this is more general than the usual notion of a path in a directed graph , since we permit paths to traverse edges against their direction . we further stipulate that our paths have _ no back - tracking _ , that is , each edge traversed is either distinct from the previous edge , or is the same as the previous edge and also in the same direction ( i.e. consists of going again around a cycle of length one ) . by a _ circuit _ , we mean a path with a common starting and ending vertex ; we allow repeats of vertices and edges . a _ cycle _ is a circuit that repeats only its common starting and ending vertex . recall that a kneading automaton is one satisfying conditions ( 1)-(3 ) of theorem [ char ] . [ kneadstab ] let @xmath24 be generated by a kneading automaton @xmath83 . then @xmath248 consists of words in @xmath502 obtained from the labels of paths in the kneading graph of @xmath83 , where one reads the inverse of the labeled element if one follows an arrow backwards . to assemble the word corresponding to a given path in the kneading graph , one copies the letters down from right to left . in addition , @xmath79 consists of the words obtained from labels of circuits in the kneading graph of @xmath83 . for instance , the path of length 2 going from @xmath503 to @xmath504 in the kneading graph of the basilica group ( figure [ fig : kneadinggraph ] , left ) gives @xmath505 . the other path of length 2 gives @xmath506 , while the paths of length 1 yield @xmath507 , and @xmath508 . thus @xmath248 consists of these six elements plus the identity . since there are no circuits in the kneading graph , @xmath79 is trivial . recall that condition ( 1 ) of theorem [ char ] ensures that each @xmath404 not in a cycle of the reduced moore diagram is finitary , that is , has trivial restriction on all sufficiently long words in @xmath20 . if @xmath225 is in a cycle , the length-@xmath9 kneading sequence @xmath490 of @xmath225 is the unique word of length @xmath9 such that @xmath489 is not finitary . we often simply call @xmath490 the kneading sequence of @xmath225 when the length @xmath9 is clear from context . if @xmath404 has kneading sequence @xmath490 , then from we have @xmath509 if @xmath432 is finitary and @xmath74 is not , then for sufficiently large @xmath102 and any @xmath510 , we have @xmath511 .
hence @xmath512 can not be finitary since @xmath72 is not finitary . by hypothesis @xmath489 is not finitary , and thus from we have that @xmath513 is not finitary , so that that @xmath514 is the kneading sequence for @xmath406 . because @xmath515 , multiplication by @xmath406 sends the kneading sequence of @xmath406 to the kneading sequence of @xmath225 . for @xmath516 let @xmath517 where the @xmath518 are ( not necessarily distinct ) elements of @xmath83 , @xmath519 , and this expression is minimal length among all words in @xmath502 giving @xmath72 . we denote @xmath102 by @xmath520 , and call it the length of @xmath72 . from it follows that @xmath521 for any @xmath522 and any @xmath66 . suppose that @xmath523 , so that there exists a non - empty @xmath66 with @xmath253 . if @xmath490 is the length-@xmath9 initial word of @xmath524 , then @xmath525 . hence each @xmath518 lies in a cycle of the moore diagram of @xmath83 , since otherwise at least one would be finitary , implying @xmath526 for @xmath252 sufficiently long . let @xmath368 be the least common multiple of the periods of the ( infinite ) kneading sequences of the @xmath518 . then @xmath527 none of the elements in the right - hand side of can be finitary , for otherwise @xmath526 for sufficiently long @xmath252 . hence @xmath528 is the ( length-@xmath368 ) kneading sequence for @xmath529 , @xmath530 is the kneading sequence for @xmath531 , @xmath532 is the kneading sequence for @xmath533 , and so on . thus @xmath72 determines a path in the kneading graph of @xmath83 , beginning at @xmath528 , proceeding to @xmath530 , then to @xmath532 , and so forth , ending at @xmath534 . there can be no back - tracking because of the minimality of . the path from @xmath528 to @xmath530 follows the arrow labeled @xmath535 if @xmath536 , and runs against the arrow labeled @xmath535 if @xmath537 . assembling the labels along this path from right to left as indicated in the statement of the proposition then yields @xmath72 . conversely , any path in the kneading graph beginning at a vertex @xmath528 yields a word @xmath538 . now @xmath528 is the ( length-@xmath368 ) kneading sequence of @xmath529 , and because @xmath528 consists of some number of full cycles of the periodic part of the infinite kneading sequence of @xmath529 , we have @xmath539 . similarly , @xmath540 . continuing in this manner we obtain @xmath541 , and hence @xmath523 . now take @xmath80 , so that there is some non - empty @xmath66 with @xmath542 and @xmath351 . then for any length-@xmath9 initial word @xmath490 of @xmath524 , we have @xmath525 and @xmath543 . hence if @xmath368 is as above , we have @xmath544 , and so the path in the kneading graph corresponding to @xmath72 is a circuit . conversely , any circuit yields @xmath72 with @xmath541 and @xmath545 . [ lengthone ] let @xmath83 be a kneading automaton . then every cycle in the kneading graph of @xmath83 has length one . let the cycle in question consist of the vertices @xmath546 , with @xmath547 and @xmath548 distinct . suppose that @xmath291 , so that @xmath549 . then assembling the labelings along this path as in proposition [ kneadstab ] gives an element @xmath550 with the @xmath518 distinct . note that @xmath551 is the ( length-@xmath368 ) kneading sequence of @xmath552 , and @xmath553 for each @xmath554 . 
in particular , @xmath555 gives @xmath556 by lemma [ neklem2 ] , the set of permutations given by the states of @xmath83 acting on @xmath557 is tree - like , and hence the cycle diagram of its action is contractible . if we replace some elements of @xmath83 by their inverses , the cycle diagram is only altered by changing the directions of some arrows ; in particular it is still contractible . hence the multi - set of permutations given by @xmath558 acting on @xmath557 is a subset of a tree - like multi - set . from lemma [ treefixed ] and we then have @xmath559 for all @xmath560 , which contradicts the fact that @xmath561 . [ comp ] let @xmath410 satisfy all the conditions of theorem [ char ] . then each component of the kneading graph of @xmath83 contains at most one cycle . by proposition [ lengthone ] , each cycle of the kneading graph of @xmath83 has length one . suppose that there are two such one - cycles at vertices @xmath494 and @xmath495 lying in a connected component of the kneading graph of @xmath83 , and let @xmath562 and @xmath563 be the elements labeling them . then @xmath564 for @xmath431 . moreover , @xmath494 and @xmath495 have length a multiple of the period of the kneading sequences of @xmath562 and @xmath563 , and so @xmath565 for @xmath566 . now @xmath494 and @xmath495 are in the same component of the kneading graph , and so there is a path connecting them . assembling the labelings along this path as in proposition [ kneadstab ] gives @xmath432 with @xmath567 and @xmath568 , ( see the construction in the converse portion of the proof of proposition [ kneadstab ] ) . but this contradicts condition ( 4 ) of theorem [ char ] . [ n1 ] let @xmath410 satisfy all the conditions of theorem [ char ] . then every element of @xmath79 is conjugate to a power of an element of @xmath569 . by proposition [ kneadstab ] , elements of @xmath79 correspond to circuits in the kneading graph of @xmath83 , which by definition have no back - tracking . by theorems [ lengthone ] and [ comp ] , each such cycle belongs to a component @xmath493 having at most a single cycle , which must have length one . thus @xmath493 is either a tree , or becomes a tree when we delete the edge forming the one - cycle . every non - trivial circuit in a tree involves back - tracking , and so if @xmath570 is a non - trivial circuit in @xmath493 , then @xmath493 must contain a one - cycle at a vertex @xmath277 , labeled by @xmath404 . clearly @xmath571 let @xmath551 be the starting point of @xmath570 , and note that if @xmath570 does not contain the one - cycle at @xmath277 , then it lies entirely within a tree , which is impossible . thus @xmath570 must proceed along the unique path to @xmath277 , go around the one - cycle at @xmath277 a non - zero number of times in the same direction each time , and return to @xmath551 the same way it came . if @xmath72 is the element labeling the path from @xmath551 to @xmath277 ( assembled as in proposition [ kneadstab ] ) , then @xmath572 is the element labeling the reverse path . hence the element labeling @xmath570 is conjugate to a power of @xmath225 . [ n1cor ] let @xmath410 satisfy all the conditions of theorem [ char ] , and suppose that every element of @xmath83 that is in a cycle of the reduced moore diagram either fixes no ends of @xmath20 or fixes infinitely many . then @xmath38 . condition ( 1 ) of theorem [ char ] implies that each @xmath404 is bounded ( see definition [ bounded ] ) . 
indeed , for each @xmath404 , the restrictions of @xmath225 to words of length @xmath9 consist of the endpoints of all paths of length @xmath9 in the reduced moore diagram ( following the arrows ) starting at @xmath225 . because every non - trivial state has a unique incoming arrow , there can be at most one such path ending in each state . hence @xmath573 is bounded by @xmath574 . by theorem [ bndedthm ] , @xmath24 is therefore contracting . by lemma [ trans ] , @xmath24 contains a spherically transitive element . we may thus apply theorem [ crystal ] , and so to show @xmath38 it is enough to show that every @xmath80 fixes infinitely many ends of @xmath20 . by theorem [ n1 ] , each @xmath80 is conjugate to @xmath231 for some @xmath575 and @xmath576 . because @xmath571 , @xmath225 lies in a cycle of the reduced moore diagram of @xmath83 and also fixes at least one end of @xmath20 . thus by hypothesis @xmath225 fixes infinitely many ends . but @xmath231 fixes at least as many elements of each @xmath22 as @xmath225 does , and hence @xmath231 fixes infinitely many ends of @xmath20 . recall that @xmath62 is finitary if all its restrictions at sufficiently long words are the identity . if @xmath72 is finitary , then it either fixes no ends of @xmath20 or fixes infinitely many such ends . indeed , if @xmath252 is an end fixed by @xmath72 and @xmath577 is the length-@xmath9 initial word of @xmath252 , then @xmath578 for all @xmath9 and we may take @xmath9 large enough so that @xmath579 . thus @xmath72 fixes all ends with initial word @xmath577 , which is an infinite set . throughout this section , when we write @xmath276 for @xmath62 , we mean the set of fixed points of the action of @xmath72 on @xmath21 ( not on the ends of @xmath20 ) . [ finitary ] let @xmath83 be a kneading automaton , and let @xmath580 be finitary with @xmath581 . then at least one of @xmath582 fixes infinitely many ends of @xmath20 . by part ( 1 ) of lemma [ treeperms ] , @xmath583 , and hence there exist elements @xmath584 that are fixed by either @xmath225 or @xmath402 . renaming if necessary , assume that @xmath225 fixes @xmath585 , and let @xmath586 . let @xmath587 if @xmath588 and @xmath589 if @xmath590 . applying part ( 1 ) of lemma [ treeperms ] again , there exist @xmath591 that are fixed by either @xmath562 or @xmath592 . renaming again if necessary , assume @xmath593 , and let @xmath594 . let @xmath595 if @xmath596 and @xmath597 if @xmath598 . proceeding in this manner yields a sequence @xmath599 of elements of @xmath83 and words @xmath600 such that @xmath601 and @xmath602 . because @xmath225 is finitary , there exists @xmath603 such that all @xmath604 are trivial for @xmath605 . hence @xmath225 fixes the word @xmath606 and @xmath607 , implying that @xmath225 fixes all ends of @xmath20 with initial word @xmath608 , which is an infinite set . [ nolongcycle ] let @xmath410 satisfy the conditions of theorem [ char ] , and suppose @xmath83 acts on a set @xmath21 with @xmath463 . let @xmath493 be a cycle of the reduced moore diagram of @xmath83 . if @xmath493 contains at least two elements and one of them fixes a non - empty , finite set of ends of @xmath20 , then @xmath1 is odd , @xmath493 is the only cycle , and up to conjugation in @xmath31 we have @xmath609 with @xmath610 where @xmath201 and @xmath307 are products of disjoint transpositions , @xmath201 fixes only @xmath611 , and @xmath307 fixes only @xmath612 . in particular , @xmath613 is infinite dihedral . 
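the dihedral assertion at the end of this statement ( and of theorem [ titus ] below ) rests on a standard fact about pairs of involutions , recorded here in generic symbols $a$ and $b$ that are not the paper 's notation : if $a^2 = b^2 = 1$ and $ab$ has infinite order , then
\[
\langle a , b \rangle \;\cong\; \langle a , b \mid a^2 = b^2 = 1 \rangle \;\cong\; D_\infty ,
\qquad a \, ( a b ) \, a^{-1} \;=\; ( a b )^{-1} ,
\]
since every proper quotient of the infinite dihedral group makes $ab$ an element of finite order .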
let @xmath614 , so that there is an arrow in the reduced moore diagram from @xmath615 to @xmath616 for @xmath617 and also from @xmath618 to @xmath619 . let @xmath620 be the length-@xmath9 kneading sequence of @xmath619 , implying that @xmath621 is the kneading sequence of @xmath426 . then @xmath622 and from , @xmath623 hence @xmath615 fixes @xmath350 if and only if each @xmath624 fixes @xmath625 . assume that @xmath493 contains at least two elements and one of them fixes a non - empty , finite set of ends of @xmath20 . suppose first that @xmath615 does not fix @xmath350 for some @xmath560 . if @xmath615 fixes an end @xmath252 of @xmath20 , then @xmath252 can not be the ( infinite ) kneading sequence of @xmath615 . letting @xmath608 be the length-@xmath172 initial word of @xmath252 , we can thus take @xmath172 large enough so that @xmath626 is finitary . let @xmath627 , and note that for @xmath628 , @xmath629 is a restriction of @xmath402 at a word of length @xmath630 . because @xmath402 is finitary , we may take @xmath102 large enough so that @xmath631 . but @xmath615 fixes @xmath252 , and thus @xmath632 , ensuring that @xmath615 fixes all ends of @xmath20 with initial word @xmath265 . hence all @xmath615 either fix no ends of @xmath20 or infinitely many , a contradiction . suppose now that @xmath615 fixes @xmath350 for some @xmath560 ( equivalently , all @xmath560 ) . then for each @xmath633 there is some word @xmath608 with @xmath634 and @xmath635 . thus if any @xmath624 fixes infinitely many ends of @xmath20 , then the same conclusion holds for all the @xmath624 . if @xmath636 , then we claim @xmath637 to see why , note that @xmath638 for all @xmath560 by assumption , and if @xmath639 for some @xmath464 , then part ( 2 ) of lemma [ treeperms ] gives @xmath640 for the remaining @xmath102 . if @xmath212 , then we must have @xmath641 for all @xmath560 , and holds . if @xmath456 and @xmath642 , then @xmath640 and again holds . thus by there are @xmath643 with @xmath644 and @xmath645 ( here @xmath560 is allowed to equal @xmath172 ) . now @xmath646 and @xmath646 are distinct and finitary , and by lemma [ finitary ] at least one of them fixes infinitely many ends of @xmath20 . thus at least one of the @xmath615 fixes infinitely many ends of @xmath20 , and hence all do . this gives a contradiction . we have therefore shown that @xmath647 , and we write @xmath648 . let @xmath649 be the kneading sequence of @xmath225 , implying that @xmath650 is the kneading sequence of @xmath402 , and recall that both @xmath225 and @xmath402 must fix their kneading sequences . if @xmath651 , then there exist @xmath652 fixed by either @xmath225 or @xmath402 . as in the previous paragraph , we conclude that both @xmath225 and @xmath402 fix infinitely many ends of @xmath20 , a contradiction . hence @xmath653 , and from part ( 2 ) of lemma [ treeperms ] we have that every @xmath654 must act as the identity on @xmath21 . if @xmath655 is in the component of the reduced moore diagram containing @xmath493 , then it can not be part of @xmath493 , and neither can any of its restrictions . thus @xmath655 acts trivially on @xmath20 . if @xmath655 is in a component of the reduced moore diagram of @xmath83 that does not contain @xmath493 , then every element of this component must act trivially on @xmath21 . since components are closed under restriction , it follows that every element of this component is trivial . thus @xmath656 . 
now if @xmath657 , then there is some @xmath658 that we may assume without loss of generality is fixed by @xmath225 . then @xmath659 is not in the cycle @xmath660 , and so @xmath661 , showing that @xmath225 fixes infinitely many ends of @xmath20 . thus @xmath402 must as well , which is a contradiction . therefore @xmath662 , and we must have @xmath663 . otherwise @xmath225 and @xmath402 have the same kneading sequence , and thus give two one - cycles in the same component of the kneading graph of @xmath83 , violating theorem [ comp ] . hence conjugating by an appropriate @xmath664 we may assume that @xmath665 and @xmath666 , giving the forms in . moreover , because @xmath667 we must have equality in , which implies that the reduced cycle diagram of @xmath668 has @xmath478 2-cells . by lemma [ treecyc ] it follows that @xmath201 and @xmath307 are products of disjoint transpositions , and hence @xmath1 must be odd . now @xmath669 , and so @xmath219 , and similarly @xmath670 . by lemma [ trans ] or proposition [ sphertrans ] we have that @xmath671 has infinite order , and conjugating @xmath671 by @xmath225 gives @xmath672 , which is @xmath673 . hence @xmath613 is infinite dihedral . [ titus ] let @xmath410 satisfy the conditions of theorem [ char ] , where @xmath83 acts on a set @xmath21 with @xmath463 . let @xmath674 be a 1-cycle in the reduced moore diagram of @xmath83 , and suppose that @xmath372 fixes at least two points of @xmath21 . if @xmath372 fixes a non - empty , finite set of ends of @xmath20 , then @xmath1 is even , @xmath493 is the only cycle , and up to conjugation in @xmath31 we have @xmath675 with @xmath676 where @xmath201 and @xmath307 are products of disjoint transpositions , @xmath677 , and @xmath678 . in particular , @xmath613 is infinite dihedral . suppose that @xmath372 fixes a non - empty , finite set of ends of @xmath20 . let @xmath245 be such that @xmath679 . if @xmath680 , then there are @xmath681 with @xmath682 and @xmath683 . then @xmath684 is finitary for @xmath566 , and by lemma [ finitary ] one of them fixes infinitely many ends of @xmath20 . it follows that @xmath372 fixes infinitely many ends of @xmath20 , a contradiction . hence @xmath685 . if @xmath686 , then @xmath687 , and as before we have a contradiction . thus @xmath688 for some @xmath689 . let @xmath690 . if @xmath691 , then there are @xmath692 with @xmath693 and @xmath694 . but @xmath695 is finitary for @xmath566 , and as before this gives a contradiction . hence @xmath696 , so that @xmath697 , and we may apply part ( 2 ) of theorem [ treeperms ] as in the proof of theorem [ nolongcycle ] to get that @xmath675 . if @xmath698 , then the restriction of @xmath225 at this fixed point must be trivial , which implies that @xmath372 fixes infinitely many ends of @xmath20 , a contradiction . conjugating by an appropriate element @xmath664 allows us to move @xmath449 to @xmath3 and @xmath366 to @xmath200 , so that @xmath372 and @xmath225 have the forms in . because @xmath700 we must have equality in , which implies that the reduced cycle diagram of @xmath668 has @xmath478 2-cells . by lemma [ treecyc ] it follows that @xmath201 and @xmath307 are products of disjoint transpositions , and hence @xmath1 must be even . that @xmath613 is dihedral follows just as in the proof of theorem [ nolongcycle ] . we now have all the tools in place to finish proving our main result . recall that @xmath34 $ ] is exceptional if there exists a finite , non - empty set @xmath35 such that @xmath36 .
suppose @xmath6 is non - exceptional , put @xmath701 , and let @xmath64 be a standard action of @xmath18 on @xmath20 . there is an automaton @xmath83 that generates @xmath24 and satisfies the conditions of theorem [ char ] . suppose that there is a cycle @xmath493 in the reduced moore diagram of @xmath83 such that some element of @xmath493 fixes a non - empty , finite set of ends of @xmath20 . if @xmath702 , then by theorem [ nolongcycle ] , @xmath24 is generated by two elements of the form . from theorem [ compmon ] it follows that the post - critical set of @xmath6 consists of @xmath703 where @xmath704 , @xmath705 , and neither @xmath44 nor @xmath706 is critical . moreover , @xmath707 is contained in the set of critical points of @xmath6 , and hence @xmath6 is exceptional ( indeed , by proposition [ chebclass ] it is conjugate to @xmath48 for some odd @xmath1 ) . if @xmath674 , then let @xmath245 be such that @xmath708 , and let @xmath709 be the set of fixed points in the action of @xmath372 on @xmath21 . by assumption @xmath709 is non - empty . if @xmath710 , then by theorem [ titus ] , @xmath24 is generated by two elements of the form . as before , theorem [ compmon ] implies that @xmath6 is exceptional ( it is conjugate to @xmath42 for some even @xmath1 ) . if @xmath711 with @xmath689 , then @xmath712 must be finitary and hence fix either no ends or infinitely many ends by the remarks at the beginning of this section . thus @xmath372 either fixes no ends or infinitely many ends , contrary to our assumption . if @xmath713 , then by theorem [ compmon ] we have @xmath714 where @xmath44 is non - critical and @xmath715 is contained in the set of critical points of @xmath6 . once again @xmath6 must be exceptional ( it is conjugate to a polynomial of the form ) . therefore the reduced moore diagram of @xmath83 contains no such cycle @xmath493 . by corollary [ n1cor ] we have @xmath38 . in order to prove proposition [ exceptprop ] and the remarks preceding it , we make a brief study of exceptional polynomials ( see the discussion following lemma 2.3 in @xcite for similar remarks in the case of rational functions ) . [ exceptdisc ] suppose that @xmath6 has degree @xmath1 and is exceptional , with @xmath35 a finite set such that @xmath36 . following @xcite , we note that each preimage of a point in @xmath35 is either a critical point or in @xmath35 . hence , letting @xmath716 , we have @xmath717 where @xmath718 is the local degree of @xmath6 at @xmath372 . because @xmath6 has @xmath478 critical points up to multiplicity , this gives @xmath719 with equality holding if and only if @xmath6 has @xmath478 distinct critical points ( necessarily each having multiplicity 2 ) , all of which are contained in @xmath720 . we now give a complete characterization of the case @xmath40 . while our result is not new , the statement and proof differ in form from the standard treatments in the literature ( e.g. ( * ? ? ? * theorem 19.9 ) ) . recall that the chebyshev polynomial of degree @xmath1 is given by @xmath721 . its critical set @xmath722 consists of @xmath478 distinct points and satisfies @xmath723 , @xmath724 , @xmath725 , @xmath726 , and @xmath727 . [ chebclass ] let @xmath34 $ ] be exceptional , with @xmath728 and @xmath705 . then one of the following holds : 1 . @xmath729 , @xmath730 , and @xmath6 is conjugate to @xmath42 for some odd @xmath1 . 2 . @xmath731 , @xmath732 , and @xmath6 is conjugate to @xmath48 for some odd @xmath1 . 3 . @xmath733 , and @xmath6 is conjugate to @xmath42 for some even @xmath1 .
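for convenience , the chebyshev facts just invoked can be written out in the usual normalization ( a standard computation , stated here with a generic degree $d$ rather than the paper 's symbols ) :
\[
T_d(\cos\theta) = \cos(d\theta) , \qquad
T_d'(z) = 0 \iff z = \cos\!\big(\tfrac{j\pi}{d}\big) , \ j = 1 , \dots , d-1 ,
\]
\[
T_d\!\big(\cos\tfrac{j\pi}{d}\big) = (-1)^j , \qquad T_d(1) = 1 , \qquad T_d(-1) = (-1)^d ,
\]
so the $d-1$ critical points are distinct , all critical values lie in $\{\pm 1\}$ , and $T_d^{-1}(\{\pm 1\})$ is contained in $\{\pm 1\}$ together with the critical points ; this is the exceptionality property used throughout this section .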
we outline an algebraic approach , which differs somewhat from the well - known geometric arguments ( see e.g. ( * ? ? ? * theorem 19.9 ) ) . first note that by the definition of exceptional polynomial , @xmath734 and @xmath35 contains no critical points . by the discussion preceding the proposition , @xmath6 has @xmath478 critical points , all of which have multiplicity 2 . applying an appropriate affine conjugation , we may assume that @xmath735 and @xmath736 . in case ( 1 ) of the proposition , we then have @xmath737 where @xmath738 , and @xmath739 $ ] are monic and relatively prime . differentiating gives @xmath740 = r[2(z-1)h'h + h^2]$ ] , but the roots of @xmath741 are the same as the roots of @xmath742 , whence @xmath743 , where @xmath1 is the degree of @xmath6 . because @xmath72 and @xmath256 are relatively prime , we obtain @xmath744 and @xmath745 . differentiating again and substituting yields @xmath746 this differential equation gives a recurrence relation on the coefficients of @xmath72 ; with the assumption that @xmath72 is monic , this uniquely determines all coefficients of @xmath72 . because @xmath747 , we have by that @xmath748 , thereby determining @xmath474 , and thus also @xmath6 . however , @xmath42 clearly satisfies the same conditions as @xmath6 , and thus @xmath749 . part ( 1 ) of the proposition follows . the other parts proceed similarly . to prove proposition [ exceptprop ] , we require the following result . [ chebcomp ] let @xmath463 and suppose that @xmath64 is generated by @xmath660 where @xmath225 and @xmath402 are distinct and non - trivial , @xmath750 , and @xmath671 is spherically transitive . then @xmath751 , where @xmath474 is the number of elements in @xmath660 fixing at least one end of @xmath20 . because @xmath671 is spherically transitive , we have that for any @xmath63 its action on @xmath22 is a @xmath89-cycle . clearly conjugation of @xmath671 by @xmath225 gives @xmath673 , and it follows that if @xmath25 is the action of @xmath24 on @xmath22 then @xmath25 is dihedral of order @xmath752 and a complete list of its elements is given by the actions of @xmath753 and @xmath754 for @xmath755 . none of the elements of the form @xmath753 for @xmath756 can have fixed points in @xmath22 . an element of the form @xmath754 is conjugate to @xmath402 if @xmath102 is odd and to @xmath225 if @xmath102 is even . now the action of @xmath225 either has a fixed point in @xmath22 for all @xmath9 ( if @xmath225 fixes an end of @xmath20 ) or has no fixed points in @xmath22 for @xmath9 large enough , and similar statements hold for @xmath402 . thus for @xmath9 sufficiently large , the number of elements of @xmath25 fixing at least one point of @xmath22 is @xmath757 . dividing by @xmath758 and letting @xmath759 gives @xmath751 . let @xmath59 have degree @xmath1 , and let @xmath64 be a standard action of @xmath18 on @xmath20 . if @xmath59 is conjugate to @xmath42 for @xmath1 even , then it follows from theorem [ compmon ] that @xmath24 is generated by two elements of the form . proposition [ chebcomp ] then applies to show @xmath61 . if @xmath59 is conjugate to @xmath48 for @xmath1 odd , then it follows from theorem [ compmon ] that @xmath24 is generated by two elements of the form . if @xmath59 is conjugate to @xmath42 for @xmath1 odd , then @xmath24 is generated by two elements of the form @xmath760 with assumptions as in . in either case proposition [ chebcomp ] applies to show @xmath761 . 
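the counting in the proof of proposition [ chebcomp ] can be summarized as follows , in generic symbols chosen here for illustration ( $a , b$ the two generators , $d$ the size of the alphabet , $G_n$ the level-$n$ quotient , $k$ as in the statement ) . at level $n$ the quotient is dihedral of order $2 d^{\,n}$ , the non - trivial rotations $(ab)^j$ are fixed - point free because $ab$ acts as a single $d^{\,n}$-cycle , and the $d^{\,n}$ reflections $a(ab)^j$ fall alternately into the conjugacy classes of $a$ and $b$ . if exactly $k$ of the two generators fix an end , then for large $n$
\[
\#\{\, g \in G_n : g \text{ fixes a point at level } n \,\} \;=\; \tfrac{k}{2}\, d^{\,n} + O(1) ,
\qquad
\frac{ \tfrac{k}{2}\, d^{\,n} + O(1) }{ 2\, d^{\,n} } \;\longrightarrow\; \frac{k}{4} \quad ( n \to \infty ) ,
\]
which is the limiting value asserted in the proposition under these assumptions .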
i would like to thank lasse rempe , juan rivera - letelier , and mikhail lyubich for helpful comments and references to the literature on exceptional maps . i also extend my thanks to the institute for computational and experimental research in mathematics , where i presented and received valuable feedback on some of these results as part of the semester on complex and arithmetic dynamics .
the iterated monodromy group of a post - critically finite complex polynomial of degree @xmath0 acts naturally on the complete @xmath1-ary rooted tree @xmath2 of preimages of a generic point . this group , as well as its pro - finite completion , acts on the boundary of @xmath2 , which is given by extending the branches to their " ends " at infinity . we show that in most cases , elements that have fixed points on the boundary are rare , in that they belong to a set of haar measure @xmath3 . the exceptions are those polynomials linearly conjugate to multiples of chebyshev polynomials and a case that remains unresolved , where the polynomial has a non - critical fixed point with many critical pre - images . the proof involves a study of the finite automaton giving generators of the iterated monodromy group , and an application of a martingale convergence theorem . our result is motivated in part by applications to arithmetic dynamics , where iterated monodromy groups furnish the " geometric part " of certain galois extensions encoding information about densities of dynamically interesting sets of prime ideals .
it is known that the brownian motion of particles in the presence of asymmetric structures and in non - equilibrium conditions may result in directed motion . this phenomenon , called brownian motor or brownian ratchet is ubiquitous in the living cells . the known examples of this phenomenon are the electric potential difference through the ion channels and the movement of the kinesin motor protein along the microtubule @xcite . this phenomenon has attracted great interest in recent years , due to its applications in the separation of particles @xcite and making pumps @xcite and motors @xcite in fine dimensions . the directed motion of particles arisen by an asymmetric structure as a non - equilibrium phenomenon has been observed in systems covering a broad range of scales such as macroscopic elastic discs @xcite , mesoscopic gears @xcite , microscopic colloidal systems @xcite , moving cells @xcite and ions @xcite . in the case of polymers , directed translocation of a polymer through a curved bilayer membrane @xcite and polymer passage through a membrane in the presence of chaperons @xcite have been studied . despite extended studies in this field , the polymer motion through asymmetric structures such as a cone - shaped channel is poorly understood . it has been shown that cone - shaped channels have important applications in rectifying the ionic currents and in the simulation of biological ion channels @xcite . polymer translocation through cone - shaped channels has also been studied , in the literature @xcite . it has been observed that the dependencies of the translocation time and the capture rate on the applied voltage and the polymer length are qualitatively similar to those of the cylindrical channels . the most important point in the case of cone - shaped channels is the very high strength of the electric field in the narrow entry of the channel . it causes the ionic current through the channel to be affected only by the few monomers in the narrow entry of the channel @xcite . this point is important in the application of these channels in dna sequencing . one important challenge of dna sequencing by the ordinary channels such as @xmath0-hemolysin protein channel is the simultaneous effect of 10 - 15 nucleic acids inside the channel on the ionic current @xcite . however , in the cone - shaped protein channel , mspa , only two nucleic acids which are close to the channel apex simultaneously affect the current @xcite . in dna translocation experiments through mspa , the difference between the effects of the four types of the nucleic acids on the ionic current is larger , compared to the experiments using the cylindrical channels @xcite . this is also an important advantage of the cone - shaped channels for dna sequencing @xcite . in this paper , the translocation of a flexible polymer through a cone - shaped channel in the presence of no external driving field is studied , theoretically and by computer simulation . during the translocation process , a force of entropic origin acts on the part of the polymer which is inside the channel . this force originates from the entropic tendency of the polymer toward the larger entry of the channel . the translocation time is a decreasing function of this force . we set out to obtain the effective force , @xmath1 , acting on the polymer as a function of the channel apex angle and the channel length and compare it with the simulation results . 
for a given length of the cone - shaped channel , we calculate the effective force for the two cases of small and large apex angles of the channel , theoretically . during the translocation of a polymer through a narrow channel in a wall ( from the cis to the trans side ) , it is known that the monomers of the polymer segment passed to the trans side accumulate near the wall , in front of the channel exit . therefore , to calculate the effective force acting on a polymer which translocates through a cone - shaped channel of small base diameter ( from the apex to the base ) , we consider the channel as a closed cavity confining a segment of the polymer . in the case of large base diameters , however , we consider one of the polymer ends fixed in the apex of the cone - shaped channel and obtain the confinement free - energy and then the force exerted on the polymer . as a further check of the reliability of the results , the force is calculated by two different methods in the case of large diameters of the channel base . for a given length of the channel , we find that the effective force is a non - monotonic function of the channel apex angle . combination of the results obtained from the two cases of small and large apex angles shows that the force as a function of the apex angle has a maximum and then a minimum . we also obtain the dependence of the force on the channel length . for each value of the channel apex angle , the force is a monotonically increasing function of the channel length . these results are supported by our simulation data for the translocation time of a polymer through a cone - shaped channel . the theory also predicts the polymer length inside the channel , which is in agreement with the simulation results . the rest of the paper is organized as follows . in sec . [ theory ] , we present the theory , for the two cases of small and large apex angles . the force on the polymer and the average number of the monomers inside the channel are discussed . in sec . [ simulation ] , the simulation results of the polymer translocation through a cone - shaped channel are presented and compared with the theory . finally , in the last section , the paper is summarized , the results are discussed and some notes are presented on the polymer equilibrium during its translocation through the cone - shaped channel . [ fig . [ closed - force ] caption : ( a ) the effective force , @xmath1 , as a function of the cone apex angle . as can be seen , the effective force has a maximum at small apex angles . inset : schematic of a polymer confined in a frustum by two walls perpendicular to the channel axis . ( b ) the effective force as a function of the frustum length , for a channel of apex angle @xmath2 . for a given value of the apex angle , the effective force monotonically increases with the frustum length . @xmath3 for both panels ( a ) and ( b ) . ] simulation results ( in the next section ) show that the polymer translocation through a cone - shaped channel is a driven process . when one end of a polymer is fixed in the channel apex , a driving force is exerted on the polymer by the channel . this force results from the increase in the polymer entropy with moving to the wider parts of the channel . during the polymer translocation , also , the very narrow apex of the channel divides the polymer into separate parts with separate free energies . the free energy of the polymer part inside the channel decreases with the polymer movement toward the wide entry of the channel .
indeed , the system is out of equilibrium and writing a total free energy for the whole polymer is meaningless . hence , in this section , we consider the part of the polymer which is inside the channel as a separate polymer in a conical channel and calculate its free energy . the derivative of the free energy gives the force exerted on the polymer , which is the driving force in the polymer translocation . we discuss the assumption of polymer equilibrium in sec . [ discuss ] . it has already been shown that during polymer translocation through a narrow channel , the monomers of the polymer segment that has passed to the opposite side crowd close to the channel @xcite . accordingly , for the case of small apex angles of the channel , we assume that the channel is a closed volume containing a part of the polymer . with this assumption , we calculate the force acting on the polymer due to the asymmetric shape of the channel . in the case of large apex angles , we consider a polymer whose end is fixed at the apex of a cone - shaped channel and calculate the driving force acting on it , using two different methods . by combining the results of the two cases , we infer the qualitative behavior of the force acting on the polymer as a function of the channel apex angle . consider a flexible polymer , consisting of @xmath4 spherical monomers of diameter @xmath5 , which is confined inside a frustum ( see the inset of fig . [ closed - force](a ) ) . the confinement free energy of a polymer inside a closed space of volume @xmath6 is known to be @xmath7 , where @xmath8 is the flory exponent @xcite . in our case , the volume of the confining space is equal to @xmath9 . therefore , the free energy of confinement can be written as @xmath10 here , @xmath0 is half of the apex angle of the channel , and @xmath11 is the distance between the region in which the polymer is confined and a cross section of the cone with diameter @xmath12 . @xmath13 is the length of the region that confines the polymer . when the confinement region is moved toward the wider parts of the channel ( increasing @xmath11 in the inset of fig . [ closed - force](a ) ) , the entropy of the polymer increases and its confinement free energy decreases . in other words , if one removes the two confining walls , which are perpendicular to the channel axis , the polymer moves to the wider part of the channel to gain more entropy . this is the origin of the force exerted on the segment of a polymer inside the channel , in the course of its translocation through a cone - shaped channel . to obtain the driving force in the polymer translocation through a cone - shaped channel of length @xmath13 and tip diameter @xmath12 , we calculate the derivative of the confinement free energy ( eq . [ small - energy ] ) with respect to @xmath11 , at @xmath14 ; @xmath15 . one should note here that for a translocating polymer , @xmath4 in eq . [ small - energy ] is the number of monomers inside the channel , not the total length of the polymer . the force , @xmath1 , acting on the polymer segment inside the channel is obtained as @xmath16 in this equation , the number of monomers inside the channel , @xmath4 , should be substituted from eq . [ n ] ( see below ) . in figs . [ closed - force](a ) and [ closed - force](b ) , the force , @xmath1 , is shown as a function of the channel apex angle , @xmath0 , for a given length of the channel , and as a function of the channel length , @xmath13 , for a given value of @xmath0 .
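as an illustration of how this small - angle estimate behaves , the entropic force can be evaluated numerically . the short python sketch below is not taken from the paper : it assumes a standard closed - cavity scaling , f / kt ~ n ( n a^3 / v )^( 1 / ( 3 nu - 1 ) ) , for the confinement free energy ( which may differ from the form hidden behind the equation references above ) , uses the frustum volume between @xmath11 and @xmath11 plus the channel length as the confining volume , holds the number of monomers inside the channel fixed for simplicity ( the paper substitutes it from its eq . [ n ] ) , and uses illustrative parameter values throughout .

```python
# Minimal numerical sketch of the small-apex-angle estimate (illustrative only).
# Assumptions (not the paper's hidden equations):
#   * closed-cavity confinement free energy  F/kT ~ N_in * (N_in a^3 / V)**(1/(3*nu - 1))
#   * the confining region is the frustum between x and x + L0 of a cone with
#     half apex angle alpha and tip diameter d0
#   * N_in, the number of monomers inside the channel, is held fixed here
import numpy as np

nu = 0.588            # Flory exponent in three dimensions
a = 1.0               # monomer diameter (simulation units)
d0 = 3.0              # tip (apex) diameter of the truncated cone
L0 = 20.0             # channel length
N_in = 40             # monomers assumed to sit inside the channel
beta = 1.0 / (3.0 * nu - 1.0)

def frustum_volume(x, alpha):
    """Volume of the frustum between axial positions x and x + L0."""
    r1 = d0 / 2.0 + x * np.tan(alpha)
    r2 = d0 / 2.0 + (x + L0) * np.tan(alpha)
    return np.pi * L0 / 3.0 * (r1**2 + r1 * r2 + r2**2)

def free_energy(x, alpha):
    """Assumed closed-cavity confinement free energy, in units of kT."""
    return N_in * (N_in * a**3 / frustum_volume(x, alpha))**beta

def entropic_force(alpha, dx=1e-4):
    """f = -dF/dx at x = 0, evaluated by a central finite difference."""
    return -(free_energy(dx, alpha) - free_energy(-dx, alpha)) / (2.0 * dx)

angles = np.linspace(0.01, 0.6, 60)       # half apex angle, radians
forces = [entropic_force(al) for al in angles]
print("maximum of f at alpha ~ %.3f rad" % angles[int(np.argmax(forces))])
```

with these assumptions , the numerically evaluated force vanishes at zero apex angle , passes through a maximum at a small angle and then decays , in line with the qualitative behavior described for fig . [ closed - force](a ) .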
as can be seen in fig . [ closed - force](a ) , the force as a function of the apex angle has a maximum at small values of @xmath0 . the existence of this maximum can be explained by considering the two factors that determine the force : the strength of the polymer confinement in the channel and the magnitude of the asymmetry of the channel shape . at small apex angles , the channel volume is small and the strength of the confinement is high . by increasing the apex angle , and hence the asymmetry of the channel shape , the force of entropic origin exerted on the polymer increases . by increasing the apex angle further , the confinement effect of the channel on the polymer weakens and the strength of the force decreases . the maximum value of the force corresponds to a value of the apex angle for which the combination of the confinement effect and the asymmetric shape of the channel has the optimum driving effect . in the case of large apex angles , treating the polymer as confined in a closed volume is not reasonable . instead , we consider a polymer with one of its ends fixed at a cross section of a long cone - shaped channel and use the blob method to calculate the effective driving force . for a conical channel , the diameter of a blob depends on its position along the channel . it is equal to the channel diameter at each position ; @xmath17 ( see fig . [ open - conic ] ) . here , @xmath11 is the distance of the fixed end of the polymer from a cross section of the channel with diameter @xmath12 . following ref . @xcite , we write the confinement free energy of the polymer as @xmath18 , which gives @xmath19 . here , @xmath13 is the polymer size along the channel . the number of monomers inside each blob is @xmath20 @xcite . also , the number density of the monomers inside each blob is @xmath21 , and the cross - section area of each blob scales as @xmath22 . hence , the number of monomers inside a region of thickness @xmath23 inside the channel ( the region colored in gray in fig . [ open - conic ] ) can be written as @xmath24 @xcite . the size of the polymer along the channel can be obtained by equating the integral of @xmath25 over @xmath26 to the total number of monomers , @xmath4 ; @xmath27 . the force exerted on the polymer can be calculated from the derivative of the confinement free energy , eq . [ large - energy ] , with respect to @xmath11 , at @xmath14 . here , one should note that @xmath4 is the total number of monomers of the polymer and its value is constant . instead , the polymer size along the channel axis , @xmath13 , depends on parameters such as @xmath11 ( eq . [ n ] ) . from eq . [ large - energy ] , we can calculate the force @xmath1 ; @xmath28 . the derivative of @xmath13 with respect to @xmath11 is found from eq . [ n ] , @xmath29 . accordingly , @xmath30 . to calculate the force in the polymer translocation through a cone - shaped channel , @xmath13 in eq . [ large - force ] should be substituted by the channel length . the assumption of constant @xmath4 and variable @xmath13 in the calculation of the force is also valid for the case of polymer translocation , because , at each moment , the polymer segment inside the channel does not feel the finite length of the channel . the exerted force can also be calculated in another way , whose algebra is more involved ; this re - calculation is useful for checking the results .
to this end , the blobs are defined as spheres tangent to the channel wall that cannot penetrate each other . the same method for defining the blobs has been used previously . the blobs are assumed to be spheres tangent to the cone , so the radius of a blob with its center at position @xmath26 is @xmath31 ( see fig . [ open - conic2](a ) ) . the confinement free energy of the polymer is proportional to the number of these blobs . to count the blobs and calculate the confinement free energy , we use the geometrical relation between the positions of two consecutive blobs inside the channel ; @xmath32 ( see fig . [ open - conic2](a ) ) . using this recursive relation , one can find the explicit relation between the position of a blob , @xmath33 , and its number along the channel , @xmath34 ; @xmath35 , where @xmath36 . the first and the last blobs are tangent to the beginning and the end of the channel , respectively . hence , their positions along the channel , @xmath37 and @xmath38 , can be obtained from the relations @xmath39 and @xmath40 , respectively . @xmath41 is the total number of blobs ( see fig . [ open - conic2 ] ) . using the equations for @xmath33 , @xmath37 and @xmath38 , the number of blobs , @xmath41 , and the confinement free energy are obtained as @xmath42-\log\left[2a\sin\alpha + d_0\cos\alpha\right]}{\log(1+\sin\alpha)-\log(1-\sin\alpha)}.\ ] the polymer extension along the channel axis can be obtained from the constraint @xmath43 , where @xmath44 is the number of monomers inside the @xmath34th blob . substituting @xmath45 and @xmath46 , and using the equation for @xmath33 , one obtains @xmath47^{\frac{1}{\nu}}-\left[2a\sin\alpha + d_0\cos\alpha\right]^{\frac{1}{\nu}}}{(1+\sin\alpha)^{\frac{1}{\nu}}-(1-\sin\alpha)^{\frac{1}{\nu}}}.\ ] using the derivatives of eqs . [ large - energy2 ] and [ n2 ] , we have @xmath48 . note that eq . [ large - force2 ] is quite similar to the result obtained in eq . [ large - force ] , and differs only in the coefficient @xmath49 . [ figure caption ( fig . [ open - conic2 ] ) : ( a ) the radius of a blob with its center on the channel axis , and the relation between the positions of two consecutive blobs are shown in the figure . ( b ) , ( c ) the first blob is tangent to the beginning of the channel , and the last blob is tangent to its end . ] when the diameter of the first blob is smaller than the length of the channel , @xmath50 , we expect the results obtained from this method to be the same as those of the previous method . the value of @xmath51 does not depend on the channel length , but it increases rapidly with the apex angle . as shown in fig . [ open - force ] , the two methods give the same dependence of the force on the channel length . also , the dependence of the force on the apex angle calculated from the two methods is the same at small apex angles , and becomes different only at large angles . in the case of large apex angles , in contrast to the case of small ones , the force increases monotonically with the apex angle ( fig . [ open - force](a ) ) . the force increases with the channel length , @xmath13 , and becomes constant at larger values of @xmath13 ( fig . [ open - force](b ) ) . indeed , at large values of @xmath13 , the channel diameter becomes larger than the radius of gyration of the polymer and the channel no longer has any confining effect on the polymer .
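the blob construction described above can be checked with a short numerical sketch . the python code below is an illustration rather than a reproduction of the paper 's equations : it assumes that a blob centred on the axis at position x ( measured from the tip cross section of diameter d0 ) and tangent to the cone wall has radius r( x ) = x sin( alpha ) + ( d0 / 2 ) cos( alpha ) , that consecutive blobs are mutually tangent , and that each blob of diameter d accommodates roughly ( d / a )^( 1 / nu ) monomers ; all parameter values are illustrative .

```python
# Minimal sketch of the blob construction for the second method (illustrative).
# Assumptions (a plausible reading of the geometry, not the paper's formulas):
#   * a blob centred on the channel axis at position x and tangent to the cone
#     wall has radius R(x) = x*sin(alpha) + (d0/2)*cos(alpha)
#   * consecutive blobs are tangent to each other, which gives the recursion
#     x_{n+1} = [x_n*(1 + sin(alpha)) + d0*cos(alpha)] / (1 - sin(alpha))
#   * each blob of diameter D holds ~ (D/a)**(1/nu) monomers, and the
#     confinement free energy is ~ kT per blob
import numpy as np

def count_blobs(alpha, L, d0=3.0, a=1.0, nu=0.588):
    """Count whole blobs fitting between the two ends of the channel and the
    number of monomers they can accommodate."""
    c = 0.5 * d0 * np.cos(alpha)
    x = c / (1.0 - np.sin(alpha))     # first blob tangent to the channel entry
    n_blobs, n_monomers = 0, 0.0
    while x + (x * np.sin(alpha) + c) <= L:
        R = x * np.sin(alpha) + c
        n_blobs += 1
        n_monomers += (2.0 * R / a)**(1.0 / nu)
        x = (x * (1.0 + np.sin(alpha)) + 2.0 * c) / (1.0 - np.sin(alpha))
    return n_blobs, n_monomers

for alpha in (0.1, 0.2, 0.4):
    nb, nm = count_blobs(alpha, L=40.0)
    print("alpha=%.1f rad: %d blobs, ~%.0f monomers inside" % (alpha, nb, nm))
```

counting one unit of kt per blob then gives an estimate of the confinement free energy , and summing the per - blob monomer numbers gives an estimate of the polymer length that the channel can accommodate , analogous in spirit to eqs . [ large - energy2 ] and [ n2 ] .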
[ figure caption ( fig . [ open - force ] ) : the force as a function of ( a ) the apex angle , and ( b ) the channel length , with the apex angle @xmath52 . in these figures , @xmath3 . the force is calculated from the two methods described in the text . when the apex angle is small , the two methods give the same results . the force increases with the cone angle and length and becomes constant afterwards . ] the length of the polymer segment inside the channel , @xmath4 , versus the channel length and the apex angle is shown in figs . [ blobs - n](a ) and [ blobs - n](b ) . although the values of @xmath4 obtained from the two methods are different , they have the same dependence on the channel length and the apex angle . for comparison with the simulation results presented in the next section , power - law functions are fitted to the results . in fig . [ blobs - n](a ) , two power - law functions with exponents @xmath53 and @xmath54 are fitted to the result of the theory , at small and large apex angles , respectively . also , as shown in fig . [ blobs - n](b ) , the curve obtained from the theory is followed well by a power - law function with exponent @xmath55 . [ figure caption ( fig . [ blobs - n ] ) : the length of the polymer segment inside the channel versus ( a ) the apex angle , and ( b ) the length of the channel ( for @xmath52 ) . to make the comparison easier , the polymer length obtained from the second method is multiplied by 3 , in both panels ( a ) and ( b ) . as can be seen , the two methods predict the same dependence of the polymer length on the channel length and the apex angle . power - law fits to the curves are shown , for later comparison with the simulation results ( see sec . [ simulation ] ) . ] we use molecular dynamics ( md ) simulations to check our theory . the polymer is described by a coarse - grained bead - spring model . the interaction between the monomers is the short - ranged lennard - jones repulsive potential @xmath56 where @xmath57 is the monomer diameter and the md length scale , and @xmath58 is the lennard - jones energy scale . the monomers are connected by the harmonic potential , @xmath59 , in which @xmath60 is the spring constant , @xmath61 is the distance between two consecutive monomers along the polymer and @xmath62 is their equilibrium distance . the cone - shaped channel and the two walls are modeled by a lennard - jones potential between them and the monomers ( see the schematic of the channel and the polymer in the inset of fig . [ conic - biased ] ) . the time step of the simulations is @xmath63 , where @xmath64 is the md time scale , and @xmath65 is the monomer mass . the simulations are performed in the nvt ensemble , using the langevin thermostat , at the constant temperature @xmath66 . the langevin equation @xmath67 is integrated to describe the motion of the monomers , where @xmath68 is the friction coefficient , and @xmath69 is the external force from the channel and the walls . @xmath70 is a gaussian white noise which satisfies the fluctuation - dissipation theorem . the simulations are done with espresso @xcite . at the beginning of each simulation , the monomers are arranged on the axis of the channel , such that the lengths of the two polymer segments outside the channel , on the two sides , are the same . then , we fix the monomer at the channel apex and let the other monomers equilibrate . after that , we release the fixed monomer and set the simulation time equal to zero . the passage time is the time at which the last monomer of the polymer is at the channel apex . in our simulations , the diameter of the channel apex is @xmath71 and the polymer has 100 monomers , unless otherwise stated .
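to make the model concrete , the following stand - alone python sketch implements a stripped - down version of the bead - spring model described above : purely repulsive lennard - jones ( wca ) pair interactions , harmonic bonds between consecutive monomers , and an overdamped langevin ( brownian dynamics ) integrator . it is not the authors ' espresso setup : the cone - shaped channel and the two walls are omitted , the integrator neglects inertia , and all parameter values are illustrative .

```python
# Stand-alone sketch of a coarse-grained bead-spring chain with a Langevin
# thermostat (overdamped).  NOT the paper's ESPResSo setup: no channel, no
# walls, illustrative parameters, and a short run only.
import numpy as np

rng = np.random.default_rng(0)
N, sigma, eps, k_bond, r0 = 20, 1.0, 1.0, 100.0, 1.0
kT, gamma, dt, n_steps = 1.0, 1.0, 1e-4, 10000
rcut = 2.0**(1.0 / 6.0) * sigma          # WCA (purely repulsive LJ) cutoff

pos = np.cumsum(np.full((N, 3), r0 / np.sqrt(3.0)), axis=0)  # straight start

def forces(pos):
    f = np.zeros_like(pos)
    # harmonic bonds between consecutive monomers
    bond = pos[1:] - pos[:-1]
    d = np.linalg.norm(bond, axis=1, keepdims=True)
    fb = -k_bond * (d - r0) * bond / d
    f[1:] += fb
    f[:-1] -= fb
    # short-ranged repulsive (WCA) pair forces
    for i in range(N - 1):
        rij = pos[i] - pos[i + 1:]
        d2 = np.sum(rij**2, axis=1)
        mask = d2 < rcut**2
        if np.any(mask):
            inv6 = (sigma**2 / d2[mask])**3
            fmag = 24.0 * eps * (2.0 * inv6**2 - inv6) / d2[mask]
            fij = fmag[:, None] * rij[mask]
            f[i] += fij.sum(axis=0)
            f[i + 1:][mask] -= fij
    return f

for step in range(n_steps):
    noise = rng.normal(0.0, np.sqrt(2.0 * kT * dt / gamma), size=pos.shape)
    pos += forces(pos) / gamma * dt + noise

print("end-to-end distance after the short run: %.2f sigma"
      % np.linalg.norm(pos[-1] - pos[0]))
```

in a full translocation simulation , the channel and the walls would enter through additional repulsive terms in the force routine , as in the setup described above .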
when the channel is cylindrical ( @xmath72 ) , the polymer exits the channel with equal probability from either side of the channel . the polymer passage time in this case scales with the polymer length with an exponent of 2.2 . this is consistent with previous results for unbiased polymer translocation @xcite . in contrast , in the translocation through the cone - shaped channel , the polymer leaves the channel from the base in all our simulations , and the passage time scales with the polymer length with an exponent of 1.4 ( see fig . [ conic - biased ] ) . this exponent is close to the exponent 1.6 predicted in the literature for the driven translocation ( of infinitely long polymers ) @xcite . it also matches the exponent for the driven translocation of shorter polymers @xcite . this shows that the force exerted by the cone - shaped channel is the determining factor and that the polymer translocation through this channel is a driven process . [ figure caption ( fig . [ conic - biased ] ) : the passage time versus the polymer length , and the exponent is close to the case of driven polymer translocation @xcite . the error bars result from averaging over 21 runs of the simulation . inset : schematic of the cone - shaped channel , the two walls and the polymer used in the simulation . ] the polymer passage time versus the channel apex angle is shown in fig . [ tau - sim](a ) , for @xmath73 . upon increasing the channel angle , @xmath0 , from @xmath74 , the passage time decreases and shows a minimum at @xmath75 . then the passage time increases up to its maximum value at @xmath76 and then decreases at larger angles . note that at small angles , the channel base diameter is small and the simulation data should be described by the small - apex - angle case of the theory , eq . [ small - force ] . the force obtained from this equation first increases and then decreases with the apex angle , as shown in fig . [ closed - force](a ) . considering that the passage time is a decreasing function of the force , it would have the same behavior as the simulation result . however , at large apex angles , the channel base diameter is large and eq . [ large - force ] should be used to describe the force exerted on the polymer . the force obtained from the large - apex - angle case of the theory increases monotonically with the angle ( fig . [ open - force](a ) ) . this is in agreement with the reduction of the passage time at large apex angles . the combination of the predictions of the two cases of small and large apex angles of the theory describes the simulation result reasonably well . in the theory , in the case of small apex angles , we assumed that the channel is a closed volume . to justify this assumption , the average density of the monomers within a distance @xmath77 of the channel base is calculated in the simulations ( see the inset of fig . [ tau - sim](a ) ) . it can be seen that the monomer density close to the channel base is higher in the case of small apex angles . the sudden decrease in the value of the monomer density occurs where the base diameter becomes of the order of the monomer diameter ; @xmath78 . [ figure caption ( fig . [ tau - sim ] ) : the polymer passage time versus ( a ) the channel apex angle , and ( b ) the channel length ( for @xmath52 ) . the passage time is a non - monotonic function of the apex angle , but decreases monotonically with the length . the inset of panel ( a ) shows the monomer density outside the channel base ; it shows that the monomers crowd close to the channel . the error bars result from averaging over 15 runs of the simulation . for the zero apex angle , 42 runs of the simulation were performed . ] the polymer passage time versus the channel length is shown in fig . [ tau - sim](b ) , for the apex angle @xmath52 . this passage time decreases with the channel length and then becomes constant .
from the theory , the force in the case of small base diameters increases with the channel length at small values of @xmath13 ( eq . [ small - force ] and fig . [ closed - force](b ) ) . also , in the case of large base diameters , the force increases monotonically with @xmath13 ( eq . [ large - force ] and fig . [ open - force](b ) ) . these trends are in agreement with the result shown in fig . [ tau - sim](b ) . it is worth studying the number of monomers in the polymer segment inside the channel during its passage . the mean number of monomers inside the channel as a function of the channel angle and length is shown in figs . [ nin](a ) and [ nin](b ) . the number of monomers versus the channel length can be described by a power - law function with exponent @xmath79 . however , the number of monomers against the tangent of the channel apex angle can be fitted with two different exponents , @xmath80 and @xmath81 , for small and large apex angles , respectively . these figures are plotted for the same parameters as those of fig . [ blobs - n ] , and the fit exponents are close . [ figure caption ( fig . [ nin ] ) : the mean number of monomers inside the channel versus ( a ) the tangent of the channel apex angle , and ( b ) the channel length ( for @xmath52 ) . as shown , the fit exponents are close to those of fig . [ blobs - n ] . ] an important assumption in our theoretical calculation of the entropic force exerted on the polymer is that the polymer is in equilibrium inside the channel . however , it is known that the polymer passage through a channel is a non - equilibrium process , and it has been shown that the two ends of the polymer outside the channel cannot equilibrate during the passage process @xcite . here , we investigate the validity of the assumption that the polymer segment inside the channel is in equilibrium during the passage process . to this end , we compare the relaxation time of this segment with the time needed for the polymer to traverse the channel length , @xmath13 . if the former time is smaller , the assumption is reasonable . the velocity of the polymer passing through the channel can be written as @xmath82 , where @xmath1 is the entropic force exerted on the polymer , @xmath83 is the total length of the polymer , and @xmath68 is the friction constant for each monomer . in this relation , the friction force acting on the polymer is calculated from the rouse model @xcite . therefore , the time needed for the polymer to traverse the channel length is @xmath84 @xcite . the relaxation time of the polymer segment inside the channel , from the rouse model , is @xmath85 , where @xmath4 is the length of the polymer segment inside the channel . one should note that such scaling relations have unknown numerical prefactors , so they cannot by themselves settle the question of the equilibrium assumption . for this reason , we perform simulations to check the polymer equilibrium inside the channel . in separate simulations , one end of a polymer is kept fixed at the channel apex and , after equilibration of the polymer , the density profile of the monomers inside the channel is measured . the density profile of the monomers inside the channel is also obtained during the passage of a long polymer ( in the simulations of sec . [ simulation ] ) . comparison of the results of the two simulations shows that the assumption that the polymer segment inside the channel is in equilibrium is reasonable in the range of the parameters used in our study . this shows that in our simulations the polymer length inside the channel is small enough to equilibrate in a short time .
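purely as an illustration of this comparison , the two time scales can be put side by side numerically . the sketch below sets every unknown prefactor to one , which is precisely the caveat raised above , so the numbers are indicative only ; the force value and the chain lengths are illustrative and would in practice come from the theory and the simulation parameters .

```python
# Order-of-magnitude sketch of the equilibration check described above.
# All prefactors are set to one (unknown in the scaling relations), so the
# numbers below are illustrative only; f, N_total, N_in and L are assumed values.
nu = 0.588
kT, gamma, a = 1.0, 1.0, 1.0

def traversal_time(f, N_total, L):
    """Time to cover the channel length, tau ~ L * N_total * gamma / f."""
    return L * N_total * gamma / f

def rouse_relaxation_time(N_in):
    """Rouse-like relaxation time of the confined segment, tau ~ gamma*a**2*N**(1+2*nu)/kT."""
    return gamma * a**2 * N_in**(1.0 + 2.0 * nu) / kT

f, N_total, N_in, L = 0.5, 100, 20, 40.0
print("traverse: %.1f   relax: %.1f"
      % (traversal_time(f, N_total, L), rouse_relaxation_time(N_in)))
```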
in summary , the effective force of entropic origin acting on a polymer in the course of its passage through a cone - shaped channel was calculated and compared with the results of coarse - grained md simulations . the force was obtained for the two cases of small and large apex angles of the channel . combination of the results of the two cases showed that the effective force exerted on the polymer is a non - monotonic function of the channel apex angle . the force increases monotonically with the channel length . the simulations showed the importance of the force exerted by the channel . it was shown that the simulation results for the polymer passage time through a cone - shaped channel can be described by the theory . also , it was shown that the simulation and theoretical results for the polymer length inside the channel during the passage of a long polymer are in good agreement . the simulation results also support the assumption that the polymer segment inside the channel is in equilibrium during the polymer passage .
entropy - driven directed translocation of a flexible polymer through a cone - shaped channel is studied theoretically and by computer simulation . for a given length of the channel , the effective force of entropic origin acting on the polymer is calculated as a function of the apex angle of the channel . it is found that the translocation time is a non - monotonic function of the apex angle : by increasing the apex angle from zero , the translocation time shows a minimum and then a maximum . also , it is found that , regardless of the value of the apex angle , the translocation time is a monotonically decreasing function of the channel length . the results of the theory and the simulation are in good qualitative agreement .
urinary tract infections ( utis ) are among the most prevalent bacterial infections in humans . this constitutes a substantial financial and social burden on healthcare providers in developed countries such as the usa , and even more so in developing countries . escherichia coli , the most prominent member of the family enterobacteriaceae , is the number one cause of utis . it is not uncommon for utis to be treated empirically with broad - spectrum antibiotics , spurring more antibiotic resistance . the dissemination of resistance elements has been aided to a great extent by horizontal gene transfer . the latter process uses a number of biological tools , the most notable of which are integrons . not only can integrons harbor a number of resistance gene cassettes in tandem , but they also provide a local promoter for their transcription . furthermore , integrons are capable of expanding their collection of promoterless gene cassettes through the action of specialized site - specific recombination enzymes ( inti ) . therefore they operate as fully equipped site - specific recombination systems which can reside on other mobile genetic elements such as transposons and plasmids to horizontally transfer resistance - encoding genes between bacterial species , particularly within the enterobacteriaceae family . trimethoprim was a widely used and cheap antibiotic for treating utis ; it inhibits the enzyme dihydrofolate reductase , which is involved in cellular biosynthesis and growth . to neutralize this inhibition , bacterial cells make use of modifications in the gene encoding dihydrofolate reductase ( dfr ) , resulting in trimethoprim resistance . the association between integrons and bacterial resistance necessitates frequent identification and monitoring of integrons at the local level . since improper use of antibiotics imposes higher levels of selective pressure , this type of epidemiological study is most needed in developing countries such as syria , where antibiotic misuse is commonplace . with the total lack of data from our region , the objective of this study was to investigate the molecular epidemiology of integrons and certain resistance genes among isolates of uropathogenic e. coli in aleppo , syria . additionally , we set out to uncover the level of association between mdr and esbl production and the presence of integrons , thus providing the basis for better healthcare decisions in this context . urine samples were collected from uti patients during the study period in order to provide 104 unique isolates , which were studied to uncover antibiotic resistance phenotypes , as previously published by the current authors . out of the total number of tested isolates , 75 ( 72.1% ) were resistant to trimethoprim , and only these were taken for further phenotypic and molecular investigations in this study . table 1 is an antibiogram that summarizes the susceptibility patterns of the trimethoprim - resistant isolates , with tigecycline and imipenem eliciting zero resistance . class 1 integrons were detected in 41 out of 75 trimethoprim - resistant isolates , which amounts to 54.66% . evidence of esbl production was found in 46 isolates ( 61.33% ) , while 53 isolates ( 70.66% ) could be classified as multidrug resistant . the highly significant association between carrying class 1 integrons and testing positive for esbl production is shown in table 2 , which also displays a similarly significant association between multidrug resistance and class 1 integrons ( p < 0.0001 ) .
40 mdr isolates ( out of 53 ) were found to be esbl producers , and only six esbl - producing e. coli isolates did not belong to the mdr category . thus the association between the two characteristics is highly significant ( p = 0.0002 ) . p values indicating significance are shown in bold in the table . upon investigating the presence of the trimethoprim - resistance genes dfra7,17 and dfra1 in the tested group , we found that 53 isolates ( 70.66% ) harbored dfra7,17 while dfra1 was detected in only 12 isolates ( 16% ) . the presence of dfra7,17 genes was tightly linked to class 1 integrons ( p < 0.0001 ) ; this was not the case with dfra1 , as shown in table 2 . a number of highly significant associations were found between decreased susceptibility to certain antibiotics and carrying the dfra7,17 genes . this was evident with cephalosporins ( cefepime , ceftazidime , cefotaxime and ceftriaxone ) , tobramycin , ciprofloxacin , nalidixic acid and trimethoprim / sulfamethoxazole , with p values = 0.01 . the presence of dfra1 genes did not correlate significantly with decreased susceptibility to the antibiotics used in this study . very strong associations ( p < 0.0001 ) were noted between resistance to cephalosporins , nalidixic acid , ciprofloxacin and trimethoprim / sulfamethoxazole and the presence of class 1 integrons in the tested isolates . in fact , the same general trend was observed with the majority of commonly used antibiotics , except for chloramphenicol , tetracycline and ampicillin - sulbactam ( table 3 ) . integrons can serve as a vital tool for bacterial survival against antibiotics because they offer a unique platform for assembling and expressing multiple genetic elements in the bacterial cell . integrons are associated with in - house recombination / integration systems and are equipped with a promoter for effective transcription . the structure of these genetic elements is very dynamic because it is affected by a number of factors that differ from one region to another , most importantly antibiotic choice and misuse . moreover , there is a severe paucity of data on integrons and related genetic elements from the middle east in general , despite the high relevance of such information for a region where antibiotic surveillance is rarely practiced . four classes of integrons ( 14 ) have been identified so far , but the clinical significance of classes 2 , 3 and 4 in the context of antibiotic resistance is dwarfed by that of class 1 integrons . this study reports for the first time the prevalence of class 1 integrons in aleppo , syria to be 54.66% among upec isolates from in- and outpatients . international information regarding integron frequencies varies between different geographical locations and clinical settings , with a majority of studies focusing on outpatients . one geographically comprehensive study was conducted in 16 western european countries and canada ; it detected class 1 integrons in 57.6% of tested trimethoprim - resistant upec isolates from non - hospitalized patients . despite the apparent similarity in the levels of class 1 integron frequencies , the european / canadian figure is notably higher because it involves community - acquired infections only . studies involving samples from hospitalized patients report higher levels of class 1 integrons ; for example , a korean study showed in 2004 that 69% of trimethoprim - resistant isolates harbored class 1 integrons . both studies seem to report higher frequencies than the current study , probably due to geographic and temporal differences .
with a total lack of comparable research in syria , the closest point of reference geographically and temporally would be a relatively recent study from lebanon . according to the latter study , 30% of upec isolates from two hospitals in lebanon showed evidence of class 1 integrons , and nearly all of these isolates ( 96.7% ) were resistant to trimethoprim / sulfamethoxazole . the prevalence of integrons in gram - negative bacilli in northwestern turkey was investigated by sandalli et al . , who found 27 integron - positive isolates out of 72 community - acquired e. coli infections ( urinary and otherwise ) . a lower level of integron prevalence was reported by an iranian study which detected integrons in 16.6% of isolates from children with utis , and less than half of these isolates carried inti1 . this low prevalence may be due to limiting the study to non - hospitalized children from a sparsely urbanized locality ( jahrom , iran ) . interestingly , a more recent study from the same country reported a much higher rate of integrons , reaching 50.3% in isolates from uti patients , which is quite comparable to the data presented in this study . however , there are many caveats in formulating solid conclusions about such country - to - country differences , due to diverse sampling strategies which can result in highly variable frequencies . this study focused in particular on the trimethoprim - resistance genes most commonly encountered in the context of medical practice : dfra1 and dfra7,17 . the frequencies of dfra1 and dfra7,17 were found to be 16% and 70.66% , respectively . as with integron prevalence values , different studies gave diverse accounts from different parts of the world . our results regarding the higher prevalence of dfra7,17 over dfra1 echo the findings of a number of similar studies from lebanon , denmark , the netherlands , korea and australia . conversely , dfra1 appeared to be more prevalent than dfra7,17 in spain , portugal , france , belgium and turkey . these differences can be attributed to a number of factors , ranging from diverse sampling schemes to the genetic drift affecting horizontally transferred resistance genes . the extremely high correlation found in aleppo between reduced susceptibility to individual antibiotics , mdr and esbl production on one side and harboring integrons on the other is in line with the results of several international studies , albeit the associations reported there were weaker . examples include fallah et al . and farshad et al . from iran , and mathai et al . from southern india . in fact , all integron studies reported a strong association between class 1 integrons and antibiotic resistance genes such as the dfra and aada genes . thus decreased susceptibility to antimicrobials is likely to be the result of antibiotic resistance genes being carried on the same vectors ( transposable elements and conjugative plasmids ) as integrons . the high significance of the correlation between low susceptibility to cephalosporins and the presence of class 1 integrons in this study probably reflects the widespread misuse of this class of antibiotics in syria . thus detecting class 1 integrons can have predictive value for co - resistance to antibiotics in this context . ampicillin - sulbactam was the only β - lactam that did not show decreased susceptibility in association with integrons ; however , this is likely to be caused by the extremely high resistance to this agent , so that any changes in susceptibility would be practically undetectable .
susceptibility to tetracycline and chloramphenicol did not change significantly with the presence of class 1 integrons , possibly because the corresponding resistance genes have been lost from integron - carrying plasmids due to the limited use of these agents in syria . in conclusion , this study presents unprecedented data about the frequency of class 1 integrons in the aleppo governorate in syria , along with that of the dfra1 and dfra7,17 genes that mediate trimethoprim resistance . there is an urgent need to expand this type of investigation into the molecular epidemiology of the genetic elements underlying antibiotic resistance in this part of the world . additionally , more effort is required to disseminate this information locally and internationally and to formulate relevant guidelines accordingly , in order to attain better levels of healthcare provision . this study was conducted at three university hospitals in aleppo , syria from september to november 2011 . non - repetitive isolates were collected from uti patients and were selected from a larger group on the basis of being trimethoprim resistant . the tested cohort consisted of 43 outpatients and 32 inpatients , of whom 8 were catheterized , and all obtained isolates were non - repetitive . the patients ' medical history was obtained to infer hospitalization status ( inpatients being hospitalized for 48 h ) . written informed consent was obtained from the patients who provided the samples . the study protocol , including the consent procedure , was approved by the scientific council and the ethical committee at the university of aleppo . the diagnosis of utis was based on microscopic findings of > 5 white blood cells / high power field and a colony count of 10 cfu / ml of a single pathogen , using standard procedures . urine samples were inoculated onto nutrient and macconkey agar with 0.001 ml calibrated loops by a semi - quantitative technique . escherichia coli was identified by conventional biochemical tests using the mini api id32e system ( biomerieux , 32400 ) . antimicrobial susceptibility testing was performed by the standard disc diffusion method on mueller - hinton agar , as recommended by the guidelines of the clinical and laboratory standards institute ( clsi ) . nineteen antibiotics were used ( oxoid ; codes are listed in table 1 ) as well as trimethoprim ( oxoid , ct0076b ) . multi - drug resistance ( mdr ) and esbl production were determined using the disc diffusion method and the double disc synergy diffusion test ( ddst ) according to clsi guidelines . we used e. coli atcc 25922 as a negative control and e. coli atcc 35218 as a positive control . bacterial dna was extracted from a single e. coli colony using the qiaprep spin miniprep kit ( qiagen gmbh , 27104 ) according to the manufacturer 's instructions , then stored at - 20c as a template dna stock . the primer pairs [ dfr1-f , dfr1-r ] and [ dfr7&17-f , dfr7&17-r ] ( vbc - biotech , custom primers ) were used for the detection of dfra1 and dfra7,17 , respectively , according to grape et al . table 4 presents the primer sequences in full . the statistical package for the social sciences ( spss ) version 19.0 was used , and the significance of associations was established using fisher 's exact test ( p < 0.05 was considered significant ) .
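as an illustration of this statistical procedure , the 2x2 association between mdr and esbl production reported above ( 40 of the 53 mdr isolates were esbl producers , and 6 esbl producers fell outside the mdr group , out of 75 isolates in total ) can be re - checked with fisher 's exact test in python rather than spss . this is a sketch only ; the exact p value depends on the two - sided convention used and may differ slightly from the published figure .

```python
# Minimal sketch of the 2x2 association test used in the study, reproduced
# with scipy rather than SPSS.  Counts are taken from the reported results;
# the returned p-value may differ slightly from the published one depending
# on the two-sided convention.
from scipy.stats import fisher_exact

#                 ESBL+        ESBL-
table = [[40, 53 - 40],         # MDR isolates
         [6, (75 - 53) - 6]]    # non-MDR isolates

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print("odds ratio = %.2f, p = %.4g" % (odds_ratio, p_value))
```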
horizontal gene transfer ( hgt ) introduces advantageous genetic elements into pathogenic bacteria using tools such as class 1 integrons . this study aimed at investigating the distribution of these integrons among uropathogenic e. coli ( upec ) isolated from patients in aleppo , syria . it also set out to uncover the frequencies of the clinically relevant dfra1 and dfra7,17 genes , as well as various associations leading to reduced susceptibility . this study involved 75 trimethoprim - resistant e. coli isolates from in- and outpatients with urinary tract infections ( utis ) from 3 major hospitals in aleppo . bacterial identification , resistance and extended - spectrum β - lactamase ( esbl ) production testing were performed according to clinical and laboratory standards institute guidelines . detection of integrons and dfra genes was done using pcr , and statistical significance was inferred through the χ2 ( fisher 's ) test . class 1 integrons were detected in 54.6% of isolates , while dfra1 and dfra7,17 were found in 16% and 70.6% of tested samples , respectively . furthermore , only dfra7,17 was strongly associated with class 1 integrons , as were reduced susceptibility to the majority of individual antibiotics , multidrug resistance and esbl production . this study demonstrated the high prevalence of class 1 integrons among upec strains in aleppo , syria , as well as their significant association with mdr . these data provide information for local healthcare provision involving antibiotic chemotherapy .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Expedited Consideration of Proposed Rescissions Act of 1993''. SEC. 2. EXPEDITED CONSIDERATION OF CERTAIN PROPOSED RESCISSIONS. (a) In General.--Part B of title X of the Congressional Budget and Impoundment Control Act of 1974 (2 U.S.C. 681 et seq.) is amended by redesignating sections 1013 through 1017 as sections 1014 through 1018, respectively, and inserting after section 1012 the following new section: ``expedited consideration of certain proposed rescissions ``Sec. 1013. (a) Proposed Rescission of Budget Authority.--In addition to the method of rescinding budget authority specified in section 1012, the President may propose, at the time and in the manner provided in subsection (b), the rescission of any budget authority provided in an appropriations Act. Funds made available for obligation under this procedure may not be proposed for rescission again under this section or section 1012. ``(b) Transmittal of Special Message.-- ``(1) Not later than 3 days after the date of enactment of an appropriation Act, the President may transmit to Congress a special message proposing to rescind amounts of budget authority provided in that Act and include with that special message a draft bill or joint resolution that, if enacted, would only rescind that budget authority. ``(2) In the case of an appropriation Act that includes accounts within the jurisdiction of more than one subcommittee of the Committee on Appropriations, the President in proposing to rescind budget authority under this section shall send a separate special message and accompanying draft bill or joint resolution for accounts within the jurisdiction of each such subcommittee. ``(3) Each special message shall specify, with respect to the budget authority proposed to be rescinded, the matters referred to in paragraphs (1) through (5) of section 1012(a). ``(c) Limitation on Amounts Subject to Rescission.-- ``(1) The amount of budget authority which the President may propose to rescind in a special message under this section for a particular program, project, or activity for a fiscal year may not exceed 25 percent of the amount appropriated for that program, project, or activity in that Act. ``(2) The limitation contained in paragraph (1) shall only apply to a program, project, or activity that is authorized by law. ``(d) Procedures for Expedited Consideration.-- ``(1)(A) Before the close of the second day of continuous session of the applicable House after the date of receipt of a special message transmitted to Congress under subsection (b), the majority leader or minority leader of the House of Congress in which the appropriation Act involved originated shall introduce (by request) the draft bill or joint resolution accompanying that special message. If the bill or joint resolution is not introduced as provided in the preceding sentence, then, on the third day of continuous session of that House after the date of receipt of that special message, any Member of that House may introduce the bill or joint resolution. ``(B) The bill or joint resolution shall be referred to the Committee on Appropriations of that House. The committee shall report the bill or joint resolution without substantive revision and with or without recommendation. The bill or joint resolution shall be reported not later than the seventh day of continuous session of that House after the date of receipt of that special message. 
If the Committee on Appropriations fails to report the bill or joint resolution within that period, that committee shall be automatically discharged from consideration of the bill or joint resolution, and the bill or joint resolution shall be placed on the appropriate calendar. ``(C) A vote on final passage of the bill or joint resolution shall be taken in that House on or before the close of the 10th calendar day of continuous session of that House after the date of the introduction of the bill or joint resolution in that House. If the bill or joint resolution is agreed to, the Clerk of the House of Representatives (in the case of a bill or joint resolution agreed to in the House of Representatives) or the Secretary of the Senate (in the case of a bill or joint resolution agreed to in the Senate) shall cause the bill or joint resolution to be engrossed, certified, and transmitted to the other House of Congress on the same calendar day on which the bill or joint resolution is agreed to. ``(2)(A) A bill or joint resolution transmitted to the House of Representatives or the Senate pursuant to paragraph (1)(C) shall be referred to the Committee on Appropriations of that House. The committee shall report the bill or joint resolution without substantive revision and with or without recommendation. The bill or joint resolution shall be reported not later than the seventh day of continuous session of that House after it receives the bill or joint resolution. A committee failing to report the bill or joint resolution within such period shall be automatically discharged from consideration of the bill or joint resolution, and the bill or joint resolution shall be placed upon the appropriate calendar. ``(B) A vote on final passage of a bill or joint resolution transmitted to that House shall be taken on or before the close of the 10th calendar day of continuous session of that House after the date on which the bill or joint resolution is transmitted. If the bill or joint resolution is agreed to in that House, the Clerk of the House of Representatives (in the case of a bill or joint resolution agreed to in the House of Representatives) or the Secretary of the Senate (in the case of a bill or joint resolution agreed to in the Senate) shall cause the engrossed bill or joint resolution to be returned to the House in which the bill or joint resolution originated. ``(3)(A) A motion in the House of Representatives to proceed to the consideration of a bill or joint resolution under this section shall be highly privileged and not debatable. An amendment to the motion shall not be in order, nor shall it be in order to move to reconsider the vote by which the motion is agreed to or disagreed to. ``(B) Debate in the House of Representatives on a bill or joint resolution under this section shall not exceed 4 hours, which shall be divided equally between those favoring and those opposing the bill or joint resolution. A motion further to limit debate shall not be debatable. It shall not be in order to move to recommit a bill or joint resolution under this section or to move to reconsider the vote by which the bill or joint resolution is agreed to or disagreed to. ``(C) Appeals from decisions of the Chair relating to the application of the Rules of the House of Representatives to the procedure relating to a bill or joint resolution under this section shall be decided without debate. 
``(D) Except to the extent specifically provided in the preceding provisions of this subsection, consideration of a bill or joint resolution under this section shall be governed by the Rules of the House of Representatives. ``(4)(A) A motion in the Senate to proceed to the consideration of a bill or joint resolution under this section shall be privileged and not debatable. An amendment to the motion shall not be in order, nor shall it be in order to move to reconsider the vote by which the motion is agreed to or disagreed to. ``(B) Debate in the Senate on a bill or joint resolution under this section, and all debatable motions and appeals in connection therewith, shall not exceed 10 hours. The time shall be equally divided between, and controlled by, the majority leader and the minority leader or their designees. ``(C) Debate in the Senate on any debatable motion or appeal in connection with a bill or joint resolution under this section shall be limited to not more than 1 hour, to be equally divided between, and controlled by, the mover and the manager of the bill or joint resolution, except that in the event the manager of the bill or joint resolution is in favor of any such motion or appeal, the time in opposition thereto, shall be controlled by the minority leader or his designee. Such leaders, or either of them, may, from time under their control on the passage of a bill or joint resolution, allot additional time to any Senator during the consideration of any debatable motion or appeal. ``(D) A motion in the Senate to further limit debate on a bill or joint resolution under this section is not debatable. A motion to recommit a bill or joint resolution under this section is not in order. ``(e) Amendments Prohibited.--No amendment to a bill or joint resolution considered under this section shall be in order in either the House of Representatives or the Senate. No motion to suspend the application of this subsection shall be in order in either House, nor shall it be in order in either House to suspend the application of this subsection by unanimous consent. ``(f) Requirement to Make Available for Obligation.--Any amount of budget authority proposed to be rescinded in a special message transmitted to Congress under subsection (b) shall be made available for obligation on the day after the date on which either House defeats the bill or joint resolution transmitted with that special message. ``(g) Definitions.--For purposes of this section-- ``(1) the term `appropriation Act' means any general or special appropriation Act, and any Act or joint resolution making supplemental, deficiency, or continuing appropriations; and ``(2) continuity of a session of either House of Congress shall be considered as broken only by an adjournment of that House sine die, and the days on which that House is not in session because of an adjournment of more than 3 days to a date certain shall be excluded in the computation of any period.''. (b) Exercise of Rulemaking Powers.--Section 904 of such Act (2 U.S.C. 621 note) is amended-- (1) by striking ``and 1017'' in subsection (a) and inserting ``1013, and 1018''; and (2) by striking ``section 1017'' in subsection (d) and inserting ``sections 1013 and 1018''; and (c) Conforming Amendments.-- (1) Section 1011 of such Act (2 U.S.C. 682(5)) is amended-- (A) in paragraph (4), by striking ``1013'' and inserting ``1014''; and (B) in paragraph (5)-- (i) by striking ``1016'' and inserting ``1017''; and (ii) by striking ``1017(b)(1)'' and inserting ``1018(b)(1)''. 
(2) Section 1015 of such Act (2 U.S.C. 685) (as redesignated by section 2(a)) is amended-- (A) by striking ``1012 or 1013'' each place it appears and inserting ``1012, 1013, or 1014''; (B) in subsection (b)(1), by striking ``1012'' and inserting ``1012 or 1013''; (C) in subsection (b)(2), by striking ``1013'' and inserting ``1014''; and (D) in subsection (e)(2)-- (i) by striking ``and'' at the end of subparagraph (A); (ii) by redesignating subparagraph (B) as subparagraph (C); (iii) by striking ``1013'' in subparagraph (C) (as so redesignated) and inserting ``1014''; and (iv) by inserting after subparagraph (A) the following new subparagraph: ``(B) he has transmitted a special message under section 1013 with respect to a proposed rescission; and''. (3) Section 1016 of such Act (2 U.S.C. 686) (as redesignated by section 2(a)) is amended by striking ``1012 or 1013'' each place it appears and inserting ``1012, 1013, or 1014''. (d) Clerical Amendments.--The table of sections for subpart B of title X of such Act is amended-- (1) by redesignating the items relating to sections 1013 through 1017 as items relating to sections 1014 through 1018; and (2) by inserting after the item relating to section 1012 the following new item: ``Sec. 1013. Expedited consideration of certain proposed rescissions.''. SEC. 3. APPLICATION. Section 1013 of the Congressional Budget and Impoundment Control Act of 1974 (as added by section 2) shall apply to amounts of budget authority provided by appropriation Acts (as defined in subsection (g) of such section) that are enacted during the One Hundred Third Congress. SEC. 4. TERMINATION. The authority provided by section 1013 of the Congressional Budget and Impoundment Control Act of 1974 (as added by section 2) shall terminate effective on the date in 1994 on which Congress adjourns sine die.
Expedited Consideration of Proposed Rescissions Act of 1993 - Amends the Congressional Budget and Impoundment Control Act of 1974 to allow the President an additional method of rescinding budget authority by the transmittal to the Congress, for expedited consideration, of one or more special messages proposing to rescind all or part of any item of budget authority provided in an appropriation bill. Limits the amount subject to rescission to 25 percent of the amount appropriated. Sets forth House and Senate procedures for the expedited consideration of such a proposal.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Americans with Disabilities Act Restoration Act of 2006''. SEC. 2. FINDINGS. Congress finds the following: (1) Physical and mental impairments are natural parts of the human condition as are race, gender, national origin, and sex. (2) Discrimination results when individuals with actual or perceived physical or mental impairments are met with attitudinal, societal, and physical barriers in society. (3) The use of mitigating measures by an individual does not change the fact that the individual has a physical or mental impairment, nor should the use of a mitigating measure by an individual insulate covered entities from liability for discriminatory practices and policies. (4) The Americans with Disabilities Act of 1990 has not been interpreted by the courts, including the Supreme Court, as intended by Congress. The courts have significantly limited the intended reach of the Americans with Disabilities Act, allowing many individuals with actual or perceived impairments to be subject to discrimination. (5) It is necessary to restore the intent of the Americans with Disabilities Act to fully remove the barriers that confront disabled Americans and to permit all people to fully participate in society. SEC. 3. DISABILITY DEFINED. Section 3 of the Americans with Disabilities Act of 1990 (42 U.S.C. 12102) is amended-- (1) by amending paragraph (2) to read as follows: ``(2) Disability.-- ``(A) In general.--The term `disability' means, with respect to an individual-- ``(i) a physical or mental impairment; ``(ii) a record of a physical or mental impairment; or ``(iii) a perceived physical or mental impairment. ``(B) Rule of construction.--The existence of a physical or mental impairment or record or perception of a physical or mental impairment shall be determined without taking into account an individual's use of mitigating measures or whether the impairment is episodic, short term, or long term.''; and (2) by redesignating paragraph (3) as paragraph (7) and inserting after paragraph (2) the following: ``(3) Physical impairment.--The term `physical impairment' means any physiological disorder or condition, cosmetic disfigurement, or anatomical loss affecting one or more of the following body systems: neurological; musculoskeletal; special sense organs; respiratory, including speech organs; cardiovascular; reproductive; digestive; genito-urinary; hemic and lymphatic; skin and endocrine. ``(4) Mental impairment.--The term `mental impairment' means any mental or psychological disorder such as mental retardation, organic brain syndrome, emotional or mental illness, and specific learning disabilities. ``(5) Record of physical or mental impairment.--The term `record of physical or mental impairment' means having a history of, or having been misclassified as having, a physical or mental impairment. ``(6) Perceived physical or mental impairment.--The term `perceived physical or mental impairment' means not having an impairment as set forth in paragraph (2)(A)(i) or (ii), but being regarded as having, or treated as having, a physical or mental impairment.''. SEC. 4. DISCRIMINATION ON THE BASIS OF DISABILITY. The Americans with Disabilities Act of 1990 (42 U.S.C. 12101 et seq.) 
is further amended-- (1) in section 2(b), by striking ``against individuals with disabilities'' each place it appears and inserting ``on the basis of disability''; and (2) in section 102(a), by striking ``against a qualified individual with a disability because of the disability of such individual'' and inserting ``against an individual on the basis of disability''. SEC. 5. QUALIFIED INDIVIDUAL. (a) Defense.--Section 103, by redesignating subsections (a) through (d) as subsections (b) through (e), respectively, and inserting before such subsection (b) (as so redesignated) the following: ``(a) In General.--It may be a defense to a charge of discrimination under this title that the individual with a disability alleging discrimination is not a qualified individual, as such term is defined in section 101(8).''. (b) Qualified Individual.--Title I of the Americans with Disabilities Act of 1990 (42 U.S.C. 12111 et seq.) is further amended-- (1) in section 101(8)-- (A) in the paragraph heading, by striking ``with a disability''; and (B) by striking ``with a disability'' after ``individual'' both places it appears; (2) in section 102(b)(5), by striking ``with a disability'' after ``individual'' both places it appears; and (3) in section 104-- (A) in subsection (a)-- (i) in the subsection heading, by striking ``With a Disability''; and (ii) by striking ``with a disability'' after ``individual''; and (B) in subsection (b), in the matter preceding paragraph (1), by striking ``with a disability''. SEC. 6. RULE OF CONSTRUCTION. Section 501 of the Americans with Disabilities Act of 1990 (42 U.S.C. 12201) is amended by adding at the end the following: ``(e) Broad Construction.--In order to ensure that this Act achieves its purpose under section 2(b) of providing a comprehensive prohibition of discrimination on the basis of disability, the provisions of this Act shall be broadly construed to advance their remedial purpose.''.
Americans with Disabilities Act Restoration Act of 2006 - Amends the Americans with Disabilities Act of 1990 to revise the definition of disability and to define: (1) physical impairment; (2) mental impairment; (3) record of physical or mental impairment; and (4) perceived physical or mental impairment. States a rule of construction that the existence of such an impairment, record, or perception shall be determined without taking into account an individual's use of mitigating measures or whether the impairment is episodic, short term, or long term. Provides that it may be a defense to a charge of discrimination that the individual with a disability alleging discrimination is not a qualified individual as defined in such Act. Declares that this Act shall be broadly construed to advance its remedial purpose of providing a comprehensive prohibition against discrimination on the basis of disability.
The Justice Department on Wednesday released a report explaining why it will not pursue federal civil rights charges against Darren Wilson, the white police officer who shot and killed Michael Brown, an unarmed black 18-year-old, in Ferguson, Mo., last August. The department found that Wilson’s actions “do not constitute a prosecutable violation” and there “is no evidence upon which prosecutors can rely to disprove Wilson’s stated subjective belief that he feared for his safety.” [Read: The DOJ report on the police department in Ferguson] In a second report on broader police practices, the Justice Department released seven racist e-mails written by Ferguson police and municipal court officials. A November 2008 e-mail, for instance, stated that President Obama could not be president for very long because “what black man holds a steady job for four years.” Another e-mail described Obama as a chimpanzee. An e-mail from 2011 showed a photo of a bare-chested group of dancing women apparently in Africa with the caption, “Michelle Obama’s High School Reunion.” The Justice Department did not specifically identify who wrote the e-mails and to whom they were sent, but said they were written by police and court supervisors who are currently employed by the city. The second report accused the police department in Ferguson, Mo., of racial bias and routinely violating the constitutional rights of black citizens by stopping drivers without reasonable suspicion, making arrests without probable cause and using excessive force, officials said. Federal officials opened their civil rights investigation into the Ferguson police department after the uproar in the St. Louis suburb and across the country over the fatal shooting of Brown last year. A grand jury in St. Louis declined to indict Wilson in November. [The seven racist e-mails the Justice Department highlighted in its report on Ferguson police] Although federal officials will not bring civil rights charges against Wilson, they see their broad civil rights investigation into the troubled Ferguson police department as the way to force significant changes in Ferguson policing. “As detailed in our report, this investigation found a community that was deeply polarized, and where deep distrust and hostility often characterized interactions between police and area residents,” said Attorney General Eric H. Holder Jr. “Our investigation showed that Ferguson police officers routinely violate the Fourth Amendment in stopping people without reasonable suspicion, arresting them without probable cause, and using unreasonable force against them. Now that our investigation has reached its conclusion, it is time for Ferguson’s leaders to take immediate, wholesale and structural corrective action. 
The report we have issued and the steps we have taken are only the beginning of a necessarily resource-intensive and inclusive process to promote reconciliation, to reduce and eliminate bias, and to bridge gaps and build understanding.” Holder is expected to speak about the reports Wednesday afternoon. In hundreds of interviews and in a broad review of more than 35,000 pages of Ferguson police records and other documents, Justice Department officials found that although African Americans make up 67 percent of the population in Ferguson, they accounted for 93 percent of all arrests between 2012 and 2014. Benjamin Crump, the attorney for Brown’s family, said the report into police practices confirms “what Michael Brown’s family has believed all along, and that is that the tragic killing of their unarmed teenage son was part of a systemic pattern of policing of African American citizens in Ferguson.” [Read: Department of Justice report on the Michael Brown shooting] The findings come as Justice Department officials negotiate a settlement with the police department to change its practices. If they are unable to reach an agreement, the Justice Department could bring a lawsuit, as it has done against law enforcement agencies in other jurisdictions in recent years. A U.S. official said that Ferguson officials have been cooperating. As part of its findings, the Justice Department concluded that African Americans accounted for 85 percent of all drivers stopped by Ferguson police officers and 90 percent of all citations issued. The review concludes that racial bias and a focus on generating revenue over public safety have a profound effect on Ferguson police and court practices and routinely violate the Constitution and federal law. “We owe it, not just to law enforcement, but to Michael Brown, Tamir Rice and Eric Garner to figure out what’s really going on here so it can be addressed,” said Jeff Roorda, a former Missouri state representative and a spokesman for the St. Louis Police Officers Association, referring to others killed by police officers in Cleveland and New York. “Reaching conclusions from statistics about traffic stops I don’t think draws the whole picture.” The Justice review also found a pattern or practice of Ferguson police using unreasonable force against citizens. In 88 percent of the cases in which the department used force, it was against African Americans. In Ferguson court cases, African Americans are 68 percent less likely than others to have their cases dismissed by a municipal judge, according to the Justice review. In 2013, African Americans accounted for 92 percent of cases in which an arrest warrant was issued. Justice investigators also reviewed types of arrests and the treatment of detainees in the city jail by Ferguson police officers. They found that from April to September 2014, 95 percent of people held longer than two days were black. The police department also overwhelmingly charges African Americans with certain petty offenses, the investigation concluded. For example, from 2011 to 2013, African Americans accounted for 95 percent of all “manner of walking in roadway” charges, 94 percent of all “failure to comply” charges and 92 percent of all “peace disturbance” charges, the review found. The shooting of Brown on a Ferguson street on Aug. 9 set off days of often violent clashes between demonstrators and police in the streets of Ferguson. 
Elected officials, protest organizers and community leaders renewed calls Tuesday for Ferguson Police Chief Thomas Jackson to resign — some adding that the department should be disbanded — and said the Justice Department probe should have gone further by investigating other municipal police forces in the area. “I would speculate that the same pattern and practices of Ferguson exist in every other department in St. Louis County,” said Adolphus Pruitt, the president of the St. Louis NAACP, which has filed racial discrimination complaints against county police. [DOJ report renews outrage in Ferguson] He added, “It’s time for the Ferguson police department to disappear.” Justice Department investigators spent about 100 days in Ferguson, observing police and court practices, including four sessions of the Ferguson Municipal Court. They conducted an analysis of police data on stops, searches and arrests, as well as data collected by the court, and met with neighborhood associations and advocacy groups. The investigators also interviewed city, police and court officials, including the Ferguson police chief and his command staff. In the past five years, the Justice Department’s civil rights division has opened more than 20 investigations of police departments, more than twice as many as were opened in the previous five. The department has entered into 15 agreements with law enforcement agencies, including consent decrees with nine of them, including the New Orleans and Albuquerque police departments. Kimberly Kindy, Sarah Larimer and Wesley Lowery contributed to this report. ||||| The Ferguson City Council is playing a game of chicken with the U.S. Department of Justice that it is going to lose. The city's leaders are fooling themselves if they think the consent decree the Justice Department submitted to them after months of negotiation is up for retooling. Ferguson does not have the upper hand in dictating terms of the decree, which is intended to reform the city's police department and court, to the federal government.
– The Justice Department is expected to release a report this week that will accuse Ferguson police of bias against black residents. Among the details trickling out in advance is the discovery of two racist jokes in emails written by Ferguson cops and municipal court officials, reports the St. Louis Post-Dispatch: One from 2008 says that President Obama won't serve a full term because "what black man holds a steady job for four years." Another from 2011: "An African-American woman in New Orleans was admitted into the hospital for a pregnancy termination. Two weeks later she received a check for $5,000. She phoned the hospital to ask who it was from. The hospital said, 'Crimestoppers.'" The Justice Department will make the case that an atmosphere in which these kinds of jokes were circulated—the authors are not identified—is partly to blame for police bias. The big stat: African-Americans make up 67% of the population but account for 93% of arrests. Justice Department officials are trying to negotiate a settlement with the Ferguson department to force improvements and could sue if no agreement is reached, reports the Washington Post. A separate investigation remains open into whether officer Darren Wilson will face federal civil rights charges for shooting Michael Brown—it's unlikely—and that decision may be announced along with the racial-bias findings, perhaps as early as tomorrow, reports the Post-Dispatch.
Veteran journalist Mark Halperin sexually harassed women while he was in a powerful position at ABC News, according to five women who shared their previously undisclosed accounts with CNN and others who did not experience the alleged harassment personally, but were aware of it. "During this period, I did pursue relationships with women that I worked with, including some junior to me," Halperin said in a statement to CNN Wednesday night. "I now understand from these accounts that my behavior was inappropriate and caused others pain. For that, I am deeply sorry and I apologize. Under the circumstances, I'm going to take a step back from my day-to-day work while I properly deal with this situation." MSNBC, where Halperin makes frequent appearances on "Morning Joe," said early Thursday that Halperin would leave his roles at that network and as an analyst at NBC News. "We find the story and the allegations very troubling," MSNBC said in a statement. "Mark Halperin is leaving his role as a contributor until the questions around his past conduct are fully understood." Widely considered to be one of the preeminent political journalists, Halperin, 52, has, among other career highlights, been political director at ABC News; co-authored the bestselling book "Game Change," which was made into an HBO movie starring Julianne Moore as Sarah Palin; and anchored a television show on Bloomberg TV. He is featured in Showtime's "The Circus," a show that chronicled the 2016 campaign cycle and the early days of the Trump presidency, and has a project in development with HBO, which, like CNN, is owned by Time Warner. But women who spoke to CNN say he also had a dark side not made public until now. The stories of harassment shared with CNN range in nature from propositioning employees for sex to kissing and grabbing one's breasts against her will. Three of the women who spoke to CNN described Halperin as, without consent, pressing an erection against their bodies while he was clothed. Halperin denies grabbing a woman's breasts and pressing his genitals against the three women. Related: The (incomplete) list of powerful men accused of sexual harassment after Harvey Weinstein The women who worked with Halperin and who spoke with CNN did not report to Halperin. However, Halperin made many decisions about political coverage at ABC News, and had a voice in some critical personnel decisions. None of the women have said, though, that he ever promised anything in exchange for sex, or suggested that he would retaliate against anyone. Still, while they no longer work with him, Halperin continues to wield influence in politics and media. The women who spoke to CNN said it was for this reason that they shared their accounts on the condition of anonymity. Others also said they still feel embarrassed about what happened to them and did not want to be publicly associated with it. "Mark left ABC News over a decade ago, and no complaints were filed during his tenure," ABC News said in a statement provided to CNN after this article was published. The first woman told CNN she was invited to visit his office in the early 2000s, when he was political director at ABC News, to have a soda, and said that while she was there with him he forcibly kissed her and pressed his genitals against her body. "I went up to have a soda and talk and -- he just kissed me and grabbed my boobs," the woman said. "I just froze. I didn't know what to do." When she did make her way out of his office, the woman told a friend at ABC News what had happened. 
That friend told CNN she remembered the woman telling her about the incident and seeing her visibly shaken. The second woman, another former ABC News employee, described a similar experience in his office during the 2004 campaign cycle. This woman said she was around 25 years old then, and wanted to be a "campaign off-air" -- ABC News' term for one of the reporters who travel embedded with presidential campaigns -- so she reached out to Halperin, who was a part of the decision-making process regarding those assignments at the time. "The first meeting I ever had with him was in his office and he just came up from behind -- I was sitting in a chair from across his desk -- and he came up behind me and [while he was clothed] he pressed his body on mine, his penis, on my shoulder," this woman told CNN. "I was obviously completely shocked. I can't even remember how I got out of there -- [but] I got out of there and was freaked out by that whole experience. Given I was so young and new I wasn't sure if that was the sort of thing that was expected of you if you wanted something from a male figure in news." The woman said Halperin continued to express a sexual desire for her in subsequent visits, despite being rebuffed. "It was more like him coming up too close to me and sort of along the lines of hugging me," she explained. She also alleged that Halperin propositioned her for sex on the campaign trail. "He would say, 'Why don't you meet me upstairs?' And I would say, 'That's not a good idea.' And he would push the request further," the person said. "Eventually I would just ignore him and go about my business." One of the woman's friends has told CNN that the woman told her about the first incident, in Halperin's office. She said her friend had told her some time after the incident that Halperin had pressed his genitals against her while she was seated in his office, but did not recall being told about unwanted touching during subsequent visits or the alleged propositions for sex. A third woman, also a former ABC News employee, told CNN she was on the road with Halperin when he propositioned her. "I excused myself to go to the bathroom and he was standing there when I opened the door propositioning [me] to go into the other bathroom to do something," she said. "It freaked me out. I came out of the ladies' room and he was just standing there. Like almost blocking the door." A fourth woman who worked with Halperin and was junior to him told CNN he once asked her late at night on the campaign trail to go up to his hotel room with him, and that she believed him to be propositioning her. She declined. The fifth woman who spoke to CNN was not an ABC News employee at the time of the incident she alleges. She was not comfortable sharing specifics of her story for publication, but said Halperin, while clothed, placed his erect penis against her body without consent. The women who spoke to CNN said that they did not report Halperin's behavior to management either because they feared retribution, given the level of power Halperin had at ABC and in the industry, or because they were embarrassed. In some cases, their fear of him and the sway he holds remains to this day. The woman in the first account, however, said she told a mentor at ABC News who said he wanted to escalate the issue to management. It is unclear if that ever occurred. CNN's investigation found that Halperin allegedly exhibited this type of behavior from the 1990s to the mid-2000s. 
CNN has not learned of any incidents after Halperin left ABC News. Halperin joined ABC News in the late 1980s. In 1997 Halperin was named political director of ABC News, and rose to prominence with the advent of The Note, a morning digest newsletter previewing the day in politics. The Note became a must-read for industry professionals, and it catapulted him to the upper echelons of the political scene. Halperin left ABC News in 2007 for Time magazine and joined Bloomberg in 2014 for a reported salary of $1 million. At Bloomberg, he co-anchored "With All Due Respect" with journalist John Heilemann. The show was simulcast on MSNBC for a period. Related: Mark Halperin leaves Bloomberg Halperin also found great success writing books. In 2010, he co-authored "Game Change" with Heilemann. The duo later published "Double Down: Game Change 2012," reportedly receiving a multi-million dollar advance. Halperin and Heilemann are currently working on a third installment about the 2016 election. "For the last 11 years, I have had to watch this guy find success in every other news organization," one of the women who said she experienced harassment told CNN. The allegations against Halperin come weeks after Hollywood mogul Harvey Weinstein was first publicly accused of sexual harassment and assault -- allegations that have prompted multiple police departments to launch investigations, the Academy of Motion Picture Arts to expel him, and his firing from the company he co-founded. (Weinstein denies all allegations of non-consensual acts.) While the allegations against Halperin do not mirror the allegations against Weinstein, the fact that the women who spoke with CNN have chosen to do so now does reflect the larger discussion in entertainment, media, politics and other industries since the Weinstein scandal began. Women -- and men -- are talking about things that have long been rumored but have never been brought to light. Some people are feeling emboldened to tell those stories now. Others are looking back and regretting the things they didn't do. One person who formerly worked with Halperin told CNN that while they were not aware of the extent of the alleged harassment, they believe they had heard enough to warrant reporting the whispers to management. "In retrospect, I was such a coward," the person said. "I wish I said something. I wish I had done something." ||||| Mark Halperin, a senior political analyst and frequent contributor for NBC News and MSNBC, acknowledged Wednesday night that he had engaged in "inappropriate" behavior around women he worked with while he was at ABC News and said he would "take a step back from my day-to-day work." Halperin apologized for having pursued "relationships with women that [he] worked with" in a statement to CNN, which quoted five anonymous women, four of them former ABC News employees, as saying Halperin sexually harassed them when he was a top political journalist at ABC News. "During this period, I did pursue relationships with women that I worked with, including some junior to me," Halperin told CNN. "I now understand from these accounts that my behavior was inappropriate and caused others pain. For that, I am deeply sorry and I apologize. Under the circumstances, I'm going to take a step back from my day-to-day work while I properly deal with this situation." 
CNN said it hadn't learned of any incidents after Halperin left ABC News in 2007 after two decades. None of the women reported the alleged incidents — which NBC News has not verified — to ABC News management, according to CNN, which said it wasn't identifying the women because they feared retribution, although one woman said she told a mentor. Three of the women alleged inappropriate touching, which Halperin denied to CNN. None of the women who spoke to CNN said Halperin ever attempted to exchange anything for sex or suggested he threatened to retaliate against them. MSNBC said in a statement Thursday morning that in light of the allegations, Halperin would not be returning as a contributor for the time being. "We find the story and the allegations very troubling. Mark Halperin is leaving his role as a contributor until the questions around his past conduct are fully understood," the statement said. Halperin didn't immediately return calls seeking comment. In a statement to CNN, ABC News said: "Mark left ABC News over a decade ago, and no complaints were filed during his tenure." Halperin, who worked for NBC News, MSNBC, Bloomberg Politics and Time magazine after he left ABC News, rejoined NBC News and MSNBC as a contributor and senior political analyst in March. He is a regular guest on MSNBC's "Morning Joe" and on NBC News broadcasts. He is the co-author, with John Heilemann, of "Game Change," about the 2008 presidential campaign, and "Double Down: Game Change 2012." They were among the co-producers of the Showtime political documentary series "The Circus," the second season of which aired this year.
– Five women accuse journalist Mark Halperin of sexual harassment in a new report from CNN. The women, who shared their stories on condition of anonymity, describe varying offenses—including forcible kissing, groping, and being propositioned for sex—while 52-year-old Halperin was political director at ABC News a decade ago. Three of the women also say Halperin pressed up against them with an erection while clothed. Halperin, now an NBC News analyst, denies that claim in a statement, but admits "I did pursue relationships with women that I worked with, including some junior to me." He says, "I now understand from these accounts that my behavior was inappropriate and caused others pain. For that, I am deeply sorry and I apologize." In its own statement, NBC News calls the allegations "very troubling" and says Halperin will be removed as contributor "until the questions around his past conduct are fully understood." Meanwhile, ABC News notes "no complaints were filed" against Halperin before his departure from ABC in 2007. One of the women who spoke to CNN, however, says she told her mentor at ABC when Halperin forcibly kissed her, grabbed her breasts, and pressed his penis against her in his office. The mentor told her he wanted to notify management, the woman says, though it isn't clear if he ever did. Other accusers say they feared retribution if they spoke out, given Halperin's authority at ABC and in political journalism more broadly. Some still fear retribution, per CNN.
the inner region of the nearby post - core - collapse globular cluster ngc 6397 has been the target of several surveys aimed at the identification of rare or unusual stars likely to be created as the result of stellar interactions in its dense cluster core ( cool & bolton 2002 , and references therein ) . recently grindlay et al . ( 2001 ) reported the detection with @xmath3 of 25 x - ray sources within 2@xmath4 of the cluster center . optical searches for variable stars in the inner regions of globular clusters , particularly those with strong central density cusps , have been hindered by crowding . image subtraction techniques in such strongly crowded fields have been successful in locating and studying variables ( olech et al . 1999 ; kaluzny , olech & stanek 2001 ) , especially when the data are taken with a fine plate scale and a stable and uniform point - spread function . in this paper we present the analysis of ground - based time series ccd photometry which was obtained to study the light curves of the optical counterpart to the binary millisecond pulsar psr j1740 - 5340 ( damico et al . 2001b ; ferraro et al . 2001 ) . our results for the binary pulsar are given elsewhere ( kaluzny et al . 2002 ) . in this contribution we report on the results obtained for other variable stars located in the central part of the cluster . light curves for a total of 16 variables are presented and discussed . nine of these objects are new identifications . the photometric data were obtained with the 2.5-m du pont telescope at las campanas observatory . a field of @xmath5 arcmin@xmath6 was observed with the tek#5 ccd camera at a scale of 0.259@xmath7/pixel . the present analysis is limited to a sub - field @xmath8 arcmin@xmath6 centered approximately on the cluster core . most of the data were obtained on 6 nights during the period from ut may 1 to 8 , 2002 , with additional data obtained on ut june 3 , 2002 . conditions were non - photometric on all but one night with average seeing of 1.0@xmath7 in the @xmath9-band . the cluster was observed for a total of 32 hours through @xmath10 filters . exposure times were typically 30 sec ( @xmath11 ) , 15 sec ( @xmath9 ) , and 10 sec ( @xmath12 ) . frames were co - added , and the total number of stacked images used in the present study was 69 ( @xmath11 ) , 196 ( @xmath9 ) , and 59 ( @xmath12 ) . for a few of the variables discussed below we have also extracted @xmath9-band time - series photometry from images with the best seeing , selecting 1176 out of the total of 1256 available . our observing material and photometric calibration procedure are described in detail in kaluzny et al . ( 2002 ) . two methods were used to detect potential variable stars . both of them make use of the isis-1.2 image subtraction package ( alard & lupton 1998 ; alard 2000 ) . the first method is based on examination of images created by combining individual residual images , and relies entirely on tools included in the isis package . this method is well suited for the detection of variables with a high duty cycle and/or showing significant changes of flux ( bright variables or faint variables with large amplitude light curves ) . an advantage of this method is that it permits the detection of variables which can not be resolved with classical profile photometry in crowded fields . the second method relies on the examination of the light curves of all objects which could be measured on a reference frame with profile fitting software . 
for each filter a reference frame was constructed by averaging several stacked images of the best quality . the detection of stellar objects and the extraction of photometry were accomplished using the daophot / allstar software package ( stetson 1987 ) . a total of 4336 stars were measured on the @xmath9-band reference image . the limiting magnitude depends very much on the distance from the cluster core . the faintest measured stars have @xmath13 and the observed luminosity function starts to diminish at @xmath14 . the @xmath15 procedure in the isis package was used to extract differential light curves at the position corresponding to each star detected with the daophot / allstar package . differential light curves were then transformed to magnitudes and checked for variability . the light curves were searched for the presence of any periodic signal with the @xmath16 algorithm ( schwarzenberg - czerny 1989 ; schwarzenberg - czerny 1996 ) and were reviewed for possible eclipse - like events . a total of 16 variables were identified . nine of these are new detections . information on these variables is presented in table 1 . column 1 gives an assigned name following clement et al . ( 2001 ) , followed by the right ascension and declination of the variable . column 5 gives alternate designations for the variables , many of which have been previously identified as blue stragglers , uv bright objects , and/or candidate cataclysmic variables . table 1 also gives positional information for all 16 variables . columns 2 and 3 list equatorial coordinates , while column 4 lists pixel coordinates of the variables as found on @xmath17 archive image u5dr0401r for 12 of the 16 variables . the equatorial coordinates were derived from the frame solution included in the @xmath17 image header . we derived equatorial coordinates for the four variables lying outside the @xmath17 field from an astrometric solution to our @xmath9 band reference image based on 1022 stars with equatorial coordinates adopted from kaluzny ( 1997 ) . this solution has residuals not exceeding 1@xmath7 when compared to the @xmath17 astrometric solution . finding charts for variables v10 , v22 and v23 can be found in lauzeral et al . ( 1992 ; stars 11 , 8 and 16 in their fig . 1 ) , and a finding chart for variable v24 is shown in figure 1 . basic information on the photometric properties of the 16 detected variables is presented in table 2 . column 1 lists the variable name , followed by a classification of the light curve , the period of variability , mean @xmath11 , @xmath9 , and @xmath12 magnitudes , and the @xmath9-band full amplitude of variability . positions of the variables in the cluster color - magnitude diagram are shown in fig . the ellipsoidal variable v16 is the optical counterpart to the millisecond binary pulsar j1740 - 5340 ( damico et al . 2001 ; ferraro et al . 2001 ) . photometry of this variable is discussed in kaluzny et al . ( 2002 ) . grindlay et al . ( 2001 ) detected 9 possible cataclysmic variables ( cv ) in the central region of ngc 6397 with the @xmath3 telescope , naming the objects cv1 - cv9 . they identified the optical counterparts of cv1 - cv5 based on observations reported by cool et al . ( 1995 , 1998 ) , grindlay ( 1999 ) , and unpublished hst h-@xmath18 observations ( grindlay et al . 2001 ) . variable v12 can be unambiguously identified with cv1 using positional data provided in cool et al . ( 1998 ; see their table 1 ) . 
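as an illustration of the period - search step described above , the sketch below shows how a single differential light curve could be scanned for periodicity . it is not the authors' pipeline : the paper uses the schwarzenberg - czerny analysis - of - variance statistic , for which the lomb - scargle periodogram from astropy merely stands in here , and the input file name , column layout and frequency grid are assumptions .

    # illustrative sketch, not the authors' pipeline: scan one differential light
    # curve for a periodic signal. the paper uses the schwarzenberg-czerny anova
    # statistic; astropy's lomb-scargle periodogram stands in for it here.
    import numpy as np
    from astropy.timeseries import LombScargle

    # hypothetical input file: columns = time (days), differential magnitude, error
    t, mag, err = np.loadtxt("lc_v12_vband.dat", unpack=True)

    # assumed frequency grid covering periods from ~30 minutes to ~2 days
    freq = np.linspace(0.5, 48.0, 200000)        # cycles per day
    power = LombScargle(t, mag, err).power(freq)

    best_period = 1.0 / freq[np.argmax(power)]   # days
    print(f"strongest period: {best_period:.4f} d")

    # phase the light curve on the best period for visual inspection of
    # eclipse-like or sine-like modulation
    phase = ((t - t.min()) / best_period) % 1.0

in practice each detected light curve would be run through such a search and then inspected by eye , since aliases ( such as the half - period ambiguity discussed below for cv1 ) can not be resolved from the periodogram alone .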
from an examination of the hst archive image u5dr0401r at the equatorial coordinates of v13 we conclude that this star is the optical counterpart of cv6 . on the @xmath17 image v13 is visible as an isolated object with the closest neighbor at a distance of @xmath19 . variables cv2 - cv5 and cv7 - cv9 could not be resolved on our reference images . we attempted to extract differential light curves using the isis package at the known positions of these stars . the light curves suffered from large photometric errors and showed no sign of any periodicity . the large errors result from the effects of several relatively bright stars near the variables . examination of the light curves of v12 = cv1 reveals the presence of a sine - like periodic modulation . the power spectrum calculated from the @xmath9 filter time series photometry based on individual exposures is presented in fig . two maxima of comparable strength are present at frequencies corresponding to periods of 0.4712 and 0.2356 days . grindlay et al . ( 2001 ) report that cv1 showed one total eclipse in x - rays through the 0.567 day observation obtained with @xmath3 , and therefore we may eliminate the shorter period from consideration . nightly light curves of cv1 phased with the period of 0.4712 days are displayed in fig . the average value of the formal error of a single data point is 0.042 mag . it is worth noting that the shape of the light curve as well as the average luminosity of the system are relatively stable over the interval covered by our observations . figure 5 shows phased @xmath0 light curves corresponding to photometry extracted from averaged images . neither the amplitude nor the shape of the light curves change noticeably with band - pass . the light curves are quite symmetric with two maxima of comparable height separated by half of the period . the two observed minima have comparable depths although the minimum occurring at phase zero is slightly sharper . these properties of the light curves of cv1 suggest that the observed variability is dominated in the optical domain by the ellipsoidality effect caused by rotation of the roche lobe filling component , and that the secondary dominates the optical flux of the system . such an interpretation is consistent with the relatively red colors of cv1 . the variable is located about 0.1 mag to the blue of the cluster main sequence on the @xmath20 plane and it is located slightly to the red of the main sequence on the @xmath21 plane ( see fig . 2 ) . at the time of our observations the optical flux generated by the accretion process apparently contributed a small fraction of the total optical luminosity of the binary . another interesting property of cv1 is its relatively long orbital period . among 318 cataclysmic variables which are listed in ritter & kolb ( 1998 ) there are only 14 objects with periods longer than 0.47 days . it is possible to get a robust and reliable estimate of the average density of the secondary component of cv1 from the formula in faulkner et al . ( 1972 ) and eggleton ( 1983 ) : $\bar{\rho}_{2} \approx 107 \, p_{\rm hr}^{-2}$ , where the period $p_{\rm hr}$ is in hours and the density $\bar{\rho}_{2}$ is in g cm$^{-3}$ . we obtain $\bar{\rho}_{2} \approx 0.83$ g cm$^{-3}$ . the binary is located well below the cluster turn - off and therefore we may expect that the mass of the secondary does not exceed @xmath27 . theoretical models published recently by bergbusch & vandenberg ( 2001 ) give an average density @xmath28 g cm$^{-3}$ , for a zams model of a 0.8 solar mass star with [fe/h] = -2.0 . 
for a mass lower than @xmath30 m@xmath31 the expected density is even higher as @xmath32 . we conclude that the secondary component of cv1 is noticeably over - sized compared to a normal low - mass main - sequence star . knowing the distance modulus of the cluster we may derive an absolute luminosity of the variable . for @xmath33 and @xmath34 ( reid & gizis 1998 ) we obtain @xmath35 for the average absolute magnitude of cv1 in the @xmath9-band . it is tempting to use that information to derive the radius of the secondary star but we feel that uncertainties in relative intensity of the accretion generated flux to the total observed luminosity are too large . such uncertainties affect not only the estimate of the flux from the secondary but also any estimate of its effective temperature . these problems can be greatly reduced by observing the binary at near - ir wavelengths where the total flux of the system should be strongly dominated by the secondary star . we conclude this part of the discussion by noting that x - ray observations presented by grindlay et al . ( 2001 ) are consistent with the identification of cv1 as either an ordinary dwarf nova or a magnetic cv . the power spectrum of the @xmath9 band light curve of v13 = cv6 shows strong peaks at periods of 0.1176 and 0.2352 days . an examination of the light curves from individual nights indicates that they exhibit two minima of different shape separated by about 0.12 days . this is particularly clear in the light curve extracted from individual images collected on the night of ut may 1 , 2002 , which is presented in fig . the @xmath0 light curves of cv6 phased with a period of 0.2352 days are shown in fig . 7 . they are based on photometry extracted from averaged images . the shape and mean level of the light curves were quite stable during our observations . as for cv1 the variability of cv6 seems to be dominated by the ellipsoidality effect . the minimum occurring at phase 0.0 is narrower than the minimum at phase 0.5 . we interpret this as evidence that the bright accretion region surrounding the primary component of the binary is eclipsed at phase 0.0 . the variable is located about @xmath36 to the blue of the cluster main sequence in the @xmath20 plane ( see fig . 2 ) and in the @xmath21 plane it is located at the red edge of the cluster main sequence . assuming cluster membership for cv6 we estimate @xmath37 . the average density of the secondary component is @xmath38 @xmath39 , consistent with the density expected for a slightly evolved pop ii main sequence star of mass @xmath40 . in particular , models published by girardi et al . ( 2000 ) predict @xmath41 and @xmath37 for 0.7 solar mass star of age 11 gyr . our sample of variables includes four eclipsing binaries . in this section we comment briefly on their properties , a detailed analysis will require spectroscopic observations . variable v7 was identified as a w uma variable by kaluzny ( 1997 ) . this star has two close visual companions with @xmath42 and @xmath43 located at angular distances of @xmath44 and @xmath45 , respectively . despite the proximity of the companions the pixel scale of the observations together with the high signal - to - noise data meant that photometry could be measured for both companions in our @xmath9 band data . only the brighter companion could be measured while extracting photometry for the @xmath11 and @xmath12 bands . 
both companions were unresolved in the photometry reported by kaluzny ( 1997 ) , leading to an overestimation of the luminosity of v7 as reported in that paper . variable v19=pc-1 was identified by taylor et al . ( 2001 ) in a photometric survey for objects with excess @xmath46 flux . they also detected v7 in the course of their survey . v19 has a close visual companion of @xmath47 at an angular distance of @xmath48 . that companion is measured in our photometry for all bands . phased @xmath9-band light curves of v7 and v19 are shown in fig . some intrinsic night - to - night changes of the shape of the light curve were observed for v7 . such behavior is not unusual for w uma type systems . there is some indication that the secondary minimum of v7 exhibits a `` flat - bottom '' indicating that this eclipse is total . for w uma type systems with total eclipses one may obtain reliable light curve solutions as totality removes the degeneracy between the mass ratio and inclination of the system ( mochnacki & doughty 1972 ) . these two contact binaries have similar colors and lie about @xmath49 mag above the cluster main sequence on the color - magnitude diagram , suggesting that they are most likely members of ngc 6397 . variable v14 is a relatively faint eclipsing binary located about 0.1 mag to the red of the cluster main sequence . its phased light curve is presented in fig . 9 ; two shallow minima of different depth are seen . main sequence binaries with periods below 0.35 days almost always show ordinary ew type light curves , a signature of a contact configuration . if v14 is a member of ngc 6397 then its red color and faint magnitude would suggest the components are of late spectral type with low masses and radii . in this case v14 could be a close but non - contact binary despite its short period . variable v18 is by far the most interesting of the four eclipsing binaries included in our sample . it is not only a likely blue straggler but it also shows a very unusual light curve ( see fig . 10 ) . at first glance it resembles the light curves of ordinary w uma type contact binaries . however the light curve of v18 shows clear signatures of eclipse ingress and egress , not observed in contact binaries . we conclude that v18 is a detached or semi - detached system composed of stars with very similar surface brightness . the light curve is similar in all 3 filters with some indication that eclipses become progressively shallower from the @xmath11 to the @xmath12 band by @xmath50 mag . examination of @xmath17 images of the cluster shows that v18 possesses a close visual companion at an angular distance of 0.19 @xmath7 . it has not been resolved in our profile photometry and hence its flux acts as a `` third '' light in the photometry of v18 . we have identified the companion as star 200338 in the data base published by piotto et al . ( 2002 ) , with @xmath51 and @xmath52 . we adjust these magnitudes to @xmath53 and @xmath54 to take into account differences in the zero points of the two sets of photometry of @xmath55 and @xmath56 for the @xmath9 and @xmath11 filters , respectively ( our magnitudes are brighter ) . since @xmath12-band data are not included in the piotto et al . ( 2002 ) study we estimate @xmath12-band photometry from the fact that the v18 companion lies on the cluster main sequence . from the @xmath57 photometry of ngc 6397 published by alcaino et al . ( 1997 ) we estimate that for @xmath58 the companion has @xmath59 . light curves presented in fig . 
10 , as well as the magnitudes listed for v18 in table 2 , are corrected for the contribution of the nearby companion . attempts to derive a reliable light curve solution for v18 are hampered by the lack of vital information on the mass ratio of the binary . we have calculated a grid of solutions for a wide range of assumed values of the mass ratio @xmath60 . index `` 1 '' refers to the star eclipsed at phase `` 0 '' . the @xmath9-band light curve was solved using the wilson - devinney code ( wilson 1979 ) embedded in the minga minimizing package ( plewa 1988 ) . our preliminary results can be summarized as follows . for @xmath61 , solutions imply a detached configuration with an inclination in the range @xmath62 deg . the ratio of component radii is constrained to the range @xmath63 . for @xmath64 , solutions converge to semi - detached configurations with the less massive component filling its roche lobe . the @xmath65 statistic measuring the quality of fit of the synthetic light curve to the observations has a minimum near @xmath66 . a solution for that specific value of the mass ratio gives an inclination of @xmath67 deg and average relative radii of the components @xmath68 and @xmath69 . if the mass ratio is close to 0.2 and the system is detached , then one may wonder why both components have very similar effective temperatures . note that the color of the variable is essentially constant over the whole orbital period . however , if the mass ratio is close to unity then we face difficulty trying to explain how this blue straggler can be composed of two stars both of which are significantly bluer and more luminous than stars at the cluster turnoff . spectroscopic data providing information about the mass ratio of the binary are needed to reliably determine its geometrical and absolute parameters . sx phe type variables are short period pulsating stars which can be considered pop ii counterparts of the more metal - rich @xmath70 sct type stars . it is not unusual to find them among blue stragglers in globular clusters ( rodriguez & lópez - gonzález 2000 ) . variables v10 and v11 were originally identified as sx phe stars by kaluzny ( 1997 ) . here we add three more objects to that group . light curves of all five variables show modulation of shape and amplitude indicating the presence of multimodal pulsations . a detailed analysis of these data will be published in a separate paper ( schwarzenberg - czerny et al . ; in preparation ) . here we note only that variables v10 and v15 have extremely short dominant periods ; at 0.0215 days , v15 has the shortest period known for an sx phe star . no sx phe stars with periods below 0.030 days are listed in the recently published catalog of rodriguez & lópez - gonzález ( 2000 ) . we considered the possibility that v15 is not an sx phe type star but rather a pulsating hot subdwarf . however its @xmath71 color is too red to be a bright sdb / o star ( note the position of v15 in fig . 2 ) . in this section we discuss briefly four variables which can not be classified with confidence based on the available data . variable v20 is one of the brightest blue stragglers identified in the cluster core by lauzeral et al . ( 1992 ) . the power spectrum of its light curve shows two major peaks at periods of 0.861 and 0.436 days , with the higher peak corresponding to the longer period . the @xmath9 band light curves of v20 phased with each of these two periods are shown in fig . 11 . for the longer period the light curve has two minima , and this suggests that v20 is a low amplitude w uma type system . 
the feature visible at the second quadrature arises from a different light level observed on the single night of june 3 , roughly one month after the first observing run when most of the data were collected . the period of 0.861 days is relatively long for a contact binary belonging to a globular cluster ( rucinski 2000 ) . yet another possibility is that v20 is a close binary with variability due to the ellipsoidality effect . the light curve phased on the shorter period is slightly noisier , with a single minimum . the period of 0.436 days is far too long to classify v20 as a pulsating sx phe star . the variable is too hot to show spot - related activity as is observed for fk com or by dra type stars . however , it can be related to @xmath72 doradus stars , as is discussed below for two other variables . phased @xmath9-band light curves for variables v17 and v24 are presented in fig . they both show low amplitude , sine - like modulation , with periods of 0.457 days and 0.525 days for v24 and v17 , respectively . on the color - magnitude diagram the stars are located about 0.15 mag to the red of the cluster turnoff . we propose that v17 and v24 are pop ii counterparts of @xmath2 doradus variables . the @xmath2 doradus stars often have multiple periods between 0.4 and 3 days and show sinusoidal light curves with amplitudes in the optical domain of the order of 0.01 mag ( zerbi 2000 ; henry & fekel 2002 ) . their variability is due to non - radial @xmath73-mode pulsations and they are usually subgiants or , less frequently , main sequence stars of spectral type f0-f2 . the red edge of the instability strip for pop i @xmath2 doradus variables is located at @xmath74 ( henry & fekel 2002 ) . the dereddened colors of v17 and v24 are @xmath75 and @xmath76 , respectively . henry & fekel ( 2002 ) note that @xmath2 doradus variables show average @xmath77 amplitude ratios of @xmath38 1.3 . this distinguishes them from `` spotted '' variables which have amplitude ratios of @xmath38 1.1 and from ellipsoidal variables for which the ratio is @xmath38 1.0 . from our data we obtain @xmath78 and @xmath79 for v17 and v24 , respectively . more extended time series would allow a more accurate estimate of the @xmath77 ratio for both variables and would also help to search for multiperiodicity in the light curves . the stability of @xmath2 doradus light curves over 100 - 200 cycles also distinguishes these stars from `` spotted '' variables . variable v22=bs8 is a bright blue straggler which was identified in the cluster core by lauzeral et al . ( 1992 ) . the light curve of v22 can not be phased with a single period although during the may run we observed four minima occurring at 1 and 2 day intervals . the light curve extracted from observations obtained on the nights of 9 and 10 july , 1995 ( kaluzny 1997 ) shows variations with @xmath80 and a possible period of about 0.75 days . however that period does not fit the 2002 data . it is possible that v22 is a pulsating multiperiodic variable related to @xmath2 doradus stars or that it is a distant rr lyr variable of rrd type . in fig . 13 we show light curves from two nights on which the observed variations were most pronounced . we have used time series photometry obtained with a medium sized telescope to look for short period variables in the central part of the post - core - collapse cluster ngc 6397 . 
we show that by applying the image subtraction technique that it is possible not only to detect variable stars in very crowded fields but also to measure accurate light curves for objects with amplitudes as small as 0.01 mag . photometry of ngc 6397 obtained with @xmath17 imaging ( piotto et al . 2002 ) allows a check and , if necessary , a correction for contamination from possible visual companions which are unresolved in ground - based data . we present the first complete light curves and derive orbital periods for two cataclysmic variables in ngc 6397 . the total flux and variability of both of these cv s is dominated by the secondary components . the determination of cluster membership of the detected variables has relied on their positions in the cluster cmd . while the cluster is located on the sky near the galactic bulge at @xmath81 deg and @xmath82 deg , in the central region cluster stars must prevail strongly . however we can not exclude the possibility that some of the variables are field interlopers . the publication of proper motion catalogs ( cool & bolton 2002 ) will be exceptionally useful in resolving issues of cluster membership . jk was supported by the polish kbn grant 5p03d004.21 and by nsf grant ast-9819787 . it was supported by nsf grant ast-9819786 . we would like to thank alex schwarzenberg - czerny for providing us with his excellent period finding programs . alard , c. 2000 , , 144 , 363 alard , c. , & lupton , r. h. 1998 , , 503 , 325 alcaino , g. , liller , w. , alvarado , f. , kravtsov , f. , ipatov , a. , samus , n. , & smirnov , o. 1997 , , 114 , 1067 bergbusch , p.a . , & vandenberg , d.a . 2001 , , 556 , 322 clement , c. m. et al . 2001 , , 122 , 2587 . cool , a.m. , grindlay , j.e . , cohn , h.n . , lugger , p.m. , bailyn , c. 1998 , , 508 , l75 cool , a.m. , & bolton , a.s . 2002 , in `` stellar collisions , mergers and their consequences '' , asp conf . series , vol . m.shara , in press ( astro - ph/0201166 ) damico , n. , possenti , a. , manchester , r. n. , sarkissian , j. , lyne , a. g. , & camilo , f. 2001b , , 561 , l89 eggleton , p.p . 1983 , , 268 , 386 faulkner , j. , flannery , b.p . , & warner , b. 1972 , , 175 , l79 ferraro , f. r. , possenti , a. , damico , n. , & sabbi , e. 2001 , , 561 , l93 grindlay , j.e . , cool , a.m. , callanan , p.j . , baiylyn , c.d . , cohn , h.n . , & lugger , p.m. 1995 , , 455 , l47 grindlay , j. e. , heinke , c.o . , edmonds , p. d. , murray , s. s. , & cool , a. m. 2001 , , 563 , l53 harris , w. e. 1996 , , 112 , 1487 henry , w.g . , & fekel , f.c . 2002 , , 114 , 988 kaluzny , j. 1997 , , 122 , 1 kaluzny , j. , olech , a. , stanek , k.z . 2001 , , 121 , 1533 kaluzny , j. , rucinski , s.m . , & thompson , i.b . 2002 , astro - ph/0209345 kwee , k. k. , & van woerden , h. 1956 , , 12 , 327 lauzeral , c. , ortolani , s. , aurire , m. , melnick , j. 1992 , , 262 , 63 mochnacki , s.w . , & doughty , n.a . , 1972 , , 156 , 51 piotto , g. et al . 2002 , , 391 , 945 plewa , t. 1988 , acta astron . , 38 , 415 olech , a. , wozniak , p.r . , alard , c. , kaluzny , j. , thompson , i.b . 1999 , , 310 , 759 ritter , h. , & kolb , u. 1998 , , 129 , 83 reid , i. n. , & gizis , j. e. 1998 , , 116 , 2929 rodriguez , e. , lpez - gonzlez , m.j . 2000 , 359 , 597 rucinski , s.m . 2000 , , 120 , 319 schwarzenberg - czerny , a. , 1989 , , 241 , 153 schwarzenberg - czerny , a. , 1996 , 460 , l107 stetson , p. b. 1987 , pasp , 99 , 191 taylor , j. m. , grindlay , j. e. , edmonds , p. d. , & cool , a. m. 2001 , , 553 , l169 wilson , r. 
e. 1979 , , 234 , 1054 zerbi , f.m . , 2000 , in `` delta scuti and related stars '' , asp conf . 210 , eds . m. breger & m.h . montgomery , p. 322

table 1. positions and identifications of the variables

name   r.a.          decl.         hst               id
(1)    (2)           (3)           (4)               (5)
v7     17 40 43.74   -53 40 35.6   w4(100,222)       wf4 - 2
v10    17 40 37.43   -53 40 36.4                     bs11
v11    17 40 43.95   -53 40 40.9   w4(151,246)       bs9
v12    17 40 41.42   -53 40 19.6   w1(489,504)       u23 , cv1
v13    17 40 48.82   -53 39 49.0   w3(511,101)       u10 , cv6
v14    17 40 46.31   -53 41 15.9   w4(548,348)
v15    17 40 45.24   -53 40 25.2   w4(122,53)        bs10
v16    17 40 44.44   -53 40 42.0   w4(190,223)       msp , wf4 - 1
v17    17 40 43.63   -53 41 16.8   w4(384,522)
v18    17 40 43.45   -53 40 28.1   pc(345,91)
v19    17 40 44.66   -53 40 23.8   pc(386,266)       pc-1
v20    17 40 41.51   -53 40 33.7   pc(697,276)       bs6
v21    17 40 41.40   -53 40 23.9   pc(559,441)       bs7
v22    17 40 41.02   -53 40 42.2                     bs8
v23    17 40 39.21   -53 40 46.9                     bs16
v24    17 40 38.97   -53 40 23.3

note : cols . ( 2 ) - ( 3 ) : units of right ascension are hours , minutes and seconds , and units of declination are degrees , arcminutes , and arcseconds . col . ( 4 ) : pixel coordinates on the hst archive image u5dr0401r , preceded by the name of the wfpc-2 camera ccd . col . ( 5 ) : other names of variables used in grindlay et al . ( 2001 ) , taylor et al . ( 2001 ) , and lauzeral et al . ( 1992 ) .

table 2. photometric properties of the variables

name   type                     period      @xmath11   @xmath9   @xmath12   @xmath83
v7     w uma                    0.2699(2)   17.72      17.05     16.11      0.47
v10    sx phe                   0.03006     16.36      15.97     15.46      0.12
v11    sx phe                   0.03826     15.78      15.40     14.88      0.05
v12    cv                       0.472(2)    18.51      17.95     16.96      0.37
v13    cv                       0.2352(5)   20.04      19.35     18.27      0.45
v14    eclipsing                0.3348(7)   20.21      19.19     17.93      1.02
v15    sx phe                   0.02145     15.80      15.44     14.94      0.05
v16    ell                      1.35406     17.36      16.65     15.71      0.15
v17    @xmath2 dor ?            0.525(5)    16.81      16.17     15.32      0.025
v18    eclipsing                0.7871(3)   16.23      15.71     15.01      0.14
v19    w uma                    0.2538(2)   17.75      17.09     16.16      0.06
v20    w uma ? , @xmath2 dor ?  0.861(3)    16.22      15.75     15.08      0.04
v21    sx phe                   0.03896     15.88      15.48     14.91      0.30
v22    ?                        ?           16.61      16.16     15.58      0.11
v23    sx phe                   0.03717     16.05      15.66     15.14      0.04
v24    @xmath2 dor ?            0.457(2)    17.07      16.45     15.60      0.02

note : periods are given in days . @xmath0 magnitudes are given at maximum brightness with the exception of the sx phe stars , for which average magnitudes are listed . the last column gives the difference between the observed extremes of the @xmath9 light curves : @xmath84 . for the cvs the quoted magnitudes refer to 8 nights from the beginning of ut may , 2002 .
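as a quick cross - check of the secondary - star densities discussed in the text , the short sketch below applies the faulkner et al . ( 1972 ) / eggleton ( 1983 ) relation to the orbital periods of the two cataclysmic variables listed in table 2 . the constant 107 ( mean density in g cm^-3 for the period in hours ) is the commonly used approximation and is an assumption here , though it reproduces the 0.83 g cm^-3 quoted for cv1 .

    # mean density of a roche-lobe-filling secondary from the orbital period,
    # assuming the usual approximation rho_bar ~ 107 / p_hr**2 (g cm^-3).
    # periods are the values listed in table 2 for the two cataclysmic variables.
    periods_days = {"v12 (cv1)": 0.472, "v13 (cv6)": 0.2352}

    for name, p_days in periods_days.items():
        p_hr = 24.0 * p_days
        rho_bar = 107.0 / p_hr ** 2
        print(f"{name}: p = {p_days:.4f} d  ->  rho_bar ~ {rho_bar:.2f} g cm^-3")

    # gives ~0.83 g cm^-3 for cv1 (matching the value quoted in the text)
    # and ~3.4 g cm^-3 for cv6.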
time series @xmath0 photometry is presented for 16 short - period variables located in the central region of the globular cluster ngc 6397 . the sample includes 9 newly detected variables . the light curve of cataclysmic variable cv6 shows variability with a period of 0.2356 days . we confirm an earlier reported period of 0.472 days for cataclysmic variable cv1 . phased light curves of both cvs exhibit sine - like light curves , with two minima occurring during each orbital cycle . the secondary component of cv1 has a low average density of 0.83 g @xmath1 indicating that it can not be a normal main sequence star . variables among the cluster blue stragglers include a likely detached eclipsing binary with orbital period of 0.787 days , three new sx phe stars ( one of which has the extremely short period of 0.0215 days ) , and three low amplitude variables which are possible @xmath2 doradus variables .
the alfa project ( adaptive optics with laser for astronomy ) is a collaboration between the max - planck - institute for astronomy ( mpia ) in heidelberg , which supplies the adaptive optics ( wavefront sensing and correction ) , and the max - planck - institute for extraterrestrial physics ( mpe ) , which provides the laser guide star ( lgs ) . the complete system is installed on the 3.5 m telescope on calar alto , spain . the adaptive optics subsystem is described in detail in kasper et al . ( 1999 ) , and essential aspects of the alfa laser are given in rabien et al . ( 1999 ) . in this contribution we want to give an overview of the technical system setup and the design of the laser beam relay . routine operation of the alfa system is done by calar alto staff . the user interface and software setup have therefore been designed to allow transparent operation without distracting the operator with technical details . still the possibility remains to fine - tune the system during operation , as well as to gather analysis data for future improvements . the complete system has been available to the astronomical community since mid-1998 . an additional aspect of operating the lgs is the need to account for airplanes crossing the laser beam . the system has been designed such that under no circumstances can a pilot be affected by the upgoing laser light . for this reason an aircraft detection system has been installed which immediately shuts off the laser whenever an object is located near the laser beam . the alfa laser subsystem has been designed to be computer controllable in most of its parts . all the control and analysis algorithms are implemented on industrial type vme - bus machines running the real time operating system vxworks . for interprocess communication , shared memory techniques are used . the experimental physics and industrial control system ( epics ) network is used for inter - machine communication , using its network database capabilities . this system setup provides reliable operation of the laser while still being very flexible with respect to system hardware changes . the software controlling the aircraft detection system runs on a separate silicon graphics indigo workstation . all software development is done on a sun workstation which is also used as a file server holding all startup files necessary for booting the vme - bus machines . figure [ design_overview ] shows the locations of the computers and the tasks they are fulfilling . all machines are connected using the calar alto network system which is based on twisted - pair cabling . the vme - bus machines have been installed in locations where cable lengths , especially those of the analysis tools , could be kept as short as possible in order to minimise noise . on each of the separate computers , software watchdogs are running ; these are also visualized in the graphical user interface . in the following sections , we will describe the different tasks and provide information on their operation . in the laser laboratory on the coudé floor of the 3.5 m telescope , the machine vme - lab is located . it controls and monitors the two main lasers , their beam properties , and the initial beam injection into the relay system . 
the tasks running on this computer can be summarized as follows :
* pump laser status monitoring
* dye laser frequency adjustment
* collimation of pilot and dye laser beams
* control of the polarization state of the dye laser beam
* initial beam injection into the coudé path and beam alignment
on the optical bench in the laser laboratory , the beam passes a water - cooled shutter , a quarter - wave plate , and a pre - expander . a beamsplitter and two additional mirrors are used to direct a pilot beam into the path . this beam consists of a small fraction of laser light coupled out from the pump laser and is used for the lower loop , which is described in section [ lowerloop ] . the two mirrors mc1 and mc2 are used for centering and pointing the dye laser and pilot beams into the relay . the hardware installed on the optical bench for adjusting the collimation of both lasers , the polarization state ( using a rotatable @xmath0-plate ) , and the initial beam alignment consists of two _ newport mm 4005 _ motor controllers , which are connected to the vme - bus machine via rs-232c serial lines . each of these controllers supplies connections to four independent motors . figure [ laserlab ] shows the software control window of the laser laboratory . these components have been put in a separate window to minimise confusion on the main gui , since under normal operating conditions their values need not be modified . to the lower left , the control parameters of the pump laser are displayed continuously . the focus of the pilot beam can be adjusted to properly focus it on the lower loop detectors . on the right - hand side , frequency tuning and modulation of the ring dye laser are performed , which are necessary for different experiments ; these are described in davies et al . ( 1999 ) in this volume . the relay optics , which transport the beam from the coudé lab to the launch telescope , are one of the most important parts of the laser system . they are continually undergoing revision , not only to increase their effectiveness , but also to make them simpler for guest observers to use . figure [ design_overview ] shows the path taken by the laser as it is directed by a succession of actively controlled mirrors . a shorter path with fewer reflections was considered , but involved directing the laser through the centre of the dome where it was not possible to baffle it ; it was rejected due to the severe implications for both safety and beam turbulence . to offset the loss from so many components in the path chosen , all mirror surfaces are dielectric broadband coated , and transmissive elements have an anti - reflection coating optimised for 589 nm . the complete relay system is controlled by the machine vme - cass , which is mounted onto the cassegrain flange . the purpose is to keep the beam well aligned in the beam relay system and to reduce beam jitter . a 16-channel analog - to - digital converter digitizes position signals and feeds a _ newport pm 500 _ motion controller with corrective mirror motion commands . the relay system consists of a total of six mirrors . the two - axis tracking mirror s5 and the mirror s4 on the declination axis are part of the original coudé train . mirror mt1 , the declination axis pick - off , marks the end of the lower loop . mt2 is a fixed mirror that directs the beam into the upper loop , consisting of mirrors mt3 and mt4 . before use , the entire relay must be aligned . the initial alignment , which is only needed at the beginning of an observing run , must be done manually .
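as an illustration of the serial - line control of these motorized stages , the following sketch drives one axis through pyserial . the command strings are hypothetical placeholders in the spirit of ascii motion - controller dialects ; the real syntax of the _ newport mm 4005 _ must be taken from its manual .

```python
# Sketch of driving one axis of an RS-232C motion controller (e.g. for the
# centering mirrors MC1/MC2 or the rotatable wave plate) with pyserial.
# The command strings below are hypothetical placeholders, not the actual
# MM 4005 command set.
import serial

class MotorAxis:
    def __init__(self, port="/dev/ttyS0", axis=1, baudrate=9600, timeout=1.0):
        self.axis = axis
        self.ser = serial.Serial(port, baudrate=baudrate, timeout=timeout)

    def _query(self, command):
        # Commands are terminated with CR/LF; the reply is read line by line.
        self.ser.write((command + "\r\n").encode("ascii"))
        return self.ser.readline().decode("ascii").strip()

    def move_relative(self, steps):
        # Hypothetical "PR" (position relative) command for one axis.
        return self._query(f"{self.axis}PR{steps:+d}")

    def position(self):
        # Hypothetical "TP" (tell position) query.
        return self._query(f"{self.axis}TP")

# Example: nudge one mirror axis by +50 motor steps and read back its position.
if __name__ == "__main__":
    mc1_tilt = MotorAxis(port="/dev/ttyS0", axis=1)
    mc1_tilt.move_relative(50)
    print("MC1 tilt position:", mc1_tilt.position())
```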
a set of 5 video cameras has been installed at critical points to assist with this . during an observing run , alignment is done automatically by two control loops , which are described in the following sections . the control parameters for the loops can be adjusted during operation using a separate control window . a `` lower guide '' control loop runs on vme - cass to keep the laser beam well aligned on the coudé axis . inside the yoke , all degrees of freedom ( parallel and angular displacement ) are monitored using position - sensitive devices ( psds ) , which are fed by the pilot laser beam . the analog signals from the psds are digitized at a rate of 1 khz . the differences of these signals from calibrated positions are used in closed - loop operation to move the mirrors esd and mt1 ( which are located in the laser laboratory and on the declination axis , respectively ; see figure [ design_overview ] ) such that proper beam alignment is assured . the s5 mirror is part of the original coudé path of the telescope and has been incorporated into the laser beam relay . it is fully controlled by the telescope control system . during normal operation , when the telescope is tracking a scientific target , tracking errors of the mirror have to be accounted for by the lower guide control loop . in addition , the motion of the coudé mirror is not continuous but overlaid with phases of fast motion on timescales of @xmath15 seconds , which have to be corrected as well . a complication for the lower loop implementation arises from the fact that , when the telescope is slewed to a new object , the s5 mirror is not fast enough to keep up with the slewing motion of the telescope . depending on the relative offset , it takes up to one minute until the s5 mirror resumes tracking . during this phase , proper beam alignment is not assured , and the laser beam may therefore leave the beam relay path . therefore , the status of the s5 mirror is monitored in addition to the total intensity and position of the pilot beam . this information is provided by the telescope control system using an epics network variable . when the intensity drops below normal values , the position signal is lost , or the status of the s5 mirror indicates that the telescope is slewing , the dye laser beam is shut off in the laser laboratory . when slewing of the s5 mirror is finished , a scanning phase begins to automatically realign the pilot beam . only when the intensity and position signals indicate proper alignment is the shutter of the dye laser opened again . once the laser beam is picked off the declination axis , it enters the `` upper loop '' . this loop provides the final centering and alignment of the beam in the relay . it runs at high frequencies ( @xmath1100 hz ) to be able to correct for beam jitter introduced by turbulent air along the relay . the beam position and angular displacement are measured directly below the launch telescope using a psd and a triple - prism detector , which sample the dye laser beam itself . just as in the `` lower loop '' , the digitized values of the detectors are used to calculate correction commands for the two fast steering mirrors mt3 and mt4 . when the 3.5 m telescope is slewed to positions at extremely low altitudes or large hour angles , flexure of the main telescope body becomes an important issue . even when properly aligned on the coudé path , the beam may not hit the secondary of the launch telescope perfectly .
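the logic of one cycle of the lower guide loop can be sketched as follows . this is a minimal illustration , not the actual vxworks implementation ; read_psd ( ) , command_mirror ( ) and set_dye_shutter ( ) stand in for the adc and motor / shutter interfaces on vme - cass , and the gain and threshold values are arbitrary :

```python
# Sketch of one cycle of the "lower guide" loop: PSD spot positions are
# compared with calibrated reference positions and a proportional correction
# is sent to the two alignment mirrors. The interlock shutters the dye laser
# whenever the S5 mirror is slewing or the pilot-beam signal is lost.
import numpy as np

REFERENCE = {                       # calibrated spot positions (x, y)
    "psd_near": np.array([0.12, -0.05]),
    "psd_far":  np.array([-0.08, 0.02]),
}
GAIN = 0.4                          # proportional loop gain (dimensionless)
MIN_INTENSITY = 0.2                 # interlock threshold on the PSD signal

def lower_loop_step(read_psd, command_mirror, set_dye_shutter, s5_slewing):
    near = read_psd("psd_near")     # -> {'pos': (x, y), 'intensity': float}
    far = read_psd("psd_far")
    if s5_slewing or min(near["intensity"], far["intensity"]) < MIN_INTENSITY:
        set_dye_shutter(open_=False)
        return
    # Parallel displacement is taken from one PSD, angular displacement from
    # the other; each error signal drives one of the two mirrors (ESD, MT1).
    err_near = np.asarray(near["pos"]) - REFERENCE["psd_near"]
    err_far = np.asarray(far["pos"]) - REFERENCE["psd_far"]
    command_mirror("ESD", -GAIN * err_near)     # tip/tilt correction
    command_mirror("MT1", -GAIN * err_far)
    set_dye_shutter(open_=True)
```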
to account for these extreme telescope positions , a scanning phase similar to that of the lower loop is planned to be implemented in the near future . the laser light passes through the dome from the exit of the laser laboratory up to the main body of the telescope . temperature differences of several degrees are common between the surrounding air and the main body of the telescope . this results in turbulent air , which degrades the laser beam quality . in order to enhance beam quality , most of the beam relay system has therefore been baffled using black anodized aluminum tubes . this also serves as a safety precaution to prevent persons from looking directly into the collimated beam . in figure [ s5 ] we show a photograph of the first beam relay mirror s5 with the baffles coming from the laser laboratory and going to the main body of the telescope into the yoke . the baffle coming from the laser lab is closed by a window on the optical bench in the laser laboratory to prevent turbulent air streaming from the warmer laser lab to the s5 mirror due to a chimney effect . this mirror and its support are also planned to be covered completely . the launch telescope ( 50 cm primary and 5 cm secondary mirrors ) is situated 2.9 m off - axis from the main telescope primary . it is an afocal galilean - type beam expander through which the laser can be projected both on- and off - axis . the main disadvantage of launching on - axis is the power lost due to the obscuration by the secondary mirror . the power loss is 1.5 - 3% , depending on the diameter of the exit beam , which can be varied in the range 24 - 49 cm by adjusting the pre - expander . more significant power loss comes from the edges when the beam diameter is similar to that of the primary , but this does not affect the central intensity of the resulting lgs , which is the important parameter for good adaptive optics correction . the secondary mirror of the launch telescope is mounted on a piezo platform used for pointing the laser at science targets . it has a full field of view of 50 , with a positional accuracy of @xmath10.05 and a maximum steering frequency of 30 hz . focussing the laser in the mesosphere is done using a stepper motor with a resolution of @xmath1500 m in height . in the near future , a link from the ao bench will be installed , providing lgs tip / tilt information so that the mirror can be used to compensate for beam jitter introduced in the atmosphere . the vme - tel machine is located in a rack next to an analysis breadboard mounted just below the entrance of the launch telescope . on this breadboard , several analysis tools are installed , which are either controlled via serial lines or whose analog outputs are digitized by a 16-channel , 12-bit analog - to - digital converter . in addition , the secondary mirror of the launch telescope is controlled from here . the different tools are :
* offline : power measurement
* offline : beam collimation
* polarization status
* beam wavefront
* tip / tilt of secondary
* focus of launch telescope
immediately before being projected into the atmosphere , the quality of the beam can be checked by this diagnostics bench . the offline analysis tools are inserted using a linear stage , which moves mirrors to reflect the laser light onto them . the linear stage is controlled by a _ newport pm 500 _ motion controller . the polarization state is measured online with a division - of - amplitude device mounted near the secondary mirror of the launch telescope .
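for the on - axis launch geometry , the obscuration loss can be estimated from simple geometry . the following back - of - the - envelope sketch assumes a uniform ( top - hat ) beam profile , which is only an approximation ; the actual intensity profile of the expanded beam shifts the exact percentages , so the numbers below are indicative of the per - cent level quoted above rather than a reproduction of it :

```python
# Back-of-the-envelope sketch: fraction of a uniform (top-hat) beam blocked
# by the 5 cm secondary mirror when launching on-axis. The real loss depends
# on the actual beam profile, so these values are only indicative.
def obscuration_loss(beam_diameter_cm, secondary_diameter_cm=5.0):
    """Blocked power fraction for a uniform beam of the given diameter."""
    return (secondary_diameter_cm / beam_diameter_cm) ** 2

for d_cm in (24, 36, 49):   # exit-beam diameters within the adjustable range
    print(f"beam diameter {d_cm:2d} cm -> loss {100 * obscuration_loss(d_cm):.1f} %")
```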
for the specific layout of the analysis tools and measurement results , see rabien et al . ( 1999 ) in this volume . the complete laser system is controlled with a graphical user interface ( gui ) . the gui runs on the sun workstation located in the laser lab and can be displayed on any workstation or x - terminal on calar alto . during an observing run , control is performed using a separate x - terminal in the control room of the telescope . the gui ( figure [ laserleitstand ] ) has been designed to keep the abstraction level as low as possible in order to allow transparent operation . the various components of the laser system are visualized , from the laser laboratory at the lower right to the launch telescope at the upper left . all mirrors and other motor - controlled devices can be moved using sliders . as a result of any slider movement , an epics variable is set ; this variable is monitored on the vme - bus machines , which then command the motor movements . the two control loops and the signals of the beam position measurement devices are visualized in the central part of the gui . control parameters for the loops can be adjusted using a separate window , which is only necessary when reconfiguring the laser system or at the beginning of an observing run . the polarization status is displayed on the gui to the right of the launch telescope , as well as the offline power measurement . the signals of the various ccd cameras that have been installed along the beam relay path are displayed on a video monitor , which is located in the control room next to the x - terminal . in the gui , it is possible to select which camera signal is displayed using the buttons with the camera symbol . this enables the operator to check the beam position on the separate mirrors for beam alignment purposes . in the upper left part , the status of the different computers is displayed using their watchdog programs . in this way the user is able to check the overall status of the system and to intervene in case of system failures . of particular concern for the safe operation of an lgs adaptive - optics system is the protection of aircraft passing through the upgoing laser beam . although the probability that the laser not only hits an aircraft but that the pilot also looks directly into the beam is extremely low , precautions have to be taken to avoid this situation , because even an expanded 4 watt continuous wave laser could affect and distract the pilot for a few minutes . the scattered light of such a laser is also too dim to allow the pilot to see it before entering the beam . besides reliably detecting objects , a useful system is also expected to generate only a very limited number of false alarms , since every alarm immediately shuts down the laser beam and with it many minutes of integration time on the scientific instrument may be wasted . the requirements for the calar alto system are a rate of false alarms of less than 0.1 per hour . a spotter is not required at calar alto since the output power of the outgoing laser beam is sufficiently low , in contrast to pulsed systems with peak powers of up to 1 kw . all aircraft are required to use position and anti - collision lights ( see federal aviation requirements ) . the anti - collision light is a yellow , red or white light , flashing with a frequency of 40 to 100 flashes per minute . the position lights are red and green lights , located at the tips of the wings , and a white light at the tail .
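the slider - to - motor path can be sketched in a few lines . in this minimal illustration ( again using pyepics , with a hypothetical record name ) , the gui callback only publishes a setpoint ; the vme - bus machine monitoring that record issues the actual motor commands :

```python
# Sketch of the slider -> EPICS -> motor path: the GUI only writes a setpoint
# record; the VME machine monitoring the record commands the motors.
# The PV name is a hypothetical placeholder.
import epics

MT3_TILT_SETPOINT = epics.PV("ALFA:RELAY:MT3:TILT_SP")

def on_slider_moved(new_value):
    """GUI callback: publish the requested mirror tilt (arbitrary units)."""
    MT3_TILT_SETPOINT.put(float(new_value), wait=False)
```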
the detection system uses a ccd camera and an image processing system to detect this radiation . particularly difficult situations occur when an aircraft passes the observatory at low altitude . the time from entering the field of view of the camera to crossing the laser beam may be as low as 2 seconds , and therefore the reaction time of the system should not be greater than 1 second . the frequency of the anti - collision light may be too low for reliable detection ; at these low altitudes , however , the position lights are easily visible . such a ccd system can also easily detect some satellites at low altitudes . for the alfa project , it was decided to mount the ccd camera behind the secondary mirror of the main telescope . aliens ( the airplane light imaging emergency notification system ) was developed by adaptive optics associates , inc . in boston , ma , for use on the alfa system and is integrated into the alfa software in a fail - safe way . it uses the epics real - time database to communicate with alfa and the telescope control software . the specifications for aliens are :
* aircraft viewing angle : @xmath2 from horizontal plane
* range : 0.5 to 30 km
* aircraft velocity : @xmath3 m / s
* operating conditions : night , clear sky
* environmental conditions : temperature @xmath4c to @xmath5c , humidity @xmath6
* field of view : @xmath7 radius
* rate of false alarms : @xmath8 per hour
* wavelength range : 600 - 1000 nm
the images analysed by aliens are taken with a ccd camera which is mounted on the front ring of the telescope behind the secondary mirror . the ccd chip has a total of 640x480 pixels . the field of view of the camera was chosen to be 20 degrees , which is approximately the same field of view as seen through the dome slit from behind the secondary . the individual images are transferred by an optical fiber link from the secondary mirror cell to a silicon graphics workstation , which is equipped with a video capture board . there the images are digitized and further analysed . figure [ aliens_block ] shows a block diagram of the hardware used in the aliens system . the simplest approach to detecting moving objects such as airplanes in optical images is to use their relative motion against the stars . to do this , two consecutive frames received from the ccd camera are subtracted from each other . as the 3.5 m telescope on calar alto is mounted equatorially , objects like stars should cancel out perfectly when two frames are subtracted , since there is no field rotation . so whenever there is a significant difference between two frames , this can be taken as a possible detection of an aircraft in the field of view . there are , however , some problems which preclude this simple approach . for instance , slowly moving objects like planets can trigger an alarm since they are very bright . even stars can trigger alarms due to scintillation , as the transparency of the atmosphere is not constant in time and causes the stars to flicker . it is therefore necessary to eliminate the signal of bright steady objects in a different way . this can be done by calculating a mask from a single image in which all bright objects are marked . when calculating the difference of two frames , objects that fall into the masked regions are neglected . this procedure has proven to eliminate nearly all false alarms due to stars . this mask can be recalculated after a specific amount of time has elapsed to account for planets , which move slowly but significantly during an observing night .
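the core of this detection step can be sketched with a few lines of numpy . the threshold values below are illustrative only , and the fixed mask discussed in the next paragraph is passed in as an optional second mask :

```python
# Sketch of the ALIENS detection step: bright, steady objects are masked out
# from a reference frame, and an alarm is raised when the difference of two
# consecutive frames exceeds a threshold just above the CCD read noise
# outside the masked regions. All numerical values are illustrative.
import numpy as np

READ_NOISE_ADU = 3.0                    # assumed rms read noise per pixel
DIFF_THRESHOLD = 4.0 * READ_NOISE_ADU   # "just slightly above the read noise"
STAR_THRESHOLD = 20.0 * READ_NOISE_ADU  # marks bright steady objects

def bright_object_mask(frame):
    """Mask of bright steady objects; recomputed from time to time so that
    slowly moving planets stay masked."""
    return frame > STAR_THRESHOLD

def aircraft_detected(frame_prev, frame_curr, star_mask, fixed_mask=None):
    """True if the masked frame difference indicates a moving object."""
    diff = np.abs(frame_curr.astype(float) - frame_prev.astype(float))
    ignore = star_mask if fixed_mask is None else (star_mask | fixed_mask)
    diff[ignore] = 0.0
    return bool(np.any(diff > DIFF_THRESHOLD))
```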
a second , fixed mask can be used in addition to the one just described . this is useful to account for the different motions of the telescope itself and the dome . when the dome starts to block a portion of the field of view seen by the ccd camera , faint stars in this area that are not included in the first mask will vanish and thus trigger a false alarm . secondly , the rayleigh backscatter of the projected laser beam is usually variable and should be disregarded in the aircraft detection algorithm . when accounting for both masks in the image subtraction process , the final criterion for an aircraft detection is that the difference of the two frames exceeds a certain threshold . this threshold is set just slightly above the inherent read noise of the ccd chip so that the system is as sensitive as possible . when an airplane has been detected , several actions are taken : first and most important , the shutter is closed by a software command . since alarms will be generated as long as the airplane is in the field of view of the camera , the shutter will be opened again after a specified time following the last aircraft alarm . in addition , a warning program is started to visually inform the observing astronomer that the lgs is not available . optionally , several frames can be saved in order to analyse the data and identify possible false alarms . this is done to further improve the system and its reliability . a graphical user interface has been developed to control the system parameters and visually inspect the online data from the camera on a computer monitor ( see figure [ awatch ] ) . with the computer 's x window system , this user interface allows network - transparent operation : the data processing takes place on one of the workstations in the laser lab , while the system is controlled from one of the x - terminals in the telescope 's control room . on the main user interface for the laser control , the status of aliens is displayed , as well as a visual reminder for the laser operator . during the acceptance test near boston , ma , the system was operated for 5 hours without generating any false alarm . with the same parameter settings and looking in a different direction , it was able to reliably detect all of the visible aircraft . this included low - altitude planes approaching boston airport , as well as high - altitude planes with only their anti - collision lights visible . since march 1998 , the system has been in full operation whenever the lgs is used on calar alto . it proved very reliable during normal observing conditions , with less than one false alarm per observing night . the remaining false alarms are predominantly due to shooting stars , clouds , and beam - alignment problems . shooting stars cause the laser to be shut off for about 1 second since they cross the field of view very rapidly . false alarms due to clouds cause no major problem to the operation of the laser since under cloudy conditions , usage of the lgs is not possible anyway due to the faint brightness of the laser beacon . the laser shutter is also closed whenever there are problems with the beam alignment ; this is caused by the raised background in the images when the laser does not shoot into the night sky but is blocked by the telescope dome . when doing the initial beam alignment with the dome closed , it is therefore necessary to stop aliens in order to be able to visually inspect the beam alignment on the telescope dome .
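the shutter handling around an alarm can be sketched as a small hold - off timer . this is an illustration only ; close_shutter ( ) and open_shutter ( ) stand in for the real software shutter interface , and the hold - off time is arbitrary :

```python
# Sketch of the alarm handling: the software shutter is closed immediately on
# a detection and reopened only after a hold-off time has passed since the
# last alarm. The shutter callbacks are placeholders for the real interface.
import time

HOLD_OFF_S = 10.0        # illustrative re-opening delay after the last alarm

class ShutterGuard:
    def __init__(self, close_shutter, open_shutter):
        self.close_shutter = close_shutter
        self.open_shutter = open_shutter
        self.last_alarm = None
        self.closed = False

    def update(self, alarm):
        """Call once per analysed frame with the detection result."""
        now = time.monotonic()
        if alarm:
            self.last_alarm = now
            if not self.closed:
                self.close_shutter()       # first and most important action
                self.closed = True
        elif self.closed and self.last_alarm is not None \
                and now - self.last_alarm > HOLD_OFF_S:
            self.open_shutter()            # no alarm for HOLD_OFF_S seconds
            self.closed = False
```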
davies , r. et al . , 1999 , _ alfa : operation and results _ , experimental astronomy , ...
kasper , m. et al . , 1999 , _ alfa ao system design _ , experimental astronomy , ...
rabien , s. et al . , 1999 , _ alfa : laser and analysis _ , experimental astronomy , ...
wirth , a. , 1996 , _ airplane light imaging emergency notification system - technical description _ , adaptive optics associates , inc .
federal aviation requirements , part 25 , 25.1385 , 25.1397
the alfa laser subsystem uses a 4 watt continuous wave laser beam to produce an artificial guide star in the mesospheric sodium layer as a reference for wavefront sensing . in this article we describe the system design , focusing on the layout of the beam relay system . it consists of seven mirrors , four of which are motor - controlled in closed - loop operation , compensating for turbulence inside the dome and flexure of the main telescope . the control system features several computers , which are located close to the analysis and control units . the distribution of the tasks and their interaction is presented , as well as the graphical user interface used to operate the complete system . this is followed by a discussion of the aircraft detection system aliens , which shuts off the laser beam whenever an object passes close to the outgoing beam .
SECTION 1. SHORT TITLE. This Act may be cited as the ``21st Century FHA Housing Act of 2009''. SEC. 2. MORTGAGE INSURANCE FOR CONDOMINIUMS. Section 203 of the National Housing Act (12 U.S.C. 1709) is amended by adding at the end the following new subsection: ``(y) Inapplicability of Environmental Review Provisions.--In insuring, under this section, any mortgage described in section 201(a)(C), the Secretary shall not be subject to the conditions of, or review under, the National Environmental Policy Act of 1969 or any other provision of law that furthers the purposes of such Act.''. SEC. 3. ENERGY EFFICIENT MORTGAGES. Section 106(a)(2)(C) of the Energy Policy Act of 1992 (42 U.S.C. 12712 note) is amended-- (1) in clause (i), by inserting ``(i)'' after ``(A)'' each place such term appears; and (2) in clause (ii), by striking ``203(b)(2)(B)'' and inserting ``203(b)(2)(A)(ii)''. SEC. 4. MODERNIZATION OF WORKFORCE AND RESOURCES. Section 202 of the National Housing Act (12 U.S.C. 1708) is amended by adding at the end the following new subsections: ``(g) Personnel.-- ``(1) In general.--Notwithstanding section 502(a) of the Housing Act of 1948 (12 U.S.C. 1701c(a)), the Secretary may appoint and fix the compensation of such officers and employees of the Department as the Secretary considers necessary to carry out the functions of the Secretary under this Act and any other functions of the Federal Housing Administration. Such officers and employees may be paid without regard to the provisions of chapter 51 and subchapter III of chapter 53 of title 5, United States Code, relating to classification and General Schedule pay rates. ``(2) Comparability of compensation with federal financial regulatory agencies.--In fixing and directing compensation under paragraph (1), the Secretary shall consult with, and maintain comparability with compensation of officers and employees of the Federal Housing Finance Agency, the Board of Governors of the Federal Reserve System, and the Federal Deposit Insurance Corporation. ``(3) Personnel of other federal agencies.--In carrying out the functions referred to in paragraph (1), the Secretary may use information, services, staff, and facilities of any executive agency, independent agency, or department on a reimbursable basis, with the consent of such agency or department. ``(4) Outside experts and consultants.--The Secretary may procure temporary and intermittent services under section 3109(b) of title 5, United States Code, to assist the work of the Department in carrying out the functions referred to in paragraph (1). ``(h) Information Technology.-- ``(1) In general.--In carrying out any program under this Act or any other program of the Federal Housing Administration, the Secretary may utilize any amounts as may be made available for such programs to ensure that an appropriate level of investment in information technology is maintained in order for the Secretary to upgrade the technology systems of the Department used in carrying out the functions referred to in subsection (g)(1). 
``(2) Use of premium-generated income.--To the extent that income derived in any fiscal year from premium fees charged under section 203(c) is in excess of the level of income estimated for that such year for such premium fees and assumed in the baseline projection prepared by the Director of the Office of Management and Budget for inclusion in the President's annual budget request and subject to approval in advance in an appropriation Act, not more than $72,000,000 of such excess amounts may be used from such amounts for the purpose of carrying out this subsection. ``(i) Training and Education Program.-- ``(1) Establishment.--The Secretary of Housing and Urban Development shall carry out a comprehensive training and education program to improve the service provided by personnel of the Department carrying out functions referred to in subsection (g)(1) to users of the mortgage insurance programs under this Act and any other FHA mortgage insurance programs. ``(2) Topics.--The training and education program under this subsection shall-- ``(A) have as its primary goal improving the quality and consistency of responses provided by such personnel of the Department headquarters and other offices and centers of the Department regarding regulations, handbooks, mortgagee letters, and other guidance; and ``(B) be designed to-- ``(i) ensure that lenders participating in the FHA programs may rely on information provided by one office or center of the Department when doing business with a different office or center; and ``(ii) prevent such lenders from soliciting answers to the same question from different offices or centers of the Department in an attempt to obtain an answer that is satisfactory to the lender, by ensuring consistent responses from different offices and centers.''. SEC. 5. RISK MANAGEMENT IMPROVEMENTS. (a) Review of Delinquencies and Lender Monitoring.--Section 202 of the National Housing Act (12 U.S.C. 1708), as amended by the preceding provisions of this Act, is further amended by adding at the end the following new subsection: ``(j) Risk Management Improvement.-- ``(1) Review of delinquencies among recent originations.-- ``(A) In general.--The Secretary shall conduct an ongoing review of mortgages on single family housing originated during the preceding 12 months and insured pursuant to this Act under which the mortgagor has become 60 or more days delinquent with respect to payment under the mortgage during the first 90 days of the term of the mortgage to determine which mortgages should not have been originated or insured and the characteristics of such mortgages, and which lenders have relatively high incidences of such delinquent mortgages; ``(B) Reporting to congress.--Not later than 90 days after the date of enactment of the 21st Century FHA Housing Act of 2009, the Secretary shall make available to the Committee on Financial Services of the House of Representatives and the Committee on Banking, Housing, and Urban Affairs of the Senate any information and conclusions pursuant to the review required under subparagraph (A). ``(C) Sufficient resources.--There is authorized to be appropriated to the Secretary for each of fiscal years 2010 through 2014 the amount necessary to provide 90 additional full-time equivalent positions for the Department, or for entering into such contracts as are necessary, to conduct reviews in accordance with the requirements of this section. 
``(2) Lender monitoring.--In conducting monitoring and analysis of the performance of lenders for mortgages on single family housing insured under this Act, the Secretary shall utilize a one-year period for such monitoring and analysis, to promote earlier identification of problem lenders and allow earlier intervention and sanctions.''. (b) Analysis of Mortgage Performance.--Section 203(g)(2) of the Helping Families Save Their Homes Act of 2009 (12 U.S.C. 1708 note) is amended-- (1) in paragraph (1), by striking ``and'' at the end; (2) in paragraph (2)(B), by striking the period at the end and inserting ``; and''; and (3) by adding at the end the following new paragraph: ``(3) analyze the portion of mortgages randomly reviewed pursuant to subparagraph (B) on the basis of performance.''. SEC. 6. SENSE OF CONGRESS REGARDING ADEQUATE CAPITAL FLOW FOR MORTGAGE LOANS. (a) Congressional Findings.--The Congress finds that-- (1) warehouse lending, which provides short-term lines of credit to non-depository lenders for mortgage loans that are eventually sold into the secondary market to Fannie Mae, Freddie Mac and Ginnie Mae, is a critical link in the housing finance chain; (2) according to data obtained pursuant to the Home Mortgage Disclosure Act of 1975, nondepository lenders that utilize warehouse lines of credit account for as much as 40 percent of all residential mortgage loans in the United States, and nearly 55 percent of FHA loans, which are increasingly popular; (3) it is estimated that since 2006 warehouse lending capacity available to the mortgage lending industry has declined by nearly 90 percent to the current level of approximately $20 billion to $25 billion; (4) based upon projected 2009 lending volume, there could be a shortfall of hundreds of billions of dollars in home mortgage availability caused by a lack of warehouse lending capacity; and (5) unless Federal regulators promptly address the issue, borrowers seeking to take advantage of today's low interest rates will face rising costs and reduced credit access, which could undermine the housing market recovery. (b) Sense of the Congress.--It is the sense of the Congress that-- (1) the Secretary of the Treasury, the Secretary of Housing and Urban Development, and the Director of the Federal Housing Finance Agency should use their existing authorities under the Emergency Economic Stabilization Act of 2008, the Housing and Economic Recovery Act of 2008, and other statutory and regulatory authorities to provide financial support and assistance to facilitate increased warehouse credit capacity by qualified warehouse lenders; (2) such financial support and assistance should-- (A) be used only to expand the amount of credit or lending capacity made available to qualified mortgage lenders by qualified warehouse lenders for the purpose of funding residential mortgage loans; (B) be provided in such form and manner as such Secretaries or the Director, as applicable, consider appropriate, which might include direct loans, guarantees, credit enhancement, and other incentives; and (C) comply with other requirements established by such Secretaries or the Director, as applicable. (c) Definitions.--For purposes of this section, the following definitions shall apply: (1) Qualified mortgage lender.--The term ``qualified mortgage lender'' means an entity that-- (A) is engaged in the business of making mortgage loans for one- to four-family residences that are-- (i) insured under title II of the National Housing Act (12 U.S.C. 
1707 et seq.); (ii) guaranteed, insured, or made under chapter 37 of title 38, United States Code; (iii) made, guaranteed, or insured under title V of the Housing Act of 1949 (42 U.S.C. 1471 et seq.); or (iv) eligible for purchase by the Federal National Mortgage Association or the Federal Home Loan Mortgage Corporation; and (B) is not a depository institution. (2) Qualified warehouse lender.--The term ``qualified warehouse lender'' means an entity that extends credit to qualified mortgage lenders for the purpose of originating mortgage loans described in paragraph (1)(A), or that otherwise facilitates the origination of such loans by a qualified mortgage lender. SEC. 7. FORECLOSURE AVOIDANCE INITIATIVES. Section 230 of the National Housing Act (12 U.S.C. 1715u) is amended by inserting after subsection (d) the following new subsection: ``(e) Foreclosure Avoidance Demonstration Programs.--The Secretary may carry out such demonstration programs as the Secretary from time to time determines are appropriate to demonstrate the effectiveness of alternative methods of avoiding foreclosure on mortgages insured under this title, including methods involving short sales and deeds in lieu of foreclosure, and such methods may involve partial or full payment of insurance benefits to the mortgagee.''. Passed the House of Representatives September 15, 2009. Attest: LORRAINE C. MILLER, Clerk.
21st Century FHA Housing Act of 2009 - (Sec. 2) Amends the National Housing Act to declare that the Secretary of Housing and Urban Development (HUD) is not subject to the National Environmental Policy Act of 1969 when insuring any mortgage for a one-family unit in a multifamily project that holds an undivided interest in the common areas and facilities which serve the project (condominium). (Sec. 4) Authorizes the Secretary to: (1) appoint and fix the compensation of HUD personnel; and (2) use certain funds to maintain an appropriate level of investment in information technology in order to upgrade HUD technology systems used in carrying out personnel-related functions. Sets a cap upon the use of premium-generated income for such upgrades, subject to approval in advance in an appropriation Act. (Sec. 5) Requires the Secretary to: (1) establish a comprehensive training and education program to improve certain HUD services to users of Federal Housing Administration (FHA) mortgage insurance programs; (2) conduct an ongoing review of delinquencies among recent single family housing mortgage originations; and (3) make available to certain congressional committees any information and conclusions pursuant to such review of delinquencies. Amends the Helping Families Save Their Homes Act of 2009 to direct the Secretary to implement procedures that analyze mortgage performance during the mandatory random review of mortgagees on one- to four-family residences who potentially present a high risk to the Mutual Mortgage Insurance Fund. (Sec. 6) Expresses the sense of Congress that the Secretary of the Treasury, the Secretary of HUD, and the Director of the Federal Housing Finance Agency (FHFA) should use their statutory and regulatory authorities to provide financial assistance to facilitate increased warehouse credit capacity by qualified warehouse lenders. Urges that such assistance: (1) be used only to expand the amount of credit or lending capacity made available to qualified mortgage lenders by qualified warehouse lenders in order to fund residential mortgage loans; and (2) be provided in a manner which might include direct loans, guarantees, credit enhancement, and other incentives. (Sec. 7) Amends the National Housing Act to authorize the Secretary to implement alternative insured mortgage foreclosure avoidance demonstration programs, including methods involving short sales and deeds in lieu of foreclosure, and partial or full payment of insurance benefits to the mortgagee.
in this paper , we consider the problem of computing , for a given rectilinear angle sequence , a `` small '' rectilinear polygon that realizes the sequence . a _ rectilinear angle sequence _ @xmath2 is a sequence of left ( @xmath3 ) turns and right ( @xmath4 ) turns , that is , @xmath5 , where @xmath6 is the _ length _ of @xmath2 . as we consider only rectilinear angle sequences , we usually drop the term `` rectilinear . '' a polygon @xmath7 _ realizes _ an angle sequence @xmath2 if there is a counterclockwise ( _ ccw _ ) walk along the boundary of @xmath7 such that the turns at the vertices of @xmath7 , encountered during the walk , form the sequence @xmath2 . the turn at a vertex @xmath8 of @xmath7 is a left or right turn if the interior angle at @xmath8 is @xmath9 ( @xmath8 is convex ) or , respectively , @xmath10 ( @xmath8 is reflex ) . in order to measure the size of a polygon , we only consider polygons that lie on the integer grid . then , the _ area _ of a polygon @xmath7 corresponds to the number of grid cells that lie in the interior of @xmath7 . the _ bounding box _ of @xmath7 is the smallest axis - parallel enclosing rectangle of @xmath7 . the _ perimeter _ of @xmath7 is the sum of the lengths of the edges of @xmath7 . the task is , for a given angle sequence @xmath2 , to find a polygon that realizes @xmath2 and minimizes ( the area of ) its bounding box , its area , or its perimeter . figure [ fig : example ] shows that , in general , the three criteria cannot be minimized simultaneously . obviously , the angle sequence of a polygon is unique ( up to rotation ) , but the number of polygons that realize a given angle sequence is unbounded . the formula for the angle sum of a polygon implies that , in any angle sequence , @xmath11 , where @xmath12 is the number of right turns ; in other words , the number of right turns is exactly four less than the number of left turns . related work . bae et al . @xcite considered , for a given angle sequence @xmath2 , the polygon @xmath13 that realizes @xmath2 and minimizes its area . they studied the following question : given a number @xmath6 , find an angle sequence @xmath2 of length @xmath6 such that the area of @xmath13 is minimized ( and let @xmath14 be this minimum area ) , or maximized ( and let @xmath15 be this maximum area ) . they showed that @xmath16 if @xmath17 , @xmath18 otherwise , and @xmath19 for any @xmath20 . the result for @xmath15 tells us that any angle sequence @xmath2 of length @xmath6 can be realized by a polygon with area at most @xmath21 . several authors have explored the problem of realizing a turn sequence . culberson and rawlins @xcite and hartley @xcite described algorithms that , given a sequence of exterior angles summing to @xmath22 , construct a simple polygon realizing that angle sequence . culberson and rawlins ' algorithm , when constrained to @xmath23 angles , produces polygons with no collinear edges , implying that any @xmath6-vertex polygon can be drawn with area approximately @xmath24 . however , as bae et al . @xcite showed , the bound is not tight . in his phd thesis , sack @xcite introduced label sequences ( which are equivalent to turn sequences ) and , among other results , developed a grammar for label sequences that can be realized as simple rectilinear polygons .
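to make the basic definitions concrete , the following small sketch ( in python , not from the paper ) checks the angle - sum invariant of a sequence and computes the grid area of a realizing polygon from its vertex list via the shoelace formula :

```python
# Small sketch of the basic notions used above: the angle-sum invariant of a
# rectilinear angle sequence and the area of a realizing grid polygon.
def is_valid_angle_sequence(seq):
    """A rectilinear angle sequence has exactly four more left turns than
    right turns (angle-sum formula)."""
    return seq.count("L") - seq.count("R") == 4

def grid_area(vertices):
    """Shoelace formula; for a rectilinear polygon with integer vertices this
    equals the number of unit grid cells in its interior."""
    area2 = 0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1
    return abs(area2) // 2

# The unit square realizes the sequence "LLLL" and encloses one grid cell.
assert is_valid_angle_sequence("LLLL")
assert grid_area([(0, 0), (1, 0), (1, 1), (0, 1)]) == 1
```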
vijayan and wigderson @xcite considered the problem of efficiently embedding _ rectilinear graphs _ , of which rectilinear polygons are a special case , using an edge labeling that is equivalent to a turn sequence in the case of paths and cycles . in graph drawing , the standard approach to drawing a graph of maximum degree 4 orthogonally ( that is , with rectilinear edges ) is the topology - shape - metrics approach of tamassia @xcite : ( 1 ) compute a planar(ized ) embedding ; ( 2 ) compute an _ orthogonal representation _ , that is , an angle sequence for each edge and an angle for each vertex ; ( 3 ) _ compact _ the graph , that is , draw it inside a bounding box of minimum area . step ( 3 ) has been shown to be np - complete by patrignani @xcite . note that an orthogonal representation computed in step ( 2 ) is essentially an angle sequence for each face of the planarized embedding , so our problem corresponds to step ( 3 ) in the special case that the input graph is a simple cycle . another line of related work concerns the reconstruction of a simple ( non - rectilinear ) polygon from partial geometric information . disser et al . @xcite constructed a simple polygon in @xmath25 time from an ordered sequence of angles measured at the vertices visible from each vertex . the running time was improved to @xmath26 , which is worst - case optimal @xcite . biedl et al . @xcite considered polygon reconstruction from points ( instead of angles ) captured by laser scanning devices . our contribution . first , we show that finding a minimum polygon that realizes a given angle sequence is np - hard for any of the three measures : bounding box area , polygon area , and polygon perimeter ; see section [ sec : nph ] . this extends the result of patrignani @xcite and settles an open question that he posed . we also give efficient algorithms for special types of angle sequences , namely @xmath1- and @xmath0-_monotone sequences _ , which are realized by @xmath1-monotone and @xmath0-monotone polygons , respectively . ( for example , ` llrrllrllrlrllrlrllr ` is an @xmath0-monotone sequence ; see figure [ fig : example ] . ) our algorithms minimize area ( section [ sec : area - algo ] ) and perimeter ( section [ sec : peri - algo ] ) . for an overview of our results , see table [ tbl : summary ] . in this section we show the np - hardness of our problem for all three objectives : minimizing the perimeter of the polygon , the area of the polygon , and the size of the bounding box . we first consider the following special problem , from whose np - hardness we then derive the three desired results . fitupperright : given an angle sequence @xmath2 and positive integers @xmath27 and @xmath28 , is there a polygon realizing @xmath2 within an axis - parallel rectangle @xmath29 of width @xmath27 and height @xmath28 such that the first vertex of @xmath2 lies in the upper right corner of @xmath29 ? fitupperright is np - hard . our proof is by reduction from 3-partition : given a multiset @xmath30 of @xmath31 integers with @xmath32 , is there a partition of @xmath30 into @xmath33 subsets @xmath34 such that @xmath35 for each @xmath36 ? it is known that 3-partition is np - hard even if @xmath37 is polynomially bounded in @xmath6 and , for every @xmath38 , we have @xmath39 , which implies that each of the subsets @xmath34 must contain exactly three elements @xcite .
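for illustration , a tiny brute - force solver for 3-partition ( exponential , and of course not part of the reduction ) can be written as follows :

```python
# Toy brute-force solver for 3-partition, only to make the decision problem
# used in the reduction concrete; it is exponential and meant for tiny
# instances, not for actual use.
from itertools import combinations

def three_partition(numbers):
    """Return m triples, each summing to B = sum(numbers) / m, or None."""
    if not numbers or len(numbers) % 3:
        return None
    m = len(numbers) // 3
    total = sum(numbers)
    if total % m:
        return None
    target = total // m

    def solve(remaining):
        if not remaining:
            return []
        first, rest = remaining[0], remaining[1:]
        for i, j in combinations(range(len(rest)), 2):
            if first + rest[i] + rest[j] == target:
                triple = (first, rest[i], rest[j])
                leftover = [x for k, x in enumerate(rest) if k not in (i, j)]
                sub = solve(leftover)
                if sub is not None:
                    return [triple] + sub
        return None

    return solve(list(numbers))

# Toy example: {1,2,3,3,4,5} with m = 2 and B = 9 splits into (1,3,5), (2,3,4).
print(three_partition([1, 2, 3, 3, 4, 5]))
```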
for the idea of our reduction , see figure [ fig : hardness ] . for an instance @xmath40 of 3-partition , we construct an lr - sequence @xmath2 that can be drawn inside an @xmath41-box @xmath29 if and only if @xmath30 is a yes - instance . the sequence @xmath2 consists of a _ wall _ , and for each number @xmath42 , a _ snail _ , which in turn consists of a _ connector _ and a _ spiral_. the wall is a box ( @xmath43 ) whose top right corner corresponds to the start of @xmath2 . the connectors are attached to the left side of the wall by introducing two @xmath44-vertices . a connector is a thin @xmath0-monotone polygon going to the left that can change its @xmath45-position @xmath46 times . in detail , the lr - sequence @xmath2 is defined as follows where @xmath47 is the number of windings of the spirals : @xmath48 r22em we choose @xmath27 and @xmath28 such that the spirals have to be arranged in @xmath33 columns of three spirals each . note that for any order of the numbers in @xmath30 , we can route the connectors in a planar way such that the triplets of spirals that we desire end up in the same column . additionally , in each column there must be enough space for the at most @xmath49 connectors that go from the wall to spirals further left ; see figure [ fig : hardness ] . we set @xmath50 and @xmath51 . if all spirals are tightly wound , their bounding boxes need total area @xmath52 . the idea of our proof is to show that if a spiral is not tightly wound , we need too much space . the space that is not occupied by spirals is @xmath53 in any drawing inside @xmath29 . it is clear that our construction is polynomial . by construction , there is a polygon realizing @xmath2 that fits into @xmath29 if @xmath30 is a yes - instance of 3-partition . it remains to show that if @xmath2 fits into @xmath29 , then @xmath30 is a yes - instance of 3-partition . fix any feasible drawing of @xmath2 and a spiral @xmath54 . since the first vertex of @xmath2 has to lie in the upper right corner of @xmath29 , observe that the @xmath55th @xmath56 of @xmath54 has to lie in the interior of the bounding box of the first four @xmath56s of @xmath54 . inductively it follows that , for @xmath57 , the @xmath58th @xmath56 of @xmath54 lies in the interior of the bounding box of the last four @xmath56s of @xmath54 . hence , the drawing of @xmath59 lies in the bounding box of the last four @xmath56s of the @xmath60 sequence of @xmath54 . by repeating a similar argument for the @xmath44 vertices , we can observe that every @xmath61 edge in @xmath54 is lying opposite to a longer @xmath62-edge such that the bounding box spanned by both edges is interiorly empty and completely contained in the polygon . thus , we can move the @xmath61 edge towards the @xmath62 edge and assume that the bounding box has width @xmath63 . for the last @xmath64 @xmath61 edges in @xmath54 , we call the bounding box an _ arm_. hence , any drawing of a spiral consists of a drawing of the ladder and @xmath64 arms around it . we group the arms into four groups ; top , bottom , left , right , depending to which side of the ladder they are lying . recall that each arm is represented by a pair of @xmath62 and @xmath61-edges . we order the arms in each group from the outside to the inside , that is , by the order of their @xmath62 edges in @xmath2 , and define the _ level _ of an arm as its position in this ordering . we say that _ level @xmath36 is wound tightly _ if the distance of all arms of level @xmath36 to the arms of level @xmath65 is @xmath63 . 
[ obs : levels ] if the first outer @xmath36 levels are not wound tightly , then the spiral occupies @xmath66 more grid cells than in a tight winding . we consider only the length increase of the top arms . since the spiral is not wound tightly , the horizontal distance between two consecutive left arms of the first outer @xmath36 levels is at least two , one more than in a tightly wound spiral . the same is true for the right arms . hence , the length of the level-@xmath36 top arm increases at least by 2 , that of the level-@xmath67 top arm at least by 4 , and that of the level-1 top arm at least by @xmath68 ; see figure [ fig : spiral - size ] . summing up the increases yields @xmath66 . r18em now , consider any feasible drawing . recall that the space that is not occupied by spirals is @xmath53 hence , it follows by observation [ obs : levels ] that at most the first @xmath69 levels of any spiral are not wound tightly . we simplify the drawing by removing the wall , the connectors and the first @xmath70 levels of every spiral . we obtain a set of @xmath49 disjoint rectangles , one for each snail . the rectangle for snail @xmath36 is the bounding box of the inner @xmath71 levels of the snail s spiral , namely , those that must be wound tightly . rectangle @xmath36 has width @xmath72 and height @xmath73 . note that @xmath74 . if three rectangles share an @xmath0-coordinate , then the remaining height at this coordinate is at most @xmath75 hence , no four rectangles can be drawn at a common @xmath0-coordinate . further , if @xmath33 rectangles share a @xmath45-coordinate , then the remaining width at this coordinate is @xmath76 ; hence , no @xmath77 rectangles can be drawn at a common @xmath45-coordinate . these two facts combined imply an assignment of the rectangles to three rows of @xmath33 rectangles each . to see this , consider three rectangles lying above each other . then , since there is only @xmath78 free vertical space , any rectangle has to be intersected by at least one of the three horizontal lines at @xmath45-coordinates @xmath79 with @xmath80 . no rectangle can intersect two lines , otherwise at most two rectangles would fit vertically and the third rectangle could not be squeezed in anywhere else . analogously , we can assign the rectangles to one of the @xmath33 columns by intersecting them with @xmath33 vertical lines of distance @xmath81 . this assignment of rectangles to lines tells us the solution for the given instance of 3-partition : for @xmath82 , we put into the set @xmath83 the numbers @xmath84 represented by the three rectangles in column @xmath36 . to complete our proof , we claim that @xmath85 . in order to see the claim , note that the @xmath70 removed levels of each spiral have to be wound completely around the corresponding rectangle . thus , they also intersect the vertical line that goes through the rectangles in column @xmath36 . therefore , the height at this @xmath0-coordinate is at least @xmath86 . the height and , hence , this expression is upperbounded by @xmath87 since we assumed that the drawing fits into @xmath29 . this yields @xmath88 . exploiting that the @xmath89 s are integers shows that our above claim holds . @xmath90 in order to show the np - hardness of our three objectives , we adjust the above proof by attaching a very long spiral ( with @xmath91 , say @xmath92 , windings ) to the wall such that it wraps around our construction above . let @xmath93 be the resulting lr - sequence . 
we will provide an upper bound for the objective value of @xmath93 that holds if and only if the corresponding lr - sequence @xmath2 is a yes - instance of 3-partition . for this , we will use that any realization of @xmath2 that is a no - instance causes the very long spiral to stretch by at least one unit horizontally or vertically , which makes the value of the objective increase above the mentioned upper bound . in more detail , we construct the angle sequence @xmath93 as follows ( see figure [ fig : long - spiral ] ) : we tightly draw a spiral around a rectangle of size @xmath94 with @xmath91 windings . by adding the ladder @xmath95 to the innermost horizontal arm and the ladder @xmath96 to the innermost vertical arm of the spiral , we ensure that in any tight drawing with the two ladders being in the inside , the spiral goes around a rectangle of size exactly @xmath97 . further , we add the ladder @xmath98 to the outermost horizontal and the ladder @xmath99 to the outermost vertical arm of the spiral . finally , we add @xmath2 to the spiral by using the appropriate one of the inner - most arms of the spiral as the wall of @xmath2 . note that as long as @xmath2 fits into a bounding box of size @xmath100 it does not stretch the spiral around it . hence , if and only if @xmath2 is a yes - instance , we can draw @xmath2 inside the spiral without stretching the spiral . r12em consider any one of the two objectives : minimizing the inner area and minimizing the perimeter . observe that in any drawing of @xmath2 that fits inside the @xmath101-box , the value of the objective is bounded by @xmath102 . let @xmath103 be the value of the objective of the spiral and its ladders when drawn tightly around a rectangle of size @xmath104 . then @xmath105 is an upper bound of the value of the objective of @xmath93 in the case that @xmath2 is a yes - instance . now assume that @xmath2 is a no - instance . if the spiral is not winding around @xmath2 , that is , if the bounding box of the first three arms of the spiral ( starting with the arms with the attached @xmath106-ladders ) does not contain @xmath2 , then the other arms of the spiral have to be drawn outside the bounding box of the two arms . hence , this increases the total length of the other arms by at least @xmath91 , thus leading to a value of the objective greater than @xmath107 . if the spiral is winding around @xmath2 , then , given that @xmath2 is a no - instance , we have to stretch the spiral as argued above . stretching the spiral by one unit in any direction , say in the horizontal direction , causes all @xmath91 many horizontal arms to increase by at least one unit . hence , the value of the objective is at least @xmath107 . the case of minimizing the bounding box is simpler : let @xmath108 be the size of the bounding box when the spiral and its ladders are drawn tightly around a rectangle of size @xmath97 . we claim that @xmath93 can be drawn inside an @xmath109-bounding box if and only if @xmath2 is a yes - instance . if @xmath2 is not drawn inside the spiral , then the ladders @xmath110 lie on the innermost arms of the spiral and the claim follows immediately . if @xmath2 is drawn inside the spiral , we recall that @xmath2 stretches the spiral ( and thus the bounding box of @xmath93 ) if and only if it is a no - instance . this concludes the proof . in this section , we show how to compute , for a monotone angle sequence , a polygon of minimum area . 
we start with the simple @xmath1-monotone case and then consider the more general @xmath0-monotone case . an @xmath1-monotone polygon has four _ extreme edges _ : its leftmost and rightmost vertical edges , and its topmost and bottommost horizontal edges . two consecutive extreme edges are connected by a ( possibly empty ) @xmath1-monotone chain that we will call a _ stair _ . starting at the top extreme edge , we denote the four stairs in counterclockwise order @xmath111 , @xmath112 , @xmath113 , and @xmath114 ; see figure [ fig : xymonexampleb ] . we say that an angle sequence consists of @xmath115 nonempty _ stair sequences _ if any @xmath1-monotone polygon that realizes it consists of @xmath115 nonempty stairs ; we also call it a _ @xmath115-stair sequence _ . the extreme edges correspond to the ( exactly four ) @xmath62-sequences in an @xmath1-monotone angle sequence and are unique up to rotation . any @xmath1-monotone angle sequence is of the form @xmath116 ^ 4 $ ] , where the single @xmath56 describes the turn before an extreme edge and @xmath117 describes a stair sequence . w.l.o.g . , we assume that an @xmath1-monotone sequence always begins with @xmath62 and that we always draw the first @xmath62 as the topmost edge ( the top extreme edge ) . then , we can use @xmath111 , @xmath112 , @xmath113 , @xmath114 also for the corresponding stair sequences , namely the first , second , third , and fourth @xmath117 subsequence after the first @xmath62 in cyclic order . let @xmath93 be the concatenation of @xmath111 , the top extreme edge , and @xmath114 ; let @xmath118 , @xmath37 , and @xmath29 be defined analogously following figure [ fig : xymonexampleb ] . for a chain @xmath119 , let the _ @xmath120 _ @xmath121 be the number of reflex vertices on @xmath119 . [ figure fig : xymonexampleb : ( a ) notation : the four stairs @xmath122 , @xmath123 , @xmath124 , and @xmath125 of an @xmath1-monotone polygon ; the sequences @xmath93 , @xmath29 , @xmath37 , and @xmath118 are unions of neighboring stairs . ( b ) & ( c ) two possible optimum configurations of the polygon . ] [ thm : xyarea ] given an @xmath1-monotone angle sequence @xmath2 of length @xmath6 , we can find a polygon @xmath7 that realizes @xmath2 and minimizes ( i ) its bounding box or ( ii ) its area in @xmath126 time , and in constant time we can find the optimum value of the objective if the @xmath120s of the stair sequences are given . part ( i ) of theorem [ thm : xyarea ] follows from the following observation : the bounding box of every polygon that realizes @xmath2 has width at least @xmath127 and height at least @xmath128 . by drawing three stairs with edges of unit length , we can meet these lower bounds . for part ( ii ) , we first consider angle sequences with at most two nonempty stairs . here , the only non - trivial case is when the angle sequence consists of two opposite stair sequences , that is , @xmath111 and @xmath113 , or @xmath112 and @xmath114 .
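the stair sequences and their @xmath120s can be read off an @xmath1-monotone angle sequence directly . the following sketch ( not from the paper ) assumes that an extreme edge is exactly an edge whose two endpoints are both convex , i.e. a cyclically adjacent pair ` LL ` , which is consistent with the description above :

```python
# Sketch of reading off the four stair sequences of an xy-monotone angle
# sequence, assuming an extreme edge is an edge whose two endpoints are both
# convex (a cyclic "LL" pair) and that convex/reflex vertices alternate
# within a stair.
def split_into_stairs(seq):
    """Return the four stair subsequences between consecutive extreme edges,
    starting from the first extreme edge found."""
    n = len(seq)
    # indices i such that the edge between vertices i and i+1 is extreme
    extremes = [i for i in range(n)
                if seq[i] == "L" and seq[(i + 1) % n] == "L"]
    if len(extremes) != 4:
        raise ValueError("sequence does not have exactly four extreme edges")
    stairs = []
    for a, b in zip(extremes, extremes[1:] + [extremes[0] + n]):
        # stair vertices lie strictly between the two extreme edges
        stairs.append("".join(seq[j % n] for j in range(a + 2, b)))
    return stairs

def reflexity(stair):
    """Number of reflex vertices (right turns) on the stair."""
    return stair.count("R")

# Example: the L-shaped polygon "LLLRLL" has one reflex vertex on one stair.
stairs = split_into_stairs("LLLRLL")
print(stairs, [reflexity(s) for s in stairs])   # ['', 'R', '', ''] [0, 1, 0, 0]
```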
w.l.o.g . , consider the second case . [ lem : two - stairs ] let @xmath2 be an @xmath1-monotone angle sequence of length @xmath6 consisting of two nonempty opposite stair sequences @xmath112 and @xmath114 . we can find a minimum - area polygon that realizes @xmath2 in @xmath126 time . if @xmath129 and @xmath130 are given , we can compute the area of such a polygon in @xmath131 time . fix a minimum area polygon @xmath7 that realizes @xmath2 . let @xmath132 and @xmath133 . assume ( by rotation if necessary ) that @xmath134 . in the following , we consider the bottom and left extreme edge to be part of @xmath112 and the top and right extreme edge to be part of @xmath114 . since @xmath7 is of minimum area , we may assume that all horizontal segments of @xmath112 are of unit length . otherwise , consider the leftmost horizontal segment @xmath135 of @xmath112 longer than @xmath63 . if any horizontal segment of @xmath114 above it is longer than @xmath63 , then we may contract both by one unit and decrease the area of @xmath7 without causing @xmath112 and @xmath114 to intersect ; see figure [ fig : bl - unit-1 ] . if all such segments are of unit length then , since @xmath134 , some horizontal segment of @xmath114 must be longer than @xmath63 and have a unit - length horizontal segment of @xmath112 below it ; see figure [ fig : bl - unit-2 ] . take the leftmost pair and contract both by one unit , decreasing the area of @xmath7 by at least @xmath63 but removing one reflex vertex from @xmath112 . add this reflex vertex back to @xmath112 by shifting the unit - length horizontal segments of @xmath112 between its last vertical segment of length at least @xmath136 before @xmath135 and the first unit - length piece of @xmath135 up by one unit ; see figure [ fig : bl - unit-3 ] . this also decreases the area of @xmath7 and does not cause @xmath112 and @xmath114 to intersect . ( note : such a vertical length segment must exist and no intersections are created because either @xmath114 consists of unit - length horizontal segments up to the @xmath0-coordinate of the right end of @xmath135 or such a segment has been created in the contraction step . ) let @xmath137 denote the @xmath36-th horizontal segment in @xmath114 ( including the top extreme edge ) . the length @xmath138 of @xmath137 is also the number of horizontal @xmath112-segments ( including the bottom extreme edge ) lying under @xmath137 . we have @xmath139 . let @xmath140 denote the area under @xmath137 in @xmath7 . since the left extreme edge in @xmath7 has length at least @xmath63 , the area in @xmath7 under @xmath141 is @xmath142 . for @xmath143 , @xmath144 . we can overcome the difference between @xmath145 and @xmath143 , by splitting @xmath141 into @xmath146 and @xmath147 , such that @xmath148 and @xmath149 . let @xmath150 for all other @xmath36 . observe that now @xmath151 . thus , @xmath152 which is minimized if @xmath153 is minimal . by cauchy - schwarz we know that this is the case if all @xmath154 are equal to the arithmetic mean ; since we have to use integers , the convexity of the function tells us that all @xmath154 have to be as close to the arithmetic mean as possible , that is , @xmath155 . hence , @xmath156 where @xmath157 is the quotient and @xmath12 is the remainder when @xmath158 is divided by @xmath159 . this lower bound can be achieved provided @xmath160 . 
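to make the balancing step above concrete , the following python sketch ( our illustration , not code from the paper ; the function names and the brute - force check are our own ) confirms that splitting a given total of unit columns as evenly as possible over the available horizontal segments minimizes the sum of squared segment lengths , which is exactly the quantity that the convexity / cauchy - schwarz argument controls .

```python
# A minimal sketch (not from the paper) of the balancing step in the
# two-stair lemma: a total of n unit-width columns is distributed over k
# horizontal segments so that the sum of squared segment lengths is minimized.
# The function names and the brute-force check are our own.
from itertools import combinations

def balanced_split(n, k):
    """Split n into k positive integers that are as equal as possible."""
    q, r = divmod(n, k)
    return [q + 1] * r + [q] * (k - r)

def min_sum_of_squares(n, k):
    """Brute-force minimum of sum(x_i^2) over all positive integer splits of n."""
    best = None
    for cuts in combinations(range(1, n), k - 1):   # compositions via cut points
        parts = [b - a for a, b in zip((0,) + cuts, cuts + (n,))]
        s = sum(p * p for p in parts)
        best = s if best is None else min(best, s)
    return best

if __name__ == "__main__":
    n, k = 11, 4
    split = balanced_split(n, k)
    assert sum(split) == n and len(split) == k
    # the balanced split attains the brute-force optimum, as the convexity /
    # Cauchy-Schwarz argument in the text predicts
    assert sum(x * x for x in split) == min_sum_of_squares(n, k)
    print(split, sum(x * x for x in split))
```

the assertion compares the balanced split against an exhaustive search over all integer compositions for a small example .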
if @xmath161 , one can achieve only @xmath162 , which is @xmath63 more than the bound , since the left extreme edge has length @xmath136 ( not @xmath63 ) if all horizontal edges are unit length . the proof of lemma [ lem : two - stairs ] leads to the following observation . [ obs : two - stairs ] in any polygon @xmath7 of minimum area consisting of two nonempty opposite stairs @xmath112 and @xmath114 with @xmath163 , @xmath112 consists of only unit - length segments and @xmath114 only of segments of lengths @xmath164 and @xmath165 ( in any order ) . we now consider the case of four nonempty stairs . ( the case of three nonempty stairs can be solved analogously . ) an @xmath1-monotone polygon @xmath7 with four nonempty stairs @xmath111 , @xmath114 , @xmath112 , and @xmath113 is _ canonical _ if [ xycanon : adj ] @xmath7 has two non - adjacent nonempty stairs , say @xmath111 and @xmath113 , such that the bounding box @xmath166 of @xmath111 and its adjacent extreme edges and the bounding box @xmath167 of @xmath113 and its adjacent extreme edges intersect in at most one point , and [ xycanon : corner ] the bottom - right corner of @xmath166 as well as the top - left corner of @xmath167 coincides with an endpoint of @xmath114 or @xmath112 . [ lem : xycanonical ] for every 4-stair sequence @xmath2 with @xmath168 , there exists a polygon of minimum area realizing @xmath2 that is canonical . consider an optimum polygon realizing angle sequence @xmath2 . assume it is not canonical . observe that all four extreme edges are of length @xmath63 , otherwise the polygon is not optimum . first , suppose that the canonical property [ xycanon : adj ] does not hold . then for any pair of two opposite stairs , the bounding boxes of their adjacent extreme edges intersect in more than one point . hence , the ( closed ) @xmath0-ranges of the horizontal extreme edges intersect and the ( closed ) @xmath45-ranges of the vertical extreme edges intersect . since the extreme edges have length @xmath63 , and the bounding boxes intersect in more than one point , we even have that either the ( closed ) @xmath0-ranges of the top and bottom extreme edges are the same , or the ( closed ) @xmath45-ranges of the left and right extreme edges are the same . suppose ( by rotation if necessary ) it is the latter and also suppose ( by temporary vertical or horizontal reflection and reflecting it back afterwards ) that stair @xmath114 has @xmath120 greater than @xmath169 ( since @xmath170 this is possible ) . let @xmath171 be the left endpoint of the bottom extreme edge and let @xmath8 be the reflex vertex that precedes ( in ccw order ) the top extreme edge ; see figure [ fig : make - canonical-1 ] . we shift the boundary of @xmath7 that lies on the ccw walk from @xmath171 to @xmath8 down by two units , stretching the vertical edges adjacent to @xmath171 and @xmath8 . the new polygon @xmath172 still realizes the angle sequence and its area is larger by two units than the area of @xmath7 . however , now @xmath166 and @xmath167 are intersection - free . let @xmath81 be the reflex vertex that follows ( in ccw order ) the right extreme edge and let @xmath173 be the bottom endpoint of the left extreme edge ; see figure [ fig : make - canonical-2 ] . we shift the boundary of @xmath172 that lies on the ccw walk from @xmath81 to @xmath173 to the left by three units , stretching the horizontal edges adjacent to @xmath81 and @xmath173 . 
the new polygon still realizes the angle sequence and is still simple : the only crossings that can occur by this operation are between @xmath123 and @xmath125 . the left extreme edge lies at most three rows above the right extreme edge @xmath174 ; hence , any crossing must involve the vertical edge @xmath175 of @xmath123 in the row above @xmath174 or the vertical edge @xmath176 of @xmath123 two rows above @xmath174 . since @xmath177 , we have that ( after the shift ) @xmath178 . since each vertical edge of @xmath125 has @xmath0-coordinate at most @xmath179 , there can be no crossing ; see figure [ fig : make - canonical-3 ] . however , now the area of the polygon decreased by three units ; a contradiction to the fact that @xmath180 is optimum . hence , the canonical property [ xycanon : adj ] has to hold . now , assume that there is a bounding box pair having at most one point in common , w.l.o.g . @xmath166 and @xmath167 . since the optimum polygon is not canonical , the canonical property [ xycanon : corner ] has to be violated . hence , for at least one of the two bounding boxes , say @xmath166 , neither an endpoint of @xmath114 nor an endpoint of @xmath112 lies on a corner of @xmath166 , that is , their endpoints lie on two different edges of @xmath166 , and the distance from their endpoints to the closest corner of @xmath166 is at least 1 . then , for at least one of the two edges , it holds that the line going through the edge does not cross the interior of @xmath167 . w.l.o.g . , this holds for the line @xmath181 that goes through the horizontal edge of @xmath166 . then , we can also observe that @xmath181 does not cross any vertical line segment of @xmath114 ; instead , there is a horizontal line segment of @xmath114 lying on @xmath181 . to see this , assume the contrary . then , there exists a vertical line segment @xmath8 of @xmath114 that is cut by @xmath181 ; see figure [ fig : make - c2 - 1 ] . thus , the two endpoints of @xmath8 lie at least one unit above and below @xmath181 , respectively . consider the horizontal line segment of @xmath114 starting at the top endpoint of @xmath8 . we can move the horizontal segment downwards and place it on @xmath181 . by this , the angle sequence does not change and the polygon remains simple as all line segments of @xmath112 , the only segments that might cross @xmath114 after this operation , lie below @xmath181 by at least one unit . hence , by moving the horizontal edge downwards , we in fact shrink the polygon ; a contradiction to its optimality . thus , @xmath181 contains a horizontal line segment of @xmath114 . now , we cut the polygon through @xmath181 into two parts ; see figure [ fig : make - c2 - 2 ] . then , we shift the upper part to the left until @xmath112 intersects the bottom right corner of @xmath166 . the resulting polygon realizes the same angle sequence as before and has the same area as before ; see figure [ fig : make - c2 - 3 ] . however , now @xmath112 intersects a corner of @xmath166 . if the polygon is not yet canonical , then we repeat the procedure with @xmath167 and get a canonical optimum polygon . hence , the canonical property [ xycanon : corner ] holds . consider the line segment of @xmath114 and the line segment of @xmath112 that connect to @xmath166 in a canonical polygon . the two line segments are ( a ) both horizontal , ( b ) both vertical , or ( c ) perpendicular to each other .
consequently , there is only a constant number of ways in which the stairs outside the two bounding boxes are connected to them . ( the number of combinations is further limited as both case ( a ) and case ( b ) can appear only once . ) consider a ( canonical ) optimum polygon . we cut the polygon along the edge of @xmath166 to which @xmath112 and @xmath114 are connected . we also cut along the respective edge of @xmath167 . we get three polygons . the polygons on the outside realize the 1-stair sequence defined by @xmath111 and @xmath113 ( including the extreme edges ) , respectively , whereas the middle polygon realizes the 2-stair sequence defined by the concatenation of @xmath112 , @xmath114 , and the edge segments of @xmath166 and @xmath167 that connect them . this observation leads to the following algorithm : for @xmath182 , we find a solution in constant time by exhaustive search . for larger @xmath183 , we guess the partition of the extreme edges whose bounding boxes do not intersect in the ( canonical ) optimum polygon that we want to compute . w.l.o.g . , we guessed @xmath166 and @xmath167 ( the other case is symmetric ) . then , we guess how @xmath114 and @xmath112 , the two stairs outside @xmath166 and @xmath167 , are connected to each of the two bounding boxes ( see ( a)(c ) ) . this gives us two 1-stair instances and a 2-stair instance . we solve the instances independently and then put the solutions together to form a solution to the whole instance . by lemma [ lem : two - stairs ] and observation [ obs : two - stairs ] , we solve the middle instance such that the left extreme edge of our solution is of minimum length , and , if possible , also the top extreme edge . in detail , we put them together as follows . let @xmath184 denote our solution to the instance corresponding to @xmath166 and let @xmath185 denote our solution to the middle instance ; see figure [ fig : xy - decomposition-1 ] . if we guessed case ( a ) for @xmath166 , then we put @xmath184 and @xmath185 together along their corresponding vertical extreme edges . if the right extreme edge of @xmath184 is too short , we make it sufficiently longer by lifting the top extreme edge of @xmath184 up . case ( b ) works symmetrically . if we guessed ( c ) for @xmath166 , note that either the left or top extreme edge of @xmath185 has length at least @xmath136 . we put @xmath184 and @xmath185 along this extreme edge and the corresponding extreme edge of @xmath184 ; see figure [ fig : xy - decomposition-2 ] . we repeat the same process with @xmath185 and our solution @xmath186 to the instance corresponding to @xmath167 . however , we proceed differently if the following holds : ( a ) we guessed case ( a ) or ( b ) for @xmath167 and the respective extreme edge of @xmath186 is too short for the corresponding edge @xmath187 of @xmath185 , ( b ) we guessed ( c ) for @xmath166 or the respective extreme edge of @xmath184 is longer by at least @xmath136 than the corresponding extreme edge @xmath188 of @xmath185 , and ( c ) @xmath189 . in this case , by observation [ obs : two - stairs ] , we solve @xmath185 again such that @xmath187 is of minimum length . then we proceed as before . note that @xmath184 and @xmath185 remain feasibly connected . let @xmath190 . according to observation [ obs : two - stairs ] , in @xmath185 , all horizontal segments in @xmath112 are of unit length and all horizontal segments in @xmath114 are of length @xmath191 or @xmath192 . 
all in all , we get a canonical polygon which realizes the given angle sequence . it follows immediately that the polygon has minimum area if we did not prolong any extreme edges in cases ( a ) or ( b ) . now , assume that we had to prolong an extreme edge of @xmath184 . w.l.o.g . , we prolonged the bottom extreme edge of @xmath184 in case ( b ) . instead of prolonging the edge , we could have cut the polygon horizontally through the top endpoint of the left extreme edge ( instead of through the bottom endpoint ) and solve the two resulting instances , a 1- and a 2-stair instance , independently ; see figure [ fig : xy - decomposition-3 ] . observe that if we cut our combined solution in the same way , we get optimum solutions to those two instances . let us consider our solution @xmath193 to the 2-stair instance . we increased a ( minimum - length ) top step of @xmath114 by one and at the same time increased the number @xmath158 of reflex vertices of @xmath112 by one . for @xmath112 ( in @xmath193 ) , all steps are still of unit length , and for @xmath114 ( in @xmath193 ) , only steps of lengths @xmath194 and @xmath195 appear . now , assume that we ( also ) prolonged an extreme edge of @xmath186 . w.l.o.g . , we are in case ( a ) . consider the situation after a possible recomputation of @xmath185 . if the rightmost horizontal edge @xmath181 on the top side of @xmath185 is of minimum length , then we can apply the same argument as before . otherwise , we increase the length of any horizontal minimum - length edge on the top side of @xmath185 by one and reduce the length of @xmath181 by one ; by observation [ obs : two - stairs ] , this remains a minimum - length solution . thus , we computed a polygon of minimum area . the running time is linear in @xmath6 since our algorithm computes only constantly many 1-stair and 2-stair instances which are themselves solvable in linear time . given the number of steps for the four stairs , we can even compute the minimum area in constant time since this is true for instances with two or less stairs . for the @xmath0-monotone case , we first give an algorithm that minimizes the bounding box of the polygon , and then an algorithm that minimizes the area . an @xmath0-monotone polygon consists of two _ vertical extreme _ edges , i.e. , the leftmost and the rightmost vertical edge , and at least two _ horizontal extreme _ edges , which are defined to be the horizontal edges of locally maximum or minimum height . the vertical extreme edges divide the polygon into an upper and a lower hull , each of which consists of @xmath1-monotone chains that are connected by the horizontal extreme edges . we call a horizontal extreme edge of type @xmath61 an _ inner extreme edge _ , and a horizontal extreme edge of type @xmath62 an _ outer extreme edge _ ; see figure [ fig : bbarea - canonical1 ] . similar to the @xmath1-monotone case , we consider a _ stair _ to be an @xmath1-monotone chain between any two consecutive extreme edges ( outer and inner extreme edges as well as vertical extreme edges ) and we denote by _ stair sequence _ the corresponding angle subsequence @xmath117 . w.l.o.g . , at least one inner extreme edge exists , otherwise the polygon is @xmath1-monotone and we refer to section [ subsec : xymonarea ] . given an @xmath0-monotone sequence , we always draw the first @xmath61-subsequence as the leftmost inner extreme edge of the lower hull . by this , the correspondence between the angle subsequences and the stairs and extreme edges is unique . ) . 
( figure : note that the illustrating drawing is not optimal . ) an @xmath0-monotone polygon is _ canonical _ if [ xcan : outer ] all outer extreme edges are lying on the border of the bounding box , [ xcan : vert ] each vertical non - extreme edge that is not incident to an inner extreme edge has length @xmath63 , and [ xcan : hor ] each horizontal edge that is not an outer extreme edge has length @xmath63 . the following lemma states that it suffices to find a canonical @xmath0-monotone polygon of minimum bounding box ; see figure [ fig : bbarea - canonicals ] for an illustration . [ lem : transform ] any @xmath0-monotone polygon can be transformed into a canonical @xmath0-monotone polygon without increasing the area of its bounding box . let @xmath7 be an @xmath0-monotone polygon . we transform it into a canonical polygon in two steps . first , we move all horizontal edges on the upper hull as far up as possible and all horizontal edges on the lower hull as far down as possible ; see figures [ fig : bbarea - canonical1 ] . this establishes condition [ xcan : outer ] . furthermore , assume that there is a vertical edge @xmath196 on the upper hull with @xmath197 . if the ( unique ) horizontal edge @xmath198 is not an inner extreme edge , then it can be moved upwards until @xmath199 , which contradicts the assumption that all horizontal edges on the upper hull are moved as far up as possible . this argument applies symmetrically to the edges on the lower hull . hence , condition [ xcan : vert ] is established . second , we move all vertical edges on a stair as far as possible in the direction of the inner extreme edge bounding the stair , e.g. , if the stair lies on the upper hull and is directed downwards , then all vertical edges are moved as far right as possible ; see figures [ fig : bbarea - canonical2 ] . this stretches the outer extreme edges while simultaneously contracting all other horizontal edges to length 1 , which satisfies condition [ xcan : hor ] . note that in neither step does the bounding box change . since all conditions are satisfied , the resulting polygon is canonical . we observe that the length of the vertical extreme edges depends on the height of the bounding box , while the length of all other vertical edges is fixed by the angle sequence . thus , a canonical @xmath0-monotone polygon is fully described by the height of its bounding box and the length of its outer extreme edges . furthermore , the @xmath45-coordinate of each vertex depends solely on the height of the bounding box . we use a dynamic program that constructs a canonical polygon of minimum bounding box in time @xmath200 . for each possible height @xmath201 of the bounding box , the dynamic program populates a table that contains an entry for any pair of an extreme vertex @xmath180 ( that is , an endpoint of an outer extreme edge ) and a horizontal edge @xmath187 of the opposite hull . the value of the entry @xmath202 $ ] is the minimum width @xmath81 such that the part of the polygon left of @xmath180 can be drawn in a bounding box of height @xmath201 and width @xmath81 in such a way that the edge @xmath187 is intersecting the interior of the grid column left of @xmath180 .
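as a reading aid , the following python sketch ( our own schematic reconstruction , not the authors' implementation ) mirrors only the shape of this dynamic program : an outer loop over candidate heights , a table indexed by pairs of an extreme vertex and an opposite - hull edge , and constant - size transitions . the callbacks extreme_vertices , opposite_edges , predecessor_pairs and closing_candidates are hypothetical placeholders for the geometric tests detailed in the proof that follows .

```python
# Schematic sketch only (not the authors' implementation): the shape of the
# bounding-box dynamic program described above.  The geometric subroutines
# (enumerating extreme vertices, opposite-hull edges, the O(1) feasible
# predecessor pairs with their horizontal offsets, and the closing candidates)
# are hypothetical callbacks; only the table structure and the recurrence
# T[p, e] = T[p', e'] + w' are mirrored here.
import math

def min_bbox_area(n, extreme_vertices, opposite_edges, predecessor_pairs,
                  closing_candidates):
    """Minimum bounding-box area over all candidate heights 1..n."""
    best = math.inf
    for h in range(1, n + 1):                 # candidate height of the box
        T = {}                                # T[(p, e)] = minimum width
        for p in extreme_vertices(h):         # assumed left-to-right order
            for e in opposite_edges(p, h):
                width = math.inf
                # each candidate is (previous key or None, horizontal offset w')
                for prev, w_prime in predecessor_pairs(p, e, h):
                    base = 0 if prev is None else T.get(prev, math.inf)
                    width = min(width, base + w_prime)
                T[(p, e)] = width
        # close the polygon at a rightmost extreme vertex p* with its edge e*
        for key, r_star in closing_candidates(h):
            if key in T:
                best = min(best, h * (T[key] + r_star))
    return best
```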
[ thm : xbbox ] given an @xmath0-monotone angle sequence @xmath2 of length @xmath6 , we can find a polygon @xmath7 that realizes @xmath2 and minimizes the area of its bounding box in @xmath200 time . to prove the theorem , we present an algorithm that constructs a canonical polygon of minimum bounding box in time @xmath200 . the height of any minimum bounding box is at most @xmath6 ; otherwise , as there are only @xmath6 vertices , there is a @xmath45-coordinate on the grid that contains no vertex and can be `` removed '' . for any height @xmath201 of the @xmath6 possible heights of an optimum polygon , we run the following dynamic program in @xmath26 time . we call the left and right endpoint of an outer extreme edge the _ left extreme vertex _ and the _ right extreme vertex _ , respectively . the dynamic program contains an entry for any pair of an extreme vertex @xmath180 and a horizontal edge @xmath187 of the opposite hull . consider the part of the polygon between @xmath180 and @xmath187 that includes the left vertical extreme edge , that is , the chain that goes from @xmath180 to @xmath187 over the left vertical extreme edge . the value of the entry @xmath202 $ ] is the minimum width @xmath81 of a bounding box of height @xmath201 in which this part of the polygon can be drawn in such a way that edge @xmath187 is intersecting the interior of the grid column left of @xmath180 and such that @xmath187 has the same @xmath45-coordinate as it has in a canonical drawing of the whole polygon in a bounding box of height @xmath201 ; see figure [ fig : bbarea - dp ] . we call @xmath203 an _ extreme column pair_. ( figure [ fig : bbarea - dp ] : @xmath203 and @xmath204 with @xmath202 = t[p',e ' ] + w ' = w ; the part of the polygon left of @xmath180 can be drawn in the bounding box of size @xmath205 . ) we compute @xmath202 $ ] as follows . consider a drawing of the part of the polygon between @xmath180 and @xmath187 that includes the left vertical extreme edge in a bounding box of height @xmath201 and minimum width . let @xmath206 be the rightmost extreme vertex in this drawing to the left of @xmath180 , let @xmath204 be the corresponding extreme column pair , and let @xmath207 be the horizontal distance between @xmath180 and @xmath206 ; see figure [ fig : bbarea - dp ] . we can find @xmath204 and @xmath207 from the angle sequence as follows . if @xmath180 is a left extreme vertex , then , by condition [ xcan : hor ] , the pair @xmath204 and the distance @xmath207 are fully determined . otherwise , if @xmath180 is a right extreme vertex , then @xmath206 is either the left extreme vertex incident to @xmath180 , or @xmath206 is the horizontally closest extreme vertex on the opposite hull ; we test both cases . again , by condition [ xcan : hor ] , edge @xmath208 and distance @xmath207 are fully determined . when determining @xmath204 and @xmath207 , we also test , as we will describe in the next paragraph , whether we can canonically draw the part of the polygon between @xmath204 and @xmath203 in the given space constraints . if we can , then we call @xmath204 a feasible pair for @xmath203 . we find a feasible pair @xmath204 for @xmath203 with the smallest value of @xmath209 + w'$ ] and set @xmath210 = t[p',e ' ] + w ' . if all pairs for @xmath203 are infeasible , we set @xmath202=\infty$ ] . first , we will argue that if there is such a canonical drawing , then it is unique . we assume that @xmath209<\infty$ ] . we group each pair of stairs that share an inner extreme edge as a _ double stair _ ; see figure [ fig : bbarea - canonical3 ] .
each remaining stair forms a double stair by itself . let @xmath211 denote the part of the upper hull between @xmath204 and @xmath203 . given the choice of @xmath206 , it does not contain any endpoint of an outer extreme edge in its interior . hence , there are only two cases . either @xmath211 consists of a single horizontal line segment belonging to an outer extreme edge , or it is a subchain belonging to a double stair . in the first case , by condition [ xcan : outer ] , we have to draw @xmath211 on the top boundary of the bounding box . further , its left endpoint has @xmath0-coordinate equal to @xmath209 $ ] and the length of the segment is @xmath207 . hence , the drawing is unique . in the second case , note that conditions [ xcan : outer][xcan : hor ] determine the lengths and @xmath45-positions of all edges with exception of the lengths of the outer extreme edges . thus , given the @xmath0-position of any vertex of a double stair , there is only one canonical way to draw the double stair . in our case , the value of @xmath209 $ ] is equal to the @xmath0-position of the leftmost vertex of @xmath211 . hence , the drawing of @xmath211 is unique . by the same arguments , we have to draw the part @xmath212 of the lower hull between @xmath204 and @xmath203 in a unique way . now , given the unique drawings of @xmath211 and @xmath212 , we check for every @xmath0-coordinate whether @xmath211 is lying above @xmath212 . if and only if this is the case , then the two drawings together form a feasible canonical drawing and @xmath204 is a feasible pair for @xmath203 . in the last step , we compute the minimum width @xmath81 of the bounding box assuming height @xmath201 . consider an optimum canonical drawing of the whole polygon in a bounding box of height @xmath201 . let @xmath213 be a rightmost ( right ) extreme vertex . note that for @xmath213 there are only two candidates , one from the upper hull and one from the lower hull . since @xmath213 is a rightmost extreme vertex , all horizontal edges to the right of @xmath213 ( on the upper and on the lower hull ) are segments of length @xmath63 . thus , given @xmath213 , we can compute the distance @xmath214 between @xmath213 and the right vertical extreme edge . let @xmath215 be the @xmath214th horizontal edge from the right on the hull opposite to @xmath213 . observe that edge @xmath215 is the edge that forms an extreme column pair with @xmath213 . hence , the width of the polygon is @xmath216+r^*$ ] . we compute width @xmath81 as follows . for each one of the two candidates for @xmath213 , we determine @xmath214 and @xmath215 . then we check whether the candidate is feasible . for this , recall that conditions [ xcan : outer][xcan : hor ] determine the @xmath45-positions of all edges . also recall that all horizontal edges to the right of @xmath217 are of length @xmath63 . hence , there is only one way to canonically draw the edges right to @xmath217 . if the upper hull always stays above the lower hull , candidate @xmath213 is feasible . thus , we get the width by @xmath218+r^*\ } \cup \{\infty\}~.\ ] ] for every height @xmath201 , we compute the minimum width @xmath81 and find the bounding box of minimum area @xmath219 . it remains to show the running time of the algorithm . the table @xmath93 consists of @xmath26 entries . 
to find the value of an entry @xmath202 $ ] , we have to find the closest column pair @xmath204 to the left , the distance @xmath207 , and we have to test whether we can canonically draw the polygon between @xmath204 and @xmath203 . we now show that each of these steps is possible in @xmath131 time by precomputing some values for each point . 1 . [ precomp : ycoord ] for each point , we store its @xmath45-coordinate . as observed above , the @xmath45-coordinate is fixed , and it can be computed in @xmath126 time in total by traversing the stairs . [ precomp : next ] for each point @xmath180 , we store the next extreme point @xmath220 to the left on the same hull , as well as the distance @xmath221 to it . these can be computed in @xmath126 time by traversing the upper and the lower hull from left to right . [ precomp : array ] for each left extreme vertex @xmath157 , we store an array that contains all horizontal edges between @xmath157 and @xmath222 ordered by their appearance on a walk from @xmath157 to @xmath222 on the same hull . we also store the index of the inner extreme edge in this array . these arrays can be computed in total @xmath126 time by traversing the upper and the lower hull from right to left . the precomputation takes @xmath126 time in total . given an extreme column pair @xmath203 , let @xmath223 be the left endpoint of @xmath187 . we can use precomputation [ precomp : next ] to find in @xmath131 time the closest extreme vertex @xmath206 to the left of @xmath180 , since it is either @xmath220 or @xmath224 , as well as the distance @xmath207 , which is either @xmath221 or @xmath225 . to test whether we can canonically draw the polygon between @xmath204 and @xmath203 , we make use of the fact that there is no outer extreme edge between them . hence , we only have to test whether a pair of opposite double stairs intersects . to this end , we observe that a pair of double stairs can only intersect if the inner extreme edge of the lower hull lies ( partially ) above the upper hull or the inner extreme edge of the upper hull lies ( partially ) below the lower hull . with the array precomputed in step [ precomp : array ] , we can find the edge opposite of the inner extreme edges , and by precomputation [ precomp : ycoord ] , each point ( and thus each edge ) knows its @xmath45-coordinate , which we only have to compare to find out whether an intersection exists . hence , we can compute each table entry in @xmath131 times after a precomputation step that takes @xmath126 time . since we call the dynamic program @xmath126 times once for each candidate for the height of the bounding box the algorithm takes @xmath200 time in total . following lemma [ lem : transform ] , this proves the theorem . for the area minimization , we make two key observations . first , since the polygon is @xmath0-monotone , each grid column ( properly ) intersects either no or exactly two horizontal edges : one edge from the upper hull and one edge from the lower hull . second , a pair of horizontal edges share at most one column ; otherwise , the polygon could be drawn with less area by shortening both edges . with the same argument as for the bounding box , the height of any minimum - area polygon is at most @xmath6 . we use a dynamic program to solve the problem . to this end , we fill a three - dimensional table @xmath93 as follows . let @xmath187 be a horizontal edge on the upper hull , let @xmath188 be a horizontal edge of the lower hull , and let @xmath226 . 
then , the entry @xmath227 $ ] specifies the minimum area required to draw the part of the polygon to the left of ( and including ) the unique common column of @xmath187 and @xmath188 under the condition that @xmath187 and @xmath188 share a column and have vertical distance @xmath201 . let @xmath228 be the horizontal edges on the upper hull from left to right and let @xmath229 be the horizontal edges on the lower hull from left to right . we initialize the table with @xmath230=h$ ] for each @xmath226 . to compute any other entry @xmath231 $ ] , we need to find the correct entry from the column left of the column shared by @xmath232 and @xmath233 . there are three possibilities : this column either intersects @xmath234 and @xmath235 , it intersects @xmath232 and @xmath235 , or it intersects @xmath234 and @xmath233 . for each of these possibilities , we check which height can be realized if @xmath232 and @xmath233 have vertical distance @xmath236 and search for the entry of minimum value . we set @xmath237 = \min_{h'' \text{ valid}} \{ t[e_{i-1},f_{j-1},h''] , t[e_i,f_{j-1},h''] , t[e_{i-1},f_j,h''] \} + h' . finally , we can find the optimum solution by finding @xmath238 . since the table has @xmath200 entries each of which we can compute in @xmath126 time , the algorithm runs in @xmath239 time . this proves the following theorem . [ thm : xbarea ] given an @xmath0-monotone angle sequence @xmath2 of length @xmath6 , we can find a minimum - area polygon that realizes @xmath2 in @xmath239 time . in this section , we show how to compute a polygon of minimum perimeter for an @xmath1-monotone or @xmath0-monotone angle sequence @xmath2 of length @xmath6 . let @xmath7 be an @xmath0-monotone polygon realizing @xmath2 . let @xmath240 be the leftmost vertical edge and let @xmath241 be the rightmost vertical edge of @xmath7 . recall that @xmath7 consists of two @xmath0-monotone chains : an upper chain @xmath93 and a lower chain @xmath37 connected by @xmath240 and @xmath241 . without loss of generality , we assume for the number of reflex vertices of @xmath93 and @xmath37 that @xmath242 . we transform any minimum - perimeter polygon into a perimeter - canonical form without sacrificing its perimeter in two steps as follows . first , we shorten every _ long _ vertical edge @xmath243 with @xmath244 so that @xmath245 . this is always possible : for any long vertical edge @xmath246 , say @xmath247 , if its end vertices have turns @xmath248 in counterclockwise order , then we proceed as in figure [ fig : mono - hori - edge-1 ] . we move the subchain @xmath249 from @xmath241 to @xmath187 upward by @xmath250 by shortening @xmath187 and simultaneously by stretching @xmath241 . this guarantees that @xmath251 decreases while @xmath252 increases by the same amount @xmath250 , so the perimeter remains the same . we can also shorten any long vertical edge whose end vertices have turns @xmath253 in a symmetric way . second , we shorten every long horizontal edge @xmath247 with @xmath244 so that its length becomes one . suppose that @xmath187 is the rightmost long horizontal edge in @xmath93 . since @xmath242 , there must be a long horizontal edge @xmath208 in @xmath37 . we shorten both @xmath187 and @xmath208 by one unit , and move two subchains @xmath254 and @xmath255 together with @xmath241 one unit left . this move may cause two vertical edges , @xmath256 and @xmath257 , to intersect ; see figure [ fig : mono - hori - edge-2 ] .
note that exactly one of both vertical edges did not move , say @xmath258 , as otherwise there would be no intersection between them . this means @xmath258 is to the left of @xmath208 , i.e. , @xmath259 . we also know that the @xmath0-distance between @xmath188 and @xmath258 prior to the move was one , otherwise they would not intersect . since @xmath188 and @xmath258 are of unit length , the lower end vertex of @xmath188 has the same @xmath45-coordinate as the upper end vertex of @xmath258 . to avoid the intersection , we first move the whole upper chain @xmath93 one unit upward by stretching @xmath241 and @xmath240 each by one unit , as in figure [ fig : mono - hori - edge-3 ] . then we can move @xmath254 , @xmath255 , and @xmath241 without causing any intersection . we lose two units by shortening @xmath187 and @xmath208 , and gain two units by stretching @xmath241 and @xmath240 , so the total perimeter is unchanged . we repeat this until @xmath245 . suppose that @xmath7 is a minimum - perimeter canonical polygon that realizes @xmath2 with @xmath242 , and @xmath260 denotes its perimeter . by conditions [ pericanon : vert][pericanon : hor ] , every edge in @xmath93 is of unit length , so the length of @xmath93 is @xmath261 . this implies the width of @xmath37 should be @xmath262 . by condition [ pericanon : vert ] , the length of the vertical edges in @xmath37 is @xmath263 , so the total length of @xmath37 is @xmath264 . thus we can observe the following property . the first three terms of @xmath260 in lemma [ lem : peri - equation ] are constant , so we need to minimize the sum of the last two terms , @xmath266 and @xmath252 , to get a minimum perimeter . however , once one of them is fixed , the other is automatically determined by the fact that all vertical edges in @xmath93 and @xmath37 are unit segments . even more , minimizing one of them is equivalent to minimizing their sum , consequently minimizing the perimeter . we call the length of the left vertical extreme edge of a polygon the _ height _ of the polygon . let @xmath7 be a minimum - perimeter canonical @xmath1-monotone polygon that realizes an @xmath1-monotone angle sequence @xmath2 of length @xmath6 . as before , we assume that @xmath267 . when @xmath268 , i.e. , the number @xmath12 of reflex vertices is @xmath269 , a unit square @xmath7 achieves the minimum perimeter , so we assume here that @xmath270 . recall that the boundary of @xmath7 consists of four stairs , @xmath271 , and @xmath113 . let @xmath272 be a quadruple of the numbers of reflex vertices of @xmath271 , and @xmath113 , respectively . then @xmath273 , where @xmath274 for each @xmath36 . again , we define @xmath118 as the chain consisting of @xmath111 , @xmath240 and @xmath112 and @xmath29 as the chain consisting of @xmath113 , @xmath241 and @xmath114 . in @xmath7 , let @xmath275 and @xmath276 denote the widths of @xmath93 and @xmath37 , respectively , and @xmath277 and @xmath278 the heights of @xmath118 and @xmath29 , respectively . hence , the perimeter of @xmath7 is @xmath279 . note that @xmath280 and , by condition [ pericanon : hor ] , @xmath281 . thus @xmath282 . similarly , @xmath283 , and , by condition [ pericanon : vert ] , @xmath284 and @xmath285 . thus , if @xmath286 , then @xmath287 , and , if @xmath288 , then @xmath289 . further observe that @xmath286 implies @xmath290 , and that @xmath288 implies @xmath291 . hence , if @xmath286 or @xmath288 , then @xmath292 and then @xmath293 now , consider the remaining case when @xmath294 and @xmath295 . 
we will observe that this case can occur only if @xmath272 is @xmath296 or @xmath297 . we will also observe that then @xmath298 . hence , we get that @xmath299 for case @xmath296 , and @xmath300 for case @xmath297 . for all other cases , equation [ eq : per - when - height - zero ] holds . to make these observations , we first apply the same contraction step as depicted in figure [ fig : bbarea - canonical2 ] of lemma [ lem : transform ] . that is , we contract all horizontal segments of @xmath112 to length @xmath63 by moving all their right endpoints as far as possible to the left , and we contract all horizontal segments of @xmath113 to length @xmath63 by moving all their left endpoints as far as possible to the right . by this , all edges of @xmath37 except the bottom extreme edge have length @xmath63 , and the perimeter does not change . next , note that @xmath93 and @xmath37 have vertical distance @xmath63 to each other . otherwise we could move @xmath37 at least one unit to the top by simultaneously shrinking @xmath240 and @xmath301 , and thus shrinking the perimeter of @xmath7 , a contradiction to the minimality of @xmath260 . as @xmath93 consists only of unit segments ( conditions [ pericanon : vert][pericanon : hor ] ) , there is a vertex @xmath180 in @xmath93 having distance @xmath63 to @xmath37 . first assume that @xmath180 belongs to @xmath114 . we choose the rightmost such @xmath180 . if @xmath180 were a convex vertex , then it would be the top endpoint of @xmath241 , and , hence , we would have @xmath288 ; a contradiction to @xmath295 . thus , @xmath180 is a reflex vertex and therefore an left endpoint of a horizontal edge @xmath302 . hence , the right endpoint @xmath206 of @xmath302 is convex . let @xmath187 be the edge in @xmath37 below @xmath302 , that is , the edge that crosses the same grid column as @xmath302 . observe that the distance between @xmath302 and @xmath187 is at least @xmath136 . if it were @xmath63 , then the vertical edge @xmath303 incident to @xmath206 would connect to @xmath187 ( recall that @xmath206 is convex ) . hence , @xmath302 and @xmath187 would be incident to @xmath304 , and again we would have @xmath288 ; a contradiction . thus , the distance between @xmath180 and @xmath187 is at least @xmath136 . let @xmath157 be the point of @xmath37 directly one unit below @xmath180 . then @xmath187 lies at least one unit below @xmath157 . hence , @xmath157 has to connect to @xmath187 via an vertical edge , and , consequently , @xmath157 has to be a reflex vertex and belong to @xmath112 . by condition [ pericanon : vert ] , the vertical edge connecting @xmath157 and @xmath187 has length @xmath63 , hence , the distance between @xmath302 and @xmath187 is exactly @xmath136 . but now , either the bottom endpoint @xmath305 of @xmath303 has distance @xmath63 to @xmath37 , or @xmath305 lies on @xmath37 , that is , @xmath306 . the former case contradicts our assumption that @xmath180 is the rightmost vertex of @xmath93 having distance @xmath63 to @xmath37 . thus , the latter case holds and @xmath302 and @xmath187 are incident to @xmath241 . hence , @xmath307 , @xmath187 is the bottom extreme edge and has length @xmath245 , and @xmath113 is empty , that is , @xmath308 . thus , all horizontal edges in @xmath37 have unit length . this property allows us to use the same argument as above to show that @xmath309 and @xmath310 . given @xmath311 , we get @xmath312 . 
given an @xmath1-monotone angle sequence @xmath2 of length @xmath6 , we can find a polygon @xmath7 that realizes @xmath2 and minimizes its perimeter in @xmath126 time . furthermore , if the lengths of the stair sequences @xmath313 are given as above , then @xmath260 can be expressed as : @xmath314 a minimum height polygon @xmath7 that realizes @xmath2 can be computed in @xmath26 time using dynamic programming . recall that a perimeter - canonical polygon of minimum height is a polygon of minimum perimeter . from right to left , let @xmath315 be the horizontal edges in @xmath93 and @xmath316 , @xmath317 , @xmath318 @xmath319 be the horizontal edges in @xmath37 . recall that @xmath242 . for @xmath320 , let @xmath321 $ ] be the minimum height of the subpolygon formed with the first @xmath36 horizontal edges from @xmath93 and the first @xmath58 horizontal edges from @xmath37 . note that the leftmost vertical edge of the subpolygon whose minimum height is stored in @xmath322 $ ] joins the left endpoints of @xmath323 and @xmath324 . to compute @xmath322 $ ] , we attach edges @xmath323 and @xmath324 to the upper and lower chains of the subpolygon constructed so far . since @xmath323 has unit length , either @xmath323 and @xmath324 are attached to the subpolygon with height of @xmath325 $ ] or just @xmath323 is attached to the subpolygon with height of @xmath326 $ ] . as in figure [ fig : dp - polygon - perimeter ] , there are four cases for the first attachment and two cases for the second attachment , according to the turns formed at the attachments . let @xmath171 and @xmath8 be the left end vertex of @xmath327 and the right end vertex of @xmath323 , respectively . let @xmath328 and @xmath329 be the right end vertex of @xmath330 and the left end vertex of @xmath331 , respectively . notice that both vertical edges @xmath196 and @xmath332 have unit length . as an example , let us explain how to calculate @xmath321 $ ] when @xmath333 and @xmath334 , which corresponds to figures [ fig : dp - polygon - perimeter-2 ] and . we set @xmath321 $ ] to the minimum height of the two possible attachments and . consider the height for . if @xmath335>1 $ ] , then @xmath323 and @xmath324 are attached to the subpolygon as illustrated in figure [ fig : dp - polygon - perimeter-2 ] . since edges @xmath196 and @xmath332 have unit length , @xmath322 = a[i-1,j-1]$ ] . in the other case , if @xmath335 = 1 $ ] , then we can move the upper chain of the subpolygon one unit upward without intersection so that @xmath323 and @xmath324 are safely attached to the subpolygon with @xmath322 = 2 $ ] . note that this is the smallest possible value for @xmath322 $ ] given @xmath336 and @xmath334 . thus @xmath322 = \max(a[i-1,j-1 ] , 2)$ ] . the height for should be at least @xmath63 , so it is expressed as @xmath337 - 1 , 1)$ ] . 
therefore , @xmath338 = \min ( \max ( a[i-1,j-1 ] , 2 ) , \max ( a[i-1,j]-1 , 1 ) ) . for the other turns at @xmath339 and @xmath340 , we can similarly define the equations as follows : @xmath338 = \begin{cases} \text{undefined} & \text{if } i = 0 , \, j = 0 \text{ or } i < j \\ 1 & \text{if } i = 1 , \, j = 1 \\ a[i-1,j]+1 & \text{if } uv = \texttt{rl} , \, j = 1 \\ \max ( a[i-1,j]-1 , 1 ) & \text{if } uv = \texttt{lr} , \, j = 1 \\ \min ( \max ( a[i-1,j-1 ] , 2 ) , a[i-1,j]+1 ) & \text{if } uv = \texttt{rl} , \, u'v' = \texttt{rl} \\ \min ( \max ( a[i-1,j-1 ] , 2 ) , \max ( a[i-1,j]-1 , 1 ) ) & \text{if } uv = \texttt{lr} , \, u'v' = \texttt{lr} \\ \min ( a[i-1,j-1]+2 , a[i-1,j]+1 ) & \text{if } uv = \texttt{rl} , \, u'v' = \texttt{lr} \\ \min ( \max ( a[i-1,j-1]-2 , 1 ) , \max ( a[i-1,j]-1 , 1 ) ) & \text{if } uv = \texttt{lr} , \, u'v' = \texttt{rl} \end{cases}
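the recurrence above translates directly into a short memoized program . the python sketch below is our own transcription , not the authors' code ; in particular , the 1-based lists turns_upper and turns_lower are assumed to encode the turn pairs uv and u'v' at the attachment vertices as the strings ' rl ' or ' lr ' , and undefined entries are represented by infinity .

```python
# A direct transcription (our own sketch, not the authors' code) of the
# recurrence above.  turns_upper and turns_lower are assumed to be 1-based
# lists (index 0 unused) encoding the turn pairs uv and u'v' as the strings
# "rl" or "lr"; how they are derived from the angle sequence is not shown.
import math
from functools import lru_cache

def min_height(k, m, turns_upper, turns_lower):
    """Minimum height A[k, m] of the subpolygon built from the first k upper
    and first m lower horizontal edges (k >= m assumed, as in the text)."""

    @lru_cache(maxsize=None)
    def A(i, j):
        if i == 0 or j == 0 or i < j:
            return math.inf                   # "undefined" in the text
        if i == 1 and j == 1:
            return 1
        uv = turns_upper[i]
        if j == 1:
            return A(i - 1, 1) + 1 if uv == "rl" else max(A(i - 1, 1) - 1, 1)
        upvp = turns_lower[j]
        if (uv, upvp) == ("rl", "rl"):
            return min(max(A(i - 1, j - 1), 2), A(i - 1, j) + 1)
        if (uv, upvp) == ("lr", "lr"):
            return min(max(A(i - 1, j - 1), 2), max(A(i - 1, j) - 1, 1))
        if (uv, upvp) == ("rl", "lr"):
            return min(A(i - 1, j - 1) + 2, A(i - 1, j) + 1)
        return min(max(A(i - 1, j - 1) - 2, 1), max(A(i - 1, j) - 1, 1))

    return A(k, m)
```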
a _ rectilinear _ polygon is a polygon whose edges are axis - aligned . walking counterclockwise on the boundary of such a polygon yields a sequence of left turns and right turns . the number of left turns always equals the number of right turns plus 4 . it is known that any such sequence can be realized by a rectilinear polygon . in this paper , we consider the problem of finding realizations that minimize the perimeter or the area of the polygon or the area of the bounding box of the polygon . we show that all three problems are np - hard in general . then we consider the special cases of @xmath0-monotone and @xmath1-monotone rectilinear polygons . for these , we can optimize the three objectives efficiently .
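as a minimal companion to the counting condition just stated ( in a counterclockwise traversal , the number of left turns exceeds the number of right turns by exactly four ) , the following python sketch ( ours , not from the paper ) checks this necessary condition for a turn sequence given as a string over ' l ' and ' r ' .

```python
# Minimal sketch (ours): check the counting condition stated above, namely
# that a counterclockwise traversal of a simple rectilinear polygon has
# exactly four more left turns than right turns.
def satisfies_turn_count(seq: str) -> bool:
    """seq is a string over 'l'/'r', one character per vertex."""
    return seq.count("l") - seq.count("r") == 4

if __name__ == "__main__":
    assert satisfies_turn_count("llll")       # unit square
    assert satisfies_turn_count("lllllr")     # L-shaped hexagon
    assert not satisfies_turn_count("llrr")   # fails the necessary condition
```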
the standard model ( sm ) has been very successful in explaining a range of observations at hadron colliders and the cern @xmath12 collider , lep . but it is still widely believed to be an effective theory valid at the electroweak scale , with new physics lying beyond it . the minimal supersymmetric standard model ( mssm)@xcite is widely considered to be the most promising candidate for physics beyond the sm . the mssm contains supersymmetric ( susy ) partners of quarks , gluons and other sm particles which have not been observed , leading to speculation that they might be too heavy to have observable production rates at present collider energies . however it has been suggested in @xcite that a light sbottom ( @xmath1 ) with mass @xmath13(@xmath14 gev ) is not ruled out by electroweak precision data if its coupling to the @xmath15 boson is tuned to be small in the mssm . recently berger _ et al _ @xcite have also proposed a light sbottom and light gluino ( lslg ) model to explain the long - standing puzzle of overproduction of @xmath3 quarks at the tevatron @xcite . in this model gluinos of mass @xmath0 gev are produced in pairs in @xmath16 collisions and decay quickly into a @xmath3 quark and light sbottom ( @xmath2 gev ) each . the sbottom evades direct detection by quickly undergoing @xmath17-parity violating decays into soft dijets of light quarks around the cone of the accompanying @xmath3 jet . the extra @xmath3 quarks so produced result in a remarkably good fit to the measured transverse momentum distribution @xmath18 at nlo level , including data enhancement in the @xmath19 region . some independent explanations within the sm have also been proposed to resolve the discrepancy . these include unknown nnlo qcd effects , updated @xmath3-quark fragmentation functions @xcite and effects from changing the renormalization scale @xcite . but , without an unambiguous reduction in theoretical and experimental errors , the lslg scenario can not be ruled out . it is also interesting in its own right even if not solely responsible for the tevatron discrepancy . for example , a light @xmath1 is more natural if the gluino is also light @xcite . experimental bounds on light gluinos do not apply here as either the mass range or the decay channel is different : only gluinos lighter than @xmath20 gev @xcite are absolutely ruled out . very recently aleph @xcite has ruled out stable sbottoms with lifetime @xmath21 ns and mass @xmath22 gev . however , using formulae in @xcite we calculate that even minimal @xmath17-parity violating couplings , as small as @xmath23 times experimental limits , would leave @xmath1 with a lifetime shorter than @xmath24 ns . light gluino and sbottom contributions to the running strong coupling constant @xmath25(@xmath26 ) have also been calculated and found to be small @xcite . new phenomenon such as susy @xmath15-decays @xcite and gluon splitting into gluinos @xcite are predicted in this scenario , but the rates are either too small or require more careful study of lep data . the sbottoms and light gluinos also affect electroweak precision observables through virtual loops . in this case , serious constraints arise on the heavier eigenstate of the sbottom , i.e. @xmath4 . according to @xcite , corrections to @xmath27 are increasingly negative as @xmath4 becomes heavier and it has to be lighter than @xmath28 ( @xmath29 ) gev at the @xmath30 ( @xmath31 ) level . 
an extension of this analysis to the entire range of electroweak precision data @xcite yields that @xmath4 must be lighter than @xmath5 gev at @xmath7 level . however , it has been suggested that the susy decay @xmath32 can contribute positively to @xmath27 @xcite , reducing some of the negative loop effects , and possibly allowing higher @xmath4 masses @xcite . independently , if large @xmath33-violating phases are present in the model a @xmath4 with mass @xmath34 gev is possible @xcite . still , it is fair to say that in the face of electroweak constraints the lslg model at least favors a @xmath4 lighter than @xmath35 gev or so . in this article we study production and decay of such a heavy sbottom at lepii . available channels are ( i ) pair production : @xmath36 and ( ii ) associated production : @xmath37 . with lepii center - of - mass energies ranging upto @xmath38 gev , the second channel should have produced heavy sbottoms with masses as high as @xmath39 gev . since they have not been observed , it has been commented that the lslg scenario is disfavored @xcite . however , searches for unstable sbottoms at lepii have not been done for the decay @xmath40 , which should dominate in this scenario as squarks , quarks and gluinos have strong trilinear couplings in the mssm . in that case , the fast - moving gluino emitted by @xmath4 would decay quickly into a @xmath3 quark and @xmath1 that are nearly collinear , with @xmath1 subsequently undergoing @xmath17-parity violating decays into light quarks around the cone of the accompanying @xmath3-jet . unless the jet resolution is set very high , the gluino should look like a fused @xmath3 flavored jet . overall @xmath4 should appear as a heavy particle decaying into @xmath3 flavored dijets . on the other hand , the highly boosted prompt @xmath1 produced in the associated process would decay into nearly collinear light quarks and appear as a single hadronic jet . pair and associated production are therefore naturally described as 4-jet and 3-jet processes respectively at leading order . pair production in particular should be similar to neutral mssm higgs production in the channel @xmath10 if @xmath41 and @xmath42 have approximately equal masses . the article is organised as follows : @xmath4 decays are studied in section ii and @xmath40 is found to be dominant , cross - sections and event topology are studied in section iii and the corresponding sm 3-jet background for associated production is studied in section iv . in section v , lep searches for neutral higgs bosons are used to derive a lower bound on @xmath4 mass . conclusions are drawn in section vi . sbottom decays in mssm scenarios with large mass splitting between @xmath4 and @xmath1 have been investigated before ; see @xcite for example . however the scenario where the gluino is also light has not received much attention . the direct decay products can be purely fermionic ( 1 ) or bosonic ( 2 ) : @xmath43 where @xmath44 @xmath45 and @xmath46 are neutralinos and charginos respectively , @xmath47 is the top quark , @xmath48 are stops , @xmath41 and @xmath49 are neutral @xmath33-even higgs bosons , @xmath42 is the @xmath33-odd higgs and @xmath50 are charged higgs bosons . the individual widths depend on masses of above particles , but available experimental constraints @xcite are model - dependent and might not all be applicable in the lslg scenario . however precision @xmath15-width measurements can be used to apply some basic constraints on masses and the sbottom mixing angle . 
in the mssm , @xmath15-boson couplings to sbottom pairs are given by , @xmath51 where @xmath52 is the mixing angle between left and right - handed states : @xmath53 the light sbottom should have a vanishingly small coupling in eqn . ( [ eqn3 ] ) as the @xmath54 decay does not occur to high accuracy . this is achieved with the choice @xmath55 the narrow range @xmath56 ( @xmath57 ) is allowed @xcite which we use at times to obtain upper and lower bounds . given that @xmath58 gev , the decay @xmath59 might also take place if @xmath4 is lighter than @xmath60 gev . however this decay is suppressed both kinematically and by the factor @xmath61 . even for the higher value @xmath62 we calculate @xmath63 mev for @xmath64 gev and @xmath65 gev . with the full @xmath15-width having a @xmath66 error of @xmath67 mev and a @xmath68 pull from the theoretical sm calculation @xcite , a lower limit of @xmath69 gev on @xmath4-mass can be set at @xmath70 level without a detailed analysis . similarly , decays into pairs of neutralinos , charginos and stops might contribute unacceptably to the @xmath15 width and it seems safe enough to apply a lower mass limit of @xmath71 to them for calculation purposes . with the observed top quark mass of @xmath72 gev , this rules out the chargino channel @xmath73 as @xmath4 masses @xmath74 gev are being considered . the decay width for @xmath40 is easily calculated at tree - level using feynman rules for the mssm given in @xcite : @xmath75 where @xmath76 , @xmath77 ( summing over all particles involved in the decay ) is the usual kinematic factor and @xmath78 is the strong coupling evaluated at @xmath79 . the canonical strong coupling value @xmath80 is used here . other parameters used in this section are @xmath81 gev , @xmath82 gev , @xmath83 gev and @xmath84 . the remaining widths in eqns . ( [ eqn1],[eqn2 ] ) are calculated using tree - level formulae given in @xcite . [ fig1 ] shows the branching ratios versus @xmath4 mass . the @xmath85 width is large , varying between @xmath86 gev for @xmath87 gev . it has the maximum amount of available phase space and proceeds via the strong coupling , while the other widths are @xmath88 where @xmath89 is the usual weak coupling . the width shown for @xmath90 is the summed width over all 4 neutralinos ( @xmath44 ) . this value scales approximately as @xmath91 for large @xmath92 . here @xmath93 where @xmath94 are the vacuum expectation values of the two higgs doublets . our calculation is most likely an overestimate as mixing angles are ignored and all neutralinos are prescribed the same mass . this channel has been extensively searched for at lep @xcite , but seems to be at most @xmath95 of the full width in the lslg scenario . bosonic decays with @xmath96 , @xmath15 in the final state are also found to be small . we show @xmath97 correct upto an unknown factor @xmath98 where @xmath99 is the stop mixing angle . for @xmath100 the factor would be @xmath101 . because of the unnaturally low value of @xmath102 mass chosen here , this width rises significantly as @xmath103 approaches @xmath35 gev . decays into higgs bosons are more complex as besides higgs masses , the widths depend on unknown soft susy - breaking mass terms @xmath104 and @xmath105 . the only available mass constraint is @xmath106 gev at two - loop level in the mssm . 
however the excellent agreement between electroweak precision measurements and theoretical predictions with a single sm higgs boson has led to a preference for the `` decoupling limit '' of the mssm higgs sector . in this limit , yukawa couplings of @xmath41 to quarks and leptons are nearly identical to those of the standard model higgs . at the same time @xmath107 have almost degenerate masses @xmath108 . therefore , with @xmath4 lighter than @xmath35 gev , only @xmath109 is likely to be significant while other decays would be kinematically impossible or heavily suppressed . the width is then given by @xmath110 we choose @xmath111 gev in our calculation as lep data has ruled out sm higgs bosons lighter than this value @xcite . in the decoupling limit , arbitrary variation over @xmath104 , @xmath105 in calculating @xmath112 is not required as the factor @xmath113 can be expressed in terms of sbottom masses and @xmath52 : @xmath114 with @xmath52 given by eqn . ( [ eqn7 ] ) . this is a common relation that arises when the sbottom mass matrix ( see @xcite for example ) is diagonalized with the mixing matrix in eqn . ( [ eqn6 ] ) . branching ratios for @xmath4 with @xmath115 . masses are set as @xmath116 @xmath117 @xmath44 and @xmath111 gev . the higgs width is calculated in the decoupling limit . ] though theoretically and experimentally attractive , if the decoupling limit does not hold then other higgs particles might also be light . the most general lower mass limits from lep on neutral mssm higgs bosons are about @xmath11 gev @xcite . then , the @xmath118 width ( say ) can become larger than @xmath119 of @xmath120 due to the coupling @xmath121 this happens if @xmath122 is larger than @xmath123 tev . though the possibility is there , we consider it less likely and do not pursue it further . in any event such a decay would be more important for higher @xmath4 masses , and we show in section iii that @xmath4 production at lepii falls rapidly as its mass nears @xmath35 gev . we therefore conclude that the strong decay @xmath40 is dominant and other decays are unlikely to be of more than marginal importance at lepii . cross - sections for @xmath4 production are defined as follows : @xmath124 and @xmath125 . for completeness production of @xmath126 pairs is referred to as @xmath127 . the @xmath128 are readily calculated at tree - level , @xmath129 where @xmath130 , @xmath131 , @xmath132 , @xmath133 , @xmath134 and @xmath135 are electron vector and axial couplings that equal @xmath136 and @xmath137 respectively . the @xmath138-factors are proportional to sbottom-@xmath15 couplings in eqns . ( 3 - 5 ) . we use the same parameters here as used earlier for width calculations . both virtual photon ( @xmath139 ) and virtual @xmath15 ( @xmath140 ) channels are available for @xmath141 while only @xmath140 is available for @xmath142 . the latter falls by a factor of @xmath143 in going from @xmath62 to @xmath144 . pair production rises in the same range by a smaller factor of 1.3 at @xmath145 gev . variation of @xmath1 mass between @xmath2 gev has negligible effect on @xmath142 . [ fig2 ] shows @xmath128 versus @xmath4-mass at @xmath145 gev . both cross - sections are suppressed due to the @xmath146 kinematic factor for scalar particle production . however , asymmetry between sbottom masses causes additional kinematic suppression of @xmath142 as @xmath147 for the same total rest mass of final products , @xmath148 . 
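to illustrate the size of this kinematic suppression , the following python sketch ( ours , not from the paper ) evaluates the dimensionless p - wave phase - space factor \lambda^{3/2}( 1 , m_1 ^ 2/s , m_2 ^ 2/s ) that is standard for scalar pair production through a spin-1 current ; couplings , the photon and @xmath15 propagators and the overall normalization are deliberately omitted , and the mass values below are examples only , not the parameters used in the paper .

```python
# Illustrative sketch (ours, not from the paper): the dimensionless P-wave
# phase-space factor lambda^{3/2}(1, m1^2/s, m2^2/s) that suppresses scalar
# pair production in e+ e- collisions.  Couplings, propagators and the overall
# normalisation are deliberately omitted; the masses below are examples only.
import math

def kallen(x, y, z):
    """Kallen triangle function lambda(x, y, z)."""
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def pwave_factor(sqrt_s, m1, m2):
    """lambda^{3/2}(1, m1^2/s, m2^2/s); vanishes at threshold sqrt_s = m1 + m2."""
    s = sqrt_s ** 2
    lam = kallen(1.0, m1 ** 2 / s, m2 ** 2 / s)
    return max(lam, 0.0) ** 1.5

if __name__ == "__main__":
    sqrt_s = 206.0                      # GeV, a LEPII-like energy (example)
    equal = pwave_factor(sqrt_s, 70.0, 70.0)     # equal masses
    split = pwave_factor(sqrt_s, 135.0, 5.0)     # same total mass, asymmetric
    print(f"equal masses : {equal:.4f}")
    print(f"split masses : {split:.4f}")
    assert split < equal                # extra suppression for mass asymmetry
```

for a fixed total rest mass , this factor is largest when the two masses are equal , which reproduces the extra suppression of the mass - asymmetric associated channel noted above .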
the missing photon channel and smaller @xmath138-factor , @xmath149 , reduce the cross - section further . therefore associated production is generally small and falls rapidly as @xmath4 gets heavier . the lepii operation covered a range of center - of - mass energies from @xmath150 gev with maximum data collected at @xmath151 gev and @xmath152 gev . [ fig3 ] shows the expected number of raw events . we use an approximate luminosity distribution provided in @xcite counting the combined integrated luminosity recorded by all four lep experiments . the number of events for associated production falls below @xmath153 for @xmath154 gev at @xmath155 . it is therefore possible that sufficient statistics might not be available to explore sbottom masses above this value . the @xmath4 production cross - section for @xmath145 gev , @xmath155 as a function of mass . ] we now discuss the event topology in order to identify important backgrounds . as shown in section ii the decay @xmath40 is dominant which results in the states @xmath156 and @xmath157 for associated and pair processes respectively . we decay the gluinos into @xmath158 pairs and show the opening angles between final products for some representative @xmath4 masses in fig . the @xmath3 quark and @xmath1 arising from gluino decay overwhelmingly prefer a small angular separation with a sharp peak at @xmath159 . the other particles tend to be well - separated . through @xmath17-parity and baryon - number violating couplings @xmath160 , @xmath1 can decay into pairs of light quarks : @xmath161 . a detailed discussion of such decays is given in @xcite . in that case , the @xmath1 arising from gluino decay would further decay hadronically in and around the cone of the accompanying @xmath3 jet . in practice it would be difficult to distinguish between the overlapping jets , unless a very fine jet resolution is used . the gluino should then appear for the most part as a single fused @xmath3-flavored jet with perhaps some extra activity around the cone . the prompt @xmath1 from associated production is highly boosted for most @xmath4 masses within range . this should result in a very small angular separation between its decay products ( a toy estimate of this collimation is sketched at the end of this passage ) . if it decays into pairs of light quarks , we calculate that at @xmath145 gev , @xmath82 gev and @xmath162 gev , at least @xmath163 of these would have an opening angle @xmath164 . at any rate a @xmath4 as heavy as @xmath165 gev is unlikely to be observable because of low event counts and would be obscured by the large 3-jet sm background ( section iv ) . therefore in the observable range @xmath1 should show up as a single hadronic jet . at leading order then , associated production is best described as a 3-jet process , with 2 jets that can be tagged as @xmath3 quarks and a hadronic jet from @xmath1 . the relevant background for this would be sm 3-jet events which we discuss in section iv . opening angles between particle pairs in ( a ) pair production and ( b ) associated production at @xmath145 gev . particles marked with `` @xmath166 '' are gluino decay products . in ( a ) the @xmath167 distribution shown is for @xmath3 quarks and gluinos arising from the same @xmath4 . @xmath167 arising from different @xmath4 and @xmath168 have an identical distribution to that shown for @xmath169 . in ( b ) , @xmath170 is not shown as it is the same as @xmath171 . ] on the other hand , pair production is naturally a 4-jet process where each jet can be tagged as a @xmath3 quark .
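as an aside , the collimation arguments used above ( the light sbottom from gluino decay staying inside the cone of its accompanying @xmath3 jet , and the strongly boosted prompt @xmath1 producing nearly overlapping decay products ) can be reproduced with a short toy monte carlo : decay a particle isotropically in its rest frame , boost it , and look at the lab - frame opening angle . the masses and energy used below are illustrative stand - ins , not the benchmark points of the paper .

```python
import math, random

def boosted_opening_angle(M, E_parent, m1=0.0, m2=0.0):
    """Decay a parent of mass M and lab energy E_parent isotropically into two
    daughters of masses m1, m2; return the lab-frame opening angle (radians)."""
    # daughter energies/momentum in the parent rest frame
    E1 = (M ** 2 + m1 ** 2 - m2 ** 2) / (2 * M)
    E2 = M - E1
    p = math.sqrt(max(E1 ** 2 - m1 ** 2, 0.0))
    # isotropic direction in the rest frame
    cos_t = random.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t ** 2)
    phi = random.uniform(0.0, 2 * math.pi)
    p1 = [p * sin_t * math.cos(phi), p * sin_t * math.sin(phi), p * cos_t]
    p2 = [-x for x in p1]
    # boost along +z with gamma = E_parent / M
    gamma = E_parent / M
    beta = math.sqrt(max(1.0 - 1.0 / gamma ** 2, 0.0))
    def boost(px, py, pz, E):
        return px, py, gamma * (pz + beta * E), gamma * (E + beta * pz)
    x1, y1, z1, _ = boost(*p1, E1)
    x2, y2, z2, _ = boost(*p2, E2)
    dot = x1 * x2 + y1 * y2 + z1 * z2
    n1 = math.sqrt(x1 ** 2 + y1 ** 2 + z1 ** 2)
    n2 = math.sqrt(x2 ** 2 + y2 ** 2 + z2 ** 2)
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

if __name__ == "__main__":
    random.seed(1)
    M_sb1, E_sb1 = 5.0, 80.0     # a light, strongly boosted sbottom (illustrative)
    angles = [boosted_opening_angle(M_sb1, E_sb1) for _ in range(20000)]
    frac = sum(a < math.radians(20) for a in angles) / len(angles)
    print(f"fraction of decays with opening angle < 20 deg: {frac:.2f}")
```

with a boost factor of order 15 - 20 the bulk of the decays fall well inside a typical jet cone , which is the qualitative point used in the text .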
this would have significant background from _ any other _ heavy particles produced in pairs and decaying into dijets of @xmath3 quarks . searches for neutral higgs bosons @xmath41 and @xmath42 that can satisfy this criteria have been done , and we discuss them in section v. the sm gluon radiation process : @xmath172 , @xmath173 ; constitutes the main 3-jet background for associated production . in particular , @xmath174 could be an irreducible background as gluon jets and jets from light sbottoms might not be distinguishable on a case - by - case basis . we compare this background with associated production using the jade jet - clustering algorithm @xcite : @xmath175 where @xmath176 are the momenta of the final state partons and @xmath177 is the jet resolution parameter . as long as @xmath178 for @xmath145 gev and @xmath179 gev , the hadronic decay products of @xmath180 and @xmath1 are clustered into single jets . we evaluate matrix elements at leading order and do not consider contributions to the sm 3-jet cross - section from final states with more than three partons . the renormalization scale is set at @xmath181 with @xmath80 . [ fig5 ] shows that @xmath142 is a small fraction of the total sm 3-jet cross - section , though it increases in proportion as @xmath182 increases and the jets are required to be well - separated . it is unlikely to be visible as a generic excess in 3-jet production given that measurements of hadronic cross - sections at lepii have errors of at least @xmath183 pb @xcite . however , if at least one jet is @xmath3-tagged and @xmath184 is measured very accurately , then for @xmath4 lighter than @xmath185 gev an excess might be observable at higher @xmath182 values . associated production ( dashed lines ) compared to sm 3-jet cross - sections versus @xmath182 at @xmath145 gev . ] if two jets out of three are required to have @xmath3 tags then their total invariant mass can also be studied as in fig . the total invariant mass of the @xmath186 quark and gluino ( which appears as a @xmath3-like jet ) gives rise to a clear resonance around @xmath103 . this would allow direct observation of a @xmath4 , and should be the preferred method of study . the invariant mass of two @xmath3 tagged jets can be reconstructed to observe excesses . dashed lines show associated production and the solid line @xmath187 for @xmath188 . tagging efficiencies for @xmath3 quarks are not applied here . events are shown for the total integrated luminosity recorded by the four lep collaborations at @xmath189 gev . ] the differential cross - section for @xmath187 events increases with the invariant mass , @xmath190 , while the resonance in @xmath142 rapidly gets smaller as @xmath4 gets heavier . this is natural as gluon radiation from quark pairs is higher for softer gluons , which in turn implies a higher total invariant mass for the @xmath191 pair . to estimate the discovery region we calculate both signal ( @xmath192 ) and background ( @xmath112 ) events in the mass window @xmath193 where @xmath194 is the invariant mass of the @xmath3 tagged jets and @xmath195 . the @xmath3 tagging efficiency @xmath196 is taken to be @xmath197 , from @xmath27 studies at lepii @xcite . mistag probabilities are assumed to be small and not included in the analysis . we also use @xmath188 which is found to maximize the significance @xmath198 . 
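the jade distance referred to above is conventionally y_ij = 2 e_i e_j ( 1 - cos theta_ij ) / e_vis^2 , with the pair of lowest y merged until every remaining pair exceeds the resolution parameter ; the paper 's own equation is not reproduced here , so the sketch below implements that textbook definition on a list of four - momenta , using the e - scheme ( four - momentum sum ) recombination as an assumed convention .

```python
import math

def jade_cluster(momenta, y_cut):
    """Cluster four-momenta [(E, px, py, pz), ...] with the JADE algorithm:
    repeatedly merge the pair with the smallest
    y_ij = 2*E_i*E_j*(1 - cos(theta_ij)) / E_vis**2
    until all pairs have y_ij > y_cut.  Returns the list of jets."""
    jets = [list(p) for p in momenta]
    E_vis = sum(p[0] for p in jets)
    def y(a, b):
        pa, pb = a[1:], b[1:]
        na = math.sqrt(sum(x * x for x in pa)) or 1e-12
        nb = math.sqrt(sum(x * x for x in pb)) or 1e-12
        cos = sum(u * v for u, v in zip(pa, pb)) / (na * nb)
        return 2.0 * a[0] * b[0] * (1.0 - cos) / E_vis ** 2
    while len(jets) > 1:
        pairs = [(y(jets[i], jets[j]), i, j)
                 for i in range(len(jets)) for j in range(i + 1, len(jets))]
        y_min, i, j = min(pairs)
        if y_min > y_cut:
            break
        merged = [jets[i][k] + jets[j][k] for k in range(4)]  # E-scheme recombination
        jets = [p for k, p in enumerate(jets) if k not in (i, j)] + [merged]
    return jets

if __name__ == "__main__":
    # three well-separated hard partons plus one soft, nearly collinear parton
    partons = [(50, 50, 0, 0), (50, -25, 43.3, 0), (45, -25, -37.4, 0), (5, 4.9, -1, 0)]
    print(len(jade_cluster(partons, y_cut=0.02)), "jets at y_cut = 0.02")
```

the resolution parameter here plays the role of the jet resolution entering the cross - section comparison and the significance optimisation discussed above .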
the @xmath199 discovery region is defined as @xmath200 calculating events using the entire integrated luminosity recorded for @xmath189 gev , we find that for @xmath155 , @xmath4 masses upto @xmath201 ( @xmath202 ) gev can be discovered at the @xmath7 ( @xmath31 ) level . for @xmath56 , the upper limits for discovery are @xmath203 gev ( @xmath7 ) and @xmath204 gev ( @xmath31 ) . since @xmath192 and @xmath112 are @xmath205 , the significance is @xmath206 and better @xmath3 tagging efficiencies can improve the upper limits . however we have not included effects of gaussian smearing of pair invariant mass measurements , which might reduce the significance . we note that the associated process also receives an irreducible susy background as the @xmath207 final state is possible even if the heavy sbottom is absent . this has been studied in the context of @xmath15 decay @xcite . however , its kinematics are very different from the same state produced by @xmath4 decay , and it should have little effect on the overall background . in fig . 6 it would appear as an approximately uniform distribution of @xmath208 events @xmath209 gev , which is insignificant compared to the @xmath187 background . at leading order @xmath210 proceeds only through the virtual @xmath15 channel . the relevant coupling is @xmath211 where @xmath212 is the mixing angle between neutral @xmath33-even higgs bosons . this is comparable to the heavy sbottom coupling @xmath213 in eqn . . however production of @xmath214 pairs is somewhat higher as it also takes place through the @xmath139 channel and receives an extra factor of @xmath215 from summing over final - state colors . being scalars , both pairs of particles are produced with the same angular distribution . searches for @xmath216 production @xcite have been done along the diagonal @xmath217 , which makes them kinematically identical to @xmath4 pair production . the final states searched for are @xmath218 , @xmath219 or @xmath220 as @xmath221 decay mainly into @xmath3 or @xmath222 pairs in the parameter space where they are approximately equimassive . therefore , the 4@xmath3 channel can be used to place limits on @xmath4 pair production as the latter leads to 4 @xmath3 flavored jets in the final state . cross - sections for the two processes are compared in fig . the @xmath216 cross - section is called @xmath223 . we simply maximize this by setting @xmath224 and br@xmath225 . the parameters used in the experimental study were similar or lesser . we find that @xmath141 is @xmath226 times higher than higgs production for @xmath227 . if the more typical branching ratios @xmath228 and @xmath229 are used then @xmath141 is effectively 2.1 to 2.6 times higher . however that could be offset if @xmath4 has a branching ratio into @xmath85 near its lower limit of around @xmath230 in this mass range ( see fig . [ fig1 ] ) . experimental searches for @xmath216 have used approximately @xmath231 pb@xmath232 of combined integrated luminosity , with center - of - mass energies between @xmath35 and @xmath233 gev . only opal has seen a significant excess in the 4@xmath3-jet channel , which is at the @xmath30 level at @xmath234 gev . this does not appear in other experiments , though it can not be ruled out statistically . no excess in this channel seems to have been observed by any experiment below @xmath235 gev which is approximately the quoted lower limit at @xmath236 confidence for higgs masses . 
since the pair cross - section is higher than that for @xmath216 , this should simultaneously rule out heavy sbottoms lighter than @xmath11 gev in the lslg scenario . comparison between @xmath223 and @xmath141 at @xmath145 gev , versus @xmath237 . upper and lower limiting curves for @xmath141 are obtained for @xmath238 , @xmath239 respectively . ] there are some qualifications to this analysis . first , @xmath4 has a much larger width in absolute terms than @xmath41 or @xmath42 , and that seems to have been a significant factor in the @xmath216 searches at lep . however , since @xmath141 is larger , it is likely that any excess would have been observed and the @xmath11 gev lower limit is approximately correct . secondly , if very low values of @xmath182 ( below @xmath240 ) were used in the lep searches , then the above analysis might not hold . we have shown that the heavy sbottom eigenstate decays dominantly into @xmath85 pairs in the light sbottom and light gluino scenario . pair and associated production of @xmath4 at lepii have been studied and found to be naturally described as 4-jet and 3-jet processes respectively . their cross - sections and raw event rates have been calculated and associated production is found to be small and obscured by the large sm 3-jet background for large values of @xmath4 mass . however , we find that @xmath7 discovery of a @xmath4 is possible using 3-jet data provided @xmath241 gev , for @xmath56 . the corresponding @xmath31 limits are @xmath242 gev . we recommend a search as far as possible . while invariant masses reconstructed from @xmath3-tagged jet pairs might be the most direct way to do this , single @xmath3-tagged events can also be useful if the cross - sections are measurable to a high accuracy . we also find that @xmath4 pair production is similar to production of neutral mssm higgs bosons decaying into @xmath191 pairs , which have been extensively searched for by the four lep collaborations . minor excesses , though inconclusive , seen in the @xmath243 jet channel for masses @xmath244 gev provide further motivation for a detailed study of 3-jet events . we show that @xmath4 should be heavier than about @xmath11 gev as no excess has been reported below this value . i would like to thank prof . d. a. dicus for useful discussions and help given throughout the course of this work . this work was supported in part by the united states department of energy under contract no . de - fg03 - 93er40757 . _ note : _ a paper by e.l . berger , j. lee and t.m.p . tait ( hep - ph/0306110 ) that also covers associated production in this scenario , using the jet cone algorithm , appeared independently on the internet a few days before this one .
a low - energy supersymmetry scenario with a light gluino of mass @xmath0 gev and light sbottom ( @xmath1 ) of mass @xmath2 gev has been used to explain the apparent overproduction of @xmath3 quarks at the tevatron . in this scenario the other mass eigenstate of the sbottom , i.e. @xmath4 , is favored to be lighter than @xmath5 gev due to constraints from electroweak precision data . we survey its decay modes in this scenario and show that decay into a @xmath3 quark and gluino should be dominant . associated sbottom production at lep via @xmath6 is studied and we show that it is naturally a three - jet process with a small cross - section , increasingly obscured by a large standard model background for heavier @xmath4 . however we find that direct observation of a @xmath4 at the @xmath7 level is possible if it is lighter than @xmath8 gev , depending on the sbottom mixing angle @xmath9 . we also show that @xmath4-pair production can be mistaken for production of neutral mssm higgs bosons in the channel @xmath10 . using searches for the latter we place a lower mass limit of @xmath11 gev on @xmath4 .
the graph isomorphism problem ( gi ) requires one to decide whether two given graphs @xmath1 and @xmath2 are indeed the same graph but for a relabeling of the vertices . due to its practical applications ( ranging from chemistry to social sciences ) and theoretical properties , the problem has been thoroughly studied @xcite . gi possesses peculiar features that make it an interesting candidate for an _ efficient _ quantum algorithm . in fact it is in np but is not believed to be np - complete : like factoring , it belongs to the np - intermediate family @xcite and is representative of the ( non - abelian ) _ hidden subgroup _ problem family @xcite . the best classical general algorithm solves gi for graphs of @xmath3 vertices in time @xmath4 , where @xmath5 is a constant . + one way to solve gi is to show that two graphs are non - isomorphic . starting from 2005 there have been different proposals of quantum algorithms based on `` non - isomorphism witnesses '' , i.e. observable quantities that assume different values only if the two input graphs are non - isomorphic . the standard benchmark for this approach is provided by the family of strongly regular graphs ( srgs ) , which includes many hard instances of gi @xcite . for example in @xcite , to distinguish non - isomorphic graphs the authors exploit continuous - time @xcite and discrete - time quantum walks @xcite of one or more particles moving through the graphs and compare the evolution of the same initial condition on the two graphs . the distinguishing power of the algorithm increases with the number of walkers moving along the graph ; the technique , however , is not universal and there are non - isomorphic graphs that can not be distinguished . + a different approach , based on the adiabatic quantum computation paradigm ( aqc ) @xcite , has been recently proposed in @xcite . in order to distinguish non - isomorphic graphs , for example , vinci _ et al . _ look at the values assumed by a set of non - isomorphism witnesses during the adiabatic evolution of the couple of graphs under investigation . they show that their technique is able to distinguish non - isomorphic srgs up to instances of 29 vertices . on the other side , the technique is not guaranteed against the problem that afflicts all the quantum algorithms based on the adiabatic theorem : the spectral gap of the driving hamiltonian can become exponentially small when the size of the problem increases ; consequently , it could take an exponentially long time to reach the time - region in which it is possible to distinguish non - isomorphic graphs . recently @xcite it has been shown that there is a family of observables that can be used to distinguish non - isomorphic graphs even if the `` adiabatic protocol '' is not respected and the systems under observation are subjected to some degree of noise . an interesting feature of both the hen - young and vinci _ et al . _ proposals is that they can be , in principle , experimentally verified on current commercial hardware ( d - wave one @xcite ) . + in this work we propose an alternative approach to gi based on aqc . instead of looking for non - isomorphism witnesses , the algorithm we propose solves gi by finding , if it exists , a permutation that transforms one of the two input graphs into the other . it uses a number of qubits that scales quadratically with the input size ( @xmath6 ) .
the configuration space is explored through a continuous - time quantum - walk of @xmath3 interacting walkers that , by construction , visits only the space of _ functions _ from @xmath7 to @xmath7 . this makes it possible to define a cost function that is equivalent to a boolean formula made up of clauses of two literals ( @xmath0-sat ) , which can be easily turned into a @xmath0-local hamiltonian , without using any perturbative gadget or projective reduction @xcite . + the paper is organized as follows : in section 2 we formally define the gi problem and the associated optimization problem . in section 3 we cast the optimization problem into an adiabatic algorithm . section 4 is devoted to a presentation of the results . the last section is devoted to discussion , experimental verification proposal / issues and outlook . an unoriented graph of size @xmath3 is a couple @xmath8 , where @xmath9 is set of vertices and two vertices @xmath10 are connected to each other iff @xmath11 . + a permutation @xmath12 of the vertices is a bijection @xmath13 . we indicate by @xmath14 the graph obtained by applying @xmath12 to @xmath15 , where @xmath16 . we will refer to the group of permutations of @xmath3 elements as to the _ symmetric group _ @xmath17 . + the _ graph isomorphism _ problem ( gi ) is defined as follows : given two graphs @xmath18 and @xmath2 of @xmath3 vertices , does exist a permutation @xmath19 such that @xmath20 ? in what follows we will indicate the set of solutions of an assigned instance of gi as : @xmath21 and indicate @xmath22 if @xmath23 is non - empty . + we start our construction of a quantum algorithm for gi by defining a _ cost function _ @xmath24 that assigns a penalty ( positive weight ) to every permutation not belonging to @xmath23 . given the adjacency matrices @xmath25 and @xmath26 of , respectively , @xmath27 and @xmath28 and the permutation matrix @xmath29 associated to @xmath12 , the function @xmath30 counts the number of edges that are in @xmath31 but not in @xmath28 and vice versa . therefore @xmath32 if @xmath33 , @xmath34 otherwise . + instead of representing a permutation @xmath19 through its permutation matrix @xmath29 we use a set of @xmath35 variables @xmath36 , @xmath37 , organized in a grid on @xmath3 rows and @xmath3 columns ( see figure [ fig : system ] ) . the variable @xmath38 is set to 1 if the permutation @xmath12 assigns to the element at position @xmath39 the element at position @xmath40 . + with this representation , the cost function @xmath41 becomes a real valued function @xmath42 : @xmath43 the addenda in the last line assign a penalty to every configuration that do not correspond to a permutation , i.e. has more than one 1 in each row and column . + finding an assignment to the variables @xmath38 such that @xmath44 is equivalent to the problem of finding a satisfying assignment to the following boolean cnf formula : @xmath45 this is an @xmath3-sat formula . the terms in the first row of ( [ eq : sat ] ) are 2-literal clauses and depend on the input graphs ; the terms of the second row are simultaneously satisfied only if there is exactly one `` 1 '' in each row and column of the grid : the @xmath3-literals terms @xmath46 are satisfied as long as there is at least one `` 1 '' in each row , whereas the term @xmath47 is satisfied if there is at most one `` 1 '' in each column . 
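as a concrete rendering of the cost function just defined , the sketch below ( python ) counts the adjacency mismatches produced by a candidate permutation and , for very small graphs , searches @xmath17 exhaustively . it is only meant to make the definition concrete , not to stand for the paper 's implementation .

```python
from itertools import permutations

def mismatch_cost(A1, A2, sigma):
    """Number of vertex pairs on which the permuted G1 and G2 disagree:
    zero exactly when sigma maps G1 onto G2.  sigma[i] is the image of vertex i."""
    n = len(A1)
    return sum(A1[i][j] != A2[sigma[i]][sigma[j]]
               for i in range(n) for j in range(i + 1, n))

def isomorphisms(A1, A2):
    """Brute-force search over the symmetric group (tiny n only):
    all zero-cost permutations."""
    n = len(A1)
    return [s for s in permutations(range(n)) if mismatch_cost(A1, A2, s) == 0]

if __name__ == "__main__":
    # a 4-cycle and a relabelled 4-cycle
    A1 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
    A2 = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
    sols = isomorphisms(A1, A2)
    print(f"{len(sols)} isomorphisms found; one of them: {sols[0] if sols else None}")
```

the grid - variable picture above corresponds to encoding sigma by indicator variables that are 1 exactly when row i is mapped to column j ; the brute - force search is of course replaced by the quantum procedure in the text .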
to sum up , the second line of the formula ( [ eq : sat ] ) is evaluated to _ true _ if the variables in the grid form a _ permutation matrix _ , and the first line is _ true _ if such a permutation maps one of the input graphs into the other , i.e. @xmath22 . we observe that , if we restrict the possible assignments to the variables @xmath48 to those corresponding to configurations in which there is exactly one `` 1 '' in each row of the grid , all the @xmath3-literal clauses will be automatically satisfied , and the satisfaction of the formula @xmath49 alone guarantees that the configuration of the grid corresponds to a permutation . under this assumption , the cost function @xmath41 is equivalent to the @xmath50 formula : @xmath51 i.e. a formula made up of terms involving at most ( in our case , exactly ) two variables . this fact will play a central role in the construction of the following section . the solution of a combinatorial problem , such as gi , can be mapped into the state of lowest energy of a potential operator , or _ final _ hamiltonian , @xmath52 @xcite . in aqc the problem of finding such a state is solved by using an auxiliary , or _ initial _ , hamiltonian @xmath53 . the system is prepared in the `` easy to prepare '' ground state of @xmath53 and evolves under the action of a time - dependent hamiltonian of the form : @xmath54 . if the evolution time @xmath55 satisfies @xmath56 , with @xmath57 and the spectral gap @xmath58 defined as in @xcite ( see also appendix a ) , the hypotheses of the adiabatic theorem are satisfied and the state of the system at the final time @xmath55 will be the ground state of @xmath52 . + in order to turn the optimization problem defined in the previous section into a quantum algorithm , we first assign a two - level system ( qubit ) to each boolean variable @xmath38 ( see figure [ fig : system ] ) . we select the direction @xmath59 as the computational direction and indicate by @xmath60 ( or `` up '' ) and @xmath61 ( or `` down '' ) the eigenstates of @xmath62 belonging to the eigenvalues + 1 and -1 respectively . + the conventional generator of the diffusion ( @xmath53 in ( [ eq : adiabatic ] ) ) adopted in aqc , adapted to our system , has the form @xmath63 the ground state of the hamiltonian @xmath64 is easy to prepare ( all the spins aligned along the @xmath65 axis ) and corresponds to a uniform superposition of all the possible configurations @xmath66 . + on the other side , we observed that , by restricting the set of possible assignments , gi can be mapped to a 2-sat formula . consider then the hamiltonian @xmath67 where @xmath68 are the spin raising and lowering operators @xmath69 . each row of the spin grid evolves independently of the others and the interactions in each chain are nearest - neighbor interactions of @xmath70 type . the number of spins `` up '' , or excitations , in each chain is preserved by @xmath53 ; in fact , having defined the _ number _ operator for each chain as @xmath71 , one has @xmath72 = [ n_i^z h_i - h_i n_i^z ] = 0 . in particular , if we choose , for each row @xmath39 , an initial condition in the @xmath73 sector of the hilbert space , the evolution under @xmath53 will remain in the space @xmath74 , i.e. the space of functions from @xmath7 to @xmath7 . indeed , this property will be preserved as long as the hamiltonian of the system has the form @xmath75 , with @xmath76 diagonal in the computational basis .
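the conservation law just stated ( the number of excitations in each chain commutes with the @xmath70-type chain hamiltonian ) can be verified numerically for a single short chain by building the operators as kronecker products of pauli matrices . the sign and normalisation chosen below are an assumed convention ; the vanishing of the commutator does not depend on them .

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2   # raising / lowering operators

def embed(op, site, n):
    """Place a single-qubit operator at `site` in an n-qubit chain (Kronecker product)."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def xy_chain(n):
    """Nearest-neighbour hopping sum_j (s+_j s-_{j+1} + s-_j s+_{j+1}) on an open
    chain; overall sign/normalisation is an assumed convention."""
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for j in range(n - 1):
        H += embed(sp, j, n) @ embed(sm, j + 1, n)
        H += embed(sm, j, n) @ embed(sp, j + 1, n)
    return H

def number_operator(n):
    """Total number of 'up' spins in the chain: sum_j (1 + sz_j) / 2."""
    return sum(embed((I2 + sz) / 2, j, n) for j in range(n))

if __name__ == "__main__":
    n = 4
    H, N = xy_chain(n), number_operator(n)
    print("norm of [H, N]:", np.linalg.norm(H @ N - N @ H))  # 0.0: excitations conserved
```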
moreover , the ground state of @xmath53 is easy to prepare either by an adiabatic scheme ( see appendix b ) or by dissipative means @xcite . + the operator @xmath53 restricted to @xmath74 can be rewritten as : @xmath77 where @xmath78 indicates that the excitation of the @xmath39-th chain is at position @xmath40 . + thanks to this simplified notation , it becomes clear that the exploration of the configuration space is performed through @xmath3 continuous - time quantum walks on linear graphs . + in this setting , it is possible to formulate the gi problem in the form of ( [ eq : twosat ] ) . the formula can be translated into the following potential hamiltonian : on the right , the spin - grid and interaction graph for the same gi instance : solid lines correspond to @xmath79 interactions : in blue we show the `` permutation''-constraint related interactions ; in red , the instance - dependent ones . dashed lines represent @xmath70 interactions . [ fig : system ] ] rearranged in order to show the topology of the _ hardware _ part of the algorithm . [ fig : systemgeom ] ] @xmath80 the hamiltonian is 2-local ( i.e. it is made up of terms involving at most two qubits ) . every violated clause in ( [ eq : twosat ] ) contributes a unit energy penalty . if the 2-sat formula associated to the gi instance @xmath81 is satisfiable , i.e. @xmath22 , there exists a zero energy configuration . + the topology of the ensuing interaction graph has particular features . within each chain there are only next - neighbor interactions . the @xmath79 interactions between the spins in a column of the grid , on the other side , define a complete @xmath3-graph . together , the _ intra_-chain and _ intra_-column interactions confine the search for the solution close to the space of permutations : they depend on the input size @xmath3 alone , and not on the particular instance of gi , so they represent the _ hardware _ part of the algorithm . the `` geometry '' that minimizes the physical `` distance '' spanned by the hardware interactions is that of a cylinder . + the instance - dependent interactions connect only elements that sit on different rows and columns : they must be programmed ad - hoc ( _ software _ ) . figure [ fig : system ] shows an example of the interaction - graph associated to a gi instance of dimension @xmath82 . + if started from the ground state of @xmath53 , restricted to @xmath74 , the adiabatic evolution ( i.e. with @xmath83 ) of the system under the action of the time - dependent hamiltonian ( [ eq : adiabatic ] ) , with @xmath53 and @xmath52 defined as in ( [ eq : initialhamprime ] ) and ( [ eq : hf ] ) , will end up in the ground state @xmath84 of @xmath52 ( see appendix a ) . if @xmath27 and @xmath28 are non - isomorphic , the ground state energy will be equal to the number of clauses that can not be satisfied , i.e. @xmath85 . we will address the key issue of the estimation of the `` annealing time '' @xmath55 in the next section . here we propose a measurement protocol for the read - out of the result . + first of all , we observe that the expectation value @xmath86 of the observable @xmath87 is an isomorphism witness . in fact , it is zero iff the two graphs are isomorphic . + besides , the final state of the computation carries information on @xmath23 , even if the observable @xmath88 can not be measured . for example , if the input graphs are _ rigid _ , i.e.
the group of automorphisms of each of the graphs consists of the identity alone @xcite , then there is at most one solution to gi and @xmath89 . the ground state of @xmath52 , therefore , either encodes the permutation that maps @xmath27 into @xmath28 or not . by performing local and independent measurements of the _ position _ observables @xmath90 we can read out the permutation @xmath91 ; then it suffices to check that @xmath92 . + if the graphs are not rigid , the ground state will be a superposition @xmath93 in order to extract one of the solution , we can proceed as follows . we run the algorithm once . we measure @xmath94 . the measurement will provide the value @xmath95 . we then restart the algorithm by setting the `` spin up '' of the first chain to @xmath95 , while the state of the other chains is prepared in the ground state of @xmath96 we then let the system evolve under @xmath97.\ ] ] we then measure @xmath98 and iterate the procedure . after @xmath3 iterations of this scheme , we will end up in a permutation state @xmath91 and it suffices to verify whether it maps @xmath27 into @xmath28 to have a definite answer . so , in the case the input graphs are not guaranteed to be rigid , we need at most a linear time overhead in order to read out the result and the overall execution time of the algorithm ( adiabatic procedure + measurement ) will scale , in the worst case , as @xmath99 , @xmath55 being the annealing time required by the first run of the adiabatic procedure . + independently of their rigidity of the input graphs the output of the algorithm is always a permutation @xmath12 . if the two input graphs are not isomorphic , it will be @xmath100 . in what follows , therefore , we will restrict our investigation on the performance of the algorithm on isomorphic instances of gi . for @xmath101 , the spin chains interact with each other . the analysis of the spectral gap of the hamiltonian ( [ eq : adiabatic ] ) is quite hard . we did not find any mean to derive analytic results about the spectral gap @xmath58 ; we can only warrant that it @xmath102 $ ] . the result follows immediately by an application of the perron - frobenius theorem @xcite . + this , together with the results about the spectra of the operators @xmath53 and @xmath52 of the previous sections , assure that the algorithm `` makes sense '' but provides no information about its efficiency : we can not rule out the possibility of @xmath58 becoming exponentially small as the input size increases . + the determination of the spectral gap for instances of size @xmath3 requires the solution of the eigenvalue problem for @xmath103 matrices by numeric means . with our computational resources , we have been able to characterize the spectral gap for graphs of at most @xmath104 vertices ( i.e. , for a system of @xmath105 qubits , evolving in a hilbert space isomorphic to @xmath106 ) . this does not allow for a study of the spectral behavior of the algorithm as a function of the input size @xcite . by direct inspection of the spectral gap , however , it is easy to see that the `` hardness '' ( i.e. @xmath58 ) of an isomorphic instance @xmath107 , of gi may depend on @xmath12 ( see figure [ fig : aqc ] ) . + the observation of this simple `` fact of life '' suggests the following strategy , that we christened _ permutation trick _ ( pt ) : try to solve the original instance @xmath108 . if at the end of the adiabatic evolution a solution is not found , modify the input instance @xmath109 , with @xmath110 . 
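the numerical inspection of the spectral gap described above can be reproduced for toy instances by building the interpolating hamiltonian directly on the restricted basis of functions from @xmath7 to @xmath7 . in the sketch below the driver is taken to be minus the hopping matrix of @xmath3 independent walks on a path graph and the schedule is the standard linear interpolation ; both choices , as well as the unit penalty weights , are assumed conventions rather than the paper 's exact definitions .

```python
import numpy as np
from itertools import product, combinations

def restricted_hamiltonians(A1, A2):
    """Driver H_B and penalty H_P on the basis of functions f: [n] -> [n]
    (one excitation per row; f[i] is the column occupied in row i)."""
    n = len(A1)
    basis = list(product(range(n), repeat=n))
    index = {f: k for k, f in enumerate(basis)}
    dim = len(basis)
    H_B = np.zeros((dim, dim))
    H_P = np.zeros((dim, dim))
    for k, f in enumerate(basis):
        for i in range(n):                      # hop row i's excitation by one column
            for c in (f[i] - 1, f[i] + 1):
                if 0 <= c < n:
                    g = f[:i] + (c,) + f[i + 1:]
                    H_B[k, index[g]] = -1.0     # assumed sign/normalisation
        # diagonal penalty: column collisions plus adjacency mismatches (unit weights assumed)
        coll = sum(f[i] == f[j] for i, j in combinations(range(n), 2))
        mism = sum(A1[i][j] != A2[f[i]][f[j]]
                   for i, j in combinations(range(n), 2) if f[i] != f[j])
        H_P[k, k] = coll + mism
    return H_B, H_P

if __name__ == "__main__":
    path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]             # path 0-1-2
    path_relabelled = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]  # path 1-0-2
    H_B, H_P = restricted_hamiltonians(path, path_relabelled)
    for s in (0.0, 0.25, 0.5, 0.75, 1.0):                # linear schedule assumed
        ev = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
        print(f"s = {s:.2f}   E1 - E0 = {ev[1] - ev[0]:.4f}")
    # this 3-vertex pair has two isomorphisms, so the final ground level is doubly
    # degenerate and the plain gap closes at s = 1; rigid graphs avoid this.
```

the zero - energy states of the penalty are exactly the collision - free , mismatch - free configurations , i.e. the isomorphisms , which is why the ground energy of the final hamiltonian vanishes iff the two graphs are isomorphic .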
+ for more significant instances we resorted to monte - carlo simulations . we used the world - line quantum monte - carlo ( qmc ) @xcite numerical scheme . the algorithm is described in the appendix c. + in order to study the dependence of the annealing time @xmath111 on the problem size @xmath3 we proceeded as follows . we generated a sample of @xmath112 isomorphic instances @xmath113 , with @xmath114 randomly extracted from the symmetric group @xmath17 ; each of the graphs @xmath115 is connected and is generated using the wolfram mathematica function * randomgraph(@xmath116 ) * ( and discarding non - connected graphs ) ; the parameter @xmath117 is the number of edges of the graph , uniformly extracted in the range @xmath118\cap \mathbb{z}$ ] : we avoided graphs with low connectivity , since they usually provide very easy gi instances . + for each instance we ran the qmc simulation for a tentative time , say @xmath119 , and up to @xmath120 times . if a solution is found , stop ; otherwise , apply the permutation trick : sample @xmath110 and try to solve @xmath121 . the algorithm fails when a solution in not found after @xmath122 applications of the permutation trick . we point out that the maximum number of monte - carlo runs for each instance is @xmath123 independently of the instance size . we define the `` annealing time '' @xmath111 as the time needed to solve _ all _ the @xmath124 instances of size @xmath3 of gi , with the help of pt . the results are shown in figure [ fig : qmc ] . we show also the number of failures of the algorithm when we run it on instances of size @xmath3 with annealing time @xmath111 , @xmath125 and without the application of pt ( @xmath126 ) ; the steep growth of the number of failures supports the conjecture that a rearrangement of the `` solution landscape '' is likely to significantly simplify the original instance , without modifying its structural properties . + in order to avoid any misunderstanding , we stress here that the results we will discuss below are inconclusive under , at least , two points of view . first , the dimension of the instances is very limited . secondly , the qmc simulation of the adiabatic scheme is not guaranteed to provide a faithful simulation of the evolution of the system @xcite . what we are presenting here , therefore , are preliminary results and observations . + the results obtained with qmc for random graphs of size up to @xmath127 vertices are shown in figure [ fig : qmc ] . the annealing time scales linearly from @xmath128 to @xmath129 . then there is some kind of `` phase transition '' : the time required to solve instances of size @xmath130 is about twice the time needed to solve the @xmath129 instances . then the annealing time grows linearly from @xmath130 to @xmath127 ( but more steeply than from @xmath131 ) . + needless to say , the reduced size of the tractable instances makes it impossible to infer anything about the behavior of the algorithm on large gi instances . the presence of `` phase transitions '' , like the one observed at @xmath132 , will most likely imply an exponential dependence of the annealing time on the input size ; the rate of such transitions , however , will determine the presence of any quantum speed - up with respect to the best classical algorithm . + since srgs can provide harder instances of gi @xcite , we tested our algorithm on instances of gi generated from srg up to @xmath133 vertices . 
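before turning to the strongly regular case , the random - graph campaign just described can be condensed into a thin classical control loop around an abstract solver : generate a connected random graph with a random edge count , relabel it by a random permutation , retry the solver a fixed number of times and apply the permutation trick on failure . the sketch below uses networkx in place of the mathematica randomgraph call and a classical matcher as a stand - in for the quantum procedure ; the edge - count range and retry budgets are placeholders .

```python
import random
import networkx as nx

def random_instance(n, rng):
    """A connected random graph on n vertices with a randomly chosen edge count,
    plus a randomly relabelled copy (an isomorphic instance by construction).
    The edge-count range [n, n*(n-1)/2] is a placeholder for the paper's choice."""
    low, high = n, n * (n - 1) // 2
    while True:
        m = rng.randint(low, high)
        G1 = nx.gnm_random_graph(n, m, seed=rng.randint(0, 10 ** 9))
        if nx.is_connected(G1):
            break
    pi = list(range(n))
    rng.shuffle(pi)
    G2 = nx.relabel_nodes(G1, dict(enumerate(pi)))
    return G1, G2

def solve_with_permutation_trick(G1, G2, solver, rng, runs=5, pt_rounds=4):
    """Run solver(G1, G2) up to `runs` times; on failure, relabel G1 by a fresh
    random permutation (the permutation trick) and retry, up to `pt_rounds` times.
    If the trick was applied, the returned map refers to the relabelled G1 and
    should be composed with the applied permutations (omitted here)."""
    n = G1.number_of_nodes()
    for _ in range(pt_rounds):
        for _ in range(runs):
            mapping = solver(G1, G2)
            if mapping is not None:
                return mapping
        pi = list(range(n))
        rng.shuffle(pi)
        G1 = nx.relabel_nodes(G1, dict(enumerate(pi)))
    return None

if __name__ == "__main__":
    rng = random.Random(0)
    G1, G2 = random_instance(6, rng)
    # classical stand-in for the quantum procedure: networkx's VF2 matcher
    def vf2(Ga, Gb):
        gm = nx.isomorphism.GraphMatcher(Ga, Gb)
        return gm.mapping if gm.is_isomorphic() else None
    print(solve_with_permutation_trick(G1, G2, vf2, rng))
```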
the class of srg is organized in families @xmath134 , where @xmath3 is the number of vertices , @xmath135 is the degree of each vertex , @xmath136 is the number of common neighbors of any two adjacent vertices and @xmath137 is the number of common neighbors shared by any two non - adjacent vertices . unfortunately the families @xmath134 are made up of at most two representatives for @xmath138 . in order for the comparison with the results obtained with randomly generated graphs to be fair , for each representative @xmath15 of the srg family @xmath134 , we generated 10 instances @xmath139 , with @xmath114 randomly extracted from @xmath17 ; as for random graphs , we define @xmath140 as the time needed to solve all the 10 @xmath3-instances . the results are shown in figure [ fig : qmcsrg ] . the annealing times for srg are usually much smaller than those needed to solve random graphs . this allowed us to push the qmc simulations with srg up to instances of @xmath133 vertices . a direct comparison with the annealing - time required by random graphs is possible for @xmath141 . for @xmath142 we found instances of random graphs of size @xmath3 that are not solved for @xmath143 , thus showing that @xmath144 . as far as small graphs are concerned , therefore , strong regularity is an advantage . annealing time as a function of the problem size @xmath6 . in the inset : the number of failures on 100 instances , for an annealing time determined as in the main figure , without the permutation trick . ] annealing time for strongly regular graphs as a function of the problem size @xmath6 . ] the @xmath0-local quantum adiabatic algorithm for gi we presented finds the isomorphism between the input graphs by finding , if it exists , a permutation matrix that maps one of the two graphs into the other . by using @xmath3 interacting quantum walks , we were able to reduce the gi problem to the search for a satisfying assignment to a @xmath0-sat formula . remarkably enough , this is done without resorting to any perturbation gadget or projective technique . + the algorithm is a true quantum algorithm . in fact , the initial hamiltonian @xmath53 ( actually a slightly modified version of it , see appendix b ) is frustration - free and stoquastic @xcite . when the two input graphs @xmath145 are isomorphic , as is the case for all the instances used in our study , the final hamiltonian @xmath146 is frustration - free and stoquastic as well . on the other side , while @xmath147 preserves the stoquasticity , it is not guaranteed to be frustration - free . this rules out the possibility of efficiently simulating our algorithm by classical means @xcite . + we can not provide a characterization of the spectral behavior @xcite of the adiabatic hamiltonian driving the system ; in the absence of analytic results , we resorted to numerics , which allow for an inspection of the spectral gap only for gi instances up to @xmath128 vertices , which is obviously largely insufficient to infer any scaling law . + with the help of monte - carlo simulations we were able to get some preliminary results about the running - time of the algorithm for random graphs and srg . there is no evidence of any quantum speed - up with respect to the best classical algorithm for gi . in fact , the ( admittedly very limited ) data on the annealing - times @xmath111 and @xmath140 , needed to solve , respectively , random and strongly regular graphs , fit very well to a scaling @xmath148 .
if the scaling were confirmed by an extended simulation campaign , we could therefore only claim ( no surprise here ) that the adiabatic procedure we defined is not equivalent to a grover search @xcite in the _ unstructured _ @xmath149-dimensional space of functions from @xmath7 to @xmath7 , nor in the @xmath150-dimensional space of permutations , since such a search would require a time @xmath151 . + from the point of view of complexity , therefore , the results we obtain are quite modest , but maybe not unexpected . aqc offers the potential advantage of being a general purpose tool ; as such it may not be the _ best _ tool for any given problem . the 2-sat problem , to which we reduce gi in our setting , provides a key example : it is in the complexity class @xmath152 , since there is an _ ad hoc _ algorithm that solves it in linear time ( a sketch of this implication - graph procedure is given at the end of this passage ) . a 2-sat problem can be straightforwardly encoded into a 2-local hamiltonian by a construction similar to the one presented in section ii and be used as the final hamiltonian of an adiabatic algorithm . in the aqc setting , however , the adiabatic hamiltonian to solve 2-sat is equal to the one used to solve the np - hard problem max-2-sat , that is , the problem of determining the maximum number of 2-literal clauses that can be simultaneously satisfied @xcite . it is possible that satisfiable 2-sat formulas or , equivalently , isomorphic instances @xmath22 , are easier to solve than unsatisfiable ( non - isomorphic ) instances : in this case , in fact , the final hamiltonian is frustration - free . this conjecture , however , remains to be proved . + the results we obtained through monte - carlo simulations must be considered with caution : it is possible that the numerical scheme ( and the parametrization ) we used does not capture some fundamental aspect of the quantum adiabatic evolution . besides , the simulations must be pushed much further to understand , in the spirit of @xcite , the real dependence of the annealing time on the size of the instances . the development of optimized and parallelized quantum monte - carlo algorithms , exploiting the computational power of multi - core cpus and gpus , will be one of the focuses of future research . however , the dimension @xmath153 of the hilbert space visited by our algorithm is such that , even by exploiting all the computational resources used in ref . @xcite , we will be able to simulate the algorithm for graphs of at most @xmath154 vertices . a real check of the performance of the procedure described in this work will be possible only by implementing it on a quantum computational device . + we thus conclude with a discussion of the difficulties one would encounter in a hardware implementation of the algorithm . + by way of example , let us consider the d - wave one quantum computer @xcite . the fact that the device implements the standard aqc paradigm , and promises to be easily scalable , makes it look like an ideal candidate for an experimental verification of our procedure . + the main issue with this reference architecture is related to the kind of interactions required by the algorithm . the current version of the device does not implement @xmath70-interactions . as a matter of fact the d - wave one is currently able to solve only problems that can be mapped into a 2-d ising problem , that is , problems that can be mapped to standard aqc hamiltonians involving only @xmath79 interactions between nearest - neighbor qubits and a transverse field @xmath155 .
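as an aside on the complexity remark above , the linear - time decision procedure for 2-sat is the classic implication - graph construction : each clause ( a or b ) contributes the implications not - a -> b and not - b -> a , and the formula is satisfiable iff no variable lies in the same strongly connected component as its negation . a compact sketch ( using networkx for the scc step rather than a hand - rolled linear - time routine ) :

```python
import networkx as nx

def two_sat(n_vars, clauses):
    """Aspvall-Plass-Tarjan 2-SAT via the implication graph.  Literals are
    non-zero ints: +i for x_i, -i for not x_i.  Returns {i: bool} or None."""
    G = nx.DiGraph()
    G.add_nodes_from([l for i in range(1, n_vars + 1) for l in (i, -i)])
    for a, b in clauses:                     # (a or b)  ==  (not a -> b) and (not b -> a)
        G.add_edge(-a, b)
        G.add_edge(-b, a)
    C = nx.condensation(G)                   # DAG of strongly connected components
    scc_of = C.graph["mapping"]              # literal -> SCC id
    if any(scc_of[i] == scc_of[-i] for i in range(1, n_vars + 1)):
        return None                          # x and not-x in one SCC: unsatisfiable
    order = {c: k for k, c in enumerate(nx.topological_sort(C))}
    # a variable is true when its literal's SCC comes after its negation's SCC
    return {i: order[scc_of[i]] > order[scc_of[-i]] for i in range(1, n_vars + 1)}

if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3): satisfiable
    print(two_sat(3, [(1, 2), (-1, 3), (-2, -3)]))
    # (x1) and (not x1), written as duplicated-literal 2-clauses: unsatisfiable
    print(two_sat(1, [(1, 1), (-1, -1)]))
```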
on the other side , returning to the hardware question , the superconducting flux - flux qubits used in the d - wave one can in principle support @xmath70 interactions @xcite , so it is possible that our scheme will become implementable in some next - generation version of the hardware . + another , somewhat minor , criticality is the mapping of the interaction - graph determined by the algorithm ( see figures [ fig : system ] and [ fig : systemgeom ] ) onto the chimera - graph ( see , for example , figure 1 of ref . @xcite for a representation of the graph ) . the _ minor - embedding _ procedure @xcite can map a complete graph onto the chimera graph with a quadratic resource overhead . this means that our interaction graph can be mapped into the d - wave graph ; what remains to be understood is the effect that such an embedding will induce on the execution time of the algorithm . + in other physical implementations , such as crystals of trapped ions @xcite , the realization of the @xmath70-hamiltonian , together with its control and the preparation of its ground state , will be quite straightforward . in this setup , however , it is the realization of the @xmath79 interactions between distant qubits that may be very challenging , and would require some sort of _ quantum bus _ @xcite . the definition of a verification scheme for our algorithm based on current technology will be the focus of future research . for the sake of self - containedness , we report here some basic definitions related to the adiabatic theorem . + given two hamiltonian operators @xmath156 and @xmath76 on @xmath157 , let us consider the time - dependent hamiltonian @xmath158 we indicate by @xmath159 and @xmath160 the instantaneous non - degenerate eigenvalues of @xmath161 and the corresponding eigenvectors . + the _ spectral gap _ of @xmath161 is defined as @xmath162 the adiabatic theorem asserts that , if the rescaling constant @xmath55 satisfies the relation @xmath163 , where @xmath164 then a system prepared at time @xmath165 in the ground state of @xmath166 will follow the instantaneous ground state @xmath167 of the rescaled hamiltonian @xmath168 and end up , at time @xmath169 , in the ground state of the hamiltonian @xmath76 . + while the value @xmath57 can usually be bounded from above by a polynomial in the system size @xmath3 , the spectral gap @xmath58 can happen to have an exponential dependence on the system size .
the preparation of the ground state of the initial hamiltonian @xmath53 ( see equation [ eq : initialham ] ) restricted to the @xmath170 sector of the hilbert space of the system can be done efficiently by adiabatic means . + in what follows we will describe the preparation of a single chain of the system . the overall initial state will then be obtained by tensorialization . + consider the initial state @xmath171 describing a chain with a single spin up at position @xmath172 . this is the ground state of the hamiltonian @xmath173 for any @xmath174 . + we let the system evolve under @xmath175 where @xmath176 the annealing time depends polynomially on the system size @xmath3 . in fact the spectral gap of ( [ eq : auxham ] ) can be analytically determined by standard techniques @xcite to be , for @xmath177 the gap is monotonically decreasing in @xmath178 and reaches its minimum at @xmath179 . for @xmath179 the gap is the gap of the isotropic @xmath70 on @xmath3 sites hamiltonian restricted to the single excitation subspace @xmath180 , that is @xmath181 the ground state of @xmath182 can therefore be prepared efficiently . + we point out that while ( [ eq : hamxy ] ) is not frustration - free , it becomes such as soon as we add two localized potential . in fact the ground state of @xmath183 is the @xmath184 state @xmath185 which minimizes @xmath186 for @xmath187 and @xmath188 . we use the world - line quantum monte - carlo algorithm to simulate the evolution of the ground - state distribution of the @xmath189 observables . for a complete account on the numerical scheme , we refer the reader to @xcite . the @xmath88 code used to simulate the system is available at https://bitbucket.org/luca_zanetti/qmc_gi/downloads . here we briefly describe the algorithm and define the parameters used in our simulations . + we first discretize the time evolution . instead of interpolating between @xmath53 ( [ eq : initialhamprime ] ) and @xmath52 ( [ eq : hf ] ) by continuously varying the parameter @xmath190 ( see ( [ eq : adiabatic ] ) ) , we take an integer _ evolution time _ @xmath55 and change the time - dependent system hamiltonian through unit steps from 0 to @xmath55 . + we approximate the evolution of the instantaneous ground state @xmath191 of the system between two interpolation steps @xmath135 and @xmath192 via the suzuki - trotter replica method : @xmath193 replicas of the system are evolved through @xmath194 metropolis moves toward the equilibrium distribution of @xmath195 at temperature @xmath196 . in our experimental campaign that the best results are obtained if we set @xmath197 . + the algorithm can be synthesized as follows : + the thermal - annealing procedure is used to reproduce the equilibrium distribution of @xmath199 of the hamiltonian ( [ eq : initialhamprime ] ) . the iterations over @xmath202 implement the permutation trick . the iterations over @xmath203 capture the non - deterministic nature of mc . + since @xmath204 the thermal state will have a support larger than the sole ground state . besides , the allowed number of metropolis moves does not guarantee that the replicas equilibrate @xcite . for these reasons , we say that the qmc procedure succeeds in finding a solution of an instance of gi when , at the final time @xmath55 , @xmath205 of the replicas are in a configuration corresponding to a solution of the given instance . 
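the world - line qmc driver itself is too long to sketch here ; as a purely classical stand - in that mirrors the outer structure just described ( a discretised schedule , several replicas , a fixed budget of metropolis moves per step on the restricted configuration space , and a success criterion based on the fraction of replicas ending in a solution ) , one can write a simulated - annealing loop over the same diagonal cost . this is explicitly not the paper 's quantum monte - carlo ; every parameter below is a placeholder .

```python
import math, random
from itertools import combinations

def cost(A1, A2, f):
    """Diagonal penalty of a restricted configuration f (column collisions plus
    adjacency mismatches), as in the earlier sketches."""
    n = len(A1)
    return (sum(f[i] == f[j] for i, j in combinations(range(n), 2))
            + sum(A1[i][j] != A2[f[i]][f[j]]
                  for i, j in combinations(range(n), 2) if f[i] != f[j]))

def anneal(A1, A2, T_steps=100, replicas=20, moves_per_step=50,
           beta_max=5.0, success_fraction=0.5, seed=0):
    """Classical simulated-annealing stand-in for the QMC driver.  Each replica
    is a function f: [n] -> [n]; Metropolis moves shift one row's excitation by
    one column (the moves allowed by the restricted driver); the inverse
    temperature grows linearly along the discretised schedule."""
    rng = random.Random(seed)
    n = len(A1)
    reps = [[rng.randrange(n) for _ in range(n)] for _ in range(replicas)]
    for k in range(1, T_steps + 1):
        beta = beta_max * k / T_steps
        for f in reps:
            for _ in range(moves_per_step):
                i = rng.randrange(n)
                new = f[i] + rng.choice((-1, 1))
                if not 0 <= new < n:
                    continue
                old_cost, old_val = cost(A1, A2, f), f[i]
                f[i] = new
                delta = cost(A1, A2, f) - old_cost
                if delta > 0 and rng.random() >= math.exp(-beta * delta):
                    f[i] = old_val                        # reject uphill move
    solved = sum(cost(A1, A2, f) == 0 for f in reps)
    return solved / replicas >= success_fraction, solved

if __name__ == "__main__":
    G = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]   # a 4-cycle
    H = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]   # relabelled 4-cycle
    ok, solved = anneal(G, H)
    print(f"success: {ok}  ({solved} of 20 replicas ended in a solution)")
```

the success test at the end mirrors the criterion described above : the run counts as solved only if at least the chosen fraction of replicas sits in a zero - cost configuration at the final step .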
in this way we are able to capture the approximate nature of the solutions provided by the qmc numerical scheme , while ruling out the possibility of finding a solution by mere chance . + in our simulations the parameters have been set to : @xmath206 .
we present a @xmath0-local quantum algorithm for graph isomorphism gi based on an adiabatic protocol . by exploiting continuous - time quantum - walks , we are able to avoid a mere diffusion over all possible configurations and to significantly reduce the dimensionality of the visited space . within this restricted space , the graph isomorphism problem can be translated into the search of a satisfying assignment to a @xmath0-sat formula without resorting to perturbation gadgets or projective techniques . we present an analysis of the execution time of the algorithm on small instances of the graph isomorphism problem and discuss the issue of an implementation of the proposed adiabatic scheme on current quantum computing hardware .
we report a new case of clear cell adenocarcinoma of the proximal urethra in a 56-year - old woman who presented with gross hematuria . urethral cystoscopy revealed a tumour protruding from the posterior urethral wall at the bladder neck . clear cell adenocarcinoma ( cca ) of the female urethra is very rare ; most information has been gained from single case reports and small case series . we report a new case in a 57-year - old woman and discuss the clinico - pathologic pattern . a 56-year - old woman presented with gross hematuria . on physical examination , bleeding from the urethral meatus was seen . urethral cystoscopy revealed a tumour protruding from the posterior urethral wall at the bladder neck . computed tomography scan of the pelvis revealed a severe thickening of the bladder wall ( fig . 1 ) . the patient underwent transurethral biopsy of the tumour that showed an invasive poorly differentiated carcinoma of the urethra . computed tomography scan : severe thickening of the bladder wall . histological examination revealed a tumour composed of nests and papillary structures ( fig . 2 ) that were lined with cells having clear cytoplasm , with hobnail cells in some areas of the tumour ( fig . 3 ) ; these cells showed severe cytologic atypia and a high mitotic rate ; tumour cells invaded all the urethral layers , but did not involve the bladder . clear cell carcinoma composed of nests and papillary structures ( he 40 ) . papillary structures lined by cells with clear cytoplasm and pleomorphic nuclei ( he 400 ) . immunohistochemical staining , using the two - step indirect immunoperoxidase technique with antibodies to prostate - specific antigen ( psa ; dako , l-1838 ) , showed no cytoplasmic reaction in the tumour cells . cca of the urethra mainly affects women and up to half of the cases develop in the context of a urethral diverticulum . the tumour was originally termed mesonephric carcinoma , and it was suggested that it probably arises from the mesonephric duct or intermediate mesodermal vestiges . however , some authors insisted on the mullerian origin of this tumour . in 1984 , pollen and dreilinger strongly supported the homogeneity between the female paraurethral duct and male prostate gland on finding positive immunohistochemical staining using antibodies to psa ( prostate - specific antigen ) and pap ( prostatic acid phosphatase ) . they have advocated that the tumour arises from the female para - urethral duct . more recently , zaviaci et al reported a neoplasm with similar histologic appearance and immunohistochemical characteristics as adenocarcinoma of skene 's paraurethral glands and ducts . in our case , the present findings support the theory that the female clear cell adenocarcinoma arises from the paraurethral duct . however , it appears that female urethral adenocarcinoma has more than one tissue of origin , with a minority arising from skene 's glands . morphologically , cca of the urethra must be differentiated from nephrogenic adenoma of the urethra , especially on biopsy . the predominance of clear cells , severe cytological atypia , high mitotic rate and necrosis favoured the diagnosis of cca . because of the rarity of cca in the urethra , the optimal treatment is unknown . it seems to be based on the localisation of the primary tumour and the presence of metastasis .
context : clear cell adenocarcinoma of the urethra is an extremely rare tumour . its histogenetic derivation remains controversial . case report : we report a new case of clear cell adenocarcinoma of the proximal urethra in a 56-year - old woman who presented with gross hematuria . urethral cystoscopy revealed a tumour protruding from the posterior urethral wall at the bladder neck . treatment consisted of urethrocystectomy with pelvic lymph node dissection . histologically , the neoplasm consisted of clear cell adenocarcinoma of the urethra . conclusion : it appears that female urethral adenocarcinoma has more than one tissue of origin .
in 1956 , as his hungarian compatriots were initiating a violent revolt against soviet rule , hans selye ( 190782 ) published what was arguably his most influential study of the relationship between stress , health and disease . written for a general as well as a medical and scientific audience and based on many years of laboratory experiments performed first at mcgill university and subsequently at selye s institute of experimental medicine and surgery at the university of montreal , the stress of life set out the principal features of what selye had originally termed the general adaptation syndrome but increasingly referred to as the stress syndrome , or more simply as stress. by exploring in turn his discovery of the concept of stress , the biological processes involved in stress reactions , and the various diseases that were thought to result from failures in the stress - fighting mechanism , such as cardiovascular and inflammatory diseases and peptic ulceration , selye claimed that he had identified an innovative approach to understanding the mosaic of life in health and disease ( selye , 1956 : ix ) . in a brief coda to his biological account of how the nervous and endocrine systems help to adjust us to the constant changes which occur in and around us ( ibid . : according to selye , people possessed a finite quantity of adaptation energy which was gradually consumed by the wear and tear of life , leading to physiological ageing and death ( selye , 1956 : 2733 ; selye , 1938a ; selye , 1938b ) . longer and healthier lives could be promoted by protecting the stores of adaptation energy , a feat achieved by living wisely in accordance with natural laws. close study of nature , selye argued , would allow people to derive some general philosophic lesson , some natural rules of conduct , in the permanent fight between altruistic and egotistic tendencies , which account for most of the stress in interpersonal relations ( selye , 1956 : 2812 ) . intercellular altruism , so too social harmony , collective survival and human satisfaction could be enhanced by interpersonal altruism or mutual inter - dependence , driven ultimately by striving for , and dispensing , a feeling of gratitude . convinced that a mature philosophy of gratitude based on biological principles offered the most constructive way of life , selye concluded his reflections on the secret of happiness with a characteristic rhetorical flourish : can the scientific study of stress help us to formulate a precise program of conduct ? can it teach us the wisdom to live a rich and meaningful life which satisfies our needs for self - expression and yet is not marred or cut short by the stresses of senseless struggles ? ( selye , 1956 : 294 ) nearly 20 years later , and apparently prompted by the disproportionate amount of interest expressed by psychologists , sociologists , anthropologists and clergymen in his earlier subjective digression into the philosophical aspects of stress , selye developed the ideas first aired in the stress of life into a more coherent argument about the promotion and maintenance of social equilibrium and individual happiness . in stress without distress , first published in 1974 , selye suggested that biological rules governing cells and organs could also be the source of a natural philosophy of life , leading to a code of behavior based on scientific principles ( selye , 1974 : 2 ) . 
arguing that the greater sense of social instability generated by the multiple stresses of modern lives made a unifying philosophy of life even more critical , selye set out the manner in which the biological mechanisms of adaptive self - organization and homeostasis should dictate social relations : the same principles must govern cooperation between entire nations : just as a person s health depends on the harmonious conduct of the organs within his body , so must the relations between individual people , and by extension between the members of families , tribes , and nations , be harmonized by the emotions and impulses of altruistic egotism that automatically ensure peaceful cooperation and remove all motives for revolutions and wars . ( selye , 1974 : 64 ) both the authoritative tone of his argument and the absence of supporting citations suggest that for selye the philosophy of altruistic egotism constituted a relatively unproblematic translation of the results of laboratory studies and personal experiences of stress into the social realm . not only could sick societies be diagnosed and healed in much the same manner that sick bodies could be identified and restored to health by scientific knowledge and clinical intervention , but the faithful application of biological principles to social organization would also ensure the prolonged physical and mental health of modern populations . the aim of this article is to problematize selye s seemingly effortless application of biology to society by exploring the social and scientific contexts that framed his ideas . i shall argue that selye s philosophy of altruistic egotism drew heavily on preceding and adjacent intellectual and cultural developments : a traditional interest in the analogy between the human body and the body politic , evident not only in the scientific writings of walter b. cannon ( 18711945 ) , for example , but also in post - war science fiction ; the emergence of psychosomatic and psychosocial medicine , which postulated links both between mind and body and between health , environment and social behaviour ; the rising prominence of cybernetic , socio - biological and biopsychosocial models of life and disease that were being fashioned by norbert wiener ( 18941964 ) , edward o. wilson ( b. 1929 ) , robert l. trivers ( b. 1943 ) , george l. engel ( 191399 ) , and others during the late 1960s and early 1970s ; and , more broadly , growing contemporary fears about rapid technological change and global political instability during the cold war . in the process , i also want to suggest that selye s recipe for health and happiness reveals key features of post - war articulations of the psychosocial : not only did scientific formulations of stress serve to shape clinical discussions of the psychosocial determinants of health as well as political debates about effective social organization and health promotion , but expanding interest in biopsychosocial accounts of disease in turn also significantly increased the popularity of the language of stress as a means of experiencing , defining and managing diverse forms of mental and physical suffering . hans selye was by no means the first scientist or clinician to highlight possible analogies between biological and social organization or to emphasize the capacity for medical science to provide solutions to personal and political problems . 
as roy porter and others have suggested , it was customary in the early modern period for engravers , essayists , cartoonists and doctors themselves to transfer the idioms of sickness and healing to the realm of politics , to refer to healing practices in order to comment on and restore the health of the body politic ( roy porter , 2001 : 20 , 33 , 220 - 49 ) , and increasingly to draw analogies between body organs and the specialized functions of various governmental agencies ( stephens , 1970 : 688 ) . this tradition persisted . during the late 19th century , for example , it became popular to compare the functions of the emergent telecommunications system with the actions of the human brain and nerves . in this instance , the analogy worked in both directions : not only did the electric telegraph carry messages and regulate the social organism in the same manner that the central and peripheral nervous systems governed the body , but the telegraph was also conversely adopted by scientists and doctors as a means of explaining the functions ( and malfunctions ) of the nerves ( morus , 1999 , 2000 ) . during the middle decades of the 20th century , the analogy between physiological and technological forms of communication was pursued even further . according to the canadian literary scholar marshall mcluhan ( 1911 - 80 ) , whose study of the impact of mass media on modern lives was informed by selye s work on stress , the telegraph constituted a social hormone , serving not only to replicate physiological processes , but also to extend the reach of the human endocrine and neurological networks into the social realm ( mcluhan , 1964 : 246 - 57 ) . while mobilizing a traditional style of social commentary , however , selye s philosophical ventures owed more to his reverence for the work of two earlier physiologists , claude bernard ( 1813 - 78 ) and walter cannon , both of whom had attached pensive codas to seminal scientific publications . in 1865 , bernard , whose notion of la fixité du milieu intérieur shaped selye s physiological approach to stress and stability , published an introduction to the study of experimental medicine . having explicated the significance of the experimental method for the study of vital phenomena , bernard devoted his final chapter to considering the wider implications of his argument . although he rejected the notion that experimental medicine corresponded in any way to a philosophic system or that it should be extended beyond the phenomena that it described , bernard did acknowledge that the progress of human knowledge and the resolution of problems that were torturing humanity required an intelligent combination of both science and philosophy ( bernard , 1957 : 218 - 26 ) . during the early 20th century , the american physiologist walter cannon , whose formulation of homoeostasis ( or physiological equilibrium ) also provided a pivotal concept for selye s subsequent accounts of adaptation and stress , continued the trend set by bernard s humanistic reflections . in the final chapter of the wisdom of the body , first published in 1932 , cannon directly examined the analogies between the body physiologic and the body politic and suggested that comparative studies of the means by which organisms retained physiological stability in the face of external environmental changes might furnish opportunities for generating or restoring industrial , domestic and social harmony ( cannon , 1939 : 305 ) .
ignoring bernard s words of caution about the inappropriate extrapolation of results from laboratory to society , cannon argued that applying the principles of homoeostasis to social organization would not only foster the stability , both physical and mental , of the members of the social organism , but also provide serenity and leisure , which are the primary conditions for wholesome recreation , for the discovery of a satisfactory and invigorating social milieu , and for the discipline and enjoyment of individual aptitudes ( ibid . : 324 ) . cannon further developed the analogy between physiological and social systems in a series of articles published during the 1930s and 1940s . in 1933 , for example , he posed the question : does the human body contain the secret of economic stabilization? starting from the premise that modern civilization was in need of urgent corrective measures in order to eliminate hunger , poverty and unemployment , and drawing explicitly on bernard s emphasis on the importance of maintaining a stable operating environment , cannon applied his model of the self - regulating human body directly to society . it seems to me , he argued , that quite possibly there are general principles of organization that may be quite as true of the body politic as they are of the body biologic. these principles comprised the effective division of labour , the establishment of an intricate system of communication and exchange of goods and services , and a central authority ( equivalent to the brain ) responsible for controlling social and economic transactions ( cannon , 1933 , 1941 ) . cannon s work provided inspiration for other social commentators . the american psychologist albert t. poffenberger ( 18851977 ) , for example , acknowledged the role of cannon s 1932 monograph in shaping his own belief that the processes responsible for maintaining both psychological and social equilibrium were analogous to those preserving physiological stability ( poffenberger , 1938 , 1950 ) . similarly , in 1936 , the american cytologist edmund v. cowdry ( 18881975 ) applauded cannon for attempting to determine whether there were any methods of regulation within the human body of any interest to those responsible for regulation within the nation ( cowdry , 1936 : 222 ) . according to cowdry , cell theory in particular provided an effective blueprint for the productive division of labour , the regulation of the manufacture and consumption of goods , and the overall maintenance of social stability . as stephen cross and william albury have argued in an exemplary discussion of the development of the organic analogy in the work of cannon and his fellow harvard physiologist lawrence j. henderson ( 18781942 ) , early-20th - century applications of the principles of physiological regulation to social organization were shaped by the social , political and economic challenges faced by inter - war american society ( cross and albury , 1987 ) . although cannon and henderson differed in their political orientation , both they and other commentators were responding to a perceived social crisis in the years following the great war , a crisis exemplified by the rise of fascism , the perilous consequences of economic depression , the eradication of cherished values and institutions by the technological tide of a new machine age , and the proliferation of contentious debates about evolutionary theory and human behaviour within the natural and social sciences ( cross and albury , 1987 : 16670 ) . 
these concerns were not confined to north america : inter - war european populations too were consumed by morbid anxieties about the decline , and impending collapse , of modern western civilization ( overy , 2009 ) . in spite of persistent reservations about the applicability of biological principles to political and social problems ( julian huxley , 1941 ; stephens , 1970 ; ingle , 1975 ) , various features of the organic analogy remained fertile concepts not only for scientists but also increasingly for social commentators and novelists . as cynthia eagle russett has argued , the notion of equilibrium that emerged partly from early 20th - century physiology constituted a critical tool in contemporary social theory , particularly in the work of the italian sociologist vilfredo pareto ( 18481923 ) and his followers at harvard ( russett , 1966 ; heyl , 1968 ) . interest in balance , equilibrium and self - regulating systems was also a feature of the work of selye s hungarian compatriot , arthur koestler ( 190583 ) , who was aware of cannon s work on the neurophysiology of emotions and familiar with debates about homoeostasis , particularly in the context of evolution ( koestler , 1968 : 154 , 26174 ) . indeed , in the ghost in the machine , koestler directly compared the behaviour of bodily organs , mental structures and social groups under conditions of stress ( koestler , 1967 : 48 , 2303 ) . in the novels of aldous huxley ( 18941963 ) , ursula le guin ( 1929 ) , john wyndham ( 190369 ) , and nobel laureate doris lessing ( 1919 ) , the maintenance of social harmony and ecological balance similarly constituted a pivotal theme : utopian ( or eventually dystopian ) societies routinely mobilized biological principles in order to justify the regulatory measures adopted to ensure the security and stability of their inhabitants ( aldous huxley , 1962 ; le guin,1974 ; wyndham , 1979 ; lessing , 1979 ) . by the time that koestler and others were debating the capacity for scientific principles to regulate society , new threats to health and happiness had emerged . in the decades following the second world war , global political reconstruction was hindered by ideological , and increasingly military , conflicts between western capitalism and eastern communism : the korean war , the hungarian revolution , the cuban missile crisis , and the vietnam war , events that clearly provided a conscious backdrop to selye s reflections ( selye , 1974 : 78 ) , all served to heighten the escalating tension between america and eastern bloc countries . although intellectual responses to these events were mixed , the fear of impending global destruction generated by the cold war and deepening concerns about the human consequences of the technological revolution and expansion of the media ( mcluhan , 1967 ; toffler , 1970 ) pervaded both scientific and fictional commentaries on the value and attainability of individual and social stability . many of these strands of mid-20th - century scientific and political ideology were already evident in selye s first philosophical enterprise in 1956 . selye himself admitted that he was greatly indebted to the philosophical physiology of bernard and cannon , acknowledging their seminal role in shaping his approach to adaptation and disease ( selye , 1956 ; selye , 1975 : 89 ) . indeed , in many ways , selye s science and philosophy were both genuine descendants of cannon s quest for a comprehensive physiology of man ( dale , 1947 ; cross and albury , 1987 ) . 
at the same time , interpersonal altruism as a means of moderating stress in human relations was based on his scientific understanding of the evolutionary significance of intercellular altruism , or collective egotism , in higher animals , that is , on a particular , and relatively nave , version of the organic analogy . for selye in 1956 , the personal benefits of following a natural code of life were self - evident : a philosophy of gratitude based on fundamental biological laws ( selye , 1956 : 301 ) would ensure a reduction in stress - related mental and physical disease and increased happiness and success . as selye s later explication of his code suggests , adhering to the principle of altruistic egotism also carried the potential to heal sick societies and to confront what many commentators regarded as an expanding burden of psychosomatic diseases afflicting modern communities . as donna haraway , gregg mitman and john parascandola have suggested , organicist and holistic ideologies permeated research in a range of scientific and social science domains during the inter - war and post - war years . studies of competition , cooperation and aggression among primate populations carried out by c. r. carpenter ( 190575 ) , the ecological theories of warder clyde allee ( 18851955 ) and alfred edwards emerson ( 18961976 ) , and lawrence henderson s investigations of both physiological and social regulation and adaptation , for example , were based on a belief not only that communities functioned as integrated organisms ( and vice versa ) , but also that it was legitimate to extrapolate directly from the biological to the social realm ( mitman , 1992 : 144 ; haraway , 1982 ; parascandola , 1971 ) . increasingly linked to totalitarian and fascist ideals , such attempts to develop natural codes of behaviour for human populations from animal studies in the laboratory or field were contested ( mitman , 1992 ) . the american economist and social scientist lawrence k. frank ( 18901968 ) , for example , strongly criticized the tendency to apply biological laws unproblematically to social problems . arguing that cultural factors rendered societies inherently more complex and discordant than natural systems , he suggested the need for more sophisticated accounts of individual and social behaviour in order to safeguard social order ( frank , 1932 : 519 , 525 ; frank , 1925 , 1928 , 1936 ) . although frank rejected the manner in which physical principles were applied indiscriminately to questions of social organization and behavioural control , he did accept one of the fundamental premises on which cannon , and later selye , based their accounts of social homoeostasis , namely that modern society itself was in some ways dysfunctional . there is a growing realization among thoughtful persons , he wrote in 1936 when he was working for the josiah macy foundation , that our culture is sick , mentally disordered , and in need of treatment ( frank , 1936 : 335 ) . citing the work of his macy - funded colleague , the american psychoanalyst helen flanders dunbar ( 190259 ) , frank suggested that a variety of symptoms of cultural disintegration were evident in modern societies : crime , mental disorders , family disorganization , juvenile delinquency , prostitution and sex offenses , and much that now passes as the result of pathological processes ( e.g. gastric ulcer) ( frank , 1936 : 336 ) . frank s reference to dunbar reveals another context for the evolution of selye s natural philosophy of life . 
along with the hungarian - born franz alexander ( 1891 - 1964 ) , and through the pages of psychosomatic medicine , founded in 1939 , and the activities of the american psychosomatic society , established three years later , dunbar and others began to promote a more holistic , organismic approach to illness that highlighted interactions between psychological and physical processes and between social circumstances and health ( powell , 1977 ) . although proponents of psychosomatic medicine did not necessarily agree about the precise relationship between mind and body , they did tend to focus collectively on what alexander referred to as the magic seven psychosomatic conditions : asthma ; essential hypertension ; rheumatoid arthritis ; peptic ulceration ; ulcerative colitis ; hyperthyroidism ; and neurodermatitis ( levenson , 1994 ; mark jackson , 2007 ) . for the psychoanalytically minded dunbar and alexander , who regarded the principle of stability , initially devised by gustav theodor fechner ( 1801 - 87 ) and subsequently developed into the constancy principle by sigmund freud ( 1856 - 1939 ) , as one of the fundamental building blocks of psychodynamic medicine ( alexander , 1960 : 35 ) , the magic seven diseases were caused primarily by repressed emotions or frustrated desires from childhood ( dunbar , 1947 ) . for others , such as the scottish physician james lorimer halliday ( 1897 - 1983 ) , the aetiology of chronic functional disorders was to be located within the structures and habits of modern societies . according to halliday , whose major study of the sick society was first published in 1948 , epidemics of peptic ulcers , gastritis and fibrositis , declining fertility , and a range of social , cultural and political problems such as high rates of unemployment , sickness absence and juvenile delinquency , were the direct result of economic rivalry , military conflict and social disintegration ( halliday , 1949 ) . from halliday s perspective , the challenge for psychosocial medicine was to acknowledge the biological reality of social sickness and to address its causes through social reintegration , rather than the familiar , individualistic strategies traditionally employed by doctors and the state ( ibid . ) . halliday s prescription for a new form of integrated medicine involved educating doctors , medical students and the public about the burden of social sickness , expanding professional and state awareness of the importance of preserving or restoring psychological health , and encouraging the emergence of a form of biopolitics that prioritized both the physical and the spiritual health of modern western populations ( ibid . : 196 - 224 ) . of course , halliday s emphasis on the social determinants of illness was not new . on the contrary , it echoed the political and utopian rhetoric of british social medicine , which was heavily influenced during the immediate post - war years by john ryle s notion of social pathology and which increasingly focused on the role of stress as an important behavioural factor in the aetiology of chronic disease ( d. porter , 1992 , 2002 ) . in north america , a programme of progressive socio - economic reform and preventative health care , similar to that adopted by ryle and his colleagues and also concerned with the impact of environmental stress on mental health , was promoted by proponents of social psychiatry and endorsed by president j. f. kennedy ( rosen , 1959 ; smith , 2008 ) .
parallels between psychosomatic and psychosocial medicine , on the one hand , and selye s formulation of stress , health and disease , on the other hand , are evident at a number of levels . in the first instance , although halliday challenged selye s preoccupation with physical rather than emotional stressors in his experimental work ( halliday , 1950 ) , even he acknowledged that his suggestion that modern western civilization had precipitated a failure of biological adaptation ( halliday , 1949 : 181 ) echoed selye s emphasis on maladaptation to modern life as the principal mechanism involved in the pathogenesis of many chronic diseases ( halliday , 1950 ) . indeed , the magic seven conditions explored by proponents of psychosomatic medicine overlapped considerably with the paradigmatic stress disorders or diseases of adaptation described by selye ( selye , 1956 : 12889 ) : not only was there pressure during the post - war years to redefine stress disorders ( viner , 1999 : 396 ) , but selye s colleagues and peers began to conflate the two traditions by referring increasingly to psychosocial stress ( levi and andersson , 1975 ) . there is also evidence that selye had read both halliday s formulation of psychosocial medicine and accounts of psychosomatic medicine by alexander and dunbar ( selye , 1974 : 1556 , 161 , 166 ) . more broadly , it is evident from the british - based journal of psychosomatic research , founded in the same year that selye first published the stress of life , that stress was becoming an increasingly important focus for researchers on both sides of the atlantic interested in psychosomatic or psychobiological approaches to health and disease . during the 1950s , 1960s and 1970s , the journal published the results of a number of animal studies , which explored the links between stress and a range of diseases , including cancer , peptic ulceration , tuberculosis , asthma , eczema , and coronary and thyroid disease . echoing earlier concerns expressed by frank and halliday that society was itself a potent stressor , from the 1960s articles also began increasingly to address the relationship between the onset of physical and mental diseases and the stress of social circumstances ; that is , to examine the impact on health of what richard h. rahe and his colleagues referred to in 1964 as a psychosocial life crisis ( rahe et al . , 1964 : 41 ) . in a series of articles published in the journal over the next year or so , rahe and thomas h. holmes elaborated the principal features of their social readjustment rating scale ( holmes and rahe , 1967 ) . based on a fusion of adolf meyer s psychobiology , in particular his use of life charts or dynamic biography to reveal the relationship between biological , psychological and sociological processes and disease , and harold g. wolff s exploration of stressful life events , rahe and holmes offered clinicians and their patients a means of quantifying life stressors and predicting , or at least explaining , illness onset ( masuda and holmes , 1967a , 1967b ) . in the same year that holmes and rahe first outlined their approach to social adjustment and disease , the american psychiatrist george l. engel ( 191399 ) was invited by members of the society for psychosomatic research to present the keynote speech at their annual conference . 
arguing that psychosomatic medicine was still in its infancy and riven with theoretical and clinical differences , engel called for a theoretician of the calibre of darwin or einstein to provide a unifying theory that allowed researchers to relate clinical and laboratory phenomena across frames of reference ( engel , 1967 : 8 ) . according to some commentators , the necessary synthesis had already been achieved by hans selye . in 1952 , in a paper originally broadcast on the third programme of the bbc , the british surgeon david le vay argued that selye s formulation of diseases of adaptation as the product of endocrine disturbances generated by stress provided the possibility of a satisfactory integration of previous approaches to the mechanisms of disease causation . for le vay , the significance of selye s work lay particularly in its application to broader social issues : selye s work is important , not only in the narrow biological field of injury and response to injury , but in relation to the much wider problems of man as a living organism set in the stresses of modern civilisation , so many and so varied and so constant in their impact . ( le vay , 1952 : 168 ) . it was precisely this belief that the scientific study of stress would provide a blueprint for protecting the physical and mental health of modern populations living in a troubled world that encouraged selye to develop a more expansive vision of how to achieve social harmony , or to manage stress without distress , in 1974 . between october and november 1956 , a series of protests against the stalinist government and soviet policies ricocheted through selye s homeland of hungary , a stark manifestation of escalating east - west hostilities during the cold war . although he had left europe over two decades earlier , hans selye remained proud of his hungarian heritage and had retained ties with his family in komárom . his father , who had been a surgeon in the austro - hungarian army and subsequently set up his own surgical clinic , had died in budapest some years earlier , but his mother was a direct casualty of escalating violence during the winter of 1956 , killed by a stray bullet as soviet troops attempted to suppress the revolution . it is difficult to establish with any certainty the impact of these events , or indeed of his own experience of pain and life - threatening illness ( selye , 1979 : xi ; selye , 1977 : 124 - 8 ) , on selye s science and philosophy . in his autobiography , selye implied that he had been relatively untroubled by the trauma of the hungarian revolution or by the emptiness generated by his mother s death , from which he felt emotionally separated by time and distance ( selye , 1977 : 66 ) . however , it is possible to detect a more critical and perhaps more poignant political edge to selye s humanistic voice in 1974 than had been present in 1956 : stress without distress constituted not merely a set of philosophical reflections , like his earlier work , but a manifesto for urgent personal and social change . stress without distress was selye s definitive attempt to translate the fruits of laboratory research on stress into the social realm .
arguing that previous strategies intended to achieve peace and happiness had largely proved unsuccessful ( selye , 1974 : 2 ) , he highlighted the growing need for a convincing philosophy with which to address momentous socio - political and cultural challenges : besides , since 1956 , technological advances in our rapidly changing world are making more and more special demands on our abilities for readaptation . now , through the media in our homes , we are facing daily new and often threatening events wherever they occur on earth ( vietnam , watergate , the middle east ) or even in outer space . on the other hand , jet travel tends to make many of us feel uprooted and virtually homeless . ever - increasing requirements for travel create the need for adaptation to different time zones , customs , languages , lodgings , and a sense of instability caused by unpredictable changes in schedules . ( selye , 1974 : 78 ) . selye s claim that humans were struggling to adapt both physically and mentally to the structures and processes of modern society was not routinely endorsed . in 1965 , the french - born microbiologist rené dubos ( 1901 - 82 ) not only covertly questioned the validity of selye s general adaptation syndrome ( dubos , 1980 : 262 - 3 ) , but also dismissed the reality of contemporary anxieties about the impact of spectacular technological developments : the dangers posed by the agitation and tensions of modern life constitute another topic for which public fears are not based on valid evidence . most city dwellers seem to fare well enough under these tensions : their mental health is on the whole as good as that of country people . indeed , there is no proof whatever that mental diseases are more common or more serious among them now than they were in the past , or than they are among primitive people . ( dubos , 1980 : 274 ) . however , in a climate of growing global political instability , when the world appeared to be in a state of permanent hostility , and in the light of an apparent rise in the prevalence of many chronic diseases , dubos s faith in the ability of humans to adapt effectively to new conditions was rejected by researchers and social commentators keen to lament the social anomie and health hazards generated by the stresses and strains of modern lives . in 1970 , the american writer alvin toffler ( b. 1928 ) explored the overwhelming sense of instability imposed by super - industrial societies , coining the term future shock to describe the shattering stress and disorientation that we induce in individuals by subjecting them to too much change in too short a time ( toffler , 1970 : 4 ) . similarly , in his concluding remarks at a symposium on the psychosocial environment and psychosomatic diseases sponsored by the world health organization in 1970 ( and at which selye presented an overview of the stress concept and its clinical applications ) , arne engström , professor of medical physics at the karolinska institute in sweden , emphasized the urgent need to mitigate the impact of dramatic technological and social change on human health , arguing that psychological and environmental stress would become one of the most important future issues both politically and ecologically ( engström , 1971 : 448 ) .
for toffler , engström and selye , like halliday and others before them , a combination of personal endeavour and political reform was required to manage the diverse threats to human and animal health , and indeed to the balance of the environment and the harmony of the cosmos , inherent in modern western lifestyles . according to toffler , the successful pursuit of happiness required people to identify and attain what john l. fuller , a geneticist at the jackson laboratory in maine , had referred to as the optimum amount of change in their lives , which in turn allowed them to achieve serenity , even in the midst of turmoil ( toffler , 1970 : 339 ) . more particularly , the antidote to future shock in toffler s view comprised a stronger commitment to democracy . to master change , he wrote in 1970 , we shall therefore need a clarification of important long - term social goals and a democratization of the way in which we arrive at them . and this means nothing less than the next political revolution in the techno - societies - a breathtaking affirmation of popular democracy ( ibid . ) . in contrast to the sweeping political changes envisaged by toffler , selye focused on a more overtly individual route to social harmony , one that explicitly dismissed the practical and theoretical values of democracy ( selye , 1974 : 247 ) and drew instead on selye s understanding of biological homoeostasis and on his earlier reflections on the nature and control of conflict and competition . in stress without distress , selye argued that peaceful cooperation between people and societies , like that between cells and organs , could only be achieved by a collective commitment to altruistic egotism . in essence , this philosophy involved recognizing the evolutionary benefits of both altruistic and egotistical tendencies and combining them , at a social level , in much the same way that multicellular organisms formed a single cooperative community in which competition was amply overcompensated by mutual assistance ( ibid . : 57 ) . cooperation was to be achieved by dispensing , and striving for , a sense of gratitude , that is , by making ourselves indispensable to , and valued by , our neighbours , an approach to social cohesion encapsulated in his personal motto . selye clearly relied on a variety of personal , intellectual and philosophical resources in order to develop the notion of altruistic egotism . according to selye himself , his belief in the psychological value of gratitude , rather than the accumulation of worldly assets , stemmed originally from his father s advice to prioritize knowledge over possessions or status following his experiences during the collapse of the hapsburg empire ( selye , 1977 : 28 ) . selye also acknowledged that his practical code of behaviour held much in common with many religious ideals , although his approach carried the advantage of being substantiated by natural laws ( selye , 1974 : 120 - 1 ) . more directly , selye based his natural philosophy not only on laboratory investigations of homoeostasis and stress , but also on the rise of systems philosophy , which was influenced largely by cybernetic studies of feedback and adaptation in individual and social life . although selye did not cite the ground - breaking study of cybernetics published by the american mathematician norbert wiener in 1948 or the subsequent attempts of karl w.
deutsch ( 1912 - 92 ) and others to apply cybernetic principles to social and political organizations ( wiener , 1948 ; deutsch , 1966 ; pickering , 2010 ) , he was clearly aware of the systems philosophy of his hungarian compatriot ervin laszlo ( b. 1932 ) and of the evident similarities between cybernetics and his own studies of stress reactions ( selye , 1974 : 64 , 113 ) . it is noticeable , however , that selye made only oblique ( and rather dismissive ) references to parallel developments in ecology and socio - biology ( selye , 1974 : 10 ) , that is , to studies of the evolution of the biological determinants of social behaviour . during the 1960s and 1970s , biologists such as robert trivers and edward wilson , both then at harvard , were deeply concerned with exploring and explaining various behavioural patterns , most notably altruism and aggression , among animal and human populations . in a seminal paper published in 1971 , trivers analysed the evolutionary significance of reciprocal altruism , highlighting in particular the psychological and cognitive complexity of altruistic behaviour in humans ( trivers , 1971 ) . several years later , wilson suggested that reciprocal , or what he termed soft - core , altruism , much like selye s altruistic egotism , offered one route to social harmony : my own estimate of the relative proportions of hard - core and soft - core altruism in human behaviour is optimistic . human beings appear to be sufficiently selfish and calculating to be capable of indefinitely greater harmony and social homeostasis ... true selfishness , if obedient to the other constraints of mammalian biology , is the key to a more nearly perfect social contract . ( wilson , 2004 : 157 ) . wilson was clearly conversant with selye s work . in his monumental overview of the field , first published in 1975 , wilson discussed selye s general adaptation syndrome in relation to the external and internal triggers of aggression . although he suggested that selye s account of adaptive processes awaited experimental validation and expressed doubts about the credibility of extrapolating directly from animal studies to debates about human behaviour , wilson accepted that aggression constituted a set of complex responses of the animal s endocrine and nervous system , programmed to be summoned up in times of stress ( wilson , 2000 : 248 ) . wilson s focus on the evolutionary biology of stress was not unusual in this period . as haraway has argued , stress became a pivotal concept in socio - biological studies of communications systems ( and their limits ) in the decades following the second world war ( haraway , 1981 : 250 ) . although selye was extremely well read in many scientific disciplines , and cited numerous studies of psychosocial stressors and their impact on health , his annotated bibliography in stress without distress included no references to ecological or socio - biological theories of aggression and altruism and only occasional allusions to studies of the factors regulating aggressive behaviour . it may be that selye was not aware of the socio - biology of wilson and trivers , of carpenter s studies of aggression and dominance among primate populations , or of biopsychosocial models of disease , which were also informed by systems theory and were being elaborated in particular by george engel and his colleagues at the university of rochester medical center ( engel , 1967 , 1977 ) .
equally , it is feasible that selye preferred to distance himself from these studies , perhaps in order to emphasize the scientific , rather than social science , basis of his theories : in the opening pages of stress without distress , selye insisted that , although he had relied on observations about the evolution of natural selfishness in living beings ( suggesting at least some acquaintance with ecological and socio - biological literature ) , discoveries in these fields were only superficially , or not at all , related to what i described as the stress syndrome ( selye , 1974 : 10 ) . it is also possible that selye wished to establish the primacy of his particular formulation of altruistic egotism over competing prescriptions for social cohesion and human happiness : the foundations for his natural philosophy of life were , after all , already apparent in 1956 , some years before the emergence of trivers s parallel notion of reciprocal altruism . at the turn of the millennium , it became fashionable for scientific experts , health psychologists , the media and government ministers ( at least in europe ) to proclaim not only that happiness could be accurately defined and quantified , but also that it could be more readily attained if modern populations implemented a relatively simple set of prescriptions for individual behaviour and social reform . according to the world database of happiness , directed by sociologist ruut veenhoven , patterns of happiness can now be measured and compared between nations and across time : evidence from the database apparently indicates that while real income has increased dramatically in most western countries , fewer people are very happy in the early years of the 21st century than 50 years ago ( world database of happiness ; tim jackson , 2009 : 40 ) . in the writings of the economist richard layard and the psychologist jonathan haidt , levels of individual and collective happiness are largely determined by family relationships , financial circumstances , work , friends , community support and health as well as by genetic predisposition , leading some commentators to construct what haidt refers to as a discrete happiness formula ( haidt , 2006 : 91 ; layard , 2006 ) . within this context , while unhappiness and stress have emerged as key ( and relatively unchallenged ) indicators of social pathology and as central targets for political intervention , contemporary formulations of the psychosocial determinants of disease have in turn amplified the figurative currency of stress as an explanation for sadness and ill - health . recent attempts to calculate and engineer happiness have often been based on an intuitive , almost transcendental , notion of happiness as a universal and timeless quality , recognizable in all cultures at all historical moments . while this approach may have some validity , it is important to recognize that current formulations of happiness also draw heavily on particular accounts of the psychosocial determinants of health and behaviour that were mapped out initially by dunbar , halliday , selye and others during the second half of the 20th century . 
as this article has argued in relation to selye s specific prescription for greater social cohesion and individual happiness , the construction of a link between psychosocial processes and health and the invention of a regulatory code of behaviour for inhabitants of the modern world were not inevitable corollaries of biological principles of stress reactions revealed in the laboratory , as selye claimed . on the contrary , selye s emphasis on balancing the seemingly contradictory evolutionary forces of egotism and altruism and his belief in the applicability of biological models of homoeostasis to social problems were contingent upon a range of scientific , social , political and cultural contexts . the relatively well - established , if occasionally contested , credibility of the organic analogy , the rising popularity of holistic , organicist approaches to sick bodies and sick societies , the spread of cybernetic models of physiological and social organization , and growing concerns to explain seemingly deviant social behaviour in biological terms all provided an important intellectual matrix for selye s natural philosophy of life . in addition , selye s recipe for social harmony , like our current preoccupations with manufacturing happiness , can be seen as the result of a constellation of anxieties ( often perhaps unsubstantiated , as dubos suggested ) about global political instability and seemingly uncontrollable technological change . from this perspective , both selye s science of stress and his pursuit of happiness were as much a product of psychosocial processes as the diseases that he struggled to explain .
in 1956 , hans selye tentatively suggested that the scientific study of stress could help us to formulate a precise program of conduct and teach us the wisdom to live a rich and meaningful life. nearly two decades later , selye expanded this limited vision of social order into a full - blown philosophy of life . in stress without distress , first published in 1974 , he proposed an ethical code of conduct designed to mitigate personal and social problems . basing his arguments on contemporary understandings of the biological processes involved in stress reactions , selye referred to this code as altruistic egotism. this article explores the origins and evolution of selye s natural philosophy of life , analysing the links between his theories and adjacent intellectual developments in biology , psychosomatic and psychosocial medicine , cybernetics and socio - biology , and situating his work in the broader cultural framework of modern western societies .
postmortem examination revealed diffuse hemorrhages in the lungs ( which did not collapse ) , splenomegaly , a pale mottled liver , and thoracic and pericardial effusions . diagnostic microbiologic examination of tracheal washes and lung tissue identified only common environmental bacteria , and tests for viruses and fecal examination for parasites were all negative . histopathologic examination of the liver revealed cystic structures containing eukaryotic parasite cells between 4 and 5 μm in diameter ( figure 1 ) . similar cells were observed in the parenchyma and blood vessels of lung and spleen ( not shown ) . on the basis of these results and clinical observations , the cause of death was determined to be acute respiratory distress due to disseminated infection with an unknown parasite . images of liver sections stained with hematoxylin and eosin ( h&e ) stain were captured at 10× magnification ( a ; scale bar = 30 μm ) and 100× magnification ( b ; scale bar = 5 μm ) . large numbers of parasite cells can be seen within well - defined cystic structures separated from the surrounding host tissue by clearly visible membranes . because attempts to identify the parasite by morphologic features were inconclusive , total dna extracted from infected organs was subjected to deep sequencing to detect molecular sequences of pathogens . dna was isolated from liver , lung , and spleen by using the qiagen dneasy blood and tissue kit ( qiagen inc . , valencia , ca , usa ) , followed by treatment with rnase ( epicenter biotechnologies , madison , wi , usa ) to remove rna . dna libraries were then generated by using the nextera dna sample prep kit ( illumina , san diego , ca , usa ) and sequenced on an illumina miseq instrument as described ( 2 ) . resulting sequence data were analyzed by using clc genomics workbench 5.5 ( clc bio , aarhus , denmark ) . briefly , low quality ( < q30 ) and short ( < 100-bp ) sequences were removed , sequences were aligned against an orangutan ( p. abelii ) genome ( 3 ) , and nonmapped sequences were subjected to de novo assembly . deep sequencing of total dna from infected tissues resulted in 2,400,000 sequences after quality trimming . de novo assembly of the remaining 50,000 sequences resulted in 293 contiguous sequences , 7 of which had high similarity to genbank sequences corresponding to taenia spp . subsequent mapping of nonhost sequences against the t. solium genome ( 4 ) resulted in 8,494 matches . on the basis of deep - sequencing results , pcr primers were used to amplify 3 mitochondrial genes informative for resolving relationships within the taeniidae ( table ) . for 12s ribosomal rna ( 12s rrna ) , primers ces12sf ( 5'-aggggataggacacagtgccagc-3' ) and ces12sr ( 5'-cggtgtgtacmtgagytaaac-3' ) were used ; for cytochrome c oxidase subunit i ( cox1 ) , published primers jb3 ( 5'-ttttttgggcatcctgaggtttat-3' ) and jb4.5 ( 5'-taaagaaagaacataatgaaaatg-3' ) were used , and for nadh dehydrogenase subunit 1 ( nad1 ) , published primers jb11 ( 5'-agattcgtaaggggcctaata-3' ) and jb12 ( 5'-accactaactaattcactttc-3' ) were used ( 5 ) . pcrs were conducted in 20-μl volumes with 1-μl dna template by using the phusion kit ( new england biolabs inc . , ipswich , ma , usa ) , cycled as follows : 98°c , 30 s ; 35 cycles of 94°c , 10 s , annealing , 30 s , 72°c , 30 s ; and final extension at 72°c for 10 min ( annealing temperatures for 12s rrna , cox1 , and nad1 were 60°c , 50°c , and 55°c , respectively ) .
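the read filtering and host subtraction described above were carried out in the commercial clc genomics workbench ; the sketch below is a hedged , open - source analogue of the same idea , not the published pipeline . it discards reads shorter than 100 bp or containing a base call below q30 ( one plausible reading of the < q30 criterion ) , maps the surviving reads to the host genome , keeps the unmapped reads , and assembles them de novo . the file names , the host reference name , and the choice of biopython , bwa , samtools and spades are assumptions made for illustration .

```python
# hedged sketch only: an open-source stand-in for the workflow described above
# (the authors used CLC Genomics Workbench 5.5). File names, the host
# reference, and the tool choices are illustrative assumptions.
import subprocess
from Bio import SeqIO  # Biopython

MIN_LEN = 100   # "< 100-bp ... removed"
MIN_Q = 30      # "< Q30 ... removed"; applied here per base, one possible
                # reading of that criterion

def read_passes(record) -> bool:
    """Keep a read only if it is long enough and has no base call below Q30."""
    quals = record.letter_annotations["phred_quality"]
    return len(record) >= MIN_LEN and min(quals) >= MIN_Q

kept = (r for r in SeqIO.parse("reads.fastq", "fastq") if read_passes(r))
print(SeqIO.write(kept, "reads.trimmed.fastq", "fastq"), "reads retained")

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

host = "ponAbe_reference.fa"  # placeholder orangutan (P. abelii) genome FASTA

run(f"bwa index {host}")                                                    # index host genome
run(f"bwa mem {host} reads.trimmed.fastq | samtools view -b -o aln.bam -")  # map reads to host
run("samtools view -b -f 4 aln.bam -o nonhost.bam")                         # SAM flag 4 = unmapped
run("samtools fastq nonhost.bam > nonhost.fastq")                           # unmapped reads only
run("spades.py -s nonhost.fastq -o nonhost_assembly")                       # de novo assembly
# contigs in nonhost_assembly/contigs.fasta could then be compared against
# GenBank (e.g. by BLAST) to flag candidate parasite sequences.
```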
* multiple sequences were chosen to capture the maximum extent of intraspecific genetic divergence within highly diverse taxa ( variants arbitrarily labeled a or b ) . amplicons underwent electrophoresis on 1% agarose gels stained with ethidium bromide , were purified with the zymoclean gel dna recovery kit ( zymo research , irvine , ca , usa ) , and were sanger - sequenced in both directions by using pcr primers on abi 3730xl dna analyzers ( applied biosystems , carlsbad , ca , usa ) at the university of wisconsin biotechnology center . sequence chromatograms were edited and assembled by using sequencher version 4.9 ( gene codes corporation , ann arbor , mi , usa ) . sequences were aligned with homologous sequences from all taeniid species in genbank as of april 7 , 2013 ( table ) . to construct phylogenetic trees , we used the maximum - likelihood method in mega5.2 software ( 6 ) . figure 2 shows phylogenetic trees of newly generated 12s rrna ( panel a ) and concatenated cox1/nad1 ( panel b ) sequences and representative taeniid sequences . the trees closely agree with recently published taeniid phylogenies ( 1 ) . concatenated cox1/nad1 sequences from the orangutan cluster with v. mustelae ( formerly , t. mustelae ) with 100% bootstrap support , placing the organism within the newly proposed genus versteria ( 1 ) with confidence . however , the new cox1 and nad1 sequences are 12% different from those of published v. mustelae sequences . this degree of divergence is equal to or greater than that separating established echinococcus and taenia spp . phylogenetic trees of the taeniidae , including newly generated sequences derived from tissues of a fatally infected bornean orangutan . trees were constructed from dna sequence alignments of 12s rrna ( a ) and concatenated cox1/nad1 ( b ) sequences from the orangutan ( versteria sp . ; bold ; accession nos . kf303339 - kf303341 ) and representative echinococcus , hydatigera , taenia , and versteria sequences from genbank ( see table ) . the maximum likelihood method was used , with the likeliest model of molecular evolution chosen for both datasets by using mega5.2 software ( 6 ) . models of molecular evolution and tree likelihood values are hky+g , -lnl = 2279.42 for 12s rrna , and gtr+g+i , -lnl = 11582.71 for cox1/nad1 . numbers next to branches indicate bootstrap values ( % ) , estimated from 1,000 resamplings of the data ( only bootstrap values of 50% or greater are shown ) . members of the newly proposed genus versteria have morphologic features that distinguish them from members of the other taeniid genera , such as miniature rostellar hooks , small scolex , rostellum , and suckers ; a short strobila ; and a small number of testes ( 1 ) . however , no such distinguishing morphologic features could be identified by microscopy in the case described here . v. mustelae tapeworms infect multiple small animal intermediate host species and have been found in the upper midwestern united states in a hunter - killed fox squirrel ( sciurus niger rufeventer ) with hepatic cysts ( 8 ) . the definitive hosts of v. mustelae tapeworms are small carnivores of the family mustelidae , such as weasels and martens ( 9 ) . the genus versteria also contains v. brachyacantha ( 10 ) tapeworms , which infect the african striped weasel ( poecilogale albinucha ) , but sequences of this species are not represented in genbank . north american v.
mustelae tapeworms are capable of asexual multiplication in the intermediate host ( 11 ) ; however , sequence data are only available for eurasian specimens ( 7 ) . the parasite described herein could thus represent a novel species or a previously genetically uncharacterized north american v. mustelae variant . similar methods have aided in the discovery of rna viruses ( 12 ) , but their application to eukaryotic pathogens has lagged , presumably because of technical challenges associated with distinguishing host from parasite dna . in this light , it is noteworthy that our efforts were greatly facilitated by the availability of an orangutan genome against which to perform in silico subtractive mapping ( 3 ) . as more host genomes become available , and as costs of equipment , reagents , and bioinformatics software decline , such methods promise to enter the diagnostic mainstream , as a complement to traditional morphologic and molecular approaches . encysted taeniid metacestodes can remain dormant for years before asexual multiplication ( 13 ) ; thus , this animal could have become infected at virtually any point in its life . rapid progression to fatal disease could indicate an underlying condition , such as immune deficiency . regarding source of infection , orangutans engage in geophagy ( 14 ) , a behavior that this animal frequently practiced , suggesting that the infectious agent could have been obtained from contaminated soil . however , other sources ( e.g. , food , water , fomites ) can not be excluded . infectious eggs could have entered the orangutan s environment through direct deposition by a definitive host or through complex pathways of environmental transport . to date , no other animals in the zoologic collections in colorado or wisconsin , where the orangutan was housed , have experienced similar disease , nor have similar infections been reported in persons , to our knowledge . in any case , this animal s rapid and severe disease progression raises concerns about the health of captive apes in similar settings . moreover , the close evolutionary relationship between orangutans and humans ( 3 ) raises concerns about the parasite s zoonotic potential .
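the novel species versus divergent variant question above rests on the roughly 12% sequence divergence reported for the cox1 and nad1 genes . as a small illustration of the simplest such measure , the sketch below computes an uncorrected pairwise ( p ) distance between two aligned sequences ; the sequences shown are invented placeholders , not the kf303339 - kf303341 or genbank v. mustelae entries , and the report does not state whether its 12% figure is an uncorrected or model - corrected distance .

```python
# Illustration only: uncorrected ("p") distance between two aligned sequences.
# The example sequences are made-up placeholders, not the published data.
def p_distance(seq_a: str, seq_b: str) -> float:
    """Proportion of differing sites, ignoring alignment gaps and ambiguous Ns."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    compared = mismatches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a in "-N" or b in "-N":   # skip gap or ambiguous positions
            continue
        compared += 1
        if a != b:
            mismatches += 1
    return mismatches / compared if compared else 0.0

orangutan_cox1 = "ATGTTTGCAGATCGTTGATTATTTTCAACTAAT"   # placeholder fragment
v_mustelae_cox1 = "ATGTTCGCAGACCGTTGGTTATTTTCTACTAAC"  # placeholder fragment
print(f"p-distance: {p_distance(orangutan_cox1, v_mustelae_cox1):.3f}")
```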
a captive juvenile bornean orangutan ( pongo pygmaeus ) died from an unknown disseminated parasitic infection . deep sequencing of dna from infected tissues , followed by gene - specific pcr and sequencing , revealed a divergent species within the newly proposed genus versteria ( cestoda : taeniidae ) . versteria may represent a previously unrecognized risk to primate health .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Pascua Yaqui Mineral Rights Act of 2005''. SEC. 2. DEFINITIONS. In this Act: (1) Secretary.--The term ``Secretary'' means the Secretary of the Interior. (2) State.--The term ``State'' means the State of Arizona. (3) Tribe.--The term ``Tribe'' means the Pascua Yaqui Tribe. SEC. 3. ACQUISITION OF SUBSURFACE MINERAL INTERESTS. (a) In General.--Not later than 180 days after the date of enactment of this Act, the Secretary, in coordination with the Attorney General of the United States and with the consent of the State, shall acquire through eminent domain the following: (1) All subsurface rights, title, and interests (including subsurface mineral interests) held by the State in the following tribally-owned parcels: (A) Lot 2, sec. 13, T. 15 S., R. 12 E., Gila and Salt River Meridian, Pima County Arizona. (B) Lot 4, W\1/2\SE\1/4\, sec. 13, T. 15 S., R. 12 E., Gila and Salt River Base & Meridian, Pima County, Arizona. (C) NW\1/4\NW\1/4\, N\1/2\NE\1/4\NW\1/4\, SW\1/ 4\NE\1/4\NW\1/4\, sec. 24, T. 15 S., R. 12 E., Gila and Salt River Base & Meridian, Pima County Arizona. (D) Lot 2 and Lots 45 through 76, sec. 19, T. 15 S., R. 13 E., Gila and Salt River Base & Meridian, Pima County, Arizona. (2) All subsurface rights, title, and interests (including subsurface mineral interests) held by the State in the following parcels held in trust for the benefit of Tribe: (A) Lots 1 through 8, sec. 14, T. 15 S., R. 12 E., Gila and Salt River Base & Meridian, Pima County, Arizona. (B) NE\1/4\SE\1/4\, E\1/2\NW\1/4\SE\1/4\, SW\1/ 4\NW\1/4\SE\1/4\, N\1/2\SE\1/4\SE\1/4\, SE\1/4\SE\1/ 4\SE\1/4\, sec. 14, T. 15 S., R. 12 E., Gila and Salt River Base & Meridian, Pima County, Arizona. (b) Consideration.--Subject to subsection (c), as consideration for the acquisition of subsurface mineral interests under subsection (a), the Secretary shall pay to the State an amount equal to the market value of the subsurface mineral interests acquired, as determined by-- (1) a mineral assessment that is-- (A) completed by a team of mineral specialists agreed to by the State and the Tribe; and (B) reviewed and accepted as complete and accurate by a certified review mineral examiner of the Bureau of Land Management; (2) a negotiation between the State and the Tribe to mutually agree on the price of the subsurface mineral interests; or (3) if the State and the Tribe cannot mutually agree on a price under paragraph (2), an appraisal report that is-- (A)(i) completed by the State in accordance with subsection (d); and (ii) reviewed by the Tribe; and (B) on a request of the Tribe to the Bureau of Indian Affairs, reviewed and accepted as complete and accurate by the Office of the Special Trustee for American Indians of the Department of the Interior. (c) Conditions of Acquisition.--The Secretary shall acquire subsurface mineral interests under subsection (a) only if-- (1) the payment to the State required under subsection (b) is accepted by the State in full consideration for the subsurface mineral interests acquired; (2) the acquisition terminates all right, title, and interest of any party other than the United States in and to the acquired subsurface mineral interests; and (3) the Tribe agrees to fully reimburse the Secretary for costs incurred by the Secretary relating to the acquisition, including payment to the State for the acquisition. 
(d) Determination of Market Value.--Notwithstanding any other provision of law, unless the State and the Tribe otherwise agree to the market value of the subsurface mineral interests acquired by the Secretary under this section, the market value of those subsurface mineral interests shall be determined in accordance with the Uniform Appraisal Standards for Federal Land Acquisition, as published by the Appraisal Institute in 2000, in cooperation with the Department of Justice and the Office of Special Trustee for American Indians of the Department of Interior. (e) Additional Terms and Conditions.--The Secretary may require such additional terms and conditions with respect to the acquisition of subsurface mineral interests under this section as the Secretary considers to be appropriate to protect the interests of the United States and any valid existing right. SEC. 4. INTERESTS TAKEN INTO TRUST. (a) Land Transferred.--Subject to subsections (b) and (c), notwithstanding any other provision of law, not later than 180 days after the date on which the Tribe makes the payment described in subsection (c), the Secretary shall take into trust for the benefit of the Tribe the subsurface rights, title, and interests, formerly reserved to the United States, to the following parcels: (1) E\1/2\NE\1/4\, SW\1/4\NE\1/4\, sec. 14, T. 15 S., R. 12 E., Gila and Salt River Base & Meridian, Pima County, Arizona. (2) W\1/2\SE\1/4\, SW\1/4\, sec. 24, T. 15 S., R. 12 E., Gila and Salt River Base & Meridian, Pima County, Arizona. (b) Exceptions.--The parcels taken into trust under subsection (a) shall not include-- (1) NE\1/4\SW\1/4\, sec. 24, except the southerly 4.19 feet thereof; (2) NW\1/4\SE\1/4\, sec. 24, except the southerly 3.52 feet thereof; or (3) S\1/2\SE\1/4\, sec. 23, T. 15 S., R. 12 E., Gila and Salt River Base & Meridian, Pima County, Arizona. (c) Consideration and Costs.--The Tribe shall pay to the Secretary only the transaction costs relating to the assessment, review, and transfer of the subsurface rights, title, and interests taken into trust under subsection (a).
Pascua Yaqui Mineral Rights Act of 2005 - Directs the Secretary of the Interior, in coordination with the Attorney General and with the consent of the state of Arizona, to acquire all subsurface rights, title, and interests (including subsurface mineral interests) held by the state in specified tribally-owned parcels and in specified parcels held in trust for the benefit of the Tribe. Requires the Secretary to pay the state, as consideration for the acquisition of subsurface mineral interests, an amount equal to their market value. Directs the Secretary to take into trust for the benefit of the Tribe the subsurface rights, title, and interests, formerly reserved to the United States, to other specified parcels. Requires the Tribe to pay to the Secretary only the transaction costs relating to the assessment, review, and transfer of the subsurface rights, title, and interests taken into trust.
linear iga bullous disease ( lad ) , epidermolysis bullosa acquisita ( eba ) and mucous membrane pemphigoid ( mmp ) are reported less frequently . interestingly , by immunoblot analysis it was found that the patient sera contained igg and iga antibodies to multiple bp180 epitopes and igg antibodies to laminin gamma-1 . a 79-year - old japanese man had suffered from plaque - type psoriasis vulgaris for 8 years and was treated with topical steroids , oral antihistamine , 5 - 10 mg oral prednisolone daily and 50 mg oral cyclosporine daily at a dermatologic clinic . after he had stopped oral prednisolone 3 months earlier , his condition worsened and he finally developed erythroderma . the patient was febrile and had scaly erythema covering most of his body and multiple tense vesicles and bullae on his trunk and extremities [ figure 1 ] . the blisters measured 5 - 20 mm in diameter , but did not show an annular arrangement . large tense blisters on the erythema . histopathological examination of the specimen obtained from a bullous lesion showed a subepidermal blister containing fibrin nets and eosinophils [ figure 2 ] . another skin biopsy from an erythematous lesion revealed a subcorneal neutrophilic infiltration forming munro 's microabscess and club - shaped extension of the epidermis . subepidermal blister with infiltration of eosinophils and lymphocytes ( h and e stain , original magnification 400 ) . indirect immunofluorescence , in which normal human skin was used as a substrate , demonstrated a high titer of circulating igg autoantibodies against the basement membrane zone ( bmz ) ( titer > 1 : 160 ) . indirect immunofluorescence using 1 m nacl - split skin revealed circulating iga and igg autoantibodies ( both titers : 1 : 40 ) that bound to the epidermal side of the split skin [ figure 3a and b ] . through an elisa using a bp180 nc16a domain recombinant protein , the index value was found to be 195.95 ( normal range : < 15 ) . indirect immunofluorescence using 1 m nacl - split normal human skin showed igg ( a ) and iga ( b ) antibodies bound to the epidermal side of the split . immunoblot analysis using normal human epidermal extracts detected circulating igg autoantibodies against the bp180 antigen [ figure 4a ] . interestingly , both igg and iga antibodies reacted with the bp180 nc16a domain recombinant protein [ figure 4b ] . in addition , igg antibodies reacted with the bp180 c - terminal domain recombinant protein [ figure 4c ] , and both iga and igg antibodies showed reactivity with the 120-kda lad-1 by immunoblot analysis using concentrated hacat cell supernatant [ figure 5a ] . furthermore , immunoblot analysis using normal human dermal extracts detected igg antibodies against a 200-kda antigen ( laminin gamma-1 ) [ figure 5b ] . ( a ) normal human epidermal extracts demonstrated that igg antibodies reacted clearly with bp180 ( lane 4 ) . ( b ) bp180 nc16a domain demonstrated both igg ( lane 3 ) and iga ( lane 4 ) antibodies . ( c ) bp180 c - terminal domain revealed igg antibodies ( lane 3 ) . ( a ) hacat cell culture supernatant revealed that both igg ( lane 3 ) and iga ( lane 4 ) antibodies reacted with 120-kda lad-1 . ( b ) normal human dermal extract demonstrated that igg reacted strongly with a 200-kda protein ( laminin gamma-1 ) ( lane 3 ) . after the patient was treated with oral prednisolone at a dose of 20 mg daily , the number of blisters decreased .
although the pathogenic mechanism of coexisting psoriasis vulgaris and subepidermal blistering skin disease is unclear , a common immunogenetic mechanism might be involved . treatments such as puva , uvb , tar , dithranol and immunomodulatory therapies have been implicated in the development of bp . it is hypothesized that anti - psoriatic treatments increase the immunogenicity of bmz proteins , resulting in a higher risk of autoantibody production . the majority of bp patients show igg autoantibodies against two major hemidesmosomal components : the 230-kda antigen ( bp230 or bpag1 ) and the 180-kda antigen ( bp180 , bpag2/type xvii collagen ) . we found that igg autoantibodies were reactive with the bp180 nc16a domain recombinant protein in both the immunoblot test and the elisa . in addition , igg autoantibodies against the c - terminal domain of bp180 were detected . immunoreactivity with the c - terminus of the bp180 ectodomain might be responsible for the scarring phenotype in patients with mmp . nakatani et al . examined the immunoreactivity of 110 bp sera against the nc16a and c - terminal domains of bp180 , and found that 21 ( 19% ) of the 110 bp sera recognized both the nc16a and c - terminal domains . lad is an autoimmune subepidermal blistering disorder characterized by linear deposits of iga at the bmz . in lad , multiple target antigens , such as the 120-kda and 97-kda lad-1 , 180-kda protein or 290-kda protein , have been demonstrated by immunoblot analysis . there is a strong antigenic relationship between the 120-kda/97-kda lad-1 and bp180 , and lad-1 was shown to be generated as a proteolytic cleavage product of the bp180 ectodomain . this 120-kda/97-kda soluble ectodomain of bp180 is recognized by autoantibodies in patients with bp and lad . our patient sera also demonstrated igg antibodies to laminin gamma-1 ( p200 ) by immunoblot analysis using dermal extracts . sera from 90% of patients with anti - p200 pemphigoid showed reactivity with laminin gamma-1 ; therefore , the name anti - laminin gamma-1 pemphigoid was proposed . many cases of anti - laminin gamma-1 pemphigoid developing in psoriasis have been reported . however , to our knowledge , there is only one case of psoriatic erythroderma associated with anti - laminin gamma-1 pemphigoid . a further observation was that our patient sera contained iga autoantibodies to the bp180 nc16a domain and lad-1 . bp is mainly characterized by igg autoantibodies , whereas the autoantibodies in lad and mmp are usually of the igg and/or iga class . a dual igg and iga anti - bmz antibody response has been associated with severity and a persistent disease course in mmp . mmp patients with a severe clinical phenotype demonstrated igg and iga reactivity to both bp180 and lad-1 . however , our patient sera contained igg and iga autoantibodies to multiple domains of bp180 without displaying the characteristic clinical features of mmp , and the skin lesions were well controlled with prednisolone . in summary , we described a subepidermal blistering skin disease associated with psoriatic erythroderma showing autoantibodies targeting various antigenic sites . notably , a dual igg and iga autoimmune response against multiple bp180 epitopes and igg autoantibodies to laminin gamma-1 were observed by immunoblot analysis . antigenic reactive regions other than the nc16a domain of bp180 may be relevant to blister formation . epitope spreading may cause this rare autoimmune response to multiple autoantigens .
we report a 79-year - old japanese man who developed subepidermal blistering skin disease after an 8-year history of psoriasis . histology of a bullous lesion revealed a subepidermal blister with a mixed inflammatory cell infiltrate and fibrin nets . indirect immunofluorescence using normal human skin sections revealed igg and iga autoantibodies in the patient serum , which bound to the epidermal side of 1 m nacl - split skin sections . immunoblot analysis revealed that both iga and igg antibodies reacted with the bp180 nc16a domain and the 120-kda lad-1 and that igg antibodies also reacted with the bp180 c - terminal domain and laminin gamma-1 . these findings indicated that autoantibodies to laminin gamma-1 and multiple epitopes in bp180 ectodomain played a role in the pathogenesis of this unique autoimmune subepidermal blistering skin disease associated with psoriasis .
Italian Coast Guard divers have found a woman's body in a corridor of a submerged section of the capsized Costa Concordia, raising to at least 12 the number of dead in the cruise liner accident. Coast Guard Cmdr. Cosimo Nicastro told The Associated Press that the body, wearing a life jacket, was found in a narrow corridor near an evacuation staging point at the ship's rear. The body was brought to Giglio, the Tuscan island where the cruise liner hit a reef and ran aground on Jan. 14. Twenty persons are missing. THIS IS A BREAKING NEWS UPDATE. Check back soon for further information. AP's earlier story is below. ROME (AP) _ Light fuel, apparently from machinery aboard the capsized Costa Concordia, was detected Saturday in the sea near the ship, Italian Coast Guard officials said Saturday. But Coast Guard spokesman Cmdr. Cosimo Nicastro says there is no indication that any of the nearly half-million gallons (2,200 metric tons) of heavy fuel oil has leaked from the ship's double-bottomed tanks. Nicastro said Saturday that the leaked substance appears to be diesel, which is used to fuel rescue boats and dinghies and as a lubricant for ship machinery. There are 185 tons of diesel and lubricants on board the crippled vessel, which is lying on its side just outside the port of the tiny Tuscan island of Giglio. Nicastro described the light fuel's presence in the sea as "very light, very superficial" and appearing to be under control. Although attention has been concentrated on the heavy fuel oil in the tanks, "we must not forget that on that ship there are oils, solvents, detergents, everything that a city of 4,000 people needs," Franco Gabrielli, the head of Italy's civil protection agency, told reporters in Giglio. Gabrielli, who is leading rescue, search and anti-pollution efforts for the Concordia, was referring to the roughly 3,200 passengers and 1,000 crew who were aboard the cruise liner when it ran into a reef near Giglio's coast on Jan.
14, and then, with the sea rushing into a 70-meter (230-foot) gash in its hull, listed and finally fell onto its side. Considering all the substances aboard the Concordia, "contamination of the environment, ladies and gentlemen, already occurred" when the cruise liner capsized, Gabrielli told a news conference. Vessels equipped with machinery to suck out the light fuel oil were in the area, Italian officials told Italian TV. Earlier on Saturday, crews removed oil-absorbing booms used to prevent environmental damage in case of a leak. Originally white, the booms were grayish. Divers resumed their search of the wreckage after data indicated the cruise ship had stabilized in the sea off Tuscany. To make it easier to enter and leave, the divers blasted more holes into the carcass of the ship. They were hoping to inspect an area where many passengers had gathered during the evacuation. They were searching for bodies or survivors, although it is considered unlikely any of the 21 missing in the accident could still be alive. The search had been suspended on Friday after the Concordia shifted, prompting fears the ship could roll off a rocky ledge of sea bed and plunge deeper into the sea. An abrupt shift could also cause a leak in the Concordia's fuel tanks, polluting the pristine waters around Giglio, part of a seven-island Tuscan archipelago. ___ Barry contributed from Milan. Andrea Foa contributed from Giglio. ||||| Updated at 11:10 a.m. ET: GIGLIO, Italy -- Italian Coast Guard divers on Saturday found a woman's body in a corridor of a submerged section of the capsized Costa Concordia, raising to at least 12 the number of dead in the cruise liner accident. Coast Guard Cmdr. Cosimo Nicastro told The Associated Press that the victim, who was wearing a life vest, was found during a particularly risky inspection of an evacuation staging point at the ship's rear. "The corridor was very narrow, and the divers' lines risked snagging" on objects in the passageway, Nicastro said. To permit the coast guard divers to get into the area, Italian navy divers had preceded them, setting off charges to blast holes for easier entrance and exit, he said. The woman's nationality and identity were not immediately known. The body was brought to Giglio, the Tuscan island where the cruise liner hit a reef and ran aground on Jan. 14. Twenty people remain missing. Search and rescue efforts for survivors and bodies have meant that an operation to remove heavy fuel in the Concordia's tanks hasn't yet begun, although specialized equipment has been standing by for days. On Saturday, light fuel, apparently from machinery aboard the capsized Costa Concordia, was detected near the ship. But Nicastro said there was no indication that any of the nearly 500,000 gallons (2,200 metric tons) of heavy fuel oil has leaked from the ship's double-bottomed tanks. He said the leaked substance appears to be diesel, which is used to fuel rescue boats and dinghies and as a lubricant for ship machinery. There are 185 tons of diesel and lubricants on board the crippled vessel, which is lying on its side just outside Giglio's port. Nicastro described the light fuel's presence in the sea as "very light, very superficial" and appearing to be under control.
Updated at 9:30 a.m. ET: The captain of the cruise ship Costa Concordia, which struck a rock and capsized off Italy, told magistrates he informed the ship's owners of the accident immediately, denying he had delayed raising the alarm, judicial sources said on Saturday. Capt. Francesco Schettino has been blamed for causing the January 13 accident in which at least 11 people died. He is under house arrest, accused of multiple manslaughter, causing a shipwreck and abandoning ship before all passengers were evacuated. His statements to prosecutors investigating the disaster, reported in the Italian press and confirmed by judicial sources, underline the growing battle between him and Costa Cruise Lines which operates the 114,500 ton vessel. According to transcripts of his questioning by prosecutors leaked to Italian media, he said that immediately after hitting the rock he sent two of his officers to the engine room to check on the state of the vessel. As soon as he realized the scale of the damage, he called Roberto Ferrarini, marine operations director for Costa Cruises. "I told him: I've got myself into a mess, there was contact with the seabed. I am telling you the truth, we passed under Giglio and there was an impact," Schettino said. "I can't remember how many times I called him in the following hour and 15 minutes. In any case, I am certain that I informed Ferrarini about everything in real time," he said, adding he had asked the company to send tug boats and helicopters. Costa Cruises Chief Executive Pier Luigi Foschi says Schettino delayed issuing the SOS and evacuation orders and gave false information to the company headquarters. "Personally, I think he wasn't honest with us," Foschi told Corriere della Sera Friday. He said the first phone conversation between Schettino and Ferrarini took place 20 minutes after the ship hit the rock. Published at 5:40 a.m. ET: Divers resumed the search of the wreckage of the capsized Costa Concordia after data indicated the cruise ship had stabilized in the sea off Tuscany. Italian coastguard spokesman Cosimo Nicastro told NBC News Saturday that the navy had punctured two holes in the carcass of the ship, which has been lying on its side near the port of Giglio island since shortly after it crashed into a reef on Jan. 13. Divers were expected to search the area around bridge number four, an emergency meeting point near to where other bodies were found. They had been hoping to reach that area for days, NBC reported. They are searching for bodies or survivors, although it is unlikely any of the missing in the accident could still be alive. The search was suspended on Friday after the Concordia shifted, prompting fears the ship could roll off a rocky ledge of sea bed and plunge deeper into the sea. There are also fears the Concordia's fuel could leak, polluting pristine waters. The Associated Press and Reuters contributed to this report.
– The death toll in the Costa Concordia tragedy rose to 12 today as rescuers found a woman's body in the cruise ship, reports AP. She was wearing a life jacket and found near an evacuation staging point. Authorities haven't identified her yet. The discovery means that 20 people are now unaccounted for, including Minnesota couple Barbara and Jerry Heil. Meanwhile, the ship's captain, Francesco Schettino, told prosecutors that he informed the ship's owners of the accident immediately. That's contrary to some accounts and a signal of the growing rift between him and Costa Cruise Lines, notes MSNBC.
t dwarfs are a spectral class of brown dwarfs distinguished by the presence of ch@xmath3 , h@xmath2o , and h@xmath2 collision - induced absorption ( cia ) in the near - infrared @xcite , and heavily pressure - broadened alkali absorption at optical wavelengths @xcite . these objects have effective temperatures ( t@xmath8s ) ranging from @xmath01300 k at the transition between l and t dwarfs @xcite to @xmath0750 k for the latest - type t dwarf 2mass 0415@xmath90935 @xcite . t dwarfs therefore comprise the coldest and intrinsically faintest brown dwarfs currently known , and as such are key objects for testing brown dwarf and extrasolar giant planet atmosphere models @xcite , probing the extreme low - mass end of the initial mass function @xcite , and expanding the census of the sun 's nearest neighbors . for the past two years , we have been conducting a wide - field ( 74% of the sky ) search for t dwarfs in the two micron all sky survey ( @xcite ; @xcite ; hereafter 2mass ) . this three - band , near - infrared ( @xmath10 ) imaging survey samples the peak of the t dwarf spectral energy distribution , and is therefore the most sensitive wide - field sky survey currently available for identifying these cold brown dwarfs . our results to date include the discovery of the bright , and therefore potentially very close ( @xmath11 @xmath12 8 pc ) t dwarf 2mass 1503 + 2525 ( @xcite ; hereafter paper i ) and three new t dwarfs identified in the southern hemisphere ( @xcite ; hereafter paper ii ) . here we present the discovery of seven new t dwarfs in both northern and southern hemispheres , all of which were verified by low resolution ( r@xmath0150 ) spectroscopic observations obtained with the irtf 3.0 m spex instrument @xcite . in @xmath13 2 we describe near - infrared imaging and spectroscopic observations of t dwarf candidates and comparison stars made using spex and other imaging instruments . in @xmath13 3 we analyze these data , classifying both the new t dwarfs and background stars , including four potential ultracool ( spectral types later than sdm7 ) subdwarfs , using the low - resolution spectra and spectral comparison stars . we also report line strengths for the 1.243/1.252 @xmath1 doublet in six t dwarfs observed at moderate resolution ( r@xmath01200 ) with spex , and proper motions for all of the t dwarf discoveries . in @xmath13 4 we discuss our results , including distance and tangential velocity estimates , signatures of gravity and/or metallicity in the near - infrared spectrum of 2mass 0034 + 0523 , and prospects for future discoveries . results are summarized in @xmath13 5 . our selection of t dwarf candidates from the 2mass working point source database ( wpsd ) is described in detail in paper i. in brief , we chose point sources with @xmath14 , @xmath15 or @xmath16 , no optical counterpart within 5@xmath17 in the usno a2.0 catalog @xcite or by visual inspection of digitized sky survey ( dss ) images , no catalogued minor planet counterpart , and @xmath18 . revised 2mass photometry for these sources is given in the all sky data release ( adr ) point source catalog , and is based on improved photometric calibration , particularly at @xmath19-band ( see cutri et al . 2003 , @xmath13iv.1.c ) . in some cases the new photometry pushes our candidates out of the initial selection criteria . however , these candidates were retained , and their adr photometry is reported throughout this article .
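as an illustration of the kind of database filtering described above , the short python sketch below applies color , optical - counterpart , minor - planet , and galactic - latitude cuts to a table of 2mass point sources . the column names and every numerical threshold are hypothetical placeholders , since the actual cuts appear only as symbols in this copy of the text ; this is a sketch of the approach , not the survey 's selection code .

import pandas as pd

J_MAX = 16.0       # placeholder faint limit , not the survey's actual cut
COLOR_MAX = 0.3    # placeholder near-infrared color threshold
B_MIN = 15.0       # placeholder galactic latitude cut (degrees)

def select_candidates(cat: pd.DataFrame) -> pd.DataFrame:
    """apply color, counterpart, and latitude cuts to a 2mass point source table."""
    blue = ((cat["j_m"] - cat["h_m"]) < COLOR_MAX) | ((cat["h_m"] - cat["k_m"]) < COLOR_MAX)
    keep = (
        (cat["j_m"] < J_MAX)
        & blue
        & ~cat["has_optical_counterpart"]   # no usno a2.0 / dss match near the source
        & ~cat["is_known_minor_planet"]     # not a catalogued asteroid
        & (cat["glat"].abs() > B_MIN)       # avoid the crowded galactic plane
    )
    return cat.loc[keep]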
to date , a total of 912 t dwarf candidates have been observed as part of this search program . follow - up near - infrared imaging observations of t dwarf candidates are required to eliminate the majority of contaminant sources . these include minor planets whose ephemerides were unknown or were not incorporated at the time of 2mass data processing , and image artifacts that remain in the 2mass wpsd . imaging observations also provide second epoch astrometry for confirmed t dwarfs that may be used to measure proper motion . we therefore conducted a series of imaging campaigns using various instrumentation on 1 - 4 m class telescopes . the spex instrument consists of a 1024@xmath201024 insb array as the primary detector for the spectrograph , and a second 512@xmath20512 insb array imaging / guiding camera with a @xmath21 field - of - view ( 0@xmath512 pixels ) . we made use of the latter detector to image t dwarf candidates during our 2003 september 17 - 19 ( ut ) irtf observing campaign . conditions varied from clear to slightly hazy with seeing between 0@xmath550@xmath57 at @xmath19-band . dithered exposure pairs of 30 s each were obtained at @xmath19-band and pair - wise subtracted for inspection . we verified that each exposure was at least as deep as the corresponding 2mass image . table 1 lists those sources that were absent in follow - up images . five of these were identified as known asteroids using the small - body search tool maintained by the jet propulsion laboratory solar system dynamics group , and were typically catalogued as asteroids after the 2mass observation . the remaining sources have ecliptic latitudes @xmath22 and are therefore also likely to be as yet uncataloged minor planets . gunn-@xmath23 band images of four new t dwarfs identified in this sample ( see @xmath13 3.1.2 ) were obtained using the palomar 1.5 m facility ccd camera on 2003 september 27 ( ut ) . conditions were clear and photometric with 0@xmath57 seeing . the ccd camera is a red - sensitive , 2048@xmath202048 , thinned detector with 24@xmath1 pixels . pixel scale on the sky is 0@xmath5378 . only the central 1024@xmath201024 region was used for a total field of view of 6@xmath245 on a side . raw images were bias - subtracted and divided by a median - combined flat field frame generated from a series of lamp on / off exposures reflected from the interior dome . all four t dwarf targets were detected in these images to sufficient signal - to - noise ( s / n @xmath25 10 ) to obtain reliable astrometry . the lick 3.0 m gemini instrument @xcite is a dual 256@xmath20256 hgcdte / insb camera with 0@xmath568 pixels and a @xmath26 field of view . j - band images of 2mass 1209@xmath91004 were obtained using this instrument on 2003 may 12 ( ut ) ; conditions were clear with seeing of 1@xmath52 . observations were similar to those described in paper i , with a dithered pair of 30 s exposures obtained and differenced to produce the final science image ; no additional calibration was required for the astrometric measurements . the aat 3.9 m iris2 near - infrared imager and spectrograph is a 1024@xmath201024 hgcdte array camera with 0@xmath54486 pixels and a 7@xmath247@xmath207@xmath247 field of view . we used this camera to observe 2mass 1231 + 0847 , 2mass 1828@xmath94849 , and 2mass 2331@xmath94718 on 2003 june 11 and 2003 september 10 and 13 ( ut ) .
these data are part of an imaging program testing the use of ch@xmath3 filters to efficiently identify and classify t dwarf candidates ; the program is described in detail in tinney et al . ( in preparation ) . conditions during the observations were non - photometric , with occasional cloud patches , although data were only obtained when the sky was clear . seeing ranged from 0@xmath592@xmath50 . targets were observed in dithered sets of three 40 s exposures in each of the ch@xmath3-s and ch@xmath3-l filters , which bisect the @xmath27-band about the 1.6 @xmath1 ch@xmath3 band . images were processed via the iris2 data reduction pipeline , which is modeled after the ukirt orac - dr pipeline , and includes bad pixel masking , flat fielding , and alignment and re - sampling of the dithered exposures to produce a final , calibrated image . a total of 66 t dwarf candidates and 33 spectral comparison stars were observed using the low - resolution prism mode of the spex spectrograph primarily over two observing runs , 21 - 23 may 2003 and 17 - 19 september 2003 ( ut ) . logs of observations for the candidates and comparison stars are given in tables 2 and 3 , respectively . conditions during the may run ranged from light to heavy cirrus with seeing of 0@xmath550@xmath59 at @xmath19-band . one object , 2mass 0034 + 0523 , was observed on 5 september 2003 ( ut ) during clear conditions with similar seeing . the prism mode of spex provides 0.7 - 2.5 @xmath1 continuous spectroscopy in a single order . using the 0@xmath55 slit , we obtained a resolution r@xmath0150 ; dispersion on the chip is 20 - 30 pixel@xmath6 . for all observations , the instrument rotator was positioned at the parallactic angle to mitigate differential color refraction across the broad spectral band observed . total integration times ranged from 12 ( bright m giant and supergiant stars ) to 1600 s , and were typically obtained in multiple pairs of 180 s exposures dithered along the chip for sky subtraction . except for a few low declination sources , the majority of objects were observed at airmasses @xmath28 . in all but one case , we observed a0 stars selected from the henry draper ( hd ) catalog shortly before or after the target observation and at a differential airmass @xmath29 0.1 for flux calibration . internal flat - field and ar arc lamps were observed immediately after the a0 star observations for instrumental calibration . six t dwarfs , including five of the discoveries presented here , were also observed using the cross - dispersed , moderate - resolution sxd mode of spex during the may and september runs . these observations are summarized in table 4 . we employed the same 0@xmath55 slit as the prism observations to obtain r@xmath01200 spectra from 0.9 - 2.4 @xmath1 in four orders ; an additional order subtending 0.81 - 0.95 @xmath1 was measured for the bright t5.5 2mass 1503 + 2525 . pixel dispersion on the chip ranged from 2.7 to 5.3 pixel@xmath6 . we observed all targets with the slit rotated to the parallactic angle in dithered pairs of 300 s each , with total integration times of 1800 - 3000 s. a0v / a0vn hd stars and internal calibration lamps were observed immediately after each target observation . all data were reduced using the spextool package @xcite . for both the prism and sxd datasets , science data were corrected for linearity , pair - wise subtracted , and divided by the corresponding median - combined flat field image .
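the two basic calibration operations named above ( pair - wise subtraction of dithered exposures and division by a median - combined flat field ) can be summarized with the short numpy sketch below . it is only an illustration of those two steps under the stated assumptions , not the spextool or orac - dr pipeline itself .

import numpy as np

def make_flat(lamp_on: np.ndarray, lamp_off: np.ndarray) -> np.ndarray:
    """median-combined, normalized flat from stacks of lamp-on / lamp-off frames."""
    diff = np.median(lamp_on, axis=0) - np.median(lamp_off, axis=0)
    return diff / np.median(diff)

def reduce_pair(frame_a: np.ndarray, frame_b: np.ndarray, flat: np.ndarray) -> np.ndarray:
    """subtract a dithered A-B exposure pair (removing sky) and divide by the flat."""
    return (frame_a - frame_b) / flat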
spectral data were optimally extracted using the default settings for aperture and background source regions , and wavelength calibration was determined from arc lamp and sky emission lines . multiple spectral observations for each source were then median - combined after scaling the spectra to match the highest signal - to - noise observation . telluric and instrumental response corrections for the science data were determined using the method outlined in @xcite . for the prism observations , line shape kernels were derived from the arc lines ; for the sxd observations , they were derived from the 1.005 @xmath1 paschen @xmath30 line in the a0 calibrator spectra . adjustments were made to the telluric spectra to compensate for differing line strengths and velocity shifts . final calibration was made by multiplying the observed target spectrum by its respective telluric correction spectrum . for the sxd data , multiple orders for each target exposure were scaled and combined using the corresponding low - resolution prism spectrum as a relative flux template . examination of the low - resolution spectra indicates that the majority of these candidate sources are late - type m dwarfs , based on the presence of weak h@xmath2o absorption at 1.4 and 1.9 @xmath1 ; co absorption at 2.3 @xmath1 ; tio and vo bands at 0.7 - 1.0 @xmath1 ; and feh and atomic line absorption at @xmath19-band . this result is not unexpected , as many of these sources have @xmath31 , implying @xmath32 35 at the detection limit of the dss plates @xcite , typical for late - type m dwarfs @xcite . the faintness of these objects also implies significant errors in their 2mass photometry , which explains in part their unusually blue @xmath33 colors . nevertheless , to better understand the nature of these contaminant sources , we classified their spectra by comparison to a suite of m- and l - type spectral standards and known objects , as listed in table 3 . a representative sample of these comparison spectra is shown in figure 1 . for the dwarfs , the strengthening of the 1.4 and 1.9 @xmath1 h@xmath2o bands , and 0.99 and 1.2 @xmath1 feh bands ; appearance of the 1.17 and 1.25 @xmath1 doublets ; weakening of the 0.76 , 0.82 , and 0.84 @xmath1 tio bands ; shift in peak flux toward 1.3 @xmath1 ; and reddening of the 1.3 - 2.4 @xmath1 spectral energy distribution are all correlated with spectral type . subdwarfs show distinctively stronger 0.99 @xmath1 feh absorption and weaker co and tio absorption than the dwarfs for equivalent h@xmath2o band strengths , as well as a shift in the peak of their spectral energy distributions to shorter wavelengths . giant and supergiant m stars exhibit much deeper h@xmath2o bands and stronger vo , tio , and co absorption , but fairly weak or absent metal hydride bands and atomic lines . using these diagnostics and the comparison spectra , we visually classified each of the observed sources that were not identified as t dwarfs . these classifications , which we estimate are accurate to within @xmath00.5 - 1.0 subtypes , are listed in table 2 ; uncertain numerical or luminosity classifications due to low s / n data are noted by a colon . a large percentage of the late - type m dwarfs exhibit paschen @xmath7 emission ( 1.09 @xmath1 ) : 33% ( 6 of 18 ) of the m7-m7.5 dwarfs and 66% ( 12 of 18 ) of the m8-m8.5 dwarfs appear to be in emission , but none of the m5-m6.5 dwarfs .
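the telluric - correction step above can be illustrated schematically : the observed a0 star is divided into an assumed smooth continuum to build a correction spectrum , which then multiplies the target . the sketch below uses a planck function at an assumed 9500 k for that continuum ; the published method also removes the a0 star 's hydrogen lines and convolves a model spectrum to the instrumental line shape , so this is a simplified stand - in , not the cited algorithm .

import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # si constants

def planck_continuum(wave_um: np.ndarray, t_eff: float = 9500.0) -> np.ndarray:
    """assumed smooth A0 continuum: Planck function at t_eff."""
    lam = wave_um * 1e-6
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * t_eff))

def telluric_correct(wave_um, target_flux, a0_flux):
    """multiply the target by a correction spectrum built from the A0 observation."""
    correction = planck_continuum(wave_um) / a0_flux    # telluric + instrument response
    correction /= np.nanmedian(correction)              # keep the relative flux scale
    return target_flux * correction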
the observed emission is consistent with the high frequency ( @xmath0100% ) of h@xmath34 emission seen in the optical spectra of m7m8 dwarfs @xcite . while most of the background sources exhibit spectral energy distributions and absorption features similar to the dwarf comparison stars , a handful of sources appear to be peculiar . these include four objects that exhibit subdwarf characteristics . as shown in figure 2 , 2mass 0142 + 0523 , 2mass 1640 + 1231 , and 2mass 1640 + 2922 all appear to be similar to or somewhat later than the sdm7.5 lsr 2036 + 5059 @xcite , with strong 0.86 , 0.99 , and 1.2 @xmath1 feh absorption ; weak 2.3 @xmath1 co absorption ; moderately strong 1.4 and 1.9 @xmath1 h@xmath2o absorption ; and a relatively blue 12.5 @xmath1 spectral slope . another source , 2mass 0041 + 3547 , exhibits spectral features similar to the l1 standard 2mass 1439 + 1929 @xcite , but has stronger 0.86 and 0.99 @xmath1 feh absorption , weaker co , and a somewhat bluer 1.32.4 @xmath1 spectral slope . this object may be a new early - type l subdwarf , similar to lsr 1610 - 0040 @xcite . however , as there currently exists no classification scheme for ultracool subdwarfs at near - infrared wavelengths , we characterize these sources as candidate subdwarfs for the time being ; further spectral analysis will be presented in a future publication . it should be noted that only four subdwarfs later than type m7 are currently known @xcite . finally , we address one additional background source , 2mass 1733 + 1529 , classified here as a dc 10 white dwarf based on its blackbody slope and absence of absorption lines ( figure 3 ) . this object was selected because of its absence in the first generation palomar sky survey ( poss - i ) @xmath35-band plate ( figure 4 , left ) , although it is present on the @xmath36-band plate as well as the @xmath36- , @xmath35- ( figure 4 , right ) , and @xmath37-band plates of poss - ii . the absence of this source in the poss - i @xmath35-band image was independently verified by m. gray and r. humphreys to a faint limit of @xmath38 19.520.0 using the minnesota automated plate scanner catalog , and by visual examination of poss - i prints available at the caltech astrophysics library . 2mass 1733 + 1529 is not listed in the @xcite white dwarf catalog ; and its small proper motion , 0@xmath505@xmath390@xmath504 yr@xmath6 , measured from lick 3 m gemini @xmath19-band imaging ( see @xmath13 3.3 ) , excludes it from the nltt @xcite and revised nltt @xcite proper motion catalogs . as this source is relatively bright in the optical ( @xmath35 = 16.9 in the usno - b1.0 catalog ; monet et al . 2003 ) , and given the absence of any obvious photographic anomaly , it is unclear as to why it was undetected on the poss - i @xmath35-band plate . it is possible that 2mass 1733 + 1529 was obscured in this image by an eclipsing or transiting faint source , such as a low - mass stellar or substellar companion . monitoring observations are planned to verify or place stringent constraints on this intriguing hypothesis . seven of the candidates listed in table 2 exhibit clear absorption features of h@xmath2o ( 1.1 , 1.4 , and 1.9 @xmath1 ) and ch@xmath3 ( 1.3 , 1.6 , and 2.2 @xmath1 ) in their low - resolution spectra , characteristic of t dwarfs . finder charts for these objects are given in figure 5 , low - resolution spectra are diagrammed in figure 6 , and spectrophotometric properties are listed in table 5 . 
these objects exhibit a broad range of ch@xmath3 band strengths and near - infrared colors ( @xmath40 ) , encompassing early- , mid- , and late - type t dwarf spectral morphologies @xcite . the new t dwarfs were classified by their low - resolution spectra following the technique of @xcite , which is based on the established mk process of spectral classification ( e.g. , morgan 1984 ) . in addition to our candidates , we observed a set of five t dwarf spectral standards : sdss 0423@xmath90414 ( t0 ) , sdss 1254@xmath90122 ( t2 ) , 2mass 2254 + 3123 ( t4 ) , 2mass 0243@xmath92453 ( t6 ) , and 2mass 0415@xmath90935 ( t8 ) . the t2 , t6 , and t8 standards are those established in @xcite , while the t0 and t4 standards were selected as part of an expanded list of spectral standards given in @xcite . to quantify our classifications , we used revised spectral indices from the latter reference which sample the major h@xmath2o and ch@xmath3 bands in the 1 - 2.5 @xmath1 region while avoiding strong telluric absorption regions . these indices are defined as : @xmath41 @xmath42 @xmath43 @xmath44 and @xmath45 where @xmath46 designates the integrated flux between wavelengths @xmath47 and @xmath48 . each index is defined as the ratio of flux at the base of the absorption band to the nearby pseudo - continuum region ( no true continuum is present in these spectra ; we therefore normalize to the local spectral maximum , or pseudo - continuum ) . these spectral indices were measured for all t dwarfs in our low - resolution spectral sample . classifications for each index ( excluding the spectral standards ) were determined as the closest match to the standard values , allowing for subtypes halfway between the standard classes ( i.e. , integer subclasses ) . final classifications for each object were determined as the mean of the individual index classifications , rounded off to the nearest 0.5 subclass . table 6 lists the spectral indices and derived classifications for the new and previously known t dwarfs . the scatter amongst individual index subtypes is typically 0.4 - 0.7 subclasses , with some objects exhibiting no scatter , justifying our 0.5 subclass precision . as a check , we compared derived subtypes to those from the literature for seven previously identified and classified t dwarfs ; all were consistent within 0.5 subclasses . we also examined the behavior of the indices with spectral type , as shown in figure 7 ; all five indices show minimal scatter about a line connecting the standard values , again consistent with the adopted classification precision . the scatter in these indices is in fact better than that seen in the classifications of @xcite and @xcite , reflecting overall higher signal - to - noise data and the improved spectral index definitions . the spectral types for the t dwarf discoveries range from t3 ( 2mass 1209@xmath91004 ) to t7 ( 2mass 0034 + 0523 ) , the former object being the earliest - type t dwarf so far identified in our 2mass search . as shown in figure 6 , the derived classifications are consistent with a monotonic increase in h@xmath2o and ch@xmath3 bandstrengths with spectral type . moderate resolution spex spectra of six t dwarfs are shown in figure 8 . these data reveal the complex h@xmath2o and ch@xmath3 molecular features in far greater detail than the prism spectra . while the s / n of the higher resolution spectra is on average lower ( 10 - 50% ) , most of the structure seen is real and repeats between the objects .
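the index - based classification described above reduces to a small amount of arithmetic : each index is the ratio of flux summed at the base of an absorption band to flux summed in a nearby pseudo - continuum window , each index is typed against the t0 - t8 standard values , and the final type is the mean of the per - index types rounded to 0.5 subclasses . the python sketch below assumes uniformly sampled spectra ; the wavelength windows and standard index values are placeholders , not the definitions elided in the equations above .

import numpy as np

def band_index(wave, flux, band, cont):
    """ratio of flux in an absorption band to a nearby pseudo-continuum window."""
    in_band = (wave >= band[0]) & (wave <= band[1])
    in_cont = (wave >= cont[0]) & (wave <= cont[1])
    return flux[in_band].sum() / flux[in_cont].sum()

# placeholder index values "measured" for the T0, T2, T4, T6, T8 standards
STANDARDS = {"h2o_1.5um": {0: 0.80, 2: 0.65, 4: 0.50, 6: 0.35, 8: 0.20}}

def classify(measured: dict) -> float:
    """mean of per-index types (nearest standard or half-way subtype), to 0.5 subclass."""
    grid = np.arange(0.0, 8.1, 1.0)          # allow integer subtypes between standards
    per_index = []
    for name, value in measured.items():
        spt = sorted(STANDARDS[name])
        ref = np.interp(grid, spt, [STANDARDS[name][s] for s in spt])
        per_index.append(grid[np.argmin(np.abs(ref - value))])
    return round(2.0 * float(np.mean(per_index))) / 2.0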
these data also resolve the 1.243/1.252 @xmath1 doublet , shown in detail in figure 8 , a key diagnostic of temperature at the brightest peak in the spectral energy distribution . pseudo - equivalent widths ( pews ; equivalent width relative to the pseudo - continuum ) for these lines were measured following the method outlined in paper ii , directly integrating over the line profiles . results are given in table 7 . the strongest lines are found in the t5 2mass 2331@xmath94718 , which has the largest pews for both the 1.243 and 1.252 @xmath1 lines ( table 7 ) . indeed , with the exception of the t5.5 2mass 0516@xmath90445 ( for which only low s / n data have been obtained ) , these are the strongest lines measured in any t dwarf to date @xcite . both 2mass 1503 + 2525 and 2mass 1231 + 0847 exhibit broadened lines , possibly due to rapid rotation or high photospheric pressure , the latter case indicative of high surface gravity . also note the nearly absent lines in the t7 2mass 0034 + 0523 , due either to its low t@xmath8 or possibly metal deficiency ( see @xmath13 4.2 ) . overall , the observed line strengths are in general agreement with previous work , with the strongest lines found amongst the t5 dwarfs and becoming progressively weaker toward the later spectral types . proper motions for the t dwarf discoveries were measured using 2mass adr data and the follow - up imaging observations described in @xmath13 2.2 . data analysis was similar to that described in paper ii . we used 2mass catalog data as first epoch astrometry , while the follow - up images , taken roughly 3 - 5 years after the 2mass observations , comprised our second epoch dataset . 2mass sources within the re - imaged areas ( excluding the t dwarf ) were matched to detected sources on the follow - up images . first - order coordinate solutions for the images were then determined by linear regression using this grid of background stars , allowing for the rejection of 3@xmath49 outliers ( i.e. , moving sources ) . for the iris2 ( ch@xmath3-s images only ) and ccd observations , @xmath015 - 60 background sources were used , yielding positional uncertainties of 0@xmath510@xmath54 , equivalent to 2mass astrometric accuracy @xcite , and proper motion uncertainties of @xmath29 0@xmath51 yr@xmath6 . for the gemini observation of 2mass 1209@xmath91004 , only four background objects were available and astrometric uncertainties were assumed to be somewhat higher . using these coordinate solutions , the second epoch position of the t dwarf was computed and its motion derived . we note that the agreement between proper motion determinations for 2mass 0034 + 0523 obtained with the palomar 60 ccd camera and aat iris2 lends confidence to the reliability of our measurements . table 8 lists the resulting proper motions . as expected for a nearby population of faint brown dwarfs , these objects have large motions in general , with 2mass 1231 + 0847 exhibiting the largest at 1@xmath555@xmath390@xmath507 yr@xmath6 . this object would be easily identified in a near - infrared proper motion survey . on the other hand , two t dwarfs , 2mass 0407 + 1514 and 2mass 2331@xmath94718 , have motions below our sensitivity limits . we discuss the associated tangential velocities for these t dwarfs below . using the derived classifications and 2mass photometry , it is possible to estimate the spectrophotometric distances of our t dwarf discoveries .
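before turning to distances , the two - epoch proper motion measurement just described can be sketched as a linear fit : matched background 2mass stars define a first - order pixel - to - sky solution for the follow - up image , and the target 's displacement over the epoch baseline gives its motion . the sketch below omits the 3@xmath49 rejection of moving sources and assumes a small field where a linear solution is adequate ; it is an illustration of the procedure , not the analysis code used here .

import numpy as np

def fit_linear_solution(pix_xy: np.ndarray, sky_radec: np.ndarray) -> np.ndarray:
    """least-squares fit of (ra, dec) = [x, y, 1] @ coeffs from matched background stars."""
    design = np.column_stack([pix_xy, np.ones(len(pix_xy))])
    coeffs, *_ = np.linalg.lstsq(design, sky_radec, rcond=None)
    return coeffs                                   # shape (3, 2), sky coordinates in degrees

def proper_motion(target_pix, target_radec_epoch1, coeffs, baseline_yr):
    """return (mu_alpha * cos(dec), mu_delta) in arcsec/yr."""
    ra2, dec2 = np.append(target_pix, 1.0) @ coeffs
    dra = (ra2 - target_radec_epoch1[0]) * np.cos(np.radians(dec2)) * 3600.0
    ddec = (dec2 - target_radec_epoch1[1]) * 3600.0
    return dra / baseline_yr, ddec / baseline_yr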
we employed the polynomial @xmath19- and @xmath50-band absolute magnitude / spectral type relations from @xcite , based on parallax measurements from their own program and from @xcite . distance estimates for each of the newly discovered t dwarfs were calculated in both bands ( with the exception of 2mass 0034 + 0523 which was not detected at @xmath50 by 2mass ) and for @xmath390.5 subclasses about the nominal classification . final distances and uncertainties were determined by the mean and standard deviation of these estimates , respectively , and are listed in table 5 . assuming that they are single sources , all of the objects listed are within 25 pc of the sun , with the most distant object being the earliest - type t dwarf 2mass 1209@xmath91004 . three objects , 2mass 0034 + 0523 , 2mass 1231 + 0847 , and 2mass 1828@xmath94849 , are at or within 10 pc from the sun within the reported uncertainties , with the first ( and latest - type ) object having the smallest estimated distance of the three ( table 5 ) . it should be noted that the uncertainties in these distance estimates do not take into account systematic deviations in the absolute magnitude / spectral type relation @xcite or possible duplicity , and the distances should be confirmed by parallax measurement . combining the distance estimates with the proper motion determinations from @xmath13 3.3 , we calculated tangential velocities for these t dwarfs , listed in table 8 . the mean @xmath51 of those t dwarfs with detected motion is 43 km s@xmath6 with a standard deviation of 28 km s@xmath6 ; including the upper limits yields a somewhat lower mean of 33 km s@xmath6 . this value is consistent with the mean @xmath51 for disk dwarfs ( 39 km s@xmath6 ; reid & hawley 2000 ) but is somewhat higher than that for field late - type m and l dwarfs ( 22 km s@xmath6 ; gizis et al . 2000 ) , suggesting that the t dwarfs in this sample may be drawn from a somewhat older population . a similar difference in the @xmath51 distribution between field l and t dwarfs has also been noted by @xcite , and a mean age difference between these classes is predicted in field substellar mass function simulations @xcite . this possible age segregation is consistent with the evolution of brown dwarfs , as an object of a given mass will evolve from warm ( l dwarf ) to cold ( t dwarf ) as it ages ; however , a larger sample must be considered before drawing any firm conclusions about the relative ages of the l and t dwarf field populations . we note that the spectrophotometric distance of 2mass 1231 + 0847 is consistent within its uncertainties with that of the nearby ( hipparcos ; perryman et al . 1997 ) k7v star gliese 471 , located roughly 8@xmath241 ( 6500 au ) to the northwest . indeed , 2mass 1231 + 0847 was also identified in a parallel search for wide brown dwarf companions to nearby stars currently being conducted by j. d. kirkpatrick . however , while their direction of motion is nearly identical ( @xmath0230@xmath52 ) , 2mass 1231 + 0847 has a proper motion nearly twice as large as gliese 471 , and is therefore not a bound companion . the t7 2mass 0034 + 0523 has the bluest @xmath33 color amongst the t dwarf discoveries , and it may be even bluer as 2mass photometry provides only an upper limit . examination of near - infrared spectral data confirms this color , as 2mass 0034 + 0523 exhibits a fairly suppressed @xmath53-band peak in comparison to the rest of the t6 - t8 dwarfs observed .
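the distance and tangential velocity estimates above amount to an absolute - magnitude / spectral - type relation , the distance modulus , and the usual v_tan = 4.74 mu d conversion . in the sketch below the polynomial coefficients are placeholders rather than the published relation , and the spectral type is encoded as 0 - 8 for t0 - t8 .

def abs_mag_j(spt: float, coeffs=(14.5, 0.30, 0.01)) -> float:
    """placeholder polynomial M_J(SpT), with SpT = 0..8 for T0..T8 (not the cited relation)."""
    return sum(c * spt**i for i, c in enumerate(coeffs))

def distance_pc(j_mag: float, spt: float) -> float:
    """spectrophotometric distance in parsecs from the J-band distance modulus."""
    return 10.0 ** ((j_mag - abs_mag_j(spt)) / 5.0 + 1.0)

def v_tan_kms(mu_arcsec_yr: float, d_pc: float) -> float:
    """tangential velocity in km/s for a proper motion in arcsec/yr."""
    return 4.74 * mu_arcsec_yr * d_pc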
this is quantified in figure 9 , which plots the logarithm of the spectral ratio @xmath54 as a function of spectral type for all of the t dwarfs observed . spex prism data are ideally suited for this measurement , as the full near - infrared spectral range is sampled in a single order and no correction is required to match the relative scalings between the @xmath19- and @xmath53-bands . the ratio exhibits a tight linear trend across the full spectral type range of t dwarfs , although there is a somewhat greater spread in values amongst the t5t6 dwarfs . 2mass 0034 + 0523 stands well above this trend , however , having the smallest value of @xmath55 , and hence the most suppressed @xmath53-band peak , in the entire sample . the spectral properties of this source are reminiscent of the peculiar t6 2mass 0937 + 2931 @xcite , which not only has a very blue near - infrared color ( @xmath56 ; paper i ) , likely due to enhanced cia h@xmath2 absorption , but also strong absorption from the pressure - broadened 0.77 @xmath1 resonance doublet and the 0.99 @xmath1 feh band @xcite . the enhanced pressure - sensitive features are symptomatic of a high pressure photosphere , which can exist on a brown dwarf with a high surface gravity ( i.e. , old and massive ) and/or a metal deficient atmosphere @xcite . indeed , strong feh and cia h@xmath2 absorption are hallmarks of cool halo subdwarf spectra ( figure 1 ) , which are themselves typically older and metal - poor , and 2mass 0937 + 2931 has been interpreted as a possible thick disk or halo brown dwarf @xcite . 2mass 0034 + 0523 , like 2mass 0937 + 2931 , also has exceedingly weak 1.243/1.252 @xmath1 lines that may be due to reduced metallicity or higher surface gravity @xcite , although its late spectral type ( and hence cool temperature ) may be the dominant factor . 2mass 0034 + 0523 does not have a large @xmath51 as might be expected for a halo star , although on an individual basis this does not rule out its membership in an older kinematic population . clearly , a more detailed study of both of these peculiar t dwarfs is needed to assess metallicity and/or gravity effects in cool brown dwarf spectra . to date , we have observed roughly 70% of our 2mass search sample , identifying 31 t dwarfs , 8 of which have estimated or measured @xcite distances within 10 pc of the sun . this is roughly consistent with the predicted numbers from paper i ( @xmath03545 t dwarfs ) , although we have uncovered less than half of the @xmath020 t dwarfs predicted to have distances less than 10 pc . many of the latter are probably late - type t dwarfs , t@xmath8 @xmath29 1000 k , too faint to be detected by 2mass beyond a few parsecs . in addition to these very low - luminosity sources , our search has identified one _ bona - fide _ ultracool halo subdwarf , the late - type sdl 2mass 0532 + 8246 @xcite , and now four candidate late - type subdwarfs requiring further verification . in retrospect , the search criteria employed are well - suited for identifying these metal - deficient objects , which have red optical / near - infrared colors , peak in flux at @xmath19-band , and exhibit relatively blue @xmath33 colors due to enhanced cia h@xmath2 absorption @xcite . such serendipitous discoveries provide a new opportunity for exploring the physical properties , particularly metallicity and age diagnostics , of very cool stars and brown dwarfs . 
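the spectral ratio plotted in figure 9 ( the @xmath53-band peak relative to the @xmath19-band peak ) can be computed directly from the prism spectra as the logarithm of the flux summed over a window at one peak to that summed over a window at the other ; the window limits in the sketch below are assumptions standing in for the elided definition .

import numpy as np

def log_band_ratio(wave_um, flux, k_window=(2.08, 2.12), j_window=(1.25, 1.29)):
    """log10 of summed flux near the K-band peak over that near the J-band peak."""
    k = flux[(wave_um >= k_window[0]) & (wave_um <= k_window[1])].sum()
    j = flux[(wave_um >= j_window[0]) & (wave_um <= j_window[1])].sum()
    return float(np.log10(k / j))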
we have discovered seven new t dwarfs in the 2mass survey with spectral types ranging from t3 to t7 , spectrophotometric distances of 8.2 to 23 pc , and proper motions from our detection limit ( 0@xmath51 yr@xmath6 ) to 1@xmath555 yr@xmath6 . this sample adds substantially to the current census of t dwarfs , now around 50 objects . we have also identified four candidate ultracool subdwarfs , including one possible early - type l subdwarf , which share the blue near - infrared colors and red optical / near - infrared colors of our t dwarf discoveries . we estimate that several t dwarfs remain to be identified in our sample , and possibly many more cool subdwarfs , yielding new targets in our quest to understand the observational properties of the lowest - mass stars and brown dwarfs in the solar neighborhood . we thank our telescope operators bill golisch , dave griep , and paul sears , and instrument specialist john rayner , for their support during the irtf observations , and the irtf tac for its generous allocation of time for this project . we also thank matt gray and roberta humphreys at the university of minnesota , jean mueller at palomar observatory , and the library staff at the caltech astrophysics library , for their assistance in obtaining , examining , and verifying poss - i images of 2mass 1733 + 1529 . we are grateful to our referee for her / his prompt and thorough review of the original manuscript . a. j. b. acknowledges support provided by nasa through hubble fellowship grant hst - hf-01137.01 awarded by the space telescope science institute , which is operated by the association of universities for research in astronomy , incorporated , under nasa contract nas5 - 26555 . k. l. c. acknowledges support from a nsf graduate research fellowship . this research has made use of the simbad database , operated at cds , strasbourg , france . poss - i , poss - ii , serc , and aao @xmath35-band images were obtained from the digitized sky survey image server maintained by the canadian astronomy data centre , which is operated by the herzberg institute of astrophysics , national research council of canada . the digitized sky survey was produced at the space telescope science institute under u.s . government grant nag w-2166 . the images of these surveys are based on photographic data obtained using the oschin schmidt telescope on palomar mountain and the uk schmidt telescope . the plates were processed into the present compressed digital form with the permission of these institutions . this publication makes use of data from the two micron all sky survey , which is a joint project of the university of massachusetts and the infrared processing and analysis center , funded by the national aeronautics and space administration and the national science foundation . 2mass data were obtained from the nasa / ipac infrared science archive , which is operated by the jet propulsion laboratory , california institute of technology , under contract with the national aeronautics and space administration . the authors wish to extend special thanks to those of hawaiian ancestry on whose sacred mountain we are privileged to be guests . electronic copies of the spectra presented here can be obtained directly from the primary author . 
[ tables 1 - 3 are not reproduced here : the tabulated values in this copy of the text were corrupted during extraction ( coordinates , photometry , and uncertainties were replaced by obfuscated e - mail strings ) , and the final table is truncated . table 1 lists the candidate sources that were absent in the follow - up images , with their 2mass observation dates , photometry , and , where available , the matching minor planet designations ( e.g. , 1999 xj197 , 2001 tl10 ) . table 2 is the log of low - resolution spex prism observations of the t dwarf candidates , giving 2mass photometry , observation dates , integration times , airmasses , the a0 v flux calibrator stars , and the adopted spectral classifications . table 3 is the corresponding log for the spectral comparison stars , with their previously published classifications and literature references . ]
protected] & 2003 may 21 & 240 & 1.18 & hd 174567 & a0 vs & m8.5v & 15 + 2mass j19165762 + 0509021 & [email protected] & [email protected] & [email protected] & 2003 sep 19 & 200 & 1.04 & hd 189920 & a0 v & vb10 , m8v & 16 + 2mass j20282035 + 0052265 & [email protected] & [email protected] & [email protected] & 2003 may 23 & 720 & 1.06 & hd 198070 & a0 vn & l3 & 17 + 2mass j20362186 + 5059503 & [email protected] & [email protected] & [email protected] & 2003 sep 18 & 720 & 1.18 & hd 194354 & a0 vs & lsr 2036 + 50 ; sdm7.5 & 15,18 + 2mass j20392378@xmath92926335 & [email protected] & [email protected] & [email protected] & 2003 may 22 & 720 & 1.57 & hd 202941 & a0 v & m6v & 19 + 2mass j20491972@xmath91944324 & [email protected] & [email protected] & [email protected] & 2003 sep 19 & 360 & 1.34 & hd 198787 & a0 v & m7.5v & 19 + 2mass j20575409@xmath90252302 & [email protected] & [email protected] & [email protected] & 2003 may 23 & 360 & 1.09 & hd 198070 & a0 vn & l1.5 & 20 + 2mass j21073169@xmath90307337 & [email protected] & [email protected] & [email protected] & 2003 may 23 & 720 & 1.10 & hd 198070 & a0 vn & m9v & 3 , 21 + 2mass j21225635 + 3656002 & [email protected] & [email protected] & [email protected] & 2003 sep 18 & 720 & 1.05 & hd 209932 & a0 v & lsr 2122 + 36 ; esdm5 & 15,18 + 2mass j22120345 + 1641093 & [email protected] & [email protected] & [email protected] & 2003 sep 19 & 240 & 1.03 & hd 210501 & a0 v & m5v & 14 + 2mass j22282889@xmath94310262 & [email protected] & [email protected] & [email protected] & 2003 sep 17 & 1080 & 2.3 - 2.4 & hd 216009 & a0 v & t6.5 & 22 + 2mass j22341394 + 2359559 & [email protected] & [email protected] & [email protected] & 2003 sep 19 & 360 & 1.02 & hd 210501 & a0 v & m9.5v & 19 + 2mass j22541892 + 3123498 & [email protected] & [email protected] & [email protected] & 2003 sep 18 & 720 & 1.04 & hd 209932 & a0 v & t4 & 2 + v * v1451 aql & [email protected] & [email protected] & [email protected] & 2003 may 22 & 12 & 1.16 & hd 185533 & a0 v & m5iii & 23 + v * z cep & [email protected] & [email protected] & [email protected] & 2003 may 22 & 72 & 1.27 & hd 203893 & a0 v & miab & 23 + sv * p 2312 & [email protected] & [email protected] & [email protected] & 2003 may 22 & 12 & 1.28 & hd 203893 & a0 v & m7iii & 23 + llccclc 2mass j00345157 + 0523050 & t7 & 2003 sep 19 & 3000 & 1.04 & hd 6457 & a0 vn + 2mass j12314753 + 0847331 & t6 & 2003 may 21 & 3000 & 1.08 & hd 111744 & a0 v + 2mass j15031961 + 2525196 & t5.5 & 2003 may 23 & 1800 & 1.01 & hd 131951 & a0 v + 2mass j18283572@xmath94849046 & t6 & 2003 sep 19 & 3000 & 2.7 - 2.8 & hd 177406 & a0 v + 2mass j19010601 + 4718136 & t5 & 2003 sep 19 & 2400 & 1.19 & hd 178207 & a0 vn + 2mass j23312378@xmath94718274 & t5 & 2003 sep 18 & 2400 & 2.5 - 2.6 & hd 216009 & a0 v + llccccc 2mass 0034 + 0523 & t7 & [email protected] & [email protected] & @xmath4 @xmath90.8 & [email protected] + 2mass 0407 + 1514 & t5.5 & [email protected] & [email protected] & [email protected] & 19@xmath393 + 2mass 1209@xmath91004 & t3 & [email protected] & [email protected] & [email protected] & [email protected] + 2mass 1231 + 0847 & t6 & [email protected] & [email protected] & [email protected] & [email protected] + 2mass 1828@xmath94849 & t6 & [email protected] & [email protected] & @[email protected] & [email protected] + 2mass 1901 + 4718 & t5 & [email protected] & [email protected] & @[email protected] & 20@xmath393 + 2mass 2331@xmath94718 & t5 & [email protected] & [email protected] & [email protected] & [email protected] + 
llcccccl + sdss 0423@xmath90414 & t0 & 0.62 & 0.95 & 0.64 & 1.07 & 0.83 & + sdss 1254@xmath90122 & t2 & 0.46 & 0.91 & 0.48 & 1.00 & 0.59 & + 2mass 2254 + 3123 & t4 & 0.36 & 0.87 & 0.39 & 0.63 & 0.31 & + 2mass 0243@xmath92453 & t6 & 0.14 & 0.66 & 0.30 & 0.36 & 0.16 & + 2mass 0415@xmath90935 & t8 & 0.04 & 0.44 & 0.18 & 0.11 & 0.05 & + + sdss 0151 + 1244 & t1@xmath391 & 0.64(0 ) & 0.95(0 ) & 0.64(0 ) & 1.03(1 ) & 0.70(1 ) & t0.5 + sdss 1750 + 1759 & t3.5 & 0.46(2 ) & 0.87(4 ) & 0.45(3 ) & 0.71(4 ) & 0.35(4 ) & t3.5 + 2mass 1503 + 2525 & t5.5 & 0.24(5 ) & 0.74(5 ) & 0.34(5 ) & 0.42(6 ) & 0.20(5 ) & t5 + 2mass 1225@xmath92739ab & t6 & 0.17(6 ) & 0.67(6 ) & 0.28(6 ) & 0.35(6 ) & 0.18(6 ) & t6 + sdss 1624 + 0029 & t6 & 0.16(6 ) & 0.71(6 ) & 0.31(6 ) & 0.34(6 ) & 0.14(6 ) & t6 + 2mass 2228@xmath94310 & t6.5 & 0.15(6 ) & 0.68(6 ) & 0.29(6 ) & 0.28(7 ) & 0.12(7 ) & t6.5 + gliese 570d & t8 & 0.06(8 ) & 0.50(8 ) & 0.20(8 ) & 0.15(8 ) & 0.10(8 ) & t8 + + 2mass 1209@xmath91004 & & 0.40(3 ) & 0.83(4 ) & 0.45(3 ) & 0.77(3 ) & 0.64(3 ) & t3 + 2mass 1901 + 4718 & & 0.28(5 ) & 0.78(5 ) & 0.36(5 ) & 0.47(5 ) & 0.24(5 ) & t5 + 2mass 2331@xmath94718 & & 0.19(6 ) & 0.72(5 ) & 0.33(5 ) & 0.49(5 ) & 0.21(5 ) & t5 + 2mass 0407 + 1514 & & 0.23(5 ) & 0.76(5 ) & 0.34(5 ) & 0.40(6 ) & 0.16(6 ) & t5.5 + 2mass 1828@xmath94849 & & 0.18(6 ) & 0.70(6 ) & 0.31(6 ) & 0.40(6 ) & 0.20(5 ) & t6 + 2mass 1231 + 0847 & & 0.18(6 ) & 0.68(6 ) & 0.27(6 ) & 0.39(6 ) & 0.17(6 ) & t6 + 2mass 0034 + 0523 & & 0.10(7 ) & 0.64(6 ) & 0.23(8 ) & 0.25(7 ) & 0.13(7 ) & t7 + llccccc 2mass 1901 + 4718 & t5 & 1.243 & [email protected] & & 1.252 & [email protected] + 2mass 2331@xmath94718 & t5 & 1.244 & [email protected] & & 1.252 & [email protected] + 2mass 1503 + 2525 & t5.5 & 1.243 & [email protected] & & 1.252 & [email protected] + 2mass 1828@xmath94849 & t6 & 1.243 & [email protected] & & 1.252 & [email protected] + 2mass 1231 + 0847 & t6 & 1.243 & [email protected] & & 1.253 & [email protected] + 2mass 0034 + 0523 & t7 & 1.241 & [email protected] & & 1.252 & [email protected] + llcccccc 2mass 0034 + 0523 & t7 & [email protected] & 72@xmath394 & 26@xmath395 & 3.07 & 17 & o + & & [email protected] & 74@xmath3913 & 27@xmath397 & 3.02 & 21 & a + 2mass 0407 + 1514 & t5.5 & @xmath4 0.09 & ... & @xmath4 8 & 5.19 & 13 & o + 2mass 1209@xmath91004 & t3 & [email protected] & 140@xmath398 & 51@xmath3912 & 4.26 & 4 & g + 2mass 1231 + 0847 & t6 & [email protected] & 228@xmath393 & 88@xmath3918 & 3.24 & 20 & a + 2mass 1828@xmath94849 & t6 & [email protected] & 50@xmath3912 & 17@xmath395 & 2.92 & 39 & a + 2mass 1901 + 4718 & t5 & [email protected] & 197@xmath393 & 35@xmath396 & 5.27 & 62 & o + 2mass 2331@xmath94718 & t5 & @xmath4 0.10 & ... & @xmath4 9 & 2.90 & 13 & a +
we present the discovery of seven new t dwarfs identified in the two micron all sky survey . low - resolution ( r ~ 150 ) 0.8 - 2.5 μm spectroscopy obtained with the irtf spex instrument reveals the characteristic h2o and ch4 bands in the spectra of these brown dwarfs . comparison to spectral standards observed with the same instrument enables us to derive classifications of t3 to t7 for the objects in this sample . moderate - resolution ( r ~ 1200 ) near - infrared spectroscopy for a subset of these discoveries reveals line strengths consistent with previously observed trends with spectral type . follow - up imaging observations provide proper motion measurements for these sources , ranging from less than 0.1 to 1.55 arcsec per year . one object , 2mass 0034 + 0523 , has a spectrophotometric distance placing it within 10 pc of the sun . this source also exhibits a depressed k - band peak reminiscent of the peculiar t dwarf 2mass 0937 + 2931 , and may be a metal - poor or old , high - mass brown dwarf . we also present low - resolution spex data for a set of m and l - type dwarf , subdwarf , and giant comparison stars used to classify 59 additional candidates identified as background stars . these are primarily m5 - m8.5 dwarfs , many exhibiting paschen beta emission , but include three candidate ultracool m subdwarfs and one possible early - type l subdwarf .
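as a quick illustration of the two standard relations behind the quantities quoted above ( the spectrophotometric distance and the proper motions ) , the sketch below applies the distance modulus and the usual proper - motion - to - tangential - velocity conversion . the absolute - magnitude lookup and the input photometry are invented placeholders , not the calibration or measurements of this work .

import math

# Hypothetical J-band absolute magnitudes per spectral type; placeholder values,
# not the relation adopted in the paper.
M_J_OF_TYPE = {"T5": 14.4, "T6": 14.9, "T7": 15.4}

def spectrophotometric_distance_pc(j_mag, spectral_type):
    """Distance in parsecs from the distance modulus m - M = 5 log10(d) - 5."""
    M_j = M_J_OF_TYPE[spectral_type]
    return 10.0 ** ((j_mag - M_j + 5.0) / 5.0)

def tangential_velocity_kms(mu_arcsec_per_yr, distance_pc):
    """v_tan [km/s] = 4.74 * mu [arcsec/yr] * d [pc]."""
    return 4.74 * mu_arcsec_per_yr * distance_pc

d = spectrophotometric_distance_pc(j_mag=15.1, spectral_type="T7")  # made-up photometry
v = tangential_velocity_kms(0.6, d)                                 # made-up proper motion
print(f"{d:.1f} pc, {v:.1f} km/s")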
overweight and obesity have become a global public health hazard : more than one billion adults are estimated to be overweight and over 400 million of them are obese ( 1 ) . overweight and obesity contribute significantly to the development of various chronic diseases such as cardiovascular disease , hypertension , diabetes mellitus , stroke , osteoarthritis , and certain cancers . the precise effect of obesity on the immune response is not well defined , although it is thought to act through a variety of immune mediators . it has been recognized that adipose tissue participates actively in inflammation and immunity , producing and releasing a variety of proinflammatory and anti - inflammatory factors ( 3 ) . besides being risk factors for some chronic diseases , overweight and obesity may also increase susceptibility to infection ; several epidemiological investigations and emerging data from clinical settings have indicated this ( 4 ) . some studies showed that the incidence of infections , especially in hospital and after surgery , is increased in overweight and obese patients compared with normal weight patients ( 5 - 7 ) . the relation of body mass index ( bmi ) with infection has not been adequately studied and the various aspects of this association have not been reviewed . some epidemiological studies have evaluated the potential association between obesity and increased risk of infection , with controversial results ( 8) . the literature so far lacks enough studies to verify obvious or expected associations between bmi and specific infections , especially community - acquired infections ( 3 ) . urinary tract infection ( uti ) is one of the most common bacterial infections encountered in outpatient and inpatient settings ( 9 ) . the relationship between bmi and uti has been explored in only a few studies , often with inconsistent findings . most previous studies were limited to diabetic patients or were conducted in a hospital setting ( 10 - 12 ) . in addition , in most of these studies no adjustment was made for confounding variables such as diabetes mellitus , a condition that is associated with both obesity and an increased risk of infection and that could therefore confound the association . as the cause - effect relationship between obesity and infection remains obscure in many infectious diseases , including uti , the aim of this study was to compare the bmi of adult patients with community - acquired uti with that of a control group in order to clarify the association between bmi and uti . this cross - sectional study was conducted from march 2012 to june 2013 in a university - affiliated hospital of semnan university of medical sciences , semnan , iran . adult patients ( 18 years of age or older ) who were referred to clinics or admitted to hospital with a diagnosis of uti were considered for participation in the study . the control group was selected from the healthy adult population who underwent medical check - ups at the same hospital and had no history of uti . lower urinary tract infection ( acute cystitis ) was defined as the acute onset of symptoms of dysuria , urgency , and frequency in the absence of fever or costovertebral - angle pain or tenderness , and in the presence of pyuria and a positive urine culture . the diagnosis of acute pyelonephritis was based on the clinical findings of fever ( > 38°c ) , flank pain and/or tenderness , with pyuria and a positive urine culture ( 13 ) . demographic factors such as gender , age and history of diabetes mellitus were collected for individuals who met the inclusion criteria .
diabetes mellitus was defined as a self - reported history of diabetes mellitus and use of oral hypoglycemic agents or insulin . individuals with a history of urinary stones or neurogenic bladder , pregnant or post - partum women , and those treated with immunosuppressive agents were excluded . weight was determined , with participants wearing lightweight clothing , using a digital electronic weighing scale accurate to 0.1 kg . height was measured to the nearest centimeter using a tape measure , with participants standing upright without shoes . bmi was calculated as the weight in kilograms divided by the square of the height in meters ( kg / m2 ) . bmi was classified as underweight ( < 18.5 ) , normal weight ( 18.5 - 24.9 ) , overweight ( 25.0 - 29.9 ) or obese ( bmi equal to or greater than 30.0 kg / m2 ) ( 14 ) . the study protocol was approved by the research council and ethical committee of the semnan university of medical sciences . data were analyzed by chi - square , student 's t - test , one - way anova and logistic regression analysis using spss version 16.00 ( spss , inc . ) . of all screened patients with uti , 116 met our inclusion criteria and were enrolled , and 156 people were selected for the control group . of these patients , 56 had upper and 60 had lower uti . eighty - one of the patients ( 69.8% ) and one hundred of the controls ( 64.1% ) were women . the gender distribution of the two groups was not statistically different ( p = 0.322 ) . the mean age of the patients was 58.5 ± 19.7 years and that of the controls was 59.4 ± 14.4 years ( p = 0.670 ) . a history of diabetes mellitus was present in 32.8% of patients and 23.1% of the control group ( p = 0.076 ) . escherichia coli was the most common pathogen cultured in patients ( 87.6% ) , followed by klebsiella spp . ( 10.2% ) . the mean bmi of the patients was 25.2 ± 4.0 kg / m2 and that of the controls was 25.1 ± 3.6 kg / m2 . the mean bmi of the patients with upper uti was 25.6 ± 4.1 kg / m2 and that of the patients with lower uti was 24.9 ± 4.0 kg / m2 . there was no significant difference between the bmi of controls and patients with any type of uti ( p = 0.573 ) . logistic regression analysis also did not show any association between bmi and uti ( or = 0.996 , 95% ci : 0.933 - 1.064 , p = 0.910 ) . recent studies on several infectious diseases have drawn attention to the association between obesity and infectious diseases . however , these associations have not been assessed across a wide range of conditions . in our study , in which patients and controls were matched for age , gender and history of diabetes mellitus , the findings showed no association between bmi and the risk of uti . also , when uti was divided into upper and lower types , there was still no significant association . in agreement with our findings , the study by hammar et al . on patients with diabetes mellitus did not find an association between bmi and an increased risk of uti ( 12 ) . another study likewise did not describe any relationship between obesity and symptomatic uti ( 15 ) . a study was also conducted to review the risk factors for infection in trauma patients , especially the importance of obesity as an independent risk factor for nosocomial infections . the earliest study in this field showed that the risk of urinary tract infection was higher in non - obese than in obese women ( 17 ) . a positive association between high bmi and uti has been reported in some previous studies . in a cohort study by semins et al . , obese patients were more likely to have a uti , especially males ; furthermore , obese females were at particularly higher risk for pyelonephritis ( 18 ) .
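as a concrete illustration of the bmi handling and a crude association measure related to the analysis described above , the following sketch computes bmi , assigns the who categories used in the study , and forms an unadjusted odds ratio with a 95% confidence interval from a 2 x 2 table . the counts are invented for illustration ; the study itself fitted a logistic regression on continuous bmi in spss , which this sketch does not reproduce .

import math

def bmi(weight_kg, height_m):
    # Weight in kilograms divided by the square of the height in meters.
    return weight_kg / height_m ** 2

def bmi_category(value):
    # WHO cutoffs quoted in the methods above.
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal weight"
    if value < 30.0:
        return "overweight"
    return "obese"

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

value = bmi(70, 1.68)                              # made-up measurements
print(round(value, 1), bmi_category(value))        # ~24.8 -> normal weight
print(odds_ratio_with_ci(20, 25, 96, 131))         # hypothetical obese/non-obese counts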
in another cohort study on adult patients that included lower uti only , the results showed that the proportion of subjects diagnosed with lower uti increased with increasing bmi , particularly in males but not in females ( 19 ) . another study aimed to assess the prevalence of uti and its risk factors among saudi diabetic patients ; bmi was significantly higher in patients with uti compared with patients without uti ( 11 ) . in a korean study , the relationship between obesity and febrile urinary tract infection in young children was evaluated . multivariate analysis revealed that obese and overweight children were more likely to have a uti than lean children ( 20 ) . studies on pregnant and postpartum women showed an increased risk of uti in obese women ( 21 , 22 ) . in a retrospective study , the authors examined the effect of bmi on the incidence of various infectious diseases in institutionalized geriatric subjects . their findings showed that subjects with a lower bmi and obese subjects had a higher incidence rate of infections , including uti , compared with normal weight subjects ( 23 ) . the differences in these findings might at least partly be explained by differences in study design , patient selection , number of samples and confounding variables . the association between obesity and infections , including uti , may be due to some confounders such as diabetes mellitus and other co - morbidities associated with obesity . in addition , some previous studies examined patients whose infections were not culture - proven . when we analyzed the data based on gender , again there was no association between bmi and urinary tract infections in either men or women . however , some studies showed that the relationship between bmi and uti was gender - dependent . for example , in a cohort study , obesity was shown to be a risk factor for uti in male patients with diabetes mellitus but not in women ( 10 ) . in another study , the results showed that lower uti increased with increasing bmi in males , but not in females ( 19 ) . some studies showed a positive association of higher bmi with surgical site infections ( 24 ) , nosocomial infections ( 25 , 26 ) , pneumonia ( 27 ) , cellulitis ( 28 , 29 ) and periodontal infections ( 30 ) . other studies showed opposing results . a study that evaluated complications after hysterectomy showed no association between bmi and the risk of infections ( 31 ) . in another study , the risk of infections was elevated among women with a bmi < 20 kg / m2 who underwent laparoscopic surgery ( 32 ) . one large epidemiological study documented that adjustment for major chronic diseases eliminated the association between obesity and pneumonia risk that had been documented in a univariate model ( 33 ) . the results of a study on complications of cardiac surgery demonstrated that obesity was a risk factor for superficial , but not deep , sternal wound infection ( 34 ) . bmi was inversely correlated with biliary bacteria , bacteremia and illness severity on bivariate and multivariate analysis ; most patients with severe biliary infections had a normal bmi , and the authors suggested that obesity may be protective in biliary infections ( 35 ) . almirall et al . reported a slightly lower risk of pneumonia among the obese individuals in their patient population ( 36 ) . one of the strengths of our study lies in matching the patients and controls for age , gender and the presence of diabetes mellitus . we also subcategorized uti into lower and upper types , so these two different conditions were examined separately .
in conclusion , our findings did not show an association between bmi and uti and do not support obesity as a risk factor for uti in adult patients . large prospective studies are needed to further clarify the association of bmi with different infections .
background : overweight and obesity have become a global public health problem over the last decades . obesity has been suggested to be a risk factor for some infections , but studies have often shown controversial findings . few studies have examined the relationship between body mass index ( bmi ) and urinary tract infection ( uti ) , showing inconsistent results . objectives : the purpose of this study was to determine the relationship between bmi and uti in adult patients . patients and methods : adult patients ( 18 years of age or older ) who were referred to clinics or admitted to hospital with a diagnosis of uti were considered for participation in the study . the control group was selected from the healthy adult population who underwent medical check - ups at the same hospital and had no history of uti . data on age , gender , history of diabetes mellitus and bmi were registered for individuals who met the inclusion criteria . results : a total of 116 patients with uti and 156 people as the control group were included in the study . the two groups were matched for age , gender and history of diabetes mellitus . the mean bmi ± sd of the patients was 25.2 ± 4.0 kg / m2 and that of the controls was 25.1 ± 3.6 kg / m2 . there was no significant correlation between bmi and uti ( p = 0.757 ) . the mean bmi ± sd of patients with upper uti was 25.6 ± 4.1 kg / m2 and that of patients with lower uti was 24.9 ± 4.0 kg / m2 . there was no significant difference between the bmi of controls and patients with any type of uti ( p = 0.573 ) . conclusions : our findings did not show an association between bmi and uti and do not support obesity as a risk factor for uti in adult patients .
MURSITPINAR Turkey/ISTANBUL American-led forces have sharply intensified air strikes in the past two days against Islamic State fighters threatening Kurds on Syria's Turkish border after the jihadists' advance began to destabilize Turkey. The coalition had conducted 21 attacks on the militants near the Syrian Kurdish town of Kobani over Monday and Tuesday and appeared to have slowed Islamic State advances there, the U.S. military said, but cautioned the situation remained fluid. U.S. President Barack Obama voiced deep concern on Tuesday about the situation in Kobani as well as in Iraq's Anbar province, which U.S. troops fought to secure during the Iraq war and is now at risk of being seized by Islamic State militants. "Coalition air strikes will continue in both of these areas," Obama told military leaders from coalition partners including Turkey, Arab states and Western allies during a meeting outside Washington. The fight against Islamic State will be among the items on the agenda when Obama holds a video conference on Wednesday with British, French, German and Italian leaders, the White House said. War on the militants in Syria is threatening to unravel a delicate peace in neighboring Turkey where Kurds are furious with Ankara over its refusal to help protect their kin in Syria. The plight of the Syrian Kurds in Kobani provoked riots among Turkey's 15 million Kurds last week in which at least 35 people were killed. Turkish warplanes were reported to have attacked Kurdish rebel targets in southeast Turkey after the army said it had been attacked by the banned PKK Kurdish militant group, risking reigniting a three-decade conflict that killed 40,000 people before a ceasefire was declared two years ago. Kurds inside Kobani said the U.S.-led strikes on Islamic State had helped, but that the militants, who have besieged the town for weeks, were still on the attack. "Today there were air strikes throughout the day, which is a first. And sometimes we saw one plane carrying out two strikes, dropping two bombs at a time," said Abdulrahman Gok, a journalist with a local Kurdish paper who is inside the town. "The strikes are still continuing," he said by telephone, as an explosion sounded in the background. "In the afternoon, Islamic State intensified its shelling of the town," he said. "The fact that they're not conducting face-to-face, close-distance fight but instead shelling the town from afar is evidence that they have been pushed back a bit." Asya Abdullah, co-chair of the dominant Kurdish political party in Syria, PYD, said the latest air strikes had been "extremely helpful". "They are hitting Islamic State targets hard and because of those strikes we were able to push back a little. They are still shelling the city center." It was the largest number of air strikes on Kobani since the U.S.-led campaign in Syria began last month, the Pentagon said. The White House said the impact was constrained by the absence of forces on the ground but that evidence so far showed its strategy was succeeding. CEASEFIRE THREATENED The Turkish Kurds' anger and resulting unrest is a new source of turmoil in a region consumed by Iraqi and Syrian civil wars and an international campaign against Islamic State fighters. The PKK accused Ankara of violating the ceasefire with the air strikes, on the eve of a deadline set by its jailed leader to salvage the peace process. "For the first time in nearly two years, an air operation was carried out against our forces by the occupying Turkish Republic army," the PKK said. 
"These attacks against two guerrilla bases at Daglica violated the ceasefire," the PKK said, referring to an area near the border with Iraq. Obama, who ordered the bombing campaign that started in August against Islamic State fighters, told the meeting of military leaders from 22 countries to expect a "long-term effort" in the battle against Islamic State militants. "There will be days of progress and there are going to be some periods" of setbacks, he said. A U.S. military official told Reuters after the talks there was an acknowledgement that Islamic State was making some gains on the ground, despite the air strikes. But there was also a sense that the coalition, working together, would ultimately prevail, the official said. "In the short term, there are some gains that they have been able to make. In the long term, that momentum will be reversed," the official said, adding the coalition would adjust its tactics as Islamic State fighters increasingly blend into the population and become harder to target. Washington has faced the difficult task of building a coalition to intervene in Syria and Iraq, two countries with complex multi-sided civil wars in which most of the nations of the Middle East have enemies and clients on the ground. In particular, U.S. officials have expressed frustration at Turkey's refusal to help them fight against Islamic State. Washington has said Turkey has agreed to let it strike from Turkish air base. Ankara has said that is still under discussion. NATO-member Turkey has refused to join the coalition unless it also confronts Syrian President Bashar al-Assad, a demand that Washington, which flies its air missions over Syria without objection from Assad, has so far rejected. U.S. Secretary of State John Kerry said on Tuesday there was no discrepancy between Ankara and Washington over the strategy for fighting Islamic State in Kobani and that Ankara would define its role according to its own timetable. The fate of Kobani, where the United Nations says thousands could be massacred, could wreck efforts by the Turkish government to end the insurgency by PKK militants, a conflict that largely ended with the start of a peace process in 2012. The peace process with the Kurds is one of the main initiatives of President Tayyip Erdogan's decade in power, during which Turkey has enjoyed an economic boom underpinned by investor confidence in future stability. The unrest shows the difficulty Turkey has had in designing a Syria policy. Turkey has already taken in 1.2 million refugees from Syria's three-year civil war, including 200,000 Kurds who fled the area around Kobani in recent weeks. 'PROVOCATIONS COULD BRING MASSACRE' Jailed PKK co-founder Abdullah Ocalan has said peace talks between his group and the Turkish state could come to an end by Wednesday. After visiting him in jail last week, Ocalan's brother Mehmet quoted him as saying: "We will wait until October 15. ... After that there will be nothing we can do." A pro-Kurdish party leader read out a statement from Ocalan in parliament on Tuesday in which the PKK leader said Kurdish parties should work with the government to end street violence. "Otherwise we will open the way to provocations that could bring about a massacre," Ocalan said in the statement, which the party said he wrote last week. Turkish attacks on Kurdish positions were once a regular occurrence in southeast Turkey but had not taken place for two years. 
The PKK said the strikes took place on Monday, although some Turkish news reports said they happened on Sunday. Prime Minister Ahmet Davutoglu said the Turkish military had retaliated against a PKK attack in the border area, without referring specifically to air strikes. Hurriyet newspaper said the air strikes caused "major damage" to the PKK. "F-16 and F-4 warplanes which took off from (bases in the southeastern provinces of) Diyarbakir and Malatya rained down bombs on PKK targets after they attacked a military outpost in the Daglica region," Hurriyet said. 'TOO LATE FOR US' The battle for Kobani has ground on for nearly a month, although Kurdish fighters on Monday managed to replace an Islamic State flag in the west of the town with one of their own. The fighters, known as Popular Protection Units (YPG), want Turkey to allow them to bring arms across the border. In the Turkish town of Suruc, 10 km (6 miles) from the Syrian frontier, a funeral for four female YPG fighters was being held. Hundreds at the cemetery chanted: "Murderer Erdogan". At least six air strikes, gunfire and shelling could be heard from Mursitpinar on the Turkish side of the border on Tuesday, where Kurds, many with relatives fighting in Kobani, have maintained a vigil, watching the fighting from hillsides. In Iraq, Kurdish forces and government troops have rolled back some Islamic State gains in the north of the country in recent weeks, but the fighters have advanced in the west, seizing territory in the Euphrates valley within striking distance of the capital, Baghdad. Members of Iraq's Shi'ite minority have been targeted by recent bomb attacks in Baghdad, some claimed by Islamic State. On Tuesday, 25 people were killed by a car bomb, including a Shi'ite Muslim member of Iraq's parliament. (Additional reporting by Jeff Mason, Steve Holland, Roberta Rampton and Phil Stewart in Washington; Writing by Peter Graff, Oliver Holmes and Philippa Fletcher; Editing by David Stamp, Toni Reinhold and Peter Cooney) ||||| The Hague (AFP) - The Dutch public prosecutor said on Tuesday that motorbike gang members who have reportedly joined Kurds battling the Islamic State group in Iraq are not necessarily committing any crime. "Joining a foreign armed force was previously punishable, now it's no longer forbidden," public prosecutor spokesman Wim de Bruin told AFP. "You just can't join a fight against the Netherlands," he told AFP after reports emerged that Dutch bikers from the No Surrender gang were fighting IS insurgents alongside Kurds in northern Iraq. The head of No Surrender, Klaas Otto, told state broadcaster NOS that three members who travelled to near Mosul in northern Iraq were from Dutch cities Amsterdam, Rotterdam and Breda. A photograph on a Dutch-Kurdish Twitter account shows a tattooed Dutchman called Ron in military garb, holding a Kalashnikov assault rifle while sat with a Kurdish comrade. Video footage apparently from a Kurdish broadcaster shows an armed European man with Kurdish fighters saying in Dutch: "The Kurds have been under pressure for a long time." Many countries including the Netherlands have been clamping down on their nationals trying to join IS jihadists who have taken over swathes of Iraq and Syria. Measures include confiscating would-be jihadists' passports before travelling and threatening prosecution should they return.
"The big difference with IS is that it's listed as a terrorist group," said De Bruin. "That means that even preparing to join IS is punishable." Dutch citizens could not however join the Kurdistan Workers' Party (PKK), as it is blacklisted as a terrorist organisation by Ankara and much of the international community, De Bruin said. Dutch citizens fighting on the Kurdish side would of course be liable to prosecution if they committed crimes such as torture or rape, De Bruin said. "But this is also happening a long way away and so it'll be very difficult to prove," said De Bruin. ||||| Members of a notorious Dutch motorcycle gang who have been pictured helping Kurdish forces fight ISIS in Syria have been told they are not committing any crime. Three bikers from the 'No Surrender' Banditos gang travelled to Syria to help fight the Islamic militants last week, according to Klaas Otto, the head of the group. Now the Dutch prosecutor has told gang members that they will not be prosecuted for going to fight abroad, because such actions are only illegal if you are fighting troops from the Netherlands. Scroll down for video A biker from the No Surrender gang in the Netherlands, identified only as Ron (right) poses alongside a Kurdish soldier in Syria after going to fight against ISIS Public prosecutor spokesman Wim de Bruin said: 'Joining a foreign armed force was previously punishable, now it's no longer forbidden. You just can't join a fight against the Netherlands.' While several countries including Britain have taken steps to stop their citizens joining ISIS, joining the Kurds is generally permissible because they are not considered a terrorist organisation. However, anyone going to fight ISIS would be banned from joining the Kurdistan Workers' Party, who run several of the brigades fighting ISIS, because they are considered to be terrorists. Dutch citizens fighting on the Kurdish side would of course be liable to prosecution if they committed crimes such as torture or rape, De Bruin said. 'But this is also happening a long way away and so it'll be very difficult to prove,' he added. Video footage apparently from a Kurdish broadcaster shows an armed European man with Kurdish fighters saying in Dutch: 'The Kurds have been under pressure for a long time.' There are estimated to be around 70,000 Kurds living in the Netherlands, most of whom are political refugees who fled from Turkey and the middle east looking for work. Fierce fighting between Kurdish troops and Islamic State militants has continued in the Syrian border town of Kobane today as the Islamic fighters push to take control US-led airstrikes have slowed the ISIS fighters down, but troops have still managed to push into the city where there are reports of beheadings and executions The heaviest fighting between ISIS forces and Kurdish troops has been centered around the town of Kobane in recent days, as the militants push to take control of the strategic border town. While Islamic fighters have entered the city, US-led airstrikes have slowed their advance. Kurdish fighters captured the strategic hill of Tel Shair and pulled down the flag that had been fluttering for more than a week. It followed a sustained stepping up of the U.S.-led airstrikes, with locals counting more than 30 bombs dropped on jihadi positions in the past 24 hours. 
‘Over the past night there has been very intense airstrikes by the coalition that targeted several Daesh [an Arabic word for IS] positions in and near Kobane,’ said Idriss Nassan, deputy head of Kobani's foreign relations committee. The Dutch bikers are not the first westerners to join Kurdish forces. Former American soldier Jordan Matson became the first to fight alongside the Kurds after going to Syria earlier this month. Desert Storm Air Force veteran Brian Wilson also spoke out last week to explain why he had elected to join the Kurdish Peoples' Protection Units.
– The notorious Dutch biker gang No Surrender appears to be the newest member of the coalition fighting ISIS. At least three members of the gang are believed to be fighting with Kurds in northern Iraq, and authorities in the Netherlands say they haven't got a problem with that. "Joining a foreign armed force was previously punishable; now it's no longer forbidden," a public prosecutor spokesman tells AFP. "You just can't join a fight against the Netherlands." The spokesman says Dutch citizens fighting abroad could be prosecuted if they committed crimes like torture, "but this is also happening a long way away, so it'll be difficult to prove," reports the Daily Mail, which notes that there are around 70,000 Kurds living in the Netherlands. In the Syrian Kurdish town of Kobani, meanwhile, fighting continues, but at least 21 coalition airstrikes this week appear to have slowed down the ISIS advance, reports Reuters. President Obama plans to discuss the fight with British, French, German, and Italian leaders during a videoconference today, the White House says.
salmonellosis is one of the most important zoonotic diseases affecting both people and animals . for example , the centers for disease control and prevention ( cdc ) in the united states has estimated that salmonella caused 1.4 million episodes of infection between 1999 and 2003 , with over 7% of these infections caused by reptile - associated salmonellosis . reptiles have become increasingly common as domestic pets , and there has been an associated increase in the incidence of reptile - associated salmonella infection in humans . reptiles are asymptomatic carriers of salmonella , and they intermittently excrete these organisms in their feces . salmonella infections can be fatal in humans , especially in those who are very young or immunocompromised , including babies , children younger than 5 years of age , pregnant women , elderly people and people with aids . the us cdc has recommended that these individuals should avoid contact with reptiles and that they should not keep pet reptiles in their homes . since reptiles are not popular pets in korea , people most frequently come into contact with reptiles in zoos . in modern zoos , animals are kept in more natural environmental surroundings , with harmless animals , including nonpoisonous reptiles and docile mammals , often allowed to roam freely in natural - looking exhibits . in particular , there are no fences , so visitors can touch these animals and make contact with the animals ' feces and their living environment . furthermore , many events at zoos allow visitors to become more familiar with the animals . in addition to direct transmission from animals to humans , salmonella , which is relatively resistant in the environment , can be indirectly transmitted to humans through contact with contaminated exhibit furnishings . for example , 39 children who attended a komodo dragon exhibit at the denver zoo in colorado in 1996 became infected with salmonella , although none touched the animals . in the denver zoo case , only a fence separated the visitors from the komodo dragons and the dragons were allowed to wander freely behind the fence , suggesting that the 39 children became infected by contact with the salmonella - contaminated wooden barrier . in addition to reptiles , mammals in a zoo can be infected by salmonella spp . moreover , if one animal in an exhibit or cage is infected , then it can transmit the infection to all the other animals in the same exhibit or cage . furthermore , animals in an outdoor exhibit can be contaminated with salmonella by contact with wild animals ( e.g. birds , rats etc . ) . to determine the risk of salmonella infection from contact between humans and animals in korea , we assessed the rate of salmonella spp . carriage in zoo animals . from september to october 2006 , fecal samples were obtained by anal or cloacal swabs from 294 animals ( 46 reptiles , 15 birds and 233 mammals ) housed at seoul grand park , korea ( table 1 ) . the swabs were placed in sterile amies transport medium ( difco , usa ) and they were stored at 4°c for 24 - 48 h prior to processing . the samples were selectively enriched for salmonella by incubating the swabs in tetrathionate broth ( difco , usa ) at 37°c for 24 - 48 h. the selective enrichment cultures were streaked onto salmonella chromogenic agar ( oxoid , uk ) , which was incubated at 37°c for 24 h . violet - colored colonies suspected of being salmonella spp . were tested with api20e biochemical profiles ( biomerieux sa , france ) .
the stocks were made after the first isolation in this experiment and were then stored at -20°c for a year . 5 ml aliquots of cultured buffered peptone water were inoculated onto mueller - hinton ( oxoid , uk ) agar plates using a sterilized swab , followed by placing antibiotic discs containing ampicillin - sulbactam 20 μg , polymyxin b 300 μg , cephalothin 30 μg , tetracycline 30 μg , chloramphenicol 30 μg , gentamicin 10 μg , cefotaxime 30 μg , sulfamethoxazole - trimethoprim 25 μg or nitrofurantoin 300 μg onto the agar plates , respectively . the plates were incubated for 18 h at 35°c , and the zones of inhibition were interpreted according to the guidelines of the national committee for clinical laboratory standards ( nccls , 1990 ) . positive samples identified with api20e were serotyped using the kauffmann - white scheme by the national veterinary research quarantine service ( table 2 ) . the presence of salmonella spp . subspecies iii was confirmed by utilization of malonate broth ( difco , usa ) and the absence of dulcitol fermentation ( biolife , italy ) . salmonella spp . was isolated from 17 of the 294 ( 5.8% ) anal swab samples : from 14 of 46 reptiles ( 30.4% ) , 1 of 15 birds ( 6.7% ) and 2 of 233 mammals ( 0.9% ) ( table 3 ) . after about a year of storage at -20°c , these 17 salmonella - positive samples were re - inoculated . this yielded 15 positives , which were then tested for their antimicrobial susceptibility and serotype ( tables 2 and 4 ) . human infection by reptile - associated salmonellosis has been increasing throughout the world because more people have started keeping exotic pets , including turtles , snakes and iguanas .
in 1975 , legislation in the usa banned the sale of small turtles , which led to an 18% reduction of salmonellosis in children 1 - 9 years old . yet it is not common for zoo visitors to become infected with salmonella , although 39 children visiting the denver zoo in 1996 became infected . between 1966 and 2000 , there were 11 published zoonotic disease outbreaks associated with animal exhibits , as well as 16 unpublished outbreaks . therefore , although zoonotic disease outbreaks from zoos or animal exhibitions are infrequent , zoo visitors and zookeepers are at risk of infection from animal carriers . fecal samples were collected from 294 animals ( 46 reptiles , 15 birds and 233 mammals ) , and salmonella spp . strains were found in 14 ( 30.4% ) , 1 ( 6.7% ) and 2 ( 0.9% ) of these animals , respectively . of the 15 salmonella isolates we examined , 8 belonged to subspecies i and 4 belonged to subspecies iii , while the other 3 could not be typed . subspecies i is responsible for more than 99% of salmonella infections in humans . generally , salmonella subspecies i is found in warm - blooded animals , whereas subspecies ii , iiia , iiib and iv are isolated from cold - blooded vertebrates and their environments . however , the most common subspecies isolated from reptiles was recently reported to be subspecies i . the most frequent serovar was s. enterica newport , a pathogen of growing importance because of its epidemic spread in dairy cattle and its increasing rate of antimicrobial resistance . between 1987 and 1997 , this serotype was the fourth most common strain seen in human salmonellosis cases in the us . the s . newport isolated in this study originated from mangrove snakes , again suggesting that salmonella spp . are prevalent in reptiles . reptiles could then excrete these organisms into the environment and so infect zookeepers and other humans . evaluation of the environmental spread of salmonella strains in the reptile department of the antwerp zoo found contamination of the floor , window benches , cage furniture , the kitchen used for preparing animal food , water containers and fences , suggesting that people can be infected with salmonella spp . by indirect transmission through contaminated environments . iguanas have become more popular as pets and so they play an important role in reptile - associated salmonellosis . therefore , zoos should take care prior to offering ' opportunities to touch reptiles ' to their visitors . although most reptiles at seoul grand park are kept in their own cages , the turtles and korean terrapins are kept in a more natural environment that basically resembles a small stream . these animals can therefore roam freely around a fish tank surrounded by rocks and wooden fences , and visitors can touch these surroundings . in addition , burmese pythons are very docile and they are frequently used in reptile contact programs . of the 3 burmese pythons we tested , 1 was an asymptomatic salmonella carrier . since many zoos have programs in which humans can feed and touch animals , this can lead to infection of children and immunocompromised individuals . fortunately , most of the isolated salmonella spp . in our study were susceptible to most antibiotics . our findings also highlight the requirement for better personal hygiene practices to minimize the risk of infection for zoo visitors and zoo personnel , as well as the need for educating zoo personnel and visitors about proper hygiene practices .
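a small sketch reproducing the isolation - rate arithmetic reported above and adding wilson 95% confidence intervals ; the intervals are an illustrative addition and are not reported in the study .

import math

def isolation_rate(positive, sampled, z=1.96):
    """Return the observed proportion and a Wilson 95% confidence interval."""
    p = positive / sampled
    denom = 1 + z ** 2 / sampled
    centre = (p + z ** 2 / (2 * sampled)) / denom
    half = z * math.sqrt(p * (1 - p) / sampled + z ** 2 / (4 * sampled ** 2)) / denom
    return p, centre - half, centre + half

# Counts taken from the results above: 14/46 reptiles, 1/15 birds, 2/233 mammals.
for group, pos, n in [("reptiles", 14, 46), ("birds", 1, 15), ("mammals", 2, 233)]:
    p, lo, hi = isolation_rate(pos, n)
    print(f"{group}: {100 * p:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")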
salmonellosis is an important zoonotic disease that affects both people and animals . the incidence of reptile - associated salmonellosis has increased in western countries due to the increasing popularity of reptiles as pets . in korea , where reptiles are not popular as pets , many zoos offer programs in which people have contact with animals , including reptiles . so , we determined the rate of salmonella spp . infection in animals by taking anal swabs from 294 animals at seoul grand park . salmonella spp . were isolated from 14 of 46 reptiles ( 30.4% ) , 1 of 15 birds ( 6.7% ) and 2 of 233 mammals ( 0.9% ) . these findings indicate that vigilance is required in monitoring zoo animals for zoonotic pathogens and animal facilities for contamination , in order to prevent human infection with zoonotic diseases from zoo facilities and animal exhibitions . in addition , prevention of human infection requires proper education about personal hygiene .
President Trump keeps up attack on Disney's Bob Iger: 'Where is my call of apology?' President Trump again called on Bob Iger to apologize for ABC's coverage of him, saying that he and ABC had "offended millions of people." "Iger, where is my call of apology?" Trump tweeted Thursday morning. "You and ABC have offended millions of people, and they demand a response. How is Brian Ross doing? He tanked the market with an ABC lie, yet no apology. Double Standard!" The president was referring to ABC News reporter Brian Ross, who erroneously reported late last year that Trump had asked Michael Flynn to make contact with the Russians during the 2016 presidential campaign. Ross corrected the report afterward, saying that Trump made the request when he was president-elect, not a candidate. ABC suspended Ross and also apologized for the report: "We deeply regret and apologize for the serious error we made yesterday. The reporting conveyed by Brian Ross during the special report had not been fully vetted through our editorial standards process." The president's latest missive came a day after he first criticized Iger — the CEO of Disney, ABC's parent company — for apologizing to Valerie Jarrett over a racist tweet from Roseanne Barr, but not apologizing to him. Iger and Trump have previously been at odds. Iger left Trump's business advisory council after Trump pulled out of the Paris climate agreement. He also criticized Trump for ending the Obama-era Deferred Action on Childhood Arrivals program (DACA). The White House has said that the president was merely calling out media bias, not defending Barr. "No one is defending what she said," Press Secretary Sarah Sanders said Wednesday. "The president is the president of all the country." ||||| Not that it matters but I never fired James Comey because of Russia! The Corrupt Mainstream Media loves to keep pushing that narrative, but they know it is not true!
– President Trump is still waiting for that apology from Bob Iger. Trump first showed some resentment Wednesday after the target of Roseanne Barr's racist tweet, Valerie Jarrett, got an apology from the Disney CEO while Iger has never called Trump "to apologize for the HORRIBLE statements made and said about me on ABC." The president returned to the subject on Twitter Thursday. "Iger, where is my call of apology? ... How is Brian Ross doing? He tanked the market with an ABC lie, yet no apology. Double Standard!" he wrote. USA Today reports it's a reference to December's incorrect report from Disney's ABC News that Trump asked Michael Flynn to make contact with Russians while he was running for president. Reporter Brian Ross later issued a correction stating Trump wasn't a candidate, but rather president-elect, when he gave Flynn those instructions. "We deeply regret and apologize for the serious error we made," ABC said at the time. Addressing Iger, Trump added in his tweet, "You and ABC have offended millions of people, and they demand a response." Minutes later, Trump repeated claims there were spies in his campaign and stated, "I never fired James Comey because of Russia!"
400 Years After His Death, Shakespeare's First Folio Goes Out On Tour One of the world's most precious volumes starts a tour on Monday, in Norman, Okla. The Folger Shakespeare Library in Washington, D.C., is sending out William Shakespeare's First Folio to all 50 states to mark the 400th anniversary of the bard's death. Published seven years after he died, the First Folio is the first printed collection of all of Shakespeare's plays. The Folger has 82 First Folios — the largest collection in the world. It's located several stairways down, in a rare manuscript vault. To reach them, you first have to get through a fire door ... (if a fire did threaten these priceless objects, it would be extinguished not with water — never water near priceless paper — but with a system that removes oxygen from the room). A massive safe door comes next — so heavy it takes two burly guards to open it, and then yet another door, which triggers a bell to alert librarians that someone has entered. After that, there's yet another door and an elevator waaaay down to a vault that nearly spans the length of a city block, says Folger director Michael Witmore. This is where the library stores tens of thousands of pieces of paper — folios, plus half of everything printed in England from 1473 to 1660 and much more. And there, propped open on spongy wedges to protect the binding, is the First Folio. "If you had to pick one book to represent Shakespeare, this is it," Witmore says. Two of Shakespeare's pals put it together in 1623, after he died. John Heminges and Henry Condell were fellow actors who felt the plays should be collected in a single large volume. They also added in 18 of Shakespeare's plays that had never appeared in print, explains Witmore. "Without this book we probably wouldn't have ... Twelfth Night, Julius Caesar, Macbeth, The Winter's Tale ..." It adds up to a total of 36 plays in the Folio. The others had been printed, as individual works in smaller format (quarto — single pages folded in four, and bound). Some of those were published in Shakespeare's lifetime. Shakespeare didn't spend much of his time coordinating or supervising the printing of his work. He really just wanted to write. He wrote on vellum, with a goose quill pen he cut himself, and ink he may have made. There was a ready audience for Shakespeare's work, which was sold in bookseller shops and stalls in London. For 20 shillings ($200 in today's money) you could buy loose sheets of paper and take them to your binder, who put them between hard covers. Of the approximately 750 copies of the First Folio that were printed, 233 survive. The book was so popular that more editions were printed and eventually there were four folios. Each time it was printed, someone made changes in the text — which has kept centuries of Shakespeare scholars busy! To scholars and actors, this 4 pound, 13 ounce volume is the paper equivalent of The Holy Grail. "I can't think of another playwright with an output of this magnitude," says Sarah Pretz, a graduate of the D.C. Shakespeare Theatre's Academy of Classical Acting. Shakespeare's works, she believes, changed "the way people think about the world and the way people think about each other." That is exactly why those two Shakespeare friends, the folio-makers, wanted the plays preserved.
Thomas Keegan, another graduate of the Shakespeare Theatre Academy, says if the folios hadn't been made all those centuries ago, he probably wouldn't have the opportunity to read and perform these works today. Eighteen of the Folger Shakespeare Library's First Folios will be touring the country — six at a time for a year — to mark the 400th anniversary of the master's death. ||||| Folger Shakespeare Library. Archive-It Partner Since: Sep, 2011. Organization Type: Other Institutions. Organization URL: http://www.folger.edu The mission of the Folger Shakespeare Library is to preserve and enhance its collections; to render the collections, in appropriate formats, accessible to scholars; and to advance understanding and appreciation of Shakespeare's writings and of the culture of early modern Europe more generally through various programs designed for all students and for the general public. ||||| A book called “Mr. William Shakespeares Comedies, Histories, & Tragedies” — rules for use of apostrophes apparently have changed since 1623 — published just seven years after the world-famous playwright’s death, will be on display in Eugene for a month, starting Wednesday. The U.S. tour of what’s now referred to as the “First Folio,” of which only 233 are believed still to be in existence, is stopping at one location in each state. After the UO won a competition, Oregon’s venue is the Jordan Schnitzer Museum of Art on the University of Oregon campus. The exhibit has a long name, “First Folio! The Book That Gave Us Shakespeare, on Tour From the Folger Shakespeare Library.” And it’s just one of the special events that will be happening during the next few weeks in honor of The Bard’s local presence. The Schnitzer museum’s exhibit also will present other Shakespeare-related items, including the second and fourth folios of his collected works, illustrations of “The Tempest” by 19th century British artist Walter Crane and the first folio of plays by English playwright Ben Jonson, who was born eight years after Shakespeare and died 21 years after. According to a PBS program about Jonson, he and Shakespeare were friends as well as competitors, with Shakespeare acting in at least one of Jonson’s plays, “Every Man in His Humour.” Jonson also is credited with coining Shakespeare’s most lasting epitaph, “He was not of an age, but for all time.” In addition, the UO Knight Library’s Special Collections and University Archives division will present “Time’s Pencil: Shakespeare After the Folio,” which explores changes throughout the centuries in how Shakespeare’s works were contemplated, published and performed, as well as taking a look at history’s view of Shakespeare as man, actor and playwright after the publication of the First Folio. The coordination of all these pieces of Shakespeare lore and accomplishment that brings the First Folio to the University of Oregon is the work of Lara Bovilsky, associate professor of English at the UO. The exhibits “offer Oregonians so many exciting experiences,” Bovilsky said, not only providing a glimpse of the original, centuries-old volume, but also “additional ways to enjoy and understand Shakespeare’s changing, ongoing impact.” The public display of all of this material is timed to coincide with the 400th anniversary of Shakespeare’s death, which is believed to have occurred on April 23, 1616, at age 52. It’s possible that Shakespeare died on his 52nd birthday.
That’s because church records at Holy Trinity Church in Stratford-upon-Avon show an entry for Shakespeare’s baptism three days later, often the interval between the two events. Relatively little is known of Shakespeare’s life, and were it not for the First Folio, much less might also be known of his work. The thick book, which contains hundreds of pages, is considered the only source for 18 of his 38 plays, including such favorites as “The Tempest,” “Macbeth,” “Twelfth Night” and “As You Like It.” During its visit to the Schnitzer, the folio will be open to display one of Shakespeare’s most famous monologues, the “to be or not to be” speech from “Hamlet.” Admission to the museum will be free during the exhibition, an attempt to encourage as many people as possible to see it. Building a biography The Folger Shakespeare Library, located near Capitol Hill in Washington, D.C., has the world’s largest collection of Shakespeare-related materials. Materials range in age from Shakespeare’s early 16th century years to the present. Although relatively little is known about Shakespeare’s life, the Folger library has pieced together a narrative, starting with his birth on or about April 23, 1564. Shakespeare was the third child born to John and Mary Shakespeare in Stratford, but the two first children, girls named Joan and Judith, died in infancy. Three boys — Gilbert, Richard and Edmund — followed him, as did two more girls, Anne, who died at age 7, and another girl, also named Joan. John Shakespeare was a well-to-do fine leather worker, and his wife, born Mary Arden, also came from a prominent local family. When William was a young child, his father became the town bailiff, similar to mayors of today. Shakespeare’s schooling, emphasizing classical subjects, probably ended when he was about 15 years old. Several years later, in 1582, he married Anne Hathaway, when he was 18 and she 26. As was common for the period, she was pregnant at the time of the marriage with their first daughter, Susanna, and in 1585, twins Judith and Hamnet were born. Anne and the children continued to live in Stratford, while William moved to London to work in the theatrical milieu. Their son, Hamnet, died at age 11. Both daughters eventually married — Susanna is known to have had a daughter — but the family eventually died out, leaving no direct descendants. One of Shakespeare’s last playwriting projects was “The Two Noble Kinsmen,” written with frequent collaborator John Fletcher about 1613. The cause of Shakespeare’s death three years later is not known. What is known is that his brother-in-law died a week before him, raising the possibility that they may have succumbed to the same infectious disease.
– William Shakespeare is on a US tour: Or at least his so-called First Folio—the first printed collection of all of the Bard's 36 plays—is, NPR reports. Of some 750 copies printed in 1623, seven years after his death, 233 survive, per NPR. Now, to mark the 400th anniversary of Shakespeare's 1616 death, the Folger Shakespeare Library in DC is sending out 18 of its 82 First Folios to be put on temporary display throughout the year at museums, universities, libraries, and other venues in all 50 states. “We’re excited to see the many different ways that communities across the country will be celebrating Shakespeare," Folger Library Director Michael Witmore says in a press release. The tour—"First Folio! The Book that Gave Us Shakespeare"— kicks off this week with copies going to Notre Dame University, Norman, Okla., and Eugene, Ore. First Folios will be on display at each tour stop for three to four weeks, per the Folger Library, and only a max of six will travel (in custom-built traveling cases) at any one time. The Folger Library has the largest collection of Shakespeare materials, according to the Eugene Register-Guard, and NPR notes that First Folios are kept on a lower level of the library. To reach them, one must go through a fire door, a heavy safe door, an alarmed door, yet another door, and then take an elevator down to a vault. Clearly, it's an important work, the "paper equivalent of the Holy Grail," NPR writes. "If you had to pick one book to represent Shakespeare," Witmore tells NPR, "this is it."
Security guards at the Federal Communications Commission headquarters here manhandled a well-regarded reporter at a public hearing today and forced him to leave the premises after he had tried to politely ask questions of FCC commissioners. The reporter, John M. Donnelly of CQ Roll Call, is an award-winning journalist. He is also chairman of the National Press Club’s Press Freedom Team and president of the Military Reporters & Editors association. He has chaired the NPC Board of Governors and formerly served on the Standing Committee of Correspondents in the U.S. Congress, which credentials the Washington press corps. Donnelly said he ran afoul of plainclothes security personnel at the FCC when he tried to ask commissioners questions when they were not in front of the podium at a scheduled press conference. Throughout the FCC meeting, the security guards had shadowed Donnelly as if he were a security threat, he said, even though he continuously displayed his congressional press pass and held a tape recorder and notepad. They even waited for him outside the men’s room at one point. When Donnelly strolled in an unthreatening way toward FCC Commissioner Michael O’Rielly to pose a question, two guards pinned Donnelly against the wall with the backs of their bodies until O’Rielly had passed. O’Rielly witnessed this and continued walking. One of the guards, Frederick Bucher, asked Donnelly why he had not posed his question during the press conference. Then Bucher proceeded to force Donnelly to leave the building entirely under implied threat of force. Bucher has been implicated in at least one other incident involving harassment of a journalist. Bloomberg News reporter Todd Shields told Donnelly today that Bucher took his (Shields’) press badge last July when Shields was talking to a protester at an FCC meeting. The agency later apologized and said it restored Shields' credentials. “I could not have been less threatening or more polite,” Donnelly said of today’s encounter. “There is no justification for using force in such a situation.” “Donnelly was doing his job and doing it with his characteristic civility,” said NPC President Jeff Ballou. “Reporters can ask questions in any area of a public building that is not marked off as restricted to them. Officials who are fielding the questions don’t have to answer. But it is completely unacceptable to physically restrain a reporter who has done nothing wrong or force him or her to leave a public building as if a crime had been committed.” Barbara Cochran, president of the National Press Club Journalism Institute, concurred. “The FCC and other government buildings are paid for by U.S. tax dollars, and officials who work there are accountable to the public through its representatives in the media,” Cochran said. “The FCC should apologize for this incident and ensure that their guards are trained to respect the right of journalists to cover FCC public events. In other words: hands off reporters!” The National Press Club is the world’s leading professional organization for journalists. The National Press Club Journalism Institute is the Club’s nonprofit affiliate and executes professional development and press freedom programs. Contact: Julie Schoo [email protected] (202) 662-7507 ||||| People enter the Federal Communications Commission building December 11, 2014 in Washington, D.C. The commission held its monthly meeting as activists held a rally outside to call for net neutrality. 
Security guards “manhandled” an award-winning journalist after he asked Federal Communications Commission officials questions at a public hearing on Thursday, according to a statement from the National Press Club. John Donnelly, a reporter for CQ Roll Call, was thrown out of the scheduled press conference after he tried to ask commissioners questions when they were not behind the podium, the statement said. .@FCC guards manhandled me, forced me out of building when I tried to ask @AjitPaiFCC & @mikeofcc questions. https://t.co/qQHQ4O82lc — John M. Donnelly (@johnmdonnelly) May 18, 2017 Donnelly, who is the chair of the National Press Club's Press Freedom Team, said two security guards pinned him against the wall with their backs while FCC Commissioner Michael O’Rielly walked past. They then forced him to leave the building. “I could not have been less threatening or more polite,” Donnelly said in the National Press Club release. “There is no justification for using force in such a situation.” Thursday’s open meeting was closely watched as the FCC voted on the hot-button issue of net neutrality. NPC President Jeff Ballou condemned the security guards' actions. “Donnelly was doing his job and doing it with his characteristic civility,” Ballou said in his organization’s statement. “Reporters can ask questions in any area of a public building that is not marked off as restricted to them. Officials who are fielding the questions don't have to answer. But it is completely unacceptable to physically restrain a reporter who has done nothing wrong or force him or her to leave a public building as if a crime had been committed.” An FCC spokesman told NPR journalist David Folkenflik that they had apologized to Donnelly and said the agency “was on heightened alert today based on several threats.”
– A veteran reporter says he was "manhandled" by two security guards at an FCC public hearing Thursday in DC after he attempted to ask a follow-up question. CQ Roll Call senior writer John Donnelly says that plainclothes security detail pinned him against the wall with their backs after he approached FCC Commissioner Mike O'Rielly to ask him a question, reports Time. The National Press Club issued a statement on the incident, saying Donnelly was shadowed by security throughout the event, even posting outside a bathroom he entered. After cornering Donnelly, the guards inquired why he didn't ask his question during the press conference while O’Rielly was at the podium. Donnelly was then made to leave the building "under implied threat of force." "I could not have been less threatening or more polite," Donnelly says. "There is no justification for using force in such a situation." The NPC statement notes that while officials don't have to answer, "reporters can ask questions in any area of a public building that is not marked off as restricted to them." The Washington Post adds it's "standard practice" to approach officials after a news conference. The FCC issued an apology to Donnelly, stating the agency was under high alert due to unspecified threats at the time; the meeting covered contentious topics like net neutrality regulations. O’Rielly responded apologetically to tweets from Donnelly about the incident, saying he didn’t recognize him in the hallway or see the guards touch him, and that he was "freezing and starving" at the time and was happy to answer Donnelly’s questions.
Ariel Walden, KFYO.com The U.S. House last week voted to preserve spending for vehicle leases for members of Congress, a practice that a West Texas Congressman is taking advantage of. In a story written by Jamie Dupree, with WSB Radio in Atlanta, Congressman Randy Neugebauer of Lubbock tops the list of House members' spending on vehicle leases. Neugebauer spends $1,318.97 per month on two vehicles. One lease costs $333.33 and the second $985.64. Congressman Neugebauer explained the spending on the Monday edition of Lubbock’s First News on KFYO, “One of the things members (of Congress) get is an allowance and they hire all of their people, lease all of their office space through (the fund). One of the things I do is lease vehicles so that my employees can travel throughout the (19th Congressional) district. We’ve looked at both ways, both reimbursing them on a mileage basis or leasing the cars. Now for a member like me, that has 29,000 miles, 29 counties, we drive our vehicles a lot of miles.” Congressman Neugebauer continued, “It is more cost effective, when we ran the numbers, for us to lease the vehicles than to reimburse for mileage.” In explaining the cost of the leases Neugebauer said, “The House administration makes us lease vehicles the length of our term, two years. We get penalized for the shorter term of the leases. The other thing too is that because of the fuel standards the House administration puts on us, we have to lease hybrid vehicles. The hybrid vehicles are a more expensive vehicle to lease.” The vehicle which costs $985.64 per month to lease is a hybrid Chevy Tahoe according to Congressman Neugebauer. 63 members of the U.S. House use federal money for their vehicle leases, averaging $610.23 per month, per member. Florida Congressman Richard Nugent's amendment to ban the practice failed 221-196. Here is a partial list of the U.S. House Member Vehicle Lease Spending (per month): ||||| As the U.S. House last week approved a funding bill for the operations of the Legislative Branch 2015, lawmakers voted to freeze the House budget and to again block any pay raise for members, but they refused to take another step on the budget, rejecting a plan to stop members from using some of their office account money to lease automobiles. "Today, members of Congress can lease Lexuses, BMW's, Infinitis, Acuras, Mercedes," groused Rep. Richard Nugent (R-FL), who pressed the case for banning car leases. "Having a personal car - basically paid for by the taxpayers - should no longer be allowed," Nugent argued in vain on the floor, as the House rejected his amendment on a vote of 221-196. In expense records filed by members of the U.S. House, the number of lawmakers spending their office account money to lease a vehicle has declined slightly in recent years - there are 63 members right now using taxpayer money for a vehicle lease. The numbers from the House Statement of Disbursements show this:
+ 63 House members are spending $38,444.20 per month on auto leases, which totals out to $461,330.40 per year - that is an average of $610.23 per month per lawmaker lease.
+ 38 Democrats are using official office account money to lease a vehicle, spending an average of $600.18 per month.
+ 25 Republicans are using official office account money to lease a vehicle, spending an average of $625.39 per month.
+ The most money being spent for auto leases each month is by Rep. Randy Neugebauer (R-TX) at $1,318.97 for two leases; one is for $333.33, and the other is for $985.64.
Neugebauer defended the leases in an interview with KFYO Radio in Texas, saying that his staff uses the vehicles to get around his district. "It is more cost effective, when we ran the numbers, for us to lease the vehicles than to reimburse for mileage," Neugebauer said, adding that his $985.64 lease is for a hybrid Chevy Tahoe. Not all of the leases are for cars for personal use - like the $800 per month for a "Mobile Constituent Service Center" spent by Rep. Kerry Bentivolio (R-MI), which his spokesman says is a trailer used for district work, not an automobile for personal use.
+ The ten most expensive auto leases are split evenly by party, with five Democrats and five Republicans in the Top Ten.
Here are the 63 members who are leasing a vehicle with money from their office accounts; the amount is the per month cost of the vehicle lease(s):
Rep. Randy Neugebauer R-TX $1,318.97
Rep. Emanuel Cleaver D-MO $999.55
Rep. G.K. Butterfield D-NC $999.50
Rep. Gregory Meeks D-NY $989.90
Rep. Dana Rohrabacher R-CA $943.87
Rep. Bobby Rush D-IL $927.97
Rep. Eddie Bernice Johnson D-TX $914.95
Rep. Joe Barton R-TX $884.67
Rep. John Culberson R-TX $880.00
Rep. Kerry Bentivolio R-MI $800.00
Rep. Michael Michaud D-ME $795.00
Rep. Alcee Hastings D-FL $753.28
Rep. Don Young R-AK $748.73
Rep. Gene Green D-TX $725.82
Rep. William Clay D-MO $723.26
Rep. Juan Vargas D-CA $712.09
Rep. Terri Sewell D-AL $700.05
Rep. John Conyers D-MI $699.72
Rep. Duncan Hunter D-CA $699.58
Rep. Mike Simpson R-IL $698.13
Rep. Buck McKeon R-CA $695.00
Rep. Gary Miller R-CA $695.00
Rep. Hank Johnson D-GA $669.16
Rep. Collin Peterson D-MN $663.66
Rep. John Carney D-DE $657.99
Rep. Henry Cuellar D-TX $643.65
Rep. Adrian Smith R-NE $629.52
Rep. Mario Diaz-Balart R-FL $615.10
Rep. Joe Garcia D-FL $606.46
Rep. William Keating D-MA $579.05
Rep. Bill Shuster R-PA $568.81
Rep. Nick Rahall D-WV $567.07
Rep. Karen Bass D-CA $562.11
Rep. Kenny Marchant R-TX $549.84
Rep. David Scott D-GA $523.32
Rep. George Miller D-CA $516.52
Rep. Danny Davis D-IL $516.39
Rep. Mike Pompeo R-CA $514.16
Rep. Ed Royce R-CA $509.98
Rep. Tony Cardenas D-CA $507.66
Rep. Barbara Lee D-CA $507.07
Rep. Steve Womack R-AR $501.87
Rep. Louie Gohmert R-TX $492.57
Rep. Ileana Ros-Lehtinen R-FL $479.86
Rep. Phil Gingrey R-GA $479.26
Rep. Mike Turner R-OH $477.50
Rep. William Owens D-NY $465.21
Rep. Morgan Griffith R-VA $464.42
Rep. Bill Flores D-TX $455.55
Rep. Kevin McCarthy R-CA $440.54
Rep. Al Green D-TX $436.89
Rep. Bill Johnson R-OH $431.57
Rep. Kevin Brady R-TX $431.00
Rep. Sheila Jackson Lee D-TX $429.05
Rep. Peter Welch D-VT $425.68
Rep. John Lewis D-GA $417.93
Rep. Linda Sanchez D-CA $413.65
Rep. Anna Eshoo D-CA $392.30
Rep. Gus Bilirakis R-FL $387.00
Rep. Jim McDermott D-WA $357.92
Rep. Ted Deutch D-FL $301.17
Rep. Adam Schiff D-CA $278.90
Rep. Xavier Becerra D-CA $271.80
Only House members can use their official funds to lease vehicles; the Senate got rid of that option in recent years. Under House rules, it is okay to lease a vehicle with money raised in campaign contributions, instead of official office funds. "It is permissible for a Member to lease or purchase a motor vehicle with campaign funds and to use that vehicle on an unlimited basis for travel for both campaign and official House purposes," the Ethics Committee guidance states. "Campaign funds may also be used to pay the expenses incurred in operating the vehicle, such as insurance, maintenance and repair, registration fees, and any property tax," it adds.
The current roster of lawmakers leasing vehicles includes people from rural districts and big cities; there are members of leadership teams in both parties, veteran lawmakers, fairly new members, liberals and conservatives.
– When the House voted against raising its own pay last week, it also quietly voted to keep a cherished perk: taxpayer-funded car leases. Rep. Rich Nugent had offered an amendment to disallow the practice, but it lost, 221-196, the Washington Post reports. "Today, members of Congress can lease Lexuses, BMWs, Infinitis, Mercedes," Nugent argued. "Does that send a message to our folks back home that this is the right way to do it?" How much is all this costing? Jamie Dupree at the Atlanta Journal-Constitution dove into the numbers and found that 63 reps (38 Democrats and 25 Republicans) use the allowance, spending $461,330 per year, and an average of $610.23 per month each. The biggest spender is Texas Rep. Randy Neugebauer, who dishes out $1,319 a month for two vehicles. But Neugebauer defended the expense to KFYO, citing the size of his district. "We drive our vehicles a lot of miles," he said.
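The aggregate lease figures quoted from the House Statement of Disbursements are easy to cross-check; the short Python sketch below redoes that arithmetic using only the numbers reported above (the variable names are mine).

    # Cross-check of the reported lease aggregates: 63 leasing members,
    # $38,444.20 combined per month, and a $610.23 per-member monthly average.
    monthly_total = 38_444.20
    members = 63

    print(f"annual total:       ${monthly_total * 12:,.2f}")       # $461,330.40, as reported
    print(f"average per member: ${monthly_total / members:,.2f}")  # $610.23, as reported

    # The party-level averages (38 Democrats at $600.18, 25 Republicans at $625.39)
    # recombine to roughly the same overall mean, up to rounding in the source.
    recombined = (38 * 600.18 + 25 * 625.39) / members
    print(f"recombined average: ${recombined:,.2f}")                # about $610.18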
fibro - osseous lesions are a group of lesions characterized histopathologically by the presence of fibrous stroma with a varying amount of mineralized material resembling bone or cementum . fibro - osseous lesions of the maxilla are not uncommon tumors . the majority of lesions with fibrous and osseous components include ossifying fibroma , fibrous dysplasia , cemento - ossifying fibroma , and cementifying fibroma , while less common lesions include focal sclerosing osteomyelitis , florid osseous dysplasia , periapical dysplasia , proliferative periostitis of garre , and osteitis deformans . fibro - osseous lesions other than fibrous dysplasia arise from a layer of fibrous connective tissue surrounding the roots of teeth . this layer contains multipotential cells that are capable of forming cementum , lamellar bone , and fibrous tissue . the differential diagnosis of fibro - osseous lesions includes osteoblastoma , osteoid osteoma , chronic sclerosing osteomyelitis , ameloblastoma of the maxillary sinus , pindborg tumor , calcifying odontogenic cyst ( gorlin cyst ) , odontogenic myxoma , osteogenesis imperfecta , and paget 's disease . a 15-year - old woman presented to us with complaints of a gradually progressive swelling of the left side of the face , with upward and outward bulging of the left eye , for 6 years . she also had nasal obstruction and watering from the left eye for 1 year [ figure 1 ] . the physical examination showed left - sided maxillary enlargement with marked upward and outward displacement of the left eyeball . oral cavity examination revealed an obliterated gingivobuccal groove and displaced , misaligned teeth , with normal oral mucosa and mouth opening . on palpation , the swelling was hard in consistency , with no fluctuation ; it had an irregular surface and free overlying skin , and there were no signs of inflammation over the face . a computed tomography ( ct ) scan showed a mixed density mass with diffuse scattered calcification occupying and expanding the left maxillary antrum with marked displacement of the left eyeball [ figures 2 and 3 ] . a tissue sample was obtained for histopathological study , which showed lamellar bone with osteoblastic rimming and a subepithelial zone showing a fibrous element [ figure 4 ] . a diagnosis of ossifying fibroma was made , and the patient underwent complete surgical resection via a weber - ferguson approach [ figure 5 ] . figure 1 : patient with a left maxillary enlargement with gross disfigurement and marked proptosis . figure 2 : computed tomography scan , axial section , showing a well - defined lesion with radiolucent and radio - opaque foci . figure 3 : computed tomography scan , coronal section , showing a well - defined lesion with radiolucent and radio - opaque foci . figure 4 : microphotograph showing a fibrous element along with areas of dense ossification as well as psammomatous calcification ( h and e , 10 ) . figure 5 : the surgical specimen ( gross appearance ) . ossifying fibromas and fibrous dysplasias are the two major groups of benign fibro - osseous lesions that involve the maxilla , leading to significant cosmetic and functional disturbances . because of their peculiar patterns of disease progression , it is important to distinguish between the two . ossifying fibroma is well - circumscribed and slow growing , with sharply defined margins and a radiolucent peripheral component . the etiology of ossifying fibroma remains unknown , and it is considered a tumor arising from the periodontal membrane . the lesions are most commonly seen in the third and fourth decades of life , with a female preponderance .
the lesion is generally asymptomatic until the growth produces pain , paresthesias , and facial asymmetry . ossifying fibroma most commonly involves the mandible ; extension of the tumor mass into the ramus of the mandible and involvement of the inferior border may lead to paresthesia of the inferior alveolar nerve . involvement of the maxilla causes cortical expansion with obliteration of the gingivobuccal sulcus , and extension into the nasal cavity and orbital floor leads to epistaxis and epiphora . on ct , ossifying fibroma appears as a solitary radiolucent cyst - like mass with minimal or absent internal calcified components in the early stage , while it is radiodense in the late stage . histopathologically , ossifying fibroma is composed of lamellar bone with prominent osteoblastic rimming in a dense fibrous stroma . the differential diagnosis of lesions having radiopacities within a well - defined radiolucent mass includes chondrosarcoma , osteosarcoma , fibrous dysplasia , squamous cell carcinomas , odontogenic cysts , calcifying odontogenic cysts , and calcifying epithelial odontogenic tumors ( pindborg tumors ) . the well - defined border of the ossifying fibroma helps differentiate it from the aggressive sarcomas and carcinomas . fibrous dysplasia has a characteristic ground glass appearance not seen in the ossifying fibroma . differentiation of ossifying fibroma and fibrous dysplasia may be difficult due to marked histological and radiological overlapping . there is radiological overlapping among ossifying fibroma , gorlin cysts , and pindborg tumors , necessitating the final diagnosis on the basis of histologic appearance . ossifying fibroma should be completely enucleated from the surrounding bone because of high chances of recurrence . the authors certify that they have obtained all appropriate patient consent forms . in the form the patient(s ) has / have given his / her / their consent for his / her / their images and other clinical information to be reported in the journal . the patients understand that their names and initials will not be published and due efforts will be made to conceal their identity , but anonymity can not be guaranteed .
maxillofacial fibro - osseous lesions comprise a group of face and jaw disorders characterized by the replacement of bone by a benign connective - tissue matrix with a varying amount of mineralized substances . fibro - osseous lesions of the maxilla are not uncommon tumors . the majority of lesions with fibrous and osseous components include ossifying fibroma , fibrous dysplasia , cemento - ossifying fibroma , and cementifying fibroma . we present the case of a 15-year - old female with a huge fibro - osseous lesion that was treated with total maxillectomy via a weber - ferguson approach . histopathology established the fibro - osseous lesion as an ossifying fibroma .
the identification of effective polypeptide ligands for magnetic iron oxide nanoparticles ( ionps ) could considerably accelerate the high - throughput analysis of ionp - based reagents for imaging and cell labeling . we developed a procedure for screening ionp ligands and applied it to compare candidate peptides that incorporated carboxylic acid side chains , catechols , and sequences derived from phage display selection . we found that only l-3,4-dihydroxyphenylalanine ( dopa)-containing peptides were sufficient to maintain particles in solution . we used a dopa - containing sequence motif as the starting point for generation of a further library of over 30 peptides , each of which was complexed with ionps and evaluated for colloidal stability and magnetic resonance imaging ( mri ) contrast properties . optimal properties were conferred by sequences within a narrow range of biophysical parameters , suggesting that these sequences could serve as generalizable anchors for formation of polypeptide ionp complexes . differences in the amino acid sequence affected t1- and t2-weighted mri contrast without substantially altering particle size , indicating that the microstructure of peptide - based ionp coatings exerts a substantial influence and could be manipulated to tune properties of targeted or responsive contrast agents . a representative peptide ionp complex displayed stability in biological buffer and induced persistent mri contrast in mice , indicating suitability of these species for in vivo molecular imaging applications .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Transitional Housing Assistance for Victims of Domestic Violence Act of 2002''. SEC. 2. TRANSITIONAL HOUSING ASSISTANCE GRANTS. The Attorney General, in consultation with the Secretary of Housing and Urban Development and the Secretary of Health and Human Services, shall award grants under this Act to organizations, States, units of local government, and Indian tribes (referred to in this Act as the ``recipient'') to carry out programs to provide assistance to individuals, and the dependents of individuals-- (1) who are homeless or in need of transitional housing or other housing assistance as a result of fleeing a situation of domestic violence; and (2) for whom emergency shelter services or other crisis intervention services are unavailable or insufficient. SEC. 3. TYPES OF ASSISTANCE. Grants awarded under this Act may be used for programs that provide-- (1) short-term housing assistance, including rental or utilities payments assistance and assistance with related expenses such as payment of security deposits and other costs incidental to relocation to transitional housing for persons described in section 2; and (2) support services designed to enable an individual, or dependent of an individual, who is fleeing a situation of domestic violence to-- (A) locate and secure permanent housing; and (B) integrate into a community by providing that individual or dependent with services, such as transportation, counseling, child care services, case management, employment counseling, and other assistance. SEC. 4. DURATION. (a) In General.--Except as provided in subsection (b), an individual, or dependent of an individual, who receives assistance under this Act shall receive that assistance for not more than 18 months. (b) Waiver.--The recipient of a grant under this Act may waive the restriction under subsection (a) for not more than an additional 6 month period with respect to any individual, or dependent of an individual, who-- (1) has made a good-faith effort to acquire permanent housing; and (2) has been unable to acquire permanent housing. SEC. 5. APPLICATION. (a) In General.--Each eligible entity desiring a grant under this Act shall submit an application to the Attorney General at such time, in such manner, and accompanied by such information as the Attorney General may reasonably require. (b) Contents.--Each application submitted pursuant to subsection (a) shall-- (1) describe the activities for which assistance under this Act is sought; and (2) provide such additional assurances as the Attorney General determines to be essential to ensure compliance with the requirements of this Act. (c) Application.--Nothing in this section shall be construed to require-- (1) victims to participate in the criminal justice system in order to receive services; or (2) domestic violence advocates to breach client confidentiality. SEC. 6. REPORTS. (a) Report to the Attorney General.-- (1) In general.--A recipient of a grant under this Act shall annually prepare and submit to the Attorney General a report describing-- (A) the number of individuals and dependents assisted under this Act; and (B) the types of housing assistance and support services provided under this Act. 
(2) Contents.--Each report prepared and submitted under paragraph (1) shall include information regarding-- (A) the amount of housing assistance provided to each individual, or dependent of an individual, assisted under this Act and the reason for that assistance; (B) the number of months each individual, or dependent of an individual, received assistance under this Act; (C) the number of individuals and dependents of those individuals who-- (i) were eligible to receive assistance under this Act; and (ii) were not provided with assistance under this Act solely due to a lack of available housing; and (D) the type of support services provided to each individual, or dependent of an individual, assisted under this Act. (b) Report to Congress.--The Attorney General shall annually prepare and submit to the Committee on the Judiciary of the House of Representatives and the Committee on the Judiciary of the Senate a report that contains a compilation of the information contained in the report submitted under subsection (a). (c) Availability of Report.--In order to coordinate efforts to assist the victims of domestic violence, the Attorney General shall transmit a copy of the report submitted under subsection (b) to-- (1) the Office of Community Planning and Development at the United States Department of Housing and Urban Development; and (2) the Office of Women's Health at the United States Department of Health and Human Services. SEC. 7. AUTHORIZATION OF APPROPRIATIONS. (a) In General.--There are authorized to be appropriated to carry out this Act $30,000,000 for each of fiscal years 2003 through 2006. (b) Limitations.--Of the amount made available to carry out this Act in any fiscal year, not more than 3 percent may be used by the Attorney General for salaries and administrative expenses. (c) Minimum Amount.-- (1) In general.--Except as provided in paragraph (2), unless all eligible applications submitted by any States, units of local government, Indian tribes, or organizations within a State for a grant under this Act have been funded, that State, together with the grantees within the State (other than Indian tribes), shall be allocated in each fiscal year, not less than .75 percent of the total amount appropriated in the fiscal year for grants pursuant to this Act. (2) Exception.--The United States Virgin Islands, American Samoa, Guam, and the Northern Mariana Islands shall each be allocated .25 percent of the total amount appropriated in the fiscal year for grants pursuant to this Act.
Transitional Housing Assistance for Victims of Domestic Violence Act of 2002 - Directs the Attorney General to make grants to State and local governments, Indian tribes, and organizations to provide transitional housing and related support services (18-month maximum with a six-month extension) to individuals and dependents: (1) who are homeless as a result of domestic violence; and (2) for whom emergency shelter services or other crisis intervention services are unavailable or insufficient.
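The funding floors in SEC. 7 of the bill above reduce to straightforward percentages of the annual authorization. The Python sketch below works them out assuming the full $30,000,000 were appropriated for a fiscal year and treating the 0.75% floor as applying to the 50 states; how the remaining funds are awarded is left to the grant process and is not modeled.

    # Funding floors under SEC. 7, assuming the full $30,000,000 authorization
    # is appropriated. The 50-state simplification is mine; the statute applies
    # the 0.75% floor to each State together with its grantees.
    appropriation = 30_000_000

    admin_cap = 0.03 * appropriation          # SEC. 7(b): max for AG salaries/administration
    state_floor = 0.0075 * appropriation      # SEC. 7(c)(1): minimum per state
    territory_share = 0.0025 * appropriation  # SEC. 7(c)(2): USVI, American Samoa, Guam, N. Mariana Is.

    reserved = 50 * state_floor + 4 * territory_share
    print(f"admin cap:           ${admin_cap:,.0f}")        # $900,000
    print(f"per-state floor:     ${state_floor:,.0f}")      # $225,000
    print(f"per-territory share: ${territory_share:,.0f}")  # $75,000
    print(f"total set aside:     ${reserved:,.0f}")         # $11,550,000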
the socioeconomic burden associated with wet age - related macular degeneration ( wamd ) is predicted to rise as the prevalence increases with aging populations.13 this will have a major impact on direct and indirect costs , including costs associated with informal care and lost productivity , which are estimated to be in the region of us$23 billion and $ 34 billion , respectively.4 based on these estimates , it is essential to monitor the effectiveness of long - term management strategies with a view to identifying treatment barriers , particularly from a patient perspective . this is important for newer treatments such as antivascular endothelial growth factor ( anti - vegf ) ( intravitreal ) injections , which have offered remarkable clinical benefits for patients with wamd . anti - vegf agents are known to target a key underlying pathway in the development and progression of wamd and have been shown to be clinically effective in large - scale studies;58 however , surveys on long - term treatment patterns indicate that these agents are underutilized in real - life clinical settings.9 in addition , few studies have examined the impact of anti - vegf treatments from a caregiver perspective , with evidence suggesting that the impact may be similar to that experienced by caregivers of patients with atrial fibrillation.10 such feedback will be invaluable for identifying any barriers to treatment provision and compliance , which could be addressed by the health provider . the aim of this noninterventional , cross - sectional survey was to evaluate the impact of wamd on a global cohort of patients who were currently receiving ( or had previously received ) anti - vegf injections . the survey also identified caregivers ( both paid and unpaid ) and evaluated the effect that caring for someone with wamd had on them . the survey was conducted via a questionnaire that was devised by ophthalmologists and experts in the field of ophthalmology . this paper reports the findings associated with current approaches to the treatment of wamd , including diagnosis and follow - up , and obstacles to treatment , from the perspective of both patient and caregiver responders . this was a global , noninterventional , cross - sectional survey conducted between june 2012 and september 2012 , with data analysis staggered from july 2012 to december 2012 . the survey was devised 1 ) to evaluate the emotional and physical impact of wamd in patients and caregivers and 2 ) to identify current approaches to diagnosis and management of wamd , including barriers to treatment from the perspective of patients and caregivers . the survey was performed using a questionnaire , which was developed through collaboration between an independent steering committee consisting of ophthalmologists and experts and two research organizations ( blueprint partnership , manchester , uk , and survey sampling international [ ssi ] , london , uk ) . the survey link was soft - launched , allowing a small number of responders to complete the questionnaire so that the data could be checked to ensure accurate capture . for those responders with poor eyesight , face - to - face and telephone collection methods were used , wherein a member of ssi or one of their online partners would read aloud the questions and collect and input the responders answers . the online , face - to - face , and telephone surveys were translated for each participating country . 
the survey was conducted in nine countries ( australia , brazil , canada , france , germany , italy , japan , spain , and the uk ) . patients with a wamd diagnosis who were treated by a health care professional ( hcp ) and received current or prior anti - vegf injections to treat their wamd were included . caregivers who provided care and support to a patient with wamd ( based on the aforementioned criteria ) were also included . support was defined as assisting with one or more of the following : daily activities ( eg , reading , cooking , cleaning , and shopping ) ; driving / traveling with the patient to clinical appointments ; being actively involved in clinical appointments ; and influencing treatment decisions ( eg , advising the patient or helping him or her to understand things and giving an opinion about the treatment he or she will receive ) . recruitment of patients and caregivers was conducted using a combination of online recruitment ( via the ssi website ) and physician referral . physicians identified suitable patients / caregivers and , with their consent , passed on their details to the research organization . the questionnaire was divided into patient and caregiver sections ( the questions are listed with the tables and figures in the results section ) . module a included a number of questions related to initial symptoms , diagnosis ( including first hcp visit ) , time since diagnosis , and information provided ( including source ) . module b included a number of questions related to treatment , follow - up , obstacles to managing wamd , and emotional impact . the caregiver questionnaire was similar , but also included questions on type of support provided . the responders ( patients and caregivers ) were asked to provide yes / no / not sure answers based on a number of available options or to rate questions using impact scales ( positive impact , no impact , negative impact ) , dependency scales ( not dependent , neither dependent nor independent , dependent ) , or convenience scales ( not inconvenient , neither convenient nor inconvenient , inconvenient ) . all completed questionnaire data were stored and captured in spss format ( spss inc . , prior to analyses , data checks were undertaken to ensure that all responders met the screening criteria ; only eligible responders answered relevant questions , responders who clicked through the survey without giving thoughtful responses were removed , and outliers were removed from relevant questions . all data were presented as descriptive statistics based on absolute percentages and means . where possible , data were stratified according to whether patients had wamd in one or two eyes , and these data were compared and analyzed using either a two - sided t - test ( to compare mean values ) or two - tailed z - test ( to compare percentages ) . these analyses were based on the assumption of equal variance with a 5% significance level ( p<0.05 ) . tests were adjusted using the bonferroni correction to counteract the problem of multiple and pairwise comparisons . data analyses were performed in spss version 21 , and all analyses were documented in syntax files . this was a global , noninterventional , cross - sectional survey conducted between june 2012 and september 2012 , with data analysis staggered from july 2012 to december 2012 . 
the survey was devised 1 ) to evaluate the emotional and physical impact of wamd in patients and caregivers and 2 ) to identify current approaches to diagnosis and management of wamd , including barriers to treatment from the perspective of patients and caregivers . the survey was performed using a questionnaire , which was developed through collaboration between an independent steering committee consisting of ophthalmologists and experts and two research organizations ( blueprint partnership , manchester , uk , and survey sampling international [ ssi ] , london , uk ) . the survey link was soft - launched , allowing a small number of responders to complete the questionnaire so that the data could be checked to ensure accurate capture . for those responders with poor eyesight , face - to - face and telephone collection methods were used , wherein a member of ssi or one of their online partners would read aloud the questions and collect and input the responders answers . the online , face - to - face , and telephone surveys were translated for each participating country . the survey was conducted in nine countries ( australia , brazil , canada , france , germany , italy , japan , spain , and the uk ) . patients with a wamd diagnosis who were treated by a health care professional ( hcp ) and received current or prior anti - vegf injections to treat their wamd were included . caregivers who provided care and support to a patient with wamd ( based on the aforementioned criteria ) were also included . support was defined as assisting with one or more of the following : daily activities ( eg , reading , cooking , cleaning , and shopping ) ; driving / traveling with the patient to clinical appointments ; being actively involved in clinical appointments ; and influencing treatment decisions ( eg , advising the patient or helping him or her to understand things and giving an opinion about the treatment he or she will receive ) . recruitment of patients and caregivers was conducted using a combination of online recruitment ( via the ssi website ) and physician referral . physicians identified suitable patients / caregivers and , with their consent , passed on their details to the research organization . the questionnaire was divided into patient and caregiver sections ( the questions are listed with the tables and figures in the results section ) . module a included a number of questions related to initial symptoms , diagnosis ( including first hcp visit ) , time since diagnosis , and information provided ( including source ) . module b included a number of questions related to treatment , follow - up , obstacles to managing wamd , and emotional impact . the caregiver questionnaire was similar , but also included questions on type of support provided . the responders ( patients and caregivers ) were asked to provide yes / no / not sure answers based on a number of available options or to rate questions using impact scales ( positive impact , no impact , negative impact ) , dependency scales ( not dependent , neither dependent nor independent , dependent ) , or convenience scales ( not inconvenient , neither convenient nor inconvenient , inconvenient ) . all completed questionnaire data were stored and captured in spss format ( spss inc . 
, prior to analyses , data checks were undertaken to ensure that all responders met the screening criteria ; only eligible responders answered relevant questions , responders who clicked through the survey without giving thoughtful responses were removed , and outliers were removed from relevant questions . all data were presented as descriptive statistics based on absolute percentages and means . where possible , data were stratified according to whether patients had wamd in one or two eyes , and these data were compared and analyzed using either a two - sided t - test ( to compare mean values ) or two - tailed z - test ( to compare percentages ) . these analyses were based on the assumption of equal variance with a 5% significance level ( p<0.05 ) . tests were adjusted using the bonferroni correction to counteract the problem of multiple and pairwise comparisons . data analyses were performed in spss version 21 , and all analyses were documented in syntax files . the caregivers included in the survey were a child or grandchild of the patient ( 47.3% ; n=421/890 ) , partner ( 23.3% ; n=207/890 ) , neighbor / friend / other relatives ( 13.7% ; n=122/890 ) , sibling ( 6.0% ; n=53/890 ) , or volunteer ( 3.3% ; n=29/890 ) . wamd was diagnosed in two eyes in 45.1% ( n=410/910 ) of patients and in one eye in 54.9% ( n=500/910 ) of patients . the majority of patients ( 74.7% ; n=680/910 ) had been diagnosed with wamd for > 1 year ( table 1 ) . most patients ( 72.9% ; n=663/910 ) visited an hcp within 1 month of first noticing a change in vision ( table 1 ) . nearly half of all patients ( 41.2% ; n=187/454 ) who delayed visiting an hcp thought that the symptoms would resolve . significantly more patients with wamd in one eye delayed visiting an hcp , as they were unaware that their vision had changed ( 9.4% [ n=23/245 ] vs 2.4% [ n=5/209 ] ; p<0.05 ) ( table 1 ) . patients with wamd in two eyes were more likely to be diagnosed earlier ( ie , 13 weeks ) than the patients with wamd in one eye ( 33.9% [ n=139/410 ] vs 27.2% [ n=136/500 ] , respectively ; p<0.05 ) ( table 1 ) . the majority of patients ( 63.8% ; n=581/910 ) had been receiving anti - vegf injections for > 1 year ( table 2 ) . anti - vegf injections had been started immediately in 54.4% ( n=495/910 ) of patients , and this number was significantly higher in those with wamd in two eyes compared with one eye ( 62.0% [ n=254/410 ] vs 48.2% [ n=241/500 ] , respectively ; p<0.05 ) . patients with wamd in two eyes were more likely to attend more frequently ( every 2 months ) compared with patients with wamd in one eye ( 33.4% [ n=137/410 ] vs 25.4% [ n=127/500 ] ; p<0.05 ) . significantly more patients with wamd in two eyes compared with one eye had injections at every visit ( 55.4% [ n=227/410 ] vs 32.0% [ n=160/500 ] ; p<0.05 ) . a temporary improvement or stabilization in vision as a result of current treatment was reported by 51.6% of patients ( n=470/910 ) , and 22.3% of patients ( n=203/910 ) reported a return to prediagnosis vision or that their vision was still improving . most caregivers always attended appointments ( 60.1% ; n=535/890 ) and were involved in discussions about the treatment plan ( 83.3% ; n=555/666 ; table 3 ) . many caregivers were able to reduce the level of domestic assistance provided after the patient started treatment ( 30.2% ; n=269/890 ) , with many also reporting that the patient had a temporary improvement or stabilization in their vision ( 53.4% ; n=475/890 ) . 
however , a number of caregivers reported that frequent appointments were inconvenient ( figure 1 ) . the level and source of information on wamd that had been provided is summarized in table 4 . the main source of information for both patients ( 75.6% ; n=688/910 ) and caregivers ( 71.6% ; n=637/890 ) was the physician , followed by the internet ( 8.6% [ n=78/910 ] and 11.2% [ n=100/890 ] ) . however , only 23.0% ( n=209/910 ) of patients were enrolled in a patient support program that aimed to provide appointment reminders ( 72.2% [ n=151/209 ] ) and emotional support ( 58.9% [ n=123/209 ] ) . most patients ( 65.4% ; n=595/910 ) and caregivers ( 77.0% ; n=685/890 ) reported a number of obstacles in managing wamd ( figure 2 ) . for patients , the main barrier was the treatment itself ( 34.8% ; n=317/910 ) ( this refers to anti - vegf agents only , and the most common barriers would relate to having injections , frequency of injections , and possible injection - related side effects ) . other barriers included treatment costs ( 27.8% ; n=253/910 ) and finding the right treatment option ( 27.4% ; n=249/910 ) ( this refers to anti - vegf agents and laser and relates to information on choosing the best option , including whether to have anti - vegf injections [ any type ] , issues relating to frequency of treatments , or if / when to have laser ) . several obstacles were reported by a significantly higher proportion of patients with wamd in two eyes compared with one eye , including the treatment itself ( 39.0% [ n=160/410 ] vs 31.4% [ n=157/500 ] , respectively ; p<0.05 ) , and finding the right treatment option ( 35.1% [ n=144/410 ] vs 21.0% [ n=105/500 ] ; p<0.05 ) . however , 34.6% ( n=315/910 ) of all patients also reported that they were willing to do whatever it takes to maintain their vision ; this was significantly higher for patients with wamd in one eye than those with wamd in two eyes ( 43.8% [ n=219/500 ] vs 23.4% [ n=96/410 ] ; p<0.05 ) . for caregivers , the main barriers were also the patient s treatment itself ( 38.8% [ n=345/890 ] ) and finding the right treatment option for the patient ( 31.0% [ n=276/890 ] ) ( figure 2 ) . despite these obstacles , 84.3% ( n=767/910 ) of patients and 74.2% ( n=660/890 ) of caregivers reported that the patient was compliant with treatment ( ie , attended every clinic appointment ) . for the 15.7% ( n=143/910 ) of patients who missed a clinic appointment , the main obstacles were that the caregiver was unable to take them to the appointment ( 25.9% ; n=37/143 ) , fear about receiving an injection ( 21.0% ; n=30/143 ) , and patient illness ( reason not stated ) ( 18.9% ; n=27/143 ) . most patients ( 56.7% ; n=516/910 ) were usually taken to the appointment by a caregiver ; however , 20.4% ( n=186/910 ) went by public transport , 12.4% ( n=113/910 ) drove themselves , 8.1% ( n=74/910 ) used a taxi , and 2.3% ( n=21/910 ) were taken by an ambulance . travel time to appointments , however , did not affect the impact that wamd had on a patient s life ( figure 3 ) . the caregivers included in the survey were a child or grandchild of the patient ( 47.3% ; n=421/890 ) , partner ( 23.3% ; n=207/890 ) , neighbor / friend / other relatives ( 13.7% ; n=122/890 ) , sibling ( 6.0% ; n=53/890 ) , or volunteer ( 3.3% ; n=29/890 ) . wamd was diagnosed in two eyes in 45.1% ( n=410/910 ) of patients and in one eye in 54.9% ( n=500/910 ) of patients . the majority of patients ( 74.7% ; n=680/910 ) had been diagnosed with wamd for > 1 year ( table 1 ) . 
this global survey provided an overview of the diagnosis and management of wamd and current barriers to treatment from the perspective of 1,800 patients and caregivers .
responders from nine countries were recruited via physician referral and the internet , thus representing a broad cross - section of the wamd cohort in a general population , and the distribution of patients with wamd in one or two eyes suggests that the sample was not skewed toward only the most severely affected patients . most patients had also been diagnosed ( 75% ) and receiving anti - vegf injections ( 64% ) for > 1 year ; they were , therefore , a suitable sample to survey regarding issues related to long - term wamd treatment .

the study found that most patients ( 73% ) had visited an hcp within 1 month of experiencing vision changes ; however , fewer patients were diagnosed ( 43% ) and treated ( 54% ) during the first visit . some patients also delayed visiting an hcp because they thought the symptoms would resolve ( 41% ) or were part of the aging process ( 20% ) , with 20% being diagnosed between 1 and 2 months and 8% receiving delayed treatment . other studies have found that delaying diagnosis and subsequent treatment adversely affects outcomes.11 in one study of patients with wamd ( 1,149 eyes ) , those with a shorter waiting time between diagnosis and first injection ( 10 days or fewer ) experienced a smaller loss of visual acuity and a greater improvement after first treatment compared with patients with a longer lag time ( > 10 days).12 patients with wamd who were treated early with anti - vegf injections or usual care also incurred lower total direct costs over a lifetime , including incremental costs per vision - year gained ( $ 15,279 vs $ 57,230 , respectively ) and per quality - adjusted life year ( $ 36,282 vs $ 132,281).13 unfortunately , further evaluation of the impact of delayed diagnosis and treatment was beyond the scope of the current survey .

the survey also revealed that 42% of patients had checkups every 4 - 6 weeks , and 43% received treatment at every visit . overall , 84% of patients and 74% of caregivers reported that the patient was compliant with treatment ( ie , attended every clinic appointment ) . vision had improved as a result of treatment , with 74% of patients and 71% of caregivers reporting a return to prediagnosis vision , vision still improving , temporary improvement , or stabilization . in addition , 30% of caregivers reduced the level of care provided following patient treatment . unfortunately , this survey did not monitor the costs associated with patient care and treatment patterns , but a us survey of 803 responders highlighted that the annual costs of caregiving ( paid and unpaid ) ranged from $ 225 to $ 47,086 , depending on visual acuity.14

despite the treatment benefits described here , many patients and caregivers reported a number of obstacles associated with wamd management that related to 1 ) the treatment itself ( reasons not stated ) , 2 ) finding the right option , and 3 ) treatment costs . these three obstacles were comparable between patients and caregivers but were reported significantly more often by patients with wamd in two eyes than in one eye . patients with wamd in one eye were significantly more likely to do whatever it takes to maintain their vision and to report that there were no obstacles associated with wamd management .
these differences may be linked to the emotional impact of wamd ( particularly the level of depression and disease severity ) , which is discussed elsewhere.15 current evidence has shown that monthly and as - needed anti - vegf treatment regimens with ranibizumab are effective,7,8 though it might be difficult to replicate monthly clinical regimens in real - life settings.16 in the aura study , which followed 2,227 patients who received one or more ranibizumab injections for up to 2 years , fewer injections were administered than in clinical studies ( a mean of 5.0 in year 1 and 2.2 in year 2 ) ; the initial improvement observed in visual acuity was not maintained over time , and there was a return to near - baseline values by year 2.17 in an attempt to address these issues , two studies compared quarterly with monthly ranibizumab dosing , and although both regimens produced gains in etdrs letters , the results with quarterly dosing were less impressive than with monthly dosing.18,19 unfortunately , the current survey did not explore the effect of different dosing regimens on compliance and treatment barriers , but a different approach ( such as treat - and - extend ) could address some of the issues raised by the responders .

not surprisingly , inadequate information on wamd was also perceived as a barrier by 11% of patients and 16% of caregivers , with 16% of patients and 25% of caregivers highlighting a lack of understanding about the disease as an issue . teleconsultation networks have been used successfully in italian practices , particularly in minimizing the delay between retreatments.20 this survey highlights the lack of professional patient support and the presence of treatment delays , and both could be further examined in a larger study on the role of telemedicine or similar initiatives .

although the survey is valuable in providing an overview of the impact of wamd on patients and caregivers , it has a number of limitations inherent in its observational , retrospective design . the questionnaire was devised by experts to cover all aspects of wamd , but it was not validated and remains subjective , and some questions may be perceived differently by responders from different countries . the questionnaire did not quantify some of the obstacles associated with wamd management , and the results may therefore carry a number of biases , including selection bias based on the exclusion of nonresponders . it is also not possible to distinguish the severity of the outcomes reported , and the survey did not include a control group . however , the large sample size and the use of physician and online referral would capture a wide sample , as shown by the distribution of patients with wamd in one or two eyes .

in summary , the findings from this survey give a useful overview of the diagnosis , management , and barriers to treatment for wamd from a patient and caregiver perspective . the results highlight that education in symptom awareness , wider provision of information and support , and tailoring long - term follow - up to adjust for the difficulties associated with monthly clinic visits and injections are key areas for improvement .
purpose : a cross - sectional survey to evaluate the current management of wet age - related macular degeneration ( wamd ) and to identify barriers to treatment from a patient and caregiver perspective .
methods : an ophthalmologist - devised questionnaire was given to a global cohort of patients who were receiving ( or had previously received ) antivascular endothelial growth factor injections and to caregivers ( paid and unpaid ) to evaluate the impact of wamd on their lives .
results : responders included 910 patients and 890 caregivers ; wamd was diagnosed in both eyes in 45% of patients , and 64% had been receiving injections for > 1 year . many caregivers were a child / grandchild ( 47% ) or partner ( 23% ) of the patient ; only 7% were professional caregivers . most ( 73% ) patients visited a health care professional within 1 month of experiencing vision changes , and 54% began treatment immediately . most patients and caregivers reported a number of obstacles in managing wamd , including the treatment itself ( 35% and 39% , respectively ) . sixteen percent of patients also missed a clinic visit .
conclusion : most patients seek medical assistance promptly for a change in vision ; however , about a quarter of them do not . this highlights a lack of awareness surrounding eye health and the impact of a delayed diagnosis . most patients and caregivers identified a number of obstacles in managing wamd .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Ocean and Coastal Observation System Act of 2005''. SEC. 2. FINDINGS AND PURPOSES. (a) Findings.--Congress finds the following: (1) Ocean and coastal observations provide vital information for protecting human lives and property from marine hazards, predicting weather, improving ocean health and providing for the protection and enjoyment of the resources of the Nation's coasts, oceans, and Great Lakes. (2) The continuing and potentially devastating threat posed by tsunamis, hurricanes, storm surges, and other marine hazards requires immediate implementation of strengthened observation and data management systems to provide timely detection, assessment, and warnings to the millions of people living in coastal regions of the United States and throughout the world. (3) The 95,000-mile coastline of the United States, including the Great Lakes, is vital to the Nation's prosperity, contributing over $117 billion to the national economy in 2000, supporting jobs for more than 200 million Americans, and supporting commercial and sport fisheries valued at more than $50 billion annually. (4) Responding to coastal hazards and managing fisheries and other coastal activities require improved monitoring of the Nation's waters and coastline, including the ability to provide rapid response teams with real-time environmental conditions necessary for their work. (5) While knowledge of the ocean and coastal environment and processes is far from complete, advances in sensing technologies and scientific understanding have made possible long-term and continuous observation from shore, from space, and in situ of ocean and coastal characteristics and conditions. (6) Many elements of an ocean and coastal observing system are in place, but require national investment, consolidation, completion, and integration at Federal, regional, State, and local levels. (7) The Commission on Ocean Policy recommends a national commitment to a sustained and integrated ocean and coastal observing system and to coordinated research programs in order to assist the Nation and the world in understanding the oceans, improving weather forecasts, strengthening management of ocean and coastal resources, and mitigating marine hazards. (8) In 2003, the United States led more than 50 nations in affirming the vital importance of timely, quality, long-term global observations as a basis for sound decision-making, recognizing the contribution of observation systems to meet national, regional, and global needs, and calling for strengthened cooperation and coordination in establishing a Global Earth Observation System of Systems, of which an integrated ocean and coastal observing system is an essential part. 
(b) Purposes.--The purposes of this Act are to provide for-- (1) the planning, development, and maintenance of an integrated ocean and coastal observing system that provides the data and information to sustain and restore healthy marine and Great Lakes ecosystems and the resources they support, enable advances in scientific understanding of the oceans and the Great Lakes, and strengthen science education and communication; (2) implementation of research, development, education, and outreach programs to improve understanding of the oceans and Great Lakes and achieve the full national benefits of an integrated ocean and coastal observing system; (3) implementation of a data and information management system required by all components of an integrated ocean and coastal observing system and related research to develop early warning systems and insure usefulness of data and information for users; and (4) establishment of a system of regional ocean, coastal, and Great Lakes observing systems to address local needs for ocean information. SEC. 3. DEFINITIONS. In this Act: (1) Council.--The term ``Council'' means the National Ocean Research Leadership Council. (2) Observing system.--The term ``observing system'' means the integrated coastal, ocean and Great Lakes observing system to be established by the Committee under section 4(a). (3) Interagency program office.--The term ``interagency program office'' means the office established under section 4(d). SEC. 4. INTEGRATED OCEAN AND COASTAL OBSERVING SYSTEM. (a) Establishment.--The President, acting through the Council, shall establish and maintain an integrated system of ocean and coastal observations, data communication and management, analysis, modeling, research, education, and outreach designed to provide data and information for the timely detection and prediction of changes occurring in the ocean, coastal and Great Lakes environment that impact the Nation's social, economic, and ecological systems. The observing system shall provide for long-term, continuous and quality-controlled observations of the coasts, oceans, and Great Lakes for the following purposes: (1) Improving the health of the Nation's coasts, oceans, and Great Lakes. (2) Protecting human lives and livelihoods from hazards such as tsunamis, hurricanes, coastal erosion, and fluctuating Great Lakes water levels. (3) Understanding the effects of human activities and natural variability on the state of the coasts, oceans, and Great Lakes and the Nation's socioeconomic well-being. (4) Providing for the sustainable use, protection, and enjoyment of ocean, coastal, and Great Lakes resources. (5) Providing information that can support the eventual implementation and refinement of ecosystem-based management. (6) Supplying critical information to marine-related businesses such as aquaculture and fisheries. (7) Supporting research and development to ensure continuous improvement to ocean, coastal, and Great Lakes observation measurements and to enhance understanding of the Nation's ocean, coastal, and Great Lakes resources. (b) System Elements.--In order to fulfill the purposes of this Act, the observing system shall consist of the following program elements: (1) A national program to fulfill national observation priorities, including the Nation's ocean contribution to the Global Earth Observation System of Systems and the Global Ocean Observing System. 
(2) A network of regional associations to manage the regional ocean and coastal observing and information programs that collect, measure, and disseminate data and information products to meet regional needs. (3) A data management and dissemination system for the timely integration and dissemination of data and information products from the national and regional systems. (4) A research and development program conducted under the guidance of the Council. (5) An outreach, education, and training program that augments existing programs, such as the National Sea Grant College Program, the Centers for Ocean Sciences Education Excellence program, and the National Estuarine Research Reserve System, to ensure the use of the data and information for improving public education and awareness of the Nation's oceans and building the technical expertise required to operate and improve the observing system. (c) Council Functions.--In carrying out responsibilities under this section, the Council shall-- (1) serve as the oversight body for the design and implementation of all aspects of the observing system; (2) adopt plans, budgets, and standards that are developed and maintained by the interagency program office in consultation with the regional associations; (3) coordinate the observing system with other earth observing activities including the Global Ocean Observing System and the Global Earth Observing System of Systems; (4) coordinate and administer programs of research, development, education, and outreach to support improvements to and the operation of an integrated ocean and coastal observing system and to advance the understanding of the oceans; (5) establish pilot projects to develop technology and methods for advancing the development of the observing system; (6) provide, as appropriate, support for and representation on United States delegations to international meetings on ocean and coastal observing programs; and (7) in consultation with the Secretary of State, coordinate relevant Federal activities with those of other nations. (d) Interagency Program Office.--The Council shall establish an interagency program office to be known as ``OceanUS''. The interagency program office shall be responsible for program planning and coordination of the observing system. The interagency program office shall-- (1) prepare annual and long-term plans for consideration by the Council for the design and implementation of the observing system that promote collaboration among Federal agencies and regional associations in developing the global and national observing systems, including identification and refinement of a core set of variables to be measured by all systems; (2) coordinate the development of agency priorities and budgets for implementation of the observing system, including budgets for the regional associations; (3) establish and refine standards and protocols for data management and communications, including quality standards, in consultation with participating Federal agencies and regional associations; (4) develop a process for the certification of the regional associations and their periodic review and recertification; (5) establish an external technical committee to provide biennial review of the observing system; and (6) provide for opportunities to partner or contract with private sector companies in deploying ocean observation system elements. 
(e) Lead Federal Agency.--The National Oceanic and Atmospheric Administration shall be the lead Federal agency for implementation and operation of the observing system. Based on the plans prepared by the interagency program office and adopted by the Council, the Administrator of the National Oceanic and Atmospheric Administration shall-- (1) coordinate implementation, operation and improvement of the observing system; (2) establish efficient and effective administrative procedures for allocation of funds among Federal agencies and regional associations in a timely manner and according to the budget adopted by the Council; (3) implement and maintain appropriate elements of the observing system; (4) provide for the migration of scientific and technological advances from research and development to operational deployment; (5) integrate and extend existing programs and pilot projects into the operational observation system; (6) certify regional associations that meet the requirements of subsection (f); and (7) integrate the capabilities of the National Coastal Data Development Center and the Coastal Services Center of the National Oceanic and Atmospheric Administration, and other appropriate centers, into the observing system for the purpose of assimilating, managing, disseminating, and archiving data from regional observation systems and other observation systems. (f) Regional Associations of Ocean and Coastal Observing Systems.--The Administrator of the National Oceanic and Atmospheric Administration may certify one or more regional associations to be responsible for the development and operation of regional ocean and coastal observing systems to meet the information needs of user groups in the region while adhering to national standards. To be certifiable by the Administrator, a regional association shall-- (1) demonstrate an organizational structure capable of supporting and integrating all aspects of ocean and coastal observing and information programs within a region; (2) operate under a strategic operations and business plan that details the operation and support of regional ocean and coastal observing systems pursuant to the standards established by the Council; (3) provide information products for multiple users in the region; (4) work with governmental entities and programs at all levels within the region to provide timely warnings and outreach to protect the public; and (5) meet certification standards developed by the interagency program office in conjunction with the regional associations and approved by the Council. Nothing in this Act authorizes a regional association to engage in lobbying activities (as defined in section 3(7) of the Lobbying Disclosure Act of 1995 (2 U.S.C. 1602(7)). (g) Civil Liability.--For purposes of section 1346(b)(1) and chapter 171 of title 28, United States Code, the Suits in Admiralty Act (46 U.S.C. App. 741 et seq.), and the Public Vessels Act (46 U.S.C. App. 781 et seq.), any regional ocean and coastal observing system that is a designated part of a regional association certified under this section shall, in carrying out the purposes of this Act, be deemed to be part of the National Oceanic and Atmospheric Administration, and any employee of such system, while acting within the scope of his or her employment in carrying out such purposes, shall be deemed to be an employee of the Government. SEC. 5. RESEARCH, DEVELOPMENT, AND EDUCATION. 
The Council shall establish programs for research, development, education, and outreach for the ocean and coastal observing system, including projects under the National Oceanographic Partnership Program, consisting of the following: (1) Basic research to advance knowledge of ocean and coastal systems and ensure continued improvement of operational products, including related infrastructure and observing technology. (2) Focused research projects to improve understanding of the relationship between the coasts and oceans and human activities. (3) Large scale computing resources and research to advance modeling of ocean and coastal processes. (4) A coordinated effort to build public education and awareness of the ocean and coastal environment and functions that integrates ongoing activities such as the National Sea Grant College Program, the Centers for Ocean Sciences Education Excellence, and the National Estuarine Research Reserve System. SEC. 6. INTERAGENCY FINANCING. The departments and agencies represented on the Council are authorized to participate in interagency financing and share, transfer, receive, obligate, and expend funds appropriated to any member of the Council for the purposes of carrying out any administrative or programmatic project or activity under this Act or under the National Oceanographic Partnership Program, including support for the interagency program office, a common infrastructure, and system integration for a ocean and coastal observing system. Funds may be transferred among such departments and agencies through an appropriate instrument that specifies the goods, services, or space being acquired from another Council member and the costs of the same. SEC. 7. APPLICATION WITH OUTER CONTINENTAL SHELF LANDS ACT. Nothing in this Act supersedes, or limits the authority of the Secretary of the Interior under the Outer Continental Shelf Lands Act (43 U.S.C. 1331 et seq.). SEC. 8. AUTHORIZATION OF APPROPRIATIONS. There are authorized to be appropriated to the National Oceanic and Atmospheric Administration for the implementation of an integrated ocean and coastal observing system under section 4, and the research and development program under section 5, including financial assistance to the interagency program office, the regional associations for the implementation of regional ocean and coastal observing systems, and the departments and agencies represented on the Council, $150,000,000 for each of fiscal years 2006 through 2010. At least 50 percent of the sums appropriated for the implementation of the integrated ocean and coastal observing system under section 4 shall be allocated to the regional associations certified under section 4(f) for implementation of regional ocean and coastal observing systems. Sums appropriated pursuant to this section shall remain available until expended. SEC. 9. REPORTING REQUIREMENT. Not later than March 31, 2010, the President, acting through the Council, shall transmit to Congress a report on the programs established under sections 4 and 5. The report shall include a description of activities carried out under the programs, an evaluation of the effectiveness of the programs, and recommendations concerning reauthorization of the programs and funding levels for the programs in succeeding fiscal years. Passed the Senate July 1, 2005. Attest: EMILY J. REYNOLDS, Secretary.
Ocean and Coastal Observation System Act of 2005 - (Sec. 4) Directs the President, acting through the National Ocean Research Leadership Council, to establish and maintain an integrated system of ocean and coastal observations, data communication and management, analysis, modeling, research, education, and outreach designed to provide data and information for the timely detection and prediction of changes occurring in the ocean and coastal environment that impact the Nation's social, economic, and ecological systems. Requires the system to provide for long-term, continuous, and quality-controlled observations of the coasts, oceans, and Great Lakes. Requires the Council to establish an interagency program office (OceanUS) responsible for program planning and coordination of the system. Requires OceanUS, among other duties, to provide for opportunities to partner or contract with private sector companies in deploying ocean observation system elements. Requires the National Oceanic and Atmospheric Administration (NOAA) to be the lead Federal agency for implementation and operation of the system. Requires NOAA, among other duties, to integrate the capabilities of the National Coastal Data Development Center and the Coastal Services Center, and other appropriate centers, into the observing system for the purpose of assimilating, managing, disseminating, and archiving data from regional and other observation systems. Authorizes the Administrator of NOAA to certify one or more regional associations to be responsible for the development and operation of regional ocean and coastal observing systems to meet the information needs of user groups in the region while adhering to national standards. Deems certified regional systems to be part of NOAA when carrying out this Act, and employees of such systems acting within the scope of their employment to be federal government employees, for purposes of civil liability under specified laws. (Sec. 5) Directs the Council to establish programs for research, development, and education for the system. (Sec. 6) Authorizes departments and agencies represented on the Council to participate in interagency financing and to share funds appropriated to any Council member. (Sec. 7) Declares that nothing in this Act supersedes, or limits the authority of the Secretary of the Interior under the Outer Continental Shelf Lands Act. (Sec. 8) Authorizes appropriations to NOAA for FY2006-FY2010 for implementation of the integrated ocean and coastal observing system and the research and development program required by this Act. Requires the allocation of 50 percent of appropriations for the observing system to certified regional associations for regional systems. (Sec. 9) Requires the President, acting through the Council, to report to Congress on the programs established under this Act.
study participants were recruited in a consecutive manner from our glaucoma clinic at the asan medical center , between december 2008 and january 2009 . all procedures conformed to the declaration of helsinki , and the study was approved by the institutional review board of the asan medical center . all subjects underwent a complete ophthalmologic examination , including visual acuity testing ; the humphrey field analyzer swedish interactive threshold algorithm 24 - 2 test ( carl zeiss meditec inc . , dublin , ca , usa ) ; multiple intraocular pressure ( iop ) measurements using goldmann applanation tonometry ; stereoscopic optic nerve photography ; and spectralis oct ; medical , ocular , and family histories were also taken .

glaucomatous eyes were defined as those with a glaucomatous visual field ( vf ) defect confirmed by two reliable vf examinations and with the appearance of a glaucomatous optic disc , irrespective of the level of iop . a glaucomatous optic disc was defined by increased cupping ( vertical cup - disc ratio > 0.6 ) , a difference in vertical cup - disc ratio > 0.2 between eyes , diffuse or focal neural rim thinning , hemorrhage , and rnfl defects . glaucoma - suspect eyes were defined as those with a glaucomatous - appearing optic disc but normal vf test results . healthy eyes were defined as those with healthy optic discs and normal vf test results . glaucomatous vf defects were defined when eyes met at least two of the following criteria : 1 ) a cluster of three points with a probability of less than 5% on the pattern deviation map in at least one hemifield , including at least one point with a probability of less than 1% , or a cluster of two points with a probability of less than 1% ; 2 ) a glaucoma hemifield test result outside 99% of the age - specific normal limits ; and 3 ) a pattern standard deviation outside the 95% normal limit .

among subjects who met the other inclusion criteria , patients with discernable ppa on stereoscopic optic disc photography , regardless of ppa size , were analyzed . ppa was differentiated into a peripheral α-zone , with irregular pigmentation , and a central β-zone , with visible sclera and large choroidal vessels . the presence and extent of a β-zone on the optic disc photographs was independently assessed by three glaucoma experts ( msk , krs , and jhn ) , and only those cases agreed upon by all three experts were included in the analyses .

the raster scan mode of the spectralis oct , which covers an area of 6 mm × 6 mm , was used to acquire optic disc images . the spectralis oct obtains two images using simultaneous dual laser scanning : an infrared image in the scanning laser ophthalmoscope ( slo ) mode and an oct scan . a three - dimensional volumetric dataset composed of 25 line scans was obtained for each subject . all images obtained by the spectralis oct were reviewed and independently evaluated by two glaucoma experts ( msk and krs ) . as the spectralis oct does not provide a scan quality score , images of poor quality were subjectively excluded when over 10% of reflectance signals were absent from the line data . all images were acquired by a single , well - trained operator ( yl ) .
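the two - of - three rule for glaucomatous vf defects described above can be made concrete with a small helper function ; this is only an illustrative sketch , and the field names and example values are hypothetical rather than taken from the study .

```python
# illustrative sketch of the "at least two of three" glaucomatous vf defect rule
# described above ; the input fields and the example values are hypothetical .
from dataclasses import dataclass

@dataclass
class VisualFieldTest:
    cluster_3pts_p5_with_1pt_p1: bool   # cluster of >= 3 points with p < 5% , incl. one point with p < 1%
    cluster_2pts_p1: bool               # cluster of >= 2 points with p < 1%
    ght_outside_99pct_limits: bool      # glaucoma hemifield test outside 99% of age - specific normal limits
    psd_outside_95pct_limit: bool       # pattern standard deviation outside the 95% normal limit

def is_glaucomatous_vf_defect(vf: VisualFieldTest) -> bool:
    criteria = [
        vf.cluster_3pts_p5_with_1pt_p1 or vf.cluster_2pts_p1,  # criterion 1
        vf.ght_outside_99pct_limits,                            # criterion 2
        vf.psd_outside_95pct_limit,                             # criterion 3
    ]
    return sum(criteria) >= 2   # eyes meeting at least two criteria are classified as defective

print(is_glaucomatous_vf_defect(VisualFieldTest(True, False, True, False)))  # True
```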
optic disc scan images acquired by the spectralis oct show the detailed configuration of the posterior boundary of the retinal layers . with the stratus oct , the optic disc image acquired in the fast mode , featuring 128 a - scans , shows the posterior boundary as a single thick , hyper - reflective , red - colored band displayed in false color ( fig . 1a , black arrow ) . this posterior boundary has been interpreted as the complex of the rpe and the junction between the inner and outer segments ( is / os ) of the photoreceptor layer . the spectralis oct images ( fig . 1b ) , however , show posterior retinal boundaries composed of at least two layers , one thinner and one thicker ; the red arrow indicates the peripapillary rnfl . the inner , thinner layer has been defined as the junction between the is / os of the photoreceptor layer ( yellow arrow ) , whereas the outer , thicker layer has been considered to represent the bruch 's membrane / rpe border ( blue arrow ) . because of these difficulties in interpretation , each of two glaucoma specialists ( msk and krs ) independently evaluated the detailed features of the posterior boundaries and rnfls in the ppa β-zones of eyes with healthy , glaucoma - suspect , and glaucomatous optic discs imaged by the spectralis oct . the evaluation of the ppa β-zones was performed along multiple straight horizontal lines on the temporal side of the optic disc . the presence or absence of the various layers ( rnfl , is / os complex , and bruch 's membrane / retinal pigment epithelium complex layer [ brl ] ) was noted .

nineteen eyes of 10 healthy , glaucoma , and glaucoma - suspect subjects , all with ppa , were consecutively imaged . in the first case , all three glaucoma experts agreed that the subject showed a glaucomatous optic disc and accompanying ppa . the extent of the β-zone ( red arrow ) and the optic disc margin ( blue arrow ) were demarcated on the temporal side of the optic disc ( fig . 2b ) . cross - sectional imaging of the optic disc scanned by the spectralis oct showed retinal layer details at high resolution . the extent of the β-zone ( red arrow ) was shown in the slo and cross - sectional images of the spectralis oct . the rnfl ( yellow arrow ) and brl ( green arrow ) were easily seen in the β-zone of the oct ppa images . however , the is / os complexes ( pink arrow ) were absent from the β-zone of the ppa area ( fig . 2c ) .

in the second case , a 61-year - old woman with a healthy optic disc showed ppa on the temporal side , and the extent of the β-zone ( red arrow ) and the optic disc margin ( blue arrow ) were indicated on the temporal side of the optic disc ( fig . 3b ) . in a cross - sectional image of the optic disc scanned by the spectralis oct , the rnfl ( yellow arrow ) and brl ( green arrow ) were observed in the β-zone ppa area , whereas the is / os complexes ( pink arrow ) were absent . the brl was intact and showed strong reflectance , but the brl edge showed slight posterior bowing around the optic disc margin ( fig . 3c ) . the spectralis oct image and the image obtained in the stratus oct fast optic disc mode were compared ; the stratus oct image also showed slight posterior bowing of the brl ( white arrow ) , but the automatic disc margin detection algorithm failed to detect the edge ( fig . 3 ) .

in the third case , examination of the optic disc revealed advanced cupping , multiple rnfl bundle defects , and marked ppa on fundus photography ( fig . 4 ) . the extent of the β-zone ( red arrow ) and the optic disc margin ( blue arrow ) were shown on the temporal side of the optic disc ( fig . 4a ) . ppa and rnfl bundle defects were also seen on spectralis oct slo imaging ( fig . 4b ) .
in a cross - sectional image of the optic disc obtained by the spectralis oct , the rnfl was thinner where the scan line passed through the rnfl bundle defect area ( fig . 4b , green scan line ) , but the rnfl ( yellow arrow ) was nonetheless observed in the β-zone of the ppa . the is / os complexes ( pink arrow ) were absent from the β-zone of the ppa area . the brl was atrophic and posteriorly bowed in the β-zone of the ppa area ( fig . 4 ) .

when using the stratus oct to examine optic discs , we sometimes find that disc margin detection is inappropriate , and this ( if uncorrected ) may lead to unreliable optic disc parameter measurements . this may be one reason why oct optic disc analysis has been used less frequently in both research and clinical settings than rnfl analysis . first , although the stratus oct employs an rpe / choriocapillary edge detection algorithm to determine the optic disc margin , we have demonstrated that the rpe / choriocapillary edge may not be an accurate disc margin marker in optic discs with ppa [ 8 - 10 ] . second , the poor scan quality around the disc margin afforded by oct may contribute to errors in optic disc margin detection .
in the vicinity of the optic disc , the fast optic disc mode of the stratus oct instrument makes only 128 a - scans in each pass , leading to poor image resolution . thus , we aimed to image the complex structures of the peripapillary retinal layers , including the β-zone of ppa , using high - resolution sd - oct . the higher - quality images of sd - oct showed that the features of the ppa β-zone were not uniform in healthy and glaucomatous optic discs , or in the discs of glaucoma - suspect eyes ; the presence or absence of particular structures varied among ppa eyes . a common finding was that the is / os junction was not observed in the β-zones of ppa eyes , usually indicating that photoreceptors are absent from the β-zone of the ppa areas . the other important finding was that detectable rnfls were observed in the β-zone of most ppa areas . thus , rnfls were retained , despite glaucoma - induced reductions in rnfl thickness , in the β-zone of the ppa areas .

we found that 12 of 19 eyes showed intact brl complexes within the β-zone of the ppa areas . however , as exemplified in case 2 above , some eyes showed posterior bowing of the bruch 's membrane / rpe complex rather than a relatively linear arrangement . in such cases , the current automatic disc margin detection algorithm of the stratus oct may fail to demarcate disc margins with precision . as shown in case 2 , automatic disc margin detection by the stratus oct did not include the posteriorly bowed terminus of the brl , resulting in delineation of an erroneously large disc margin . in 7 of 19 eyes , the brl was atrophic or absent within the ppa β-zone . as illustrated in case 3 , the brl appeared to be thinner and incomplete around the optic disc margin , and showed posterior bowing . this finding is in line with the original definition of the ppa β-zone , which is an area devoid of , or atrophic for , rpe and choriocapillaries . it emphasizes that the brl cannot serve as a useful disc margin marker in such eyes .

a limitation of our study is that we did not quantitatively define the β-zone of the ppa areas , nor did we match such areas with the oct images . however , our primary goal was to show how the ppa β-zone is presented on high - quality oct images , and we have demonstrated that the retinal layer features are not uniform in the β-zone of ppa areas . in conclusion , the β-zones of ppa showed fine - structure variability when evaluated by sd - oct imaging . if both ppa and the disc margin are important concepts in glaucoma diagnosis , then determination of the optic disc margin needs to be customized based on ppa characteristics . application of automated disc margin detection software without consideration of the specific ppa architecture may be scientifically invalid .
purpose : to characterize the features of peripapillary atrophy ( ppa ) , as imaged by spectral - domain optical coherence tomography ( sd - oct ) .
methods : sd - oct imaging of the optic disc was performed on healthy eyes , eyes suspected of having glaucoma , and eyes diagnosed with glaucoma . from the peripheral β-zone , the retinal nerve fiber layer ( rnfl ) , the junction of the inner and outer segments ( is / os ) of the photoreceptor layer , and the bruch 's membrane / retinal pigment epithelium complex layer ( brl ) were visualized .
results : nineteen consecutive eyes of 10 subjects were imaged . the rnfl was observed in the ppa β-zone of all eyes , and no eye showed an is / os complex in the β-zone . the brl was absent in the β-zone of two eyes . the brl was incomplete or showed posterior bowing in the β-zone of five eyes .
conclusions : the common findings in the ppa β-zone were that the rnfl was present but the photoreceptor layer was absent . the presence of the brl was variable in the β-zone areas .
in this appendix we wish to explain a method for calculating ( connected ) expectation values of a string of plaquette operators in the background of our trial action @xmath3 ( eq . ( [ eq - trial_links ] ) ) . the method involves expressing the expectation values of products of single link operators in terms of tensor projection operators @xcite , which can then be multiplied together within an algebraic manipulation package . of those available , we found FORM the most suitable because it has an explicit summation convention and extensive substitution facilities . let us start with the simplest example , @xmath109 , where @xmath110 is a single - link element of @xmath0 . it is clear that @xmath111 , and by taking the trace we establish the coefficient as @xmath40 ( see eq . ( [ eq - vdefs ] ) ) : the notation has been chosen with a view to subsequent examples and in general refers to the young tableau associated with a particular permutation symmetry , in this case trivial , of the upper indices . if we now go on to the product of three @xmath110 s , there are three irreducible representations of the relevant permutation group @xmath117 . however , since we are constructing irreducible tensors of @xmath0 , the completely antisymmetric operator @xmath118 effectively vanishes and only the completely symmetric and mixed symmetry operators @xmath119 and @xmath120 survive . the form of these tensors can be deduced from the group algebra of the conjugacy classes of @xmath117 , which comprise @xmath121 , @xmath122 and @xmath123 . the non - trivial products in the algebra are @xmath124 , @xmath125 and @xmath126 . the vanishing of @xmath118 for @xmath0 corresponds to the equivalence relation @xmath127 when acting on the identity element @xmath128 . thus in our search for projection operators we can limit ourselves to the sub - algebra of @xmath129 generated by the even permutations of @xmath121 , @xmath8 . it is then easy to construct the required projection operators as @xmath130 acting on @xmath128 by permutation of the upper indices . by taking traces we can establish the coefficients as @xmath131 , @xmath40 in the expansion @xmath132 for the product of four @xmath110 s , we can anticipate that @xmath133 the problem is to establish the specific form of the three projection operators . again the procedure is to look at the group algebra of @xmath134 , which has five conjugacy classes . because of the equivalence relations arising from the vanishing in @xmath0 of completely antisymmetric combinations involving more than two indices , we can eliminate the classes of odd permutations and work with the group algebra of the alternating group @xmath135 . this has three classes , @xmath121 , @xmath122 , @xmath136 , with algebra @xmath137 , @xmath138 and @xmath139 . from these it is possible to construct the projection operators @xmath140 these formulae are sufficient to evaluate all the diagrams we encountered up to @xmath78 . diagrams involving @xmath141 can be dealt with @xcite by converting @xmath141 to @xmath110 according to @xmath142 in particular , @xmath143 a given diagram will consist of a number of plaquettes with certain links in common . the procedure is then to write down the general expression for the corresponding amplitude , identify the shared links , apply the appropriate substitutions for @xmath144 , @xmath145 , @xmath146 and @xmath147 and then sum over all repeated indices . in fact we actually want the connected expectation values ( cumulants ) @xmath148 .
these can be obtained by identifying the different terms in the expansion of @xmath149 with products of expectation values of the corresponding partitions of the @xmath7 plaquettes . thus @xmath150 corresponds to @xmath151 etc .
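the bookkeeping that converts full expectation values into connected ones can be illustrated with a short , self - contained python sketch of the moment - to - cumulant inversion over set partitions ; the plaquette labels and moment values below are arbitrary placeholders , and the sketch stands in for , rather than reproduces , the FORM computation described above .

```python
# a minimal sketch of the moment -> cumulant ( connected expectation ) relation
# sketched above : the full expectation of a set of plaquette operators is the sum
# over partitions of products of connected pieces , which can be inverted
# recursively . the moment values below are arbitrary placeholders .

def partitions(items):
    """generate all set partitions of a list of hashable labels."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

def cumulant(labels, moment):
    """connected expectation of the operators in `labels`, given a moment function."""
    labels = sorted(labels)
    total = moment(tuple(labels))
    for part in partitions(labels):
        if len(part) == 1:          # skip the trivial partition ( the cumulant itself )
            continue
        prod = 1.0
        for block in part:
            prod *= cumulant(block, moment)
        total -= prod
    return total

# example with made - up single - and two - plaquette moments <p1> , <p2> , <p1 p2>
moments = {("p1",): 0.4, ("p2",): 0.4, ("p1", "p2"): 0.2}
print(cumulant(["p1", "p2"], lambda key: moments[key]))   # <p1 p2>_c = 0.2 - 0.16 = 0.04
```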
the linear delta expansion is applied to a calculation of the @xmath0 mass gap on the lattice . our results compare favourably with the strong - coupling expansion and are in good agreement with recent monte carlo estimates .

the linear delta expansion ( lde ) is an analytic approach to field theory which has been applied to a number of different problems ( see for example ref . @xcite ) . the approach is non - perturbative in the sense that a power series expansion is made in a parameter @xmath1 artificially inserted into the action , rather than in a coupling constant of the theory . the calculational techniques required do not differ greatly from conventional feynman diagrams . an essential part of the approach is an optimization with respect to another parameter , in the present case @xmath2 , appearing in the @xmath1-extended action . the linear delta expansion uses @xmath1 as an interpolation between a soluble action @xmath3 and the action for the desired theory @xmath4 . the action is written : @xmath5 where @xmath3 contains some dependence on the optimization parameter @xmath2 . a vacuum generating functional or appropriate green function may then be evaluated as a power series in @xmath1 , which is set equal to unity at the end of the calculation . of course this power series is only calculated to a finite number of terms , and will therefore retain some dependence on @xmath2 which would be absent in the sum to all orders when @xmath1 is set equal to one . a well - motivated criterion for fixing @xmath2 is to demand that , at least locally , the truncated result should be independent of @xmath2 . this is the principle of minimal sensitivity ( pms ) @xcite . if @xmath6 denotes the @xmath7th approximant to a quantity @xmath8 , the requirement is @xmath9 this , or some similar criterion , is an intrinsic part of the lde , providing the non - perturbative dependence on the coupling constant of the theory . for example , in the delta expansion of the integral @xmath10 , the pms correctly reproduces its @xmath11 dependence . the application of the pms is also vital for the convergence of the @xmath1 series , which has been rigorously proved for the zero - dimensional @xmath12 vacuum generating functional @xcite and the finite temperature partition function of the anharmonic oscillator in quantum mechanics @xcite . the proof has been recently extended @xcite to the connected vacuum generating function @xmath13 in zero dimensions . a number of non - perturbative approaches to field theory are related to the lde . at first order in @xmath1 , the lde is related to the gaussian approximation @xcite , and at higher orders to generalizations of this @xcite . it also has much in common with work of kleinert @xcite and of sissakian et al . @xcite . in the context of lattice gauge theories the lde has been applied , with various choices of the trial action @xcite-@xcite , to the groups @xmath14 , @xmath15 and @xmath0 , mainly in calculating the plaquette energy @xmath16 . a particularly useful trial action is the one proposed by zheng et al . @xcite , based on single links : @xmath17 the sum runs over all links @xmath18 of the lattice , and the parameter @xmath2 is used for optimization . however , these authors used a different optimization criterion , more closely related to the conventional variational method , in which a rigorous inequality for the free energy at @xmath19 was applied at all orders in @xmath1 .
such a procedure is liable to forfeit the convergence which may be provided by an order - by - order optimization . two of the present authors @xcite used the zheng trial action with the pms in its usual sense in a calculation of the @xmath0 plaquette energy . this was found to give excellent agreement at @xmath20 with monte carlo results in the weak coupling regime . following on from this , the phase structure of the mixed @xmath0 - @xmath21 action was studied @xcite , and again the results to @xmath20 gave good agreement with the monte carlo results . we were therefore encouraged to attempt to extend the method to the more difficult problem of the mass gap in lattice @xmath0 using the same trial action . for such quantities , which involve finding the exponential fall - off of a correlator at large separations , semi - analytic methods have , in principle , an advantage over monte - carlo methods , insofar as the size of the lattice is not limited and small signals are not masked by statistics . in section 2 we set up the formalism for the problem to be studied and explain how the diagrams which arise in the delta expansion of the modified action are evaluated . the optimization procedure adopted is explained in section 3 , where the results are presented first in lattice units , and then in terms of the su(2 ) lattice constant @xmath22 by looking for the correct scaling limit as @xmath23 . in section 4 we summarize the paper and indicate some directions for further development . the appendix shows how the evaluation of expectation values in the background of the trial action of eq . ( [ eq - trial_links ] ) can be organized in a way amenable to symbolic computation . we consider the @xmath0 gauge theory on the lattice . the @xmath1-extended action is : @xmath24 the partition function for this system may be written : $ z = \int [ du ] \, {\rm e}^{s_\delta} = \int [ du ] \, \sum_{r=0}^{\infty} \delta^r \frac{(s - s_0)^r}{r!} \, {\rm e}^{s_0} $ and lattice quantities may be evaluated as power series in @xmath1 in the background @xmath3 . this leads to a diagrammatic expansion related , but not identical to the conventional strong coupling @xmath26 expansion @xcite @xcite , the difference being that the strong coupling expectations are evaluated in a zero background . the actual diagrams used are also different , the first non - vanishing diagram in the strong coupling expansion for the mass gap being a closed cuboid of plaquettes , compared with the lde , for which the first diagram is shown in fig . 1(a ) . calculation of the mass gap involves the evaluation of the connected correlation @xmath27 between two non - oriented plaquettes @xmath28 and @xmath29 with temporal separation @xmath30 in any spatial position . @xmath31 the subscript @xmath8 denotes the connected expectation or cumulant . the diagrammatic expansion in powers of @xmath1 has its first non - vanishing term at @xmath32 . this is shown in fig . 1(a ) , where a `` ladder '' of time - like plaquettes connects @xmath28 and @xmath29 . the next power in @xmath1 adds one extra plaquette to fig . 1(a ) in all possible positions . some examples of these are shown as figs . 1(b)-(k ) . it should be noted that this calculation is carried out in the temporal gauge . this explains the absence of diagrams where a plaquette is attached to the side of the ladder by a temporal link only . such a link variable is set to unity , and therefore the extra plaquette is effectively disconnected .
at this order , there is also a term proportional to @xmath33 in the @xmath1 expansion . this can be included as a partial derivative with respect to @xmath2 of the @xmath32 diagram . each of the diagrams shown has an associated multiplicity depending on its geometric properties . the basic diagram of fig . 1(a ) has a factor of @xmath34 representing the fact that the ladder may be connected to any of the four sides of the lower plaquette @xmath28 , which is taken as fixed , and that the upper plaquette @xmath29 has @xmath35 possible spatial orientations . similarly the additional factor of @xmath36 in fig . 1(b ) arises from the @xmath37 possible spatial orientations of the extra plaquette , its possible attachment on any of the three sides of its neighbour , the fact that either of the two upper plaquettes could be @xmath29 , and finally a factor of two to include the symmetrical configuration where the extra plaquette is attached to @xmath28 instead . note that figs . 1(f ) , ( h ) and ( k ) , which involve an additional plaquette in the body of the ladder , have a @xmath38-dependent multiplicity . having enumerated the diagrams to the required order and calculated their associated multiplicities , their expectation values must be evaluated . the evaluation of simpler diagrams consisting of up to four or five plaquettes by group integration @xcite or character expansion @xcite has been discussed elsewhere . another method is discussed in the appendix to the present paper . in the evaluation of the straight ladder diagram of fig . 1(a ) and subsequent modifications thereof , an enormous simplification arises from the fact that the expectation value of a single link is a multiple of the identity ( eq . ( a1 ) ) . this means that the contribution of fig . 1(a ) is just a product of factors representing the expectation values of the doubled links occurring on each rung . remarkably this factorization extends to the connected expectation value , with the result that @xmath39 here each factor of @xmath40 represents the expectation value of a single spacelike link of @xmath28 and @xmath29 , and we get a factor of @xmath41 for each rung . the functions @xmath42 are defined as ratios of modified bessel functions of argument @xmath43 : @xmath44 two derived quantities which appear frequently in the contributions of higher order diagrams are @xmath45 as defined above : @xmath46 and @xmath47 the higher order diagrams consist of modifications to this basic diagram by inserting additional plaquettes at either end and/or in the middle . at order @xmath48 only one of these alternatives is possible . the factorization property noted above extends to these higher order diagrams . that is , their connected expectation can be obtained from the basic building blocks shown in fig . 2 , with additional factors representing the bulk of the ladder . diagrams which involve a modification at one end have a multiplicity which is independent of the total temporal separation @xmath38 , whereas the three diagrams 1(f ) , 1(h ) and 1(k ) which involve an addition to the middle of the ladder have @xmath38-dependent multiplicities . the latter essentially exponentiate in higher orders and so are the only ones which contribute to the mass gap when this is calculated from the @xmath1-expansion of the ratio @xmath49 ( see eq . ( [ eq - tayrat ] ) ) . the contributions of these diagrams are given below : @xmath50 the diagrams of order @xmath51 are similarly built up by adding a further plaquette in all possible ways . 
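the explicit definitions of the bessel - function ratios above are hidden behind placeholders , so as a sanity check the following sketch assumes the simplest normalization : a single su(2 ) link weighted by exp ( j ( 1/2 ) tr u ) , for which the haar - measure class integral gives a one - link partition function 2 i_1(j)/j and a mean link < ( 1/2 ) tr u > = i_2(j)/i_1(j) . the code compares these bessel expressions against direct numerical integration over the su(2 ) class angle ; the normalization of the trial action is an assumption and need not coincide with that of eq . ( [ eq - trial_links ] ) .

```python
# Cross-check of single-link SU(2) expectation values against modified Bessel
# functions.  For a weight exp(J * (1/2)*Tr U), the SU(2) Haar measure reduces
# on class functions to (2/pi)*sin(theta)**2 dtheta with (1/2)*Tr U = cos(theta).
import numpy as np
from scipy.integrate import quad
from scipy.special import iv  # modified Bessel function I_n(x)

def z_link(J):
    """One-link partition function by direct integration over the class angle."""
    val, _ = quad(lambda t: (2.0 / np.pi) * np.sin(t)**2 * np.exp(J * np.cos(t)),
                  0.0, np.pi)
    return val

def mean_link(J):
    """<(1/2) Tr U> in the single-link background, by direct integration."""
    num, _ = quad(lambda t: (2.0 / np.pi) * np.sin(t)**2 * np.cos(t)
                  * np.exp(J * np.cos(t)), 0.0, np.pi)
    return num / z_link(J)

for J in (0.5, 2.0, 5.0):
    print(f"J={J:4.1f}  Z: direct={z_link(J):.6f} vs 2*I1/J={2 * iv(1, J) / J:.6f}   "
          f"<link>: direct={mean_link(J):.6f} vs I2/I1={iv(2, J) / iv(1, J):.6f}")
```

it is this proportionality of the single - link expectation to the identity that lets the straight ladder of fig . 1(a ) collapse into a product of rung factors ; the order - @xmath51 diagrams built from the same ingredients are taken up next .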
some examples are shown in fig . 3 . there are around 150 diagrams at this order , although again it is only those with @xmath38-dependent multiplicities which contribute to the @xmath1-expansion of @xmath49 . there are also additional terms in @xmath52 and @xmath53 which arise from the expansion of the factor @xmath54 in eq . ( [ eq - parfn ] ) . the most succinct way of including such contributions is to note that @xmath2 always occurs in the combination @xmath55 ( see eq . ( [ eq - action ] ) ) . thus the @xmath45 s occurring in the various expectation values are really functions of this argument , which needs to be taylor expanded to the appropriate order . altogether we may write @xmath56 where @xmath57 denotes the multiplicity of the @xmath58th diagram and @xmath59 is its connected expectation value . having set up the diagrams necessary to calculate the correlation @xmath27 , we now need to extract the mass gap @xmath60 using the familiar result : @xmath61 as @xmath62 , giving @xmath63 at first sight it might seem reasonable to calculate @xmath27 and @xmath64 separately , applying the pms to each correlation , and then to extract @xmath65 from equation ( [ eq - mgdef2 ] ) . however , this is not a fruitful procedure for two reasons . the first is that it is not in the spirit of the pms , according to which it is the final quantity calculated which should be optimized with respect to @xmath2 . more importantly , the convergence of the expansion performed in this manner is extremely slow . it is , after all , asking a great deal of a perturbation expansion , even when optimized , to give the correct @xmath66 limit of @xmath27 with only a few terms of the expansion . the most important aspect of the problem is that some of the diagrams have multiplicities which grow with @xmath38 , reflecting the fact that additional plaquettes can be attached in a large number of positions to the body of the ladder . thus the larger the value of @xmath38 , the higher the order of the perturbation expansion required before the factorial denominators in eq . ( [ eq - parfn ] ) eventually control the convergence . however , these diagrams essentially exponentiate . for example the series of `` bracket '' diagrams starting with fig . 1(h ) and continuing with fig . 3(h ) has the form of an exponential series for large @xmath38 . consequently , when the series for the _ ratio _ is taken the @xmath38-dependence cancels , as we show in more detail below . thus by considering the taylor expansion for the ratio , the limit @xmath66 does not pose such a threat to the convergence of the series . a similar procedure was adopted by münster @xcite in the application of the strong coupling expansion to the calculation of the mass gap . we therefore apply the pms to the taylor expansion of the ratio @xmath67 , up to third order in @xmath1 . writing the series for @xmath27 and @xmath64 as @xmath68 the ratio has the expansion @xmath69 this formulation leads to a naïve large @xmath38 limit for the mass gap . in going from temporal separation @xmath38 to @xmath70 , we add an extra plaquette to the ladder part of each diagram , which gives an overall extra factor @xmath71 to the correlation . thus one might expect the mass gap to be equal to @xmath72 . in fact , for the lowest - order contribution we have @xmath73 where @xmath74 , so that indeed @xmath75 . in higher orders , however , @xmath38-dependent multiplicities give rise to corrections to the naïve result .
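the practical content of the ratio method is the familiar effective - mass construction : minus the logarithm of the ratio of the correlator at successive time separations settles onto the lowest mass once the contamination from shorter - range contributions dies away . the sketch below illustrates this with a synthetic correlator built from two decaying exponentials ; the amplitudes and masses are arbitrary illustrative numbers , not values taken from the calculation in the text .

```python
# Effective-mass illustration: M_eff(T) = -ln( C(T+1) / C(T) ) approaches the
# lowest mass as T grows.  The correlator is synthetic -- two exponentials with
# purely illustrative parameters (lattice units).
import numpy as np

A0, M0 = 1.0, 0.80   # "ground-state" amplitude and mass
A1, M1 = 0.6, 1.70   # a faster-decaying contamination

def correlator(T):
    return A0 * np.exp(-M0 * T) + A1 * np.exp(-M1 * T)

for T in range(1, 9):
    m_eff = -np.log(correlator(T + 1) / correlator(T))
    print(f"T={T}  C(T)={correlator(T):.6e}  M_eff={m_eff:.4f}")
# The printed M_eff values decrease towards M0 = 0.80 as T increases.
```

in the delta expansion the analogue of taking @xmath38 large is the cancellation of the @xmath38 - dependent multiplicities in the ratio itself , whose explicit low - order coefficients are written out next .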
the next - order coefficients have the form : @xmath76 then to second order in @xmath1 , eq . ( [ eq - tayrat ] ) is : @xmath77 again this is independent of @xmath38 , and means that in this form of the expansion @xmath38 does not need to be taken asymptotically large . it is sufficient to take it large enough for the diagrams to settle down to a generic form . at @xmath78 , writing @xmath79 the @xmath20 term in eq . ( [ eq - tayrat ] ) is : @xmath80 this apparently has a @xmath38-dependence , but in fact the coefficient @xmath81 is precisely @xmath82 because it arises from exponentiation of the @xmath38-dependent graphs at order @xmath48 . as emphasized by münster @xcite , the summation over the spatial positions of the upper plaquette @xmath29 , which also serves to project out zero spatial momentum in the correlator , is vital to this exponentiation . altogether , then , we have the @xmath38-independent result for the ratio to order @xmath83 : @xmath84 . this expression for @xmath49 is still a function of @xmath2 . according to the pms criterion , we are looking for stationary points in @xmath2 . typical curves of the @xmath2 dependence are shown in figs . 4 and 5 for @xmath85 and @xmath78 respectively . at @xmath85 there is a single maximum , and at @xmath78 the value of @xmath49 at the maximum is remarkably close to this , even though the position in @xmath2 is quite different . at this stage our result is expressed in terms of the inverse of the lattice spacing @xmath86 , which we need to take to zero in order to make contact with the continuum limit . the physical value of the glueball mass must in this limit become a fixed number times the su(2 ) lattice scale @xmath22 . in the weak coupling limit , to two - loop level , this is given by @xmath87 we look for the constant @xmath88 such that @xmath89 which is the value for which the graph of eq . ( [ eq - wcrg ] ) against @xmath26 is tangential to that of @xmath65 calculated in the lde . the graphs are shown in fig . 6 to @xmath85 and fig . 7 to @xmath78 . these show good agreement between the orders , the tangents occurring at @xmath90 and @xmath91 at @xmath85 and @xmath78 respectively . these results then give for the mass gap : @xmath92 compared to the strong - coupling expansion @xcite , which gives @xmath93 ( @xmath94 ) at order @xmath95 and @xmath96 ( @xmath97 ) at order @xmath98 , our results show better consistency between consecutive orders ; moreover , the @xmath26-values where the tangents occur are further into the weak - coupling region . in a series expansion of this kind it is difficult to quote a precise error , but based on the difference between our two results at @xmath85 and @xmath78 one would estimate the error as not more than @xmath99 . our results can be compared directly with those of berg and billoire @xcite , who quote @xmath100 . a comparison with the more recent work of michael and perantonis @xcite on a @xmath101 lattice is less straightforward , since they quote their results in lattice units and cast some doubt on the validity of asymptotic scaling . nonetheless , converting @xmath102 to lattice units at @xmath103 gives @xmath104 , in excellent agreement with their results . at @xmath105 it gives @xmath106 , which is slightly higher than their central value , but still within the error bars .
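the extraction of the physical ratio from the lattice numbers can be phrased as a small root - finding exercise : one looks for the coupling at which the logarithm of the lde mass gap runs parallel to the logarithm of the two - loop scaling curve , and reads off the constant there . the sketch below assumes the standard two - loop expression for the su(2 ) wilson - action scale in terms of the inverse coupling beta = 4/g^2 ( presumably the variable denoted @xmath26 above ) , which may differ from the precise convention behind eq . ( [ eq - wcrg ] ) , and it uses a purely hypothetical table of optimized mass - gap values ; both the tabulated numbers and the helper names are illustrative stand - ins , not results from the text .

```python
# Tangency construction for asymptotic scaling: find where ln(a*M) runs
# parallel to ln(a*Lambda_L) and read off M/Lambda_L there.  The scaling
# function is the standard two-loop SU(2) Wilson-action expression; the
# (beta, a*M) table is hypothetical data standing in for the LDE output.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def a_lambda(beta):
    """Two-loop asymptotic scaling curve a*Lambda_L for SU(2), beta = 4/g**2."""
    return (6.0 * np.pi**2 * beta / 11.0)**(51.0 / 121.0) \
        * np.exp(-3.0 * np.pi**2 * beta / 11.0)

beta_tab = np.array([2.0, 2.1, 2.2, 2.3, 2.4])        # hypothetical couplings
aM_tab   = np.array([2.00, 1.48, 1.13, 0.89, 0.72])   # hypothetical a*M values

log_gap = CubicSpline(beta_tab, np.log(aM_tab))

def mismatch(beta):
    """ln(a*M) - ln(a*Lambda); its stationary point is the tangency coupling."""
    return float(log_gap(beta) - np.log(a_lambda(beta)))

res = minimize_scalar(mismatch, bounds=(beta_tab[0], beta_tab[-1]), method="bounded")
beta_t = res.x
ratio = float(np.exp(mismatch(beta_t)))
print(f"tangency at beta = {beta_t:.3f},  M/Lambda = {ratio:.0f}  (hypothetical input)")
```

this is just a numerical restatement of the tangency condition described above ; with the actual lde curves it yields the tangent points and mass - gap values quoted in the text .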
in this paper we have demonstrated that the linear delta expansion with the principle of minimal sensitivity is a viable technique for the calculation of the mass gap for a lattice gauge theory . we have shown how the lattice diagrams appearing in this type of calculation can be easily evaluated by a process of building up chains of plaquettes from a simple ` root ' diagram , and that connected expectations of these are as simple to deal with . the gauge fixing procedure adopted reduces the number of contributing diagrams , and makes them easier to evaluate . as always , the pms is an integral part of the calculation . the potential ambiguity arising from the occurrence of multiple pms points is not serious in this case . it is clear by comparison with the lower - order calculation that it is the broad maximum at @xmath78 which is the appropriate one , and it is very encouraging that the resulting value of @xmath60 is so stable in going from one order to the next . this calculation has shown the relationship between the lde and the strong coupling expansion . the diagrammatic expansion used is similar , but the actual evaluation of the diagrams is different , requiring alternative techniques . it has proved sufficient to work with the correlators of simple plaquette operators rather than the more complicated `` fuzzy '' operators which have been found necessary in monte - carlo calculations . the fundamental reason for this is that we are effectively working on an infinite lattice , so that large separations are no problem , whereas in monte - carlo calculations it is necessary to enhance the signal at finite separations . the present calculation could be extended in various ways . in increasing order of difficulty these are : \(i ) to calculate higher mass glueball states . with a simple plaquette operator the @xmath107 state occurring in the @xmath108 representation of the cubic group is accessible by weighting the different orientations of the upper plaquette . other spin - parities would require larger wilson loops . \(ii ) to work with the gauge group su(3 ) rather than su(2 ) . this would involve an extension of the techniques of the appendix to su(3 ) . \(iii ) to go to next order in the @xmath1 expansion . the difficulty here is the greatly increased number of diagrams which have to be taken into account and the consequent danger of missing an important contribution . further possible extensions include calculations of the string tension and various quantities at finite temperature . some work has already been done on these lines by tan and zheng @xcite , but using the free energy criterion mentioned above . it would be interesting to return to these problems using pms optimization order by order in the quantity being calculated .
juvenile recurrent parotitis ( jrp ) or recurrent parotitis of childhood is a well - recognized salivary gland disorder and is the second most common salivary disease in children . the clinical presentation of jrp involves recurrent , non - obstructive , non - suppurative swelling of either one or both of the parotid glands . the diagnosis of jrp is based on the clinical presentation , and is one of exclusion of neoplastic , inflammatory and infectious etiologies . the treatment for jrp ranges from medical management with warm compresses , sialogogues and antibiotic therapy to surgical management with a superficial parotidectomy and facial nerve dissection for patients resistant to medical management . these traditional treatment options have been found to have limited effectiveness or carry a high morbidity associated with them . our objective is to present our preliminary experience and a relevant literature review on sialendoscopic management of jrp and to discuss the technical challenges associated with the endoscopic management of jrp . from october 2008 to november 2009 , three children with jrp failing conservative medical management were referred to the lsu department of otolaryngology - head and neck surgery at the children 's hospital of new orleans , louisiana . an informed consent was obtained from the parents of the patients for management with interventional sialendoscopy . before sialendoscopy , the children had received medical management only with no prior procedures having been performed for the treatment of jrp . indication for endoscopic treatment was at least two episodes of parotid swelling within 6 months despite treatment with antibiotics . the procedure was performed in a non - acute setting after resolution of the most recent episode of parotitis . sialendoscopy is a minimally invasive procedure that allows endoscopic visualization of the salivary ductal system and permits diagnosis and treatment of inflammatory and obstructive pathology of the ductal system thus providing an alternative to open surgery and its related complications . there are multiple endoscopes available , ranging in external diameter from 0.8 to 1.6 mm [ figure 1a and b ] . the surgical procedure can be divided into the following major steps : exposure , access to the salivary ductal system , endoscopy of the salivary ductal system and intervention . ( a ) erlangen 1.1 mm all - in - one sialendoscope . ( b ) 1.3 mm marchal all - in - one sialendoscope ( photographs courtesy karl storz , germany ) exposure : nasotracheal intubation is preferred although not mandatory , as this allows a more complete access to the oral cavity . access : the identification and dilation of the parotid duct opening is the rate - limiting step of this procedure . identification can be improved using optical magnification . in difficult cases , where the opening of the papilla can not be visualized , the dilation of the papilla is most commonly performed using a set of specialized ductal dilators of increasing diameter ( marchal dilator system , karl storz , tuttlingen , germany ) [ figure 2 ] . usually , for pediatric sialendoscopy , dilation up to no . 3 or 4 is adequate to introduce the 1.1 mm erlangen or 1.3 mm marchal interventional sialendoscope . other methods for dilation of the papilla that have been described include the use of guide wires and bougies of increasing diameter via the seldinger technique , threading the sialendoscope over a guide wire and also use of a papillotomy to facilitate introduction of the sialendoscope .
the serial dilation method is preferred as it is atraumatic and allows precise dilation of the duct that will allow introduction of the endoscope but prevent backflow of saline that is infused to maintain an adequate surgical view . dilator set ( marchal dilator system , karl storz , tuttlingen , germany ) endoscopy : salivary endoscopy is performed to visualize the main duct looking for debris , areas of stenosis and obstructive sialoliths . a constant infusion of saline helps maintain a surgical endoscopic view . in parotid sialendoscopy , the masseter muscle can often create a turn within the duct that is difficult to navigate and has been termed the masseteric bend . pinching the cheek between the thumb and index finger with forward traction and using the other fingers of the hand to manipulate the salivary gland , it is often possible to straighten the duct and navigate the masseteric bend . a complete endoscopy includes visualization of the main duct as well as secondary and tertiary ductal systems [ figure 3 ] . parotid sialendoscopy with transillumination of the tip of the scope within the parotid glandular system intervention : interventional sialendoscopes allow the introduction of pharmacological agents as well as specialized tools such as wire baskets for stone extraction , laser fibers for stone fulguration or release of ductal stenosis and hand - held micro burr . the surgical intervention used for management of jrp included serial dilation of the parotid papilla using serial dilator probes . this was followed by a sialendoscopy for visualization of stenson 's duct , robust irrigation with normal saline and instillation of kenalog ( triamcinolone acetate 40 mg diluted in 5 ml of normal saline ) . the patients were discharged home the same day with instructions for the use of parotid massage and sialogogues . pain medication was prescribed , but no post - operative antibiotics or steroids were prescribed . initial follow - up was conducted 2 weeks after surgery to ensure adequate salivary flow and rule out iatrogenic stenosis of the papilla . three male patients with a mean age of 9 years ( range 6 - 11 years ) were identified with a diagnosis of jrp who were referred for sialendoscopy . of the three patients with unilateral gland involvement , two patients had left - sided symptoms and one had right - sided involvement . the mean number of episodes of jrp in the year prior to presenting to our service among the three patients was 5 ( range 4 - 6 per year ) . interventional sialendoscopy was technically possible in all three patients and , consequently , the technical success was 100% ( 3/3 ) . endoscopic findings included a blanched stenotic duct with intraductal debris in symptomatic patients ( 2/3 ; 66% ) [ figure 4 ] . in the third patient , the duct mucosa appeared normal without the presence of intraductal debris [ figure 5 ] . in all cases , parents and patients were satisfied with results , and no new episodes of parotid swelling were reported at the last follow - up ( mean 9 months , range 3 - 16 months ) .
figure 4 shows a sialendoscopic view of a blanched duct with a guide wire in the ductal lumen , and figure 5 a sialendoscopic view of a normal - appearing duct . complications were minor , including an acute masseteric bend that posed a challenge for navigating the scope in one patient . proximal ductal stenosis at the papilla was observed in two patients , of which one patient required marsupialization of stenson 's duct . none of the patients reported new episodes of parotitis , with healed papillotomy incisions and good salivary flow . in 2004 , nahlieli et al . proposed an endoscopic technique for both the diagnosis and the treatment of jrp . their long - term experience with the endoscopic diagnosis and treatment of jrp has encouraging results . diagnosis was achieved by clinical history of two episodes of parotid swelling in a 12-month period , physical examination as well as ultrasound . bilateral sialography of the parotid ducts was also performed in all patients for primary diagnosis . patients in this study underwent bilateral sialendoscopy of stensen 's ducts regardless of laterality of symptoms , and lavage with 60 cc of normal saline was performed . results over the course of the 14-year study were promising , with only nine of 70 patients having one subsequent episode of parotid swelling after treatment and only five requiring a repeat endoscopic treatment . follow - up ranged from 6 to 36 months . since the initial description of this method , one other study has documented a separate experience with interventional sialendoscopy involving patients with jrp , reporting a series of 10 patients in 2008 . in this study , patients were diagnosed via clinical history , physical examination and an ultrasound . indication for an endoscopic procedure in this study involved two episodes of parotid swelling in a 6-month period . patients in that study also received clavulanic acid and prednisolone for 48 h. this study reported the need for only one repeat endoscopic procedure out of 10 patients with follow - up ranging from 2 to 24 months . our experience with sialendoscopy has also had promising results with technical success and subjective improvement in symptoms in all patients at a mean follow - up of 9 months ( range 3 - 16 months ) . we feel that although ultrasound is the imaging study of choice , clinical history and physical examination are sufficient to provide an indication for endoscopic treatment , as recent studies have shown sialendoscopy to be a sufficient tool for the diagnosis of jrp . in the event that sialolithiasis is misdiagnosed as jrp , salivary stones can also be diagnosed and managed with sialendoscopy . our study differed from previous reports in that all of our patients were able to undergo same day surgery and leave the hospital without antibiotics or steroid therapy . none of the patients had a diagnostic sialogram , which can confound results as it has been reported for the management of jrp . all three patients have remained free from parotid swelling after sialendoscopy . however , long - term follow - up data is needed to confirm these preliminary observations , which are encouraging . there appears to be a learning curve with the use of the sialendoscope . from our own experience , we feel that pediatric sialendoscopy is more challenging and , if possible , should be incorporated into one 's practice after developing a comfort level and an initial experience of sialendoscopy in an adult population . however , this may not always be feasible in exclusively pediatric practices .
consequently , training in sialendoscopy and collaboration with someone experienced in sialendoscopy can help bridge these difficulties . endoscopic findings in our study were consistent with the previous literature on the subject , including a white , avascular appearance of the ductal layer of stenson 's duct with intraductal debris present . interestingly , this patient had the last episode of parotitis 6 months prior to the procedure as opposed to the other two patients who had episodes of jrp within 34 months of the procedure . we hypothesize that it may be possible that normal endoscopic findings in patients with documented jrp can help to predict the incidence of future episodes or resolution of this self - limiting disease process . however , accurate long - term prospective data would be essential to confirm this hypothesis . some studies have shown that the parotid system may be approximately the same size in children as in adults , and an ideal scope size has not been recommended for the management of jrp to date . however , our experience and previous studies report that the duct of a patient with jrp is likely to be stenotic , which would therefore call for a smaller endoscope to be used . faure et al . reported that a 1.3-mm sialendoscope can be used without difficulty for diagnostic sialendoscopy . however , we found that the 1.3-mm sialendoscope was technically more challenging to navigate as compared with the 0.8 and 1.1 mm endoscope . this may be due to the fact that we are treating a patient with diseased and stenotic ductal systems associated with jrp . the limitation of our observations is the small sample size and also not having the resources to use scopes of differing diameters in each case . it would seem likely that having a range of scopes with varying interventional capabilities would be of value in performing successful sialendoscopy ( 0.8 and 1.1 scopes for endoscopy in small diameter / stenotic ducts ) and also managing endoscopic removal of debris using wire baskets ( 1.1 and 1.3 mm scopes ) . sialendoscopy is an excellent tool for managing non - neoplastic disorders of the salivary glands . to date , it has been widely reported for its use in the management of sialolithiasis . however , its use for other indications such as for the management of jrp is still evolving . our study concurs with current evidence to suggest that sialendoscopy is a safe and effective intervention with low morbidity and few complications for the management of jrp . prospective multicenter studies will be required to define the utility of this intervention and to develop future clinical protocols .
objective : to evaluate our preliminary experience with interventional sialendoscopy for the diagnosis and treatment of juvenile recurrent parotitis ( jrp ) . materials and methods : three consecutive pediatric patients with jrp who underwent interventional sialendoscopy were identified . interventional sialendoscopy consisted of serial dilation of the stenson 's duct , endoscopy of the ductal system and saline irrigation followed by instillation of triamcinolone acetate . clinical , demographic , procedure - related data and complications were documented . end points of the study were technical success , defined as completion of the procedure , subjective improvement in symptoms as indicated by the patients or their parents and assessment of safety in terms of complications . results : three male patients with a mean age of 9 years ( range 6 - 11 years ) underwent interventional sialendoscopy for jrp . endoscopic findings included a blanched stenotic duct with intraductal debris in those who were symptomatic . technical success was 100% . the mean number of episodes of jrp in the year prior to presenting to our service among the three patients was 5 ( range 4 - 6 per year ) . there were no new episodes of jrp reported at the last follow - up . there were no major complications . conclusion : our preliminary experience concurs with the current literature and suggests that interventional sialendoscopy is effective for the management of jrp and can be considered for patients who fail conservative medical management .
SECTION 1. SHORT TITLE, REFERENCE, AND TABLE OF CONTENTS. (a) Short Title.--This Act may be cited as the ``National Uniform Food Safety Labeling Act''. (b) Reference.--Except as otherwise specified, whenever in this Act an amendment is expressed in terms of an amendment to a section or other provision, the reference shall be considered to be made to that section or other provision of the Federal Food, Drug, and Cosmetic Act (21 U.S.C. 321 et seq.). (c) Table of Contents.--The table of contents is as follows: Sec. 1. Short title, reference, and table of contents. Sec. 2. Labeling of raw or partially cooked foods and unpasteurized juice. Sec. 3. Sale and labeling of frozen fish and shellfish. Sec. 4. Sale of raw eggs. Sec. 5. Statement of origin. Sec. 6. Freshness date. Sec. 7. Food labeled as natural. Sec. 8. Labeling of kosher and kosher-style foods. Sec. 9. Unit pricing. Sec. 10. Grades for farm products. Sec. 11. Regulations. SEC. 2. LABELING OF RAW OR PARTIALLY COOKED FOODS AND UNPASTEURIZED JUICE. Section 403 (21 U.S.C. 343) is amended by adding at the end the following: ``(t)(1) Unless the label or labeling of raw or partially cooked eggs, fish, milk, dairy products, shellfish, or unpasteurized juice offered in a ready-to-eat form as a deli, vended, or other item, or the label or labeling of a ready-to-eat food containing as an ingredient raw or partially cooked eggs, fish, milk, dairy products, shellfish, or unpasteurized juice, discloses the increased risk associated with eating such food in raw or partially cooked form. ``(2) Eggs, fish, milk, dairy products, and shellfish routinely served raw or partially cooked, unpasteurized juice, and ready-to-eat foods containing such raw or partially cooked foods or unpasteurized juice as ingredients shall bear the following: This food contains raw or partially cooked eggs, fish, shellfish, or unpasteurized juice. Children, the elderly, pregnant women, or persons with weakened immune systems may experience severe foodborne illness from eating this item. ``(3) The Secretary shall, in accordance with section 11 of the National Uniform Food Safety Labeling Act, establish by regulation the labeling requirements of this paragraph.''. SEC. 3. SALE AND LABELING OF FROZEN FISH AND SHELLFISH. Section 403 (21 U.S.C. 343), as amended by section 2, is amended by adding at the end the following: ``(u)(1) Except as provided in subparagraph (2), if it is fish or shellfish that has been frozen unless its label or labeling bears a prominent and conspicuous statement indicating that such product has been frozen. ``(2) This paragraph shall not apply to fish or shellfish that has been frozen prior to being smoked, cured, cooked, or subjected to the heat of commercial sterilization. ``(3) The Secretary shall, in accordance with section 11 of the National Uniform Food Safety Labeling Act, establish by regulation the labeling requirements of this paragraph.''. SEC. 4. SALE OF RAW EGGS. Section 403 (21 U.S.C. 343), as amended by section 3, is amended by adding at the end the following: ``(v)(1) If it is raw eggs, unless its label or labeling states `Children, the elderly, pregnant women, or persons with weakened immune systems may experience severe illness from eating raw or partially cooked eggs.' ``(2) The Secretary shall, in accordance with section 11 of the National Uniform Food Safety Labeling Act, establish by regulation the labeling requirements of this paragraph.''. SEC. 5. STATEMENT OF ORIGIN. Section 403 (21 U.S.C. 
343), as amended by section 4, is amended by adding at the end the following: ``(w)(1) If it is a perishable agricultural commodity as defined in section 1(b)(4) of the Perishable Agricultural Commodities Act of 1930 (7 U.S.C. 499a(b)(1)), unless it bears a label or labeling containing the country of origin of the perishable agricultural commodity. ``(2) If it is a product derived from a perishable agricultural commodity, including juice, frozen juice concentrate, fruit butter, preserves and jams, or canned or frozen fruits or vegetables, unless it bears a label or labeling containing the country of origin of the perishable agricultural commodity and the product derived from it. ``(3) The Secretary shall, in accordance with section 11 of the National Uniform Food Safety Labeling Act, establish by regulation the labeling requirements of this paragraph.''. SEC. 6. FRESHNESS DATE. Section 403 (21 U.S.C. 343), as amended by section 5, is amended by adding at the end the following: ``(x)(1) Unless its label or labeling bears the date upon which the food should no longer be sold because of diminution of quality, nutrient availability, or safety. The freshness date shall be stated in terms of the day and month of the year if the food will not be fresh after 3 months on the shelf, or in terms of the month and year if the product will be fresh for more than 3 months on the shelf. The phrase `use by' shall precede the date. ``(2) The Secretary shall, in accordance with section 11 of the National Uniform Food Safety Labeling Act, establish by regulation the means of disclosing the freshness date.''. SEC. 7. FOOD LABELED AS NATURAL. Section 403 (21 U.S.C. 343), as amended by section 6, is amended by adding at the end the following: ``(y)(1) If its label or labeling bears the word `natural', unless-- ``(A) it contains no artificial flavoring, color additive, chemical preservative, or any other artificial or synthetic ingredient added after harvesting; and ``(B) it has undergone no processing other than minimal processing, such as the removal of inedible substances or the application of physical processes such as cutting, grinding, drying, homogenizing, or pulping. ``(2) This paragraph shall not apply to the use of the terms `natural flavors' and `natural colors' as approved by the Food and Drug Administration. ``(3) The Secretary shall, in accordance with section 11 of the National Uniform Food Safety Labeling Act, establish by regulation the labeling requirements of this paragraph.''. SEC. 8. LABELING OF KOSHER AND KOSHER-STYLE FOODS. Section 403 (21 U.S.C. 343), as amended by section 7, is amended by adding at the end the following: ``(z)(1) If it is falsely represented in the food's label or labeling to be kosher, kosher for Passover, pareve, or as having been prepared in accordance with orthodox Jewish religious standards either by direct statements, orally or in writing, or by display of the word `Kosher', `Kosher for Passover', or `Pareve'; or ``(2) if the food's label or labeling uses the term `Kosher' in conjunction with the words `style' or `type' or any similar expression which might reasonably be calculated to deceive a reasonable person to believe that a representation is being made that the food sold is kosher, kosher for Passover, pareve, or prepared in accordance with orthodox Jewish religious standards. ``(3) The Secretary shall, in accordance with section 11 of the National Uniform Food Safety Labeling Act, establish by regulation provisions that implement this paragraph.''. SEC. 9. 
UNIT PRICING. (a) In General.--Section 403 (21 U.S.C. 343), as amended by section 8, is amended by adding at the end the following: ``(aa)(1) Unless its label or labeling bears the unit price and the total price of the food as provided in this paragraph. ``(2) As used in this paragraph ``(A) The term `unit price' of food shall mean the price per measure. ``(B) The term `price per measure' shall mean-- ``(i) price per pound for food whose net quantity is expressed in units of weight, except for such food whose net weight is less than 1 ounce which shall be expressed as price per ounce if the same unit of measure is used for the same food in all sizes; ``(ii) price per pint or quart for food whose net quantity is stated in fluid ounces, pints, quarts, gallons, or a combination thereof, if the same unit of measure is used for the same food in all sizes sold in the retail establishment; and ``(iii) price per 100 for food whose net quantity is expressed by count, except as otherwise provided by regulation. ``(3) The Secretary shall, in accordance with section 11 of the National Uniform Food Safety Labeling Act, establish by regulation a national program of pricing as prescribed by this paragraph.''. SEC. 10. GRADES FOR FARM PRODUCTS. Section 403 (21 U.S.C. 343), as amended by section 9, is amended by adding at the end the following: ``(bb)(1) Unless it bears a grade, where grading is customary within the industry. ``(2) The Secretary shall, in accordance with section 11 of the National Uniform Food Safety Labeling Act, establish by regulation a national program of grading for food which is customarily graded.''. SEC. 11. REGULATIONS. (a)(1) Within 12 months after the date of the enactment of this Act, the Secretary of Health and Human Services shall issue proposed regulations to implement paragraphs (t) and (bb) of section 403 of the Federal Food, Drug, and Cosmetic Act. The proposed regulations shall establish format requirements for the label statements mandated by such sections. The required label statements shall appear in easily legible boldface print or type, with upper and lower case letters, and in distinct contrast to other printed or graphic matter. The label statements shall appear in a type size not less than the largest type found on the label, except that used for the brand name, product name, logo, or universal product code, and in any case not less than the type size required for the declaration of net quantity of contents statement as prescribed by regulation printed in 21 C.F.R. 101.105(1). All required label statements shall be placed on the information panel, except for the statements required by paragraphs (w) and (aa) of such section 403, which shall be placed on the principal display panel. (2) Not later than 24 months after the date of enactment of this Act, the Secretary shall issue final regulations to implement sections 403(z)-(y) of the Federal Food, Drug, and Cosmetic Act. (b) If the Secretary does not promulgate final regulations under subsection (a)(2) upon the expiration of 24 months after the date of the enactment of this Act, the proposed regulation issued in accordance with subsection (a)(1) shall be considered as the final regulations upon the expiration of such 24 months. There shall be promptly published in the Federal Register notice of the new status of the proposed regulations.
National Uniform Food Safety Labeling Act - Amends the Federal Food, Drug, and Cosmetic Act to deem food to be misbranded unless certain labeling information is provided concerning: (1) raw or partially cooked eggs, fish and shellfish, dairy products, or unpasteurized juice in ready-to-eat form; (2) frozen fish and shellfish other than smoked, cured, cooked, or commercially sterilized; (3) raw eggs; (4) country of origin for perishable agricultural commodities or derived products ; (5) freshness dates; (6) food labeled as natural; (7) kosher and kosher-style foods; (8) unit pricing; and (9) grades (where customary) for farm products.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Stop Militarizing Law Enforcement Act''. SEC. 2. FINDINGS. Congress makes the following findings: (1) Under section 2576a of title 10, United States Code, the Department of Defense is authorized to provide excess property to local law enforcement agencies. The Defense Logistics Agency, administers such section by operating the Law Enforcement Support Office program. (2) New and used material, including mine-resistant ambush- protected vehicles and weapons determined by the Department of Defense to be ``military grade'' are transferred to local and Federal law enforcement agencies through the program. (3) As a result local law enforcement agencies, including police and sheriff's departments, are acquiring this material for use in their normal operations. (4) As a result of the wars in Iraq and Afghanistan, military equipment purchased for, and used in, those wars has become excess property and has been made available for transfer to local and Federal law enforcement agencies. (5) According to public reports, approximately 12,000 police organizations across the country were able to procure nearly $500,000,000 worth of excess military merchandise including firearms, computers, helicopters, clothing, and other products, at no charge during fiscal year 2011 alone. (6) More than $4,000,000,000 worth of weapons and equipment have been transferred to police organizations in all 50 states and four territories through the program. (7) In May 2012, the Defense Logistics Agency instituted a moratorium on weapons transfers through the program after reports of missing equipment and inappropriate weapons transfers. (8) Though the moratorium was widely publicized, it was lifted in October 2013 without adequate safeguards. (9) As a result, Federal, State, and local law enforcement departments across the country are eligible again to acquire free ``military-grade'' weapons and equipment that could be used inappropriately during policing efforts in which citizens and taxpayers could be harmed. (10) Pursuant to section III(J) of a Defense Logistics Agency memorandum of understanding, property obtained through the program must be placed into use within one year of receipt, possibly providing an incentive for the unnecessary and potentially dangerous use of ``military grade'' equipment by local law enforcement. (11) The Department of Defense categorizes equipment eligible for transfer under the 1033 program as ``controlled'' and ``un-controlled'' equipment. ``Controlled equipment'' includes weapons, explosives such as flash-bang grenades, mine resistant ambush protected vehicles, long range acoustic devices, aircraft capable of being modified to carry armament that are combat coded, and silencers, among other military grade items. SEC. 3. LIMITATION ON DEPARTMENT OF DEFENSE TRANSFER OF PERSONAL PROPERTY TO LOCAL LAW ENFORCEMENT AGENCIES. 
(a) In General.--Section 2576a of title 10, United States Code, is amended-- (1) in subsection (a)-- (A) in paragraph (1)(A), by striking ``counter-drug and''; and (B) in paragraph (2), by striking ``and the Director of National Drug Control Policy''; (2) in subsection (b)-- (A) in paragraph (3), by striking ``and'' at the end; (B) in paragraph (4), by striking the period and inserting a semicolon; and (C) by adding at the end the following new paragraphs: ``(5) the recipient certifies to the Department of Defense that it has the personnel and technical capacity, including training, to operate the property; ``(6) the recipient submits to the Department of Defense a description of how the recipient expects to use the property; ``(7) the recipient certifies to the Department of Defense that if the recipient determines that the property is surplus to the needs of the recipient, the recipient will return the property to the Department of Defense; and ``(8) with respect to a recipient that is not a Federal agency, the recipient certifies to the Department of Defense that the recipient notified the local community of the request for personal property under this section by-- ``(A) publishing a notice of such request on a publicly accessible Internet website; ``(B) posting such notice at several prominent locations in the jurisdiction of the recipient; and ``(C) ensuring that such notices were available to the local community for a period of not less than 30 days.''; (3) by striking subsection (d); and (4) by adding at the end the following new subsections: ``(d) Annual Certification Accounting for Transferred Property.-- (1) For each fiscal year, the Secretary shall submit to Congress certification in writing that each Federal or State agency to which the Secretary has transferred property under this section-- ``(A) has provided to the Secretary documentation accounting for all controlled personal property, including arms and ammunition, that the Secretary has transferred to the agency, including any item described in subsection (f) so transferred before the date of the enactment of the Stop Militarizing Law Enforcement Act; and ``(B) with respect to a non-Federal agency, carried out each of paragraphs (5) through (8) of subsection (b). ``(2) If the Secretary cannot provide a certification under paragraph (1) for a Federal or State agency, the Secretary may not transfer additional property to that agency under this section. ``(e) Annual Report on Excess Property.--Before making any property available for transfer under this section, the Secretary shall annually submit to Congress a description of the property to be transferred together with a certification that the transfer of the property would not violate this section or any other provision of law. ``(f) Limitations on Transfers.--(1) The Secretary may not transfer the following personal property of the Department of Defense under this section: ``(A) Controlled firearms, ammunition, grenades (including stun and flash-bang) and explosives. ``(B) Controlled vehicles, highly mobile multi-wheeled vehicles, mine-resistant ambush-protected vehicles, trucks, truck dump, truck utility, and truck carryall. ``(C) Drones that are armored, weaponized, or both. ``(D) Controlled aircraft that-- ``(i) are combat configured or combat coded; or ``(ii) have no established commercial flight application. ``(E) Silencers. ``(F) Long range acoustic devices. ``(G) Items in the Federal Supply Class of banned items. 
``(2) The Secretary may not require, as a condition of a transfer under this section, that a Federal or State agency demonstrate the use of any small arms or ammunition. ``(3) The limitations under this subsection shall also apply with respect to the transfer of previously transferred property of the Department of Defense from one Federal or State agency to another such agency. ``(4)(A) The Secretary may waive the applicability of paragraph (1) to a vehicle described in subparagraph (B) of such paragraph (other than a mine-resistant ambush-protected vehicle), if the Secretary determines that such a waiver is necessary for disaster or rescue purposes or for another purpose where life and public safety are at risk, as demonstrated by the proposed recipient of the vehicle. ``(B) If the Secretary issues a waiver under subparagraph (A), the Secretary shall-- ``(i) submit to Congress notice of the waiver, and post such notice on a public Internet website of the Department, by not later than 30 days after the date on which the waiver is issued; and ``(ii) require, as a condition of the waiver, that the recipient of the vehicle for which the waiver is issued provides public notice of the waiver and the transfer, including the type of vehicle and the purpose for which it is transferred, in the jurisdiction where the recipient is located by not later than 30 days after the date on which the waiver is issued. ``(5) The Secretary may provide for an exemption to the limitation under subparagraph (D) of paragraph (1) in the case of parts for aircraft described in such subparagraph that are transferred as part of regular maintenance of aircraft in an existing fleet. ``(g) Conditions for Extension of Program.--(1) Notwithstanding any other provision of law, amounts authorized to be appropriated or otherwise made available for any fiscal year may not be obligated or expended to carry out this section unless the Secretary submits to Congress certification that for the preceding fiscal year that-- ``(A) each Federal or State agency that has received covered property transferred under this section has-- ``(i) demonstrated 100 percent accountability for all such property, in accordance with subparagraph (B) or (C), as applicable; or ``(ii) been suspended from the program pursuant to subparagraph (D); ``(B) with respect to each non-Federal agency that has received covered property under this section, the State coordinator responsible for each such agency has verified that the coordinator or an agent of the coordinator has conducted an in-person inventory of the property transferred to the agency and that 100 percent of such property was accounted for during the inventory or that the agency has been suspended from the program pursuant to subparagraph (D); ``(C) with respect to each Federal agency that has received covered property under this section, the Secretary of Defense or an agent of the Secretary has conducted an in-person inventory of the property transferred to the agency and that 100 percent of such property was accounted for during the inventory or that the agency has been suspended from the program pursuant to subparagraph (D); ``(D) the eligibility of any agency that has received covered property under this section for which 100 percent of the property was not accounted for during an inventory described in subparagraph (A) or (B), as applicable, to receive any property transferred under this section has been suspended; and ``(E) each State coordinator has certified, for each non- Federal agency 
located in the State for which the State coordinator is responsible that-- ``(i) the agency has complied with all requirements under this section; or ``(ii) the eligibility of the agency to receive property transferred under this section has been suspended; and ``(F) the Secretary of Defense has certified, for each Federal agency that has received property under this section that-- ``(i) the agency has complied with all requirements under this section; or ``(ii) the eligibility of the agency to receive property transferred under this section has been suspended. ``(2) In this subsection, the term `covered property' means property classified as controlled equipment. ``(h) Prohibition on Ownership.--A Federal or State agency that receives property classified as controlled equipment under this section may never take ownership of the property. ``(i) Website.--The Defense Logistics Agency shall maintain an Internet website on which the following information shall be made publicly available: ``(1) A description of each transfer made under this section, including transfers made before and after the date of the enactment of the Stop Militarizing Law Enforcement Act, broken down by State, county, and recipient. ``(2) During the 30-day period preceding the date on which any property is transferred under this section, a description of the property to be transferred and the recipient of the transferred items. ``(3) Notice of any use of controlled equipment by the recipient of property transferred under this section as provided under subsection (l). ``(j) Notice to Congress of Property Downgrades.--Not later than 30 days before downgrading the classification of any item of personal property from controlled or Federal Supply Class, the Secretary shall submit to Congress notice of the proposed downgrade. ``(k) Notice to Congress of Property Cannibalization.--Before the Defense Logistics Agency authorizes the recipient of property transferred under this section to cannibalize the property, the Secretary shall submit to Congress notice of such authorization, including the name of the recipient requesting the authorization, the purpose of the proposed cannibalization, and the type of property proposed to be cannibalized. ``(l) Quarterly Reports on Use of Controlled Equipment.--Not later than 30 days after the last day of a fiscal quarter, the Secretary shall submit to Congress a report on any uses of controlled equipment transferred under this section during that fiscal quarter. ``(m) Reports to Congress.--Not later than 30 days after the last day of a fiscal year, the Secretary shall submit to Congress a report on the following for the preceding fiscal year: ``(1) The percentage of equipment lost by recipients of property transferred under this section, including specific information about the type of property lost, the monetary value of such property, and the recipient that lost the property. ``(2) The transfer of any new (condition code A) property transferred under this section, including specific information about the type of property, the recipient of the property, the monetary value of each item of the property, and the total monetary value of all such property transferred during the fiscal year.''. (b) Effective Date.--The amendments made by subsection (a) shall apply with respect to any transfer of property made after the date of the enactment of this Act.
Stop Militarizing Law Enforcement Act Revises the Department of Defense's (DOD's) authority to transfer excess personal property to federal and state law enforcement agencies. Removes DOD's authority to transfer property for counter-drug activities. Requires recipients of DOD property to certify that they: (1) have personnel, technical capacity, and training to operate the property; and (2) will return to DOD any property that is surplus to the recipient's needs. Requires recipients that are not federal agencies to certify that they have notified their local community of requests for DOD property with a notice on a publicly accessible Internet website and postings at prominent locations in the jurisdiction. Requires DOD to submit annually to Congress a description of property to be transferred along with a certification that the transfers are not prohibited by law. Prohibits transfers of: controlled (i.e., military grade) firearms, ammunition, grenades, and explosives; controlled vehicles, certain trucks, and other highly mobile or mine-resistant ambush-protected vehicles; armored or weaponized drones; controlled aircraft that are combat configured or combat coded, or that have no established commercial flight application; silencers; long range acoustic devices; and items in the Federal Supply Class of banned items. Prohibits transfers conditioned upon the agency demonstrating the use of any small arms or ammunitions. Prohibits transfers of previously transferred DOD property from one federal or state agency to another such agency. Allows DOD to waive transfer prohibitions for certain trucks and vehicles (other than mine-resistant ambush-protected vehicles) if necessary for disasters, rescues, or other purposes where life and public safety are at risk. Requires notice of such a waiver to be provided to Congress and the public. Permits DOD to exempt aircraft parts transferred for regular maintenance of aircraft in an existing fleet. Prohibits obligations or expenditures of appropriations to carry out DOD's property transfer program unless specified conditions have been met, including requirements to verify: (1) that in-person inventories of transferred property have been conducted at each agency, and (2) that 100% of such property was accounted for during the inventories or that agencies unable to account for such property have been suspended from the program. Prohibits federal or state agencies that receive controlled equipment from taking ownership of the property. Requires the Defense Logistics Agency to maintain an Internet website to make available to the public: (1) information on each transfer, broken down by state, county, and recipient; (2) during the 30-day period preceding the date on which any property is transferred, information on the property to be transferred and the recipient; and (3) information on any use of controlled equipment by the transfer recipient.
the determination of the longitudinal polarization of the electron beam is one of the dominant systematic uncertainties in any parity violating electron scattering ( pves ) experiment . in order to achieve the desired high precision , the polarization of the electron beam must be monitored continuously with an uncertainty of @xmath0 0.5% . these ambitious goals can be achieved if multiple independent and high precision polarimeters are used simultaneously . in addition to being precise , the polarimeters must be non - invasive and must achieve the desired statistical precision in the shortest time possible . compton and møller polarimeters are typically the polarimeters of choice for these experiments and are essential to achieve the desired precision . however , a complementary polarimetry technique based on the spin dependence of synchrotron radiation , referred to as `` spin - light , '' can be used as a relative polarimeter . a spin - light polarimeter could provide additional means for improving the systematic uncertainties and , when calibrated against a compton / møller polarimeter , it could provide stable continuous monitoring of the beam polarization . we develop the conceptual design for a continuous polarimeter based on `` spin - light '' . the proposed spin - light polarimeter can achieve a statistical precision of @xmath0 1% in measurement cycles of less than 10 minutes for 4 - 20 gev electron beams with beam currents of @xmath1 100 @xmath2a . [ table omitted : a comparison of the compton , møller and spin - light polarimeters . ] spin - light based polarimetry was demonstrated over 30 years ago , but has been ignored since then . a spin - light polarimeter has several advantages over conventional polarimeters and , when used in conjunction with a compton polarimeter , it could help provide a new benchmark for precision polarimetry . the 11 gev beam at jlab or the electron beam at a future eic would be well suited for spin - light polarimetry , and such a polarimeter would help achieve the @xmath0 0.5 % polarimetry desired by experiments approved for the 12 gev era and proposed for the eic . a 3 pole wiggler with a field strength of 4 t and a pole length of 10 cm would be adequate for such a polarimeter . a dual position - sensitive ionization chamber with split anode plates is ideally suited as the x - ray detector for such a polarimeter . the differential detector design would help reduce systematic uncertainties . this work was supported in part by the u.s . department of energy under contract # de - fg02 - 07er41528 , and by the eic detector r&d grant from brookhaven national lab . one of us ( p.m. ) would also like to thank the jefferson science associates for a jsa fellowship .
prajwal mohanmurthy obtained his bachelor of science degree from mississippi state university in 2012 . he was a graduate research fellow at the high performance computing collaboratory at mississippi state university and currently is a graduate research fellow in the laboratory for nuclear sciences at massachusetts institute of technology . his research interests are centered around tests of the standard model and fundamental symmetries in search of physics beyond the standard model . his recent research involvements have been geared towards a search for axionic dark matter and the precision measurement of the mass of neutrinos . he also actively collaborates to develop beam instrumentation for upcoming facilities and future accelerators . dr . dipangkar dutta is an associate professor of physics at the mississippi state university department of physics and astronomy . he obtained his bachelor of technology degree from the indian institute of technology , bombay in 1992 and his doctoral degree in physics from northwestern university in 1999 .
he was a post - doctoral and senior post - doctoral fellow in the laboratory for nuclear sciences at massachusetts institute of technology . his research is focused primarily on precision measurement of fundamental properties of nucleons . he is also interested in precision tests of fundamental symmetries and the standard model .
the physics program at the upgraded jefferson lab ( jlab ) and the physics program envisioned for the proposed electron - ion collider ( eic ) include large efforts to search for interactions beyond the standard model ( sm ) using parity violation in electroweak interactions . these experiments require precision electron polarimetry with an uncertainty of @xmath0 0.5 % . the spin dependent synchrotron radiation ( sr ) , called spin - light , can be used to monitor the electron beam polarization . in this article we develop a conceptual design for a `` spin - light '' polarimeter that can be used at a high intensity , multi - gev electron accelerator . we have also built a geant4 based simulation for a prototype device and report some of the results from these simulations . polarized electrons , synchrotron radiation , spin light , differential ionization chambers .
in the subsurface , producing geothermal systems are characterized by coupled hydraulic , thermal , chemical and mechanical processes . to determine the potential of a geothermal site , and to decide optimal production strategies , it is important to understand and quantify these processes . rigorous mathematical modeling and accurate numerical simulations are essential , but multiple interacting processes acting on different scales lead to challenges in solving the coupled system of equations . standard discretization methods include finite element , control - volume and finite difference methods for the space discretization ( see @xcite and references therein ) , while standard implicit , explicit or implicit - explicit methods have until recently mostly been used for the discretization in time @xcite . challenges with the discretization are , amongst others , related to severe time - step restrictions associated with explicit methods and excessive numerical diffusion for implicit methods . furthermore , implicit methods require at each time step the solution of large systems of nonlinear equations , which may lead to bottlenecks in practical computations . in this paper , we consider a different approach for the temporal discretization based on the exponential rosenbrock euler method ( erem ) and rosenbrock - type methods ( rosm ) . exponential integrators have recently been suggested as efficient and robust alternatives for the temporal discretization for several applications ( see @xcite ) . rosenbrock - type methods have been intensively developed in the literature and used in a variety of applications ( see @xcite and references therein ) . however , neither of these approaches has yet found widespread use in porous media applications . the mathematical model discussed consists of a system of partial differential equations that express conservation of mass and energy . in addition , the model entails phenomenological laws describing processes active in the reservoir , such as darcy 's law for fluid flow with variable density and viscosity , fourier 's law of heat conduction , and those describing the relation between fluid properties ( nonlinear fluid expansivity and compressibility ) and porosity subject to pressure and temperature variations . the resulting system of equations is nonlinear and coupled and requires sophisticated numerical techniques . our solution technique is based on a sequential approach , which decouples the mathematical model . an advantage of this approach is that it allows for specialized solvers for unknowns with different characteristics . as the linearized fully coupled matrices are often very poorly conditioned , such that small time - steps are required , a carefully chosen sequential approach leads to higher efficiency and accuracy than a simultaneous solution approach if the couplings are not too strong @xcite . a finite volume method is applied for the space discretization , while the exponential rosenbrock euler method and rosenbrock - type methods are applied to integrate the systems in time based on successive linearizations . the exponential rosenbrock euler method is based on the linearization of the odes resulting from the space discretization at each time step . the linear part is solved exactly in time , up to a given tolerance in the computation of a matrix exponential function of the jacobian . the nonlinear part is approximated using low - order taylor expansions .
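for orientation , since the paper 's own formulas are not reproduced in this extraction , the two updates just described can be written in standard exponential - integrator notation ( our notation , not the authors ' symbols ) as the following sketch :

```latex
% Standard exponential-integrator notation (not the authors' symbols):
% tau is the time step, \varphi_1(z) = (e^z - 1)/z, and J_n = F'(y_n).
\begin{align*}
  y_{n+1} &= e^{\tau L}\, y_n + \tau\, \varphi_1(\tau L)\, N(y_n)
    && \text{(ETD1 for } y' = L y + N(y) \text{)} \\
  y_{n+1} &= y_n + \tau\, \varphi_1(\tau J_n)\, F(y_n)
    && \text{(exponential Rosenbrock--Euler for } y' = F(y) \text{)}
\end{align*}
```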
as in all exponential integrator schemes , the expense is the computation of the matrix exponentials of the stiff jacobian matrix resulting from the spatial discretization . computing matrix exponentials of stiff matrices is a notorious problem in numerical analysis @xcite , but new developments for both leja points and krylov subspace techniques @xcite have led to efficient numerical approaches ; see e.g. @xcite and references therein . besides , the method is l - stable , and performs well for super - stiff odes . the rosenbrock - type methods use appropriate rational functions of the jacobian from the spatial discretization . the parameters in these schemes are found in consistency with the required order of convergence in time . as a result , these schemes are a - stable ( as will be discussed below ) and only a few linear systems are solved at each time step . the paper is organized as follows . we present the model equations in section 2 , and the finite volume method for spatial discretization in section 3 , along with the temporal discretization schemes . the implementation of the exponential rosenbrock - euler method is discussed in section 4 . in section 5 we present some numerical examples , which also include simulations for a fractured reservoir , and show comparisons to standard approaches , before we draw conclusions in section 6 . we assume single - phase flow of water , which allows the energy equation to be written in terms of temperature . the model equations are given by @xmath0 ( see @xcite ) in the bounded spatial domain @xmath1 , with boundary @xmath2 , and in the time interval @xmath3 . here @xmath4 is the porosity ; @xmath5 is the heat transfer coefficient ; @xmath6 is the heat production ; @xmath7 is the density ; @xmath8 stands for the heat capacity ; @xmath9 is the temperature ; @xmath10 is the thermal conductivity tensor , with the subscripts @xmath11 and @xmath12 referring to fluid and rock ; and @xmath13 is the darcy velocity given by @xmath14 where @xmath15 is the permeability tensor , @xmath16 is the viscosity , @xmath17 is the gravitational acceleration and @xmath18 the pressure . the mass balance equation for a single - phase fluid is given by @xmath19 where @xmath20 is the contribution from a source or sink per unit time . assuming in equation ( [ masscon ] ) that the rock is slightly compressible , the porosity is a function of pressure and can be expressed as a linear function , yielding @xmath21 with @xmath22 where @xmath23 is the porosity at the initial pressure , @xmath24 the initial pressure and @xmath25 the bulk vertical compressibility of the porous medium . notice that @xmath26 where @xmath27 and @xmath28 are respectively the thermal fluid expansivity and its compressibility , defined by @xmath29 . inserting equations ( [ masscon3 ] ) and ( [ rockcomp ] ) in equation ( [ masscon2 ] ) yields @xmath30 . the state functions @xmath16 , @xmath31 and @xmath32 can be found in @xcite . the model problem is therefore to find the functions @xmath33 satisfying the nonlinear equations ( [ heat ] ) and ( [ masscon4 ] ) subject to ( [ darcy ] ) . notice that if @xmath34 , we have the thermal equilibrium state with @xmath35 . we use the finite volume method @xcite on a structured mesh @xmath36 , with maximum mesh size @xmath37 . we denote by ( @xmath38 ) the family of control volumes of mesh @xmath36 . the finite volume space discretization consists of the following steps : 1 .
integrate each equation of ( [ heat ] ) and ( [ masscon4 ] ) over each control volume @xmath38 . 2 . use the divergence theorem to convert the volume integral into a surface integral in all divergence terms . 3 . use two - point flux approximations for the diffusive heat and flow fluxes @xcite @xmath39 4 . use the standard upwind weighting @xcite for the convective ( advective ) flux @xmath40 here we denote by @xmath41 the unit normal vector to @xmath42 outward to @xmath43 and by @xmath44 the elementary surface measure . for an edge @xmath45 of the control volume @xmath43 , @xmath46 will denote the unit normal vector to @xmath45 outward to @xmath47 . let us illustrate the spatial discretization of the second equation of ( [ heat ] ) on a structured mesh @xmath36 ( the two - point flux approximation is sufficient for so - called @xmath48-orthogonal grids ) . we denote by @xmath49 the set of interior edges of the control volumes of @xmath50 . for any function @xmath51 , @xmath52 denotes the approximation of @xmath51 at time @xmath53 at the center of the control volume @xmath54 and @xmath55 the approximation of @xmath51 at time @xmath53 at the center of the edge @xmath45 . for a control volume @xmath56 , we denote by @xmath57 the set of edges of @xmath47 , by @xmath58 the lebesgue measure of the control volume @xmath54 , by @xmath59 the edge interface between the control volume @xmath47 and the control volume @xmath60 , by @xmath61 the distance between the center of the control volume @xmath47 and the center of the control volume @xmath62 , and by @xmath63 the distance between the control volume @xmath47 and the edge @xmath45 . letting @xmath64 , we therefore have @xmath65 . these approximations are for interior edges and dirichlet boundary conditions . for a neumann boundary condition , @xmath66 is naturally given . in the case of a discrete - fracture model , we make adjustments to the spatial discretization following the approach in @xcite . to determine the convective fluxes , we set @xmath67 . standard upwind weighting yields @xmath68 . reorganizing these space approximations yields the following system of odes @xmath69 . for a given initial pressure @xmath70 , with corresponding initial velocity @xmath71 and initial temperature @xmath72 , the technique used in this paper consists of successively solving the systems @xmath73 and @xmath74 . the odes ( [ spaceheat1 ] ) are usually stiff , since the smallest absolute value of the eigenvalues of the jacobian matrix is usually close to zero . we now present our numerical methods for the odes ( [ spaceheat1 ] ) based on the sequential approach . we start by presenting low - order time discretization schemes and their stability properties before we give some higher - order schemes . we briefly describe the standard integrators that will be used for comparison with the exponential rosenbrock - euler method and rosenbrock - type methods . consider the odes ( [ spaceheat2 ] ) and ( [ spaceheat3 ] ) within the interval @xmath76 , \tau > 0 . given a time step @xmath77 , applying the @xmath78-euler scheme with respect to the function @xmath79 in the odes ( [ spaceheat2 ] ) yields @xmath80 . for @xmath81 the scheme is implicit , and given the approximate solutions @xmath82 and @xmath83 at time @xmath84 , the solution @xmath85 at time @xmath86 is obtained by solving the nonlinear equation @xmath87 , which is solved using the newton method .
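as a minimal illustration of the implicit step just described ( a sketch in python / numpy rather than the authors ' matlab code , with a hypothetical caller - supplied right - hand side `f` and dense jacobian `jac` ; the paper instead solves the inner linear systems with preconditioned bicgstab , as noted next ) :

```python
import numpy as np

def theta_euler_step(f, jac, y_n, t_n, tau, theta=1.0, tol=1e-9, max_iter=25):
    """One theta-Euler step: solve
        y = y_n + tau*[(1-theta)*f(y_n, t_n) + theta*f(y, t_n + tau)]
    for y by Newton's method.  `f(y, t)` and its Jacobian `jac(y, t)` are
    hypothetical stand-ins for the discretized temperature or pressure system."""
    t_next = t_n + tau
    f_n = f(y_n, t_n)
    y = y_n.copy()                      # initial Newton guess
    identity = np.eye(y_n.size)
    for _ in range(max_iter):
        residual = y - y_n - tau * ((1.0 - theta) * f_n + theta * f(y, t_next))
        if np.linalg.norm(residual) < tol:
            break
        # Newton correction; a dense solve stands in for the BiCGStab + ILU(0)
        # solver used in the paper.
        newton_matrix = identity - tau * theta * jac(y, t_next)
        y -= np.linalg.solve(newton_matrix, residual)
    return y
```

with theta = 1 this reduces to the implicit euler step , and with theta = 0.5 to the second - order variant used for comparison in the numerical examples .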
for efficiency , all linear systems are solved using the matlab function bicgstab with ilu(0 ) preconditioners with no fill - in , which are updated at each time step . to solve the odes ( [ spaceheat3 ] ) we again apply the @xmath75-euler method , but with respect to @xmath88 , yielding @xmath89 . for @xmath90 the @xmath75-euler scheme is second order in time , and for @xmath91 the scheme is first order in time . in this paper , the standard sequential approach to solve the odes ( [ spaceheat ] ) consists of applying successively the schemes ( [ temscheme ] ) and ( [ prescheme ] ) . to introduce the exponential rosenbrock - euler method ( also called the exponential euler method ) , let us first consider the following system of odes @xmath92 with initial condition \mathbf{y}(0 ) = \mathbf{y}_{0 } , which appears after spatial discretisation of semilinear parabolic pdes . here @xmath93 is a stiff matrix and @xmath17 a nonlinear function . this allows us to write the exact solution of ( [ extraodes ] ) as @xmath94 . given the exact solution at time @xmath84 , we can construct the corresponding solution at @xmath86 as @xmath95 . note that the expression in ( [ exactnu ] ) is still an exact solution . the idea behind exponential time differencing ( etd ) is to approximate @xmath96 by a suitable polynomial @xcite . the simplest case is when @xmath96 is approximated by the constant @xmath97 . the corresponding ( etd1 ) scheme is given by @xmath98 where @xmath99 . note that the etd1 scheme in ( [ etd ] ) can be rewritten as @xmath100 . this new expression has the advantage that it is computationally more efficient , as only one matrix exponential function needs to be evaluated at each step . recently , the etd1 scheme was applied to advection - dominated reactive transport in heterogeneous porous media @xcite . for this problem , a rigorous convergence proof is established for the case of a finite volume discretization in space @xcite . in these works , it was observed that the exponential methods were generally more accurate and efficient than standard implicit methods . as our systems ( [ spaceheat2 ] ) and ( [ spaceheat3 ] ) are nonlinear , we need to linearize before applying the etd1 scheme . consider the system of odes ( [ spaceheat2 ] ) . for simplicity we assume that it is autonomous , @xmath101 . let @xmath82 and @xmath83 be the numerical approximations of the exact solutions @xmath102 and @xmath103 . to obtain the numerical approximation @xmath85 of the exact solution @xmath104 , we linearize @xmath105 at @xmath82 and obtain the following semilinear odes @xmath106 where @xmath107 denotes the jacobian of the function @xmath108 with respect to @xmath79 and @xmath17 the remainder , given by @xmath109 . applying the etd1 scheme to ( [ linear ] ) yields @xmath110 . this scheme , called the exponential rosenbrock - euler method ( erem ) @xcite ( or the exponential euler method ( eem ) ) , has been reinvented under different names ( see references in @xcite ) . the erem scheme is second order in time @xcite for autonomous problems . to deal with non - autonomous problems while conserving the second - order accuracy of the erem scheme , the problem must first be converted to autonomous form . the corresponding version is given in the next section by equation ( [ neermg ] ) . the scheme ( [ eerm ] ) contains the exponential matrix function @xmath111 . to obtain the simplest rosenbrock - type methods , the exponential function is approximated by the following rational function @xmath112 where @xmath113 is a parameter .
for a given parameter @xmath113 , using the approximation ( [ ratapp ] ) in equation ( [ eerm ] ) , the corresponding rosenbrock - type method ( also called a linearly implicit method ) is given by @xmath114 . for the parameter @xmath115 , the corresponding rosenbrock - type method is of order two in time for regular solutions and of order 1 if @xmath116 . to solve the odes ( [ spaceheat3 ] ) we again apply the erem scheme or the rosenbrock - type method ( [ ros ] ) , but with respect to @xmath88 , which yields respectively @xmath117 and @xmath118 . as with the @xmath78-euler methods , the sequential approaches with the exponential rosenbrock - euler method and rosenbrock - type methods proposed in this paper consist of solving the odes ( [ spaceheat ] ) by applying successively the schemes ( [ eerm ] ) and ( [ ros ] ) to the odes ( [ spaceheat2 ] ) and the schemes ( [ eerm1 ] ) and ( [ ros1 ] ) to the odes ( [ spaceheat3 ] ) . the sequential technique presented is the so - called trotter splitting and is generally first - order accurate @xcite . the alternative technique is the so - called strang splitting , which consists of applying the schemes ( [ eerm ] ) and ( [ ros ] ) to the odes ( [ spaceheat2 ] ) with time step @xmath119 , then applying the schemes ( [ eerm1 ] ) and ( [ ros1 ] ) to the odes ( [ spaceheat3 ] ) with time step @xmath77 , and then applying again the schemes ( [ eerm ] ) and ( [ ros ] ) to the odes ( [ spaceheat2 ] ) with time step @xmath119 . strang splitting is formally second - order accurate in time for sufficiently smooth solutions @xcite . we can clearly observe that this approach is less efficient than trotter splitting . trotter splitting and strang splitting are called multiplicative operator splittings . in the sequel , the sequential approach will mean trotter splitting . one of the important features of any numerical scheme is its stability properties . our goal here is to study the stability properties of the schemes presented in the previous section and of higher - order rosenbrock - type methods . special interest will be given to two rosenbrock - type methods of order two and three because of their good stability properties . in applying the higher - order rosenbrock - type methods , we will use the previously presented sequential approaches to solve the odes ( [ spaceheat2 ] ) and ( [ spaceheat3 ] ) . consider the following odes @xmath120 with initial condition \mathbf{y}(0 ) = \mathbf{y}_{0 } , where @xmath121 is a nonlinear function . the corresponding @xmath78-euler scheme is given by @xmath122 . note that the exponential rosenbrock - euler method presented in the previous section is for autonomous odes . before applying it to the non - autonomous system ( [ extraodes1 ] ) , the transformation @xmath123 must be performed to obtain an autonomous system . given the numerical solution @xmath124 , the linearization equation leading to @xmath125 is given by @xmath126 = \left[ \begin{array}{cc} d_{\mathbf{y}}\mathbf{f}(\mathbf{y}^{n},t_{n}) & d_{t}\mathbf{f}(\mathbf{y}^{n},t_{n}) \\ 0 & 0 \end{array} \right] \left[ \begin{array}{c} \mathbf{y}(t) \\ t \end{array} \right] + \left[ \begin{array}{c} \mathbf{g}_{n}(\mathbf{y}(t),t) \\ 1 \end{array} \right] , \qquad \mathbf{g}_{n}(\mathbf{y}(t),t) = \mathbf{f}(\mathbf{y}(t),t) - d_{\mathbf{y}}\mathbf{f}(\mathbf{y}^{n},t_{n})\,\mathbf{y}(t) - d_{t}\mathbf{f}\, t .
* lemma 1 ) , the corresponding erem scheme for non - autonomous system is given by @xmath127 the exponential functions @xmath128 are defined by @xmath129 these functions satisfy the recurrence relations @xmath130 the corresponding lower order rosenbrock - type methods is given by @xmath131 . \ ] ] in order to study their stability properties , we apply the @xmath78euler , erem and rosm schemes to the linear odes @xmath132 with constant time step @xmath133 . we therefore have @xmath134 where @xmath135 for the @xmath78euler scheme , @xmath136 for the erem scheme and @xmath137 for the rosm scheme . the function @xmath138 is called the stability function of the method . the set @xmath139 is called the stability domain of the method . a numerical method is a -stable if its stability domain @xmath140 satisfies @xmath141 let us study the a - stability of the @xmath78euler . let @xmath142 , we have @xmath143 we can therefore observe that the @xmath78euler scheme is a -stable if @xmath144 rosm scheme is a -stable if @xmath145 and erem scheme is a -stable . a - stability is not the whole answer to the problem of stiff equations , excellent numerical methods for super - stiff equations would be l - stable . numerical methods are l - stable if they are a - stable and in addition ( see @xcite ) @xmath146 we can observe that the @xmath78euler scheme is l - stable if @xmath147 , the rosm scheme is l - stable if @xmath148 and the erem scheme is l - stable . in the sequel we will use the rosm schemes with @xmath148 and @xmath115 , which will be denoted respectively by rosm@xmath149 and rosm@xmath150 . the s - stage rosenbrock - type methods for the ode ( [ extraodes1 ] ) are given by @xmath151 the coefficients @xmath152 are obtained by using the consistency conditions required to achieve the desirable order of convergence @xmath18 in time . different ways to find these coefficients are presented in the literature ( see @xcite and references therein ) . the approximation @xmath153 is called an embedded approximation associated to the s - stage rosenbrock - type approximation @xmath154 and is used to control the local errors for adaptivity purpose . the coefficients @xmath155 are determined using the consistency conditions such that the embedded approximation is order @xmath156 . in this work , we use the second order scheme ros2(1 ) , where the coefficients are given in table [ ros2 ] , and also the third order scheme denoted ros3p in @xcite , which uses additional conditions to avoid order reduction ( see table [ ros3 ] ) . the ros2(1 ) scheme is l - stable and the ros3p scheme is a - stable ( see @xcite ) . .coefficients of the ros2(1 ) scheme from @xcite . [ cols="<,<",options="header " , ] the implementation of rosenbrock - type schemes is straightforward as there are no nonlinear equations to solve at each time step . for efficiency , all linear systems are solved using the matlab function bicgstab with ilu(0 ) preconditioners with no fill - in . the time step adaptivity can be performed using the standard error control and the step size prediction as in @xcite with an appropriate norm of @xmath157 . the key element in all exponential integrator schemes is the computation of the matrix exponential functions , the so called @xmath158 functions . there are many techniques available for that task@xcite . standard pad approximation compute at every time step the whole matrix exponential functions and are therefore memory and time consuming for large problems . 
the krylov subspace technique and the real fast leja points technique have proved to be efficient for this computation for large systems @xcite . let us summarize these techniques for solving the ode ( [ extraodes1 ] ) with the erem scheme . the main idea of the krylov subspace technique is to approximate the action of the exponential matrix function @xmath159 on a vector @xmath13 by projection onto a small krylov subspace @xmath160 @xcite . the approximation is formed using an orthonormal basis @xmath161 of the krylov subspace @xmath162 and of its completion @xmath163 . the basis is found by the arnoldi iteration , which uses stabilized gram - schmidt to produce a sequence of vectors that span the krylov subspace ( see algorithm [ alg : al2 ] ) . let @xmath164 be the @xmath165 standard basis vector of @xmath166 . we approximate @xmath167 by @xmath168 with @xmath169 . the coefficient @xmath170 is recovered in the last iteration of the arnoldi iteration in algorithm [ alg : al2 ] . we denote by @xmath171 the standard euclidean norm . the approximation ( [ approphi ] ) is the first two terms of the expansion given in theorem 2 of @xcite . [ algorithm [ alg : al2 ] : arnoldi iteration ( pseudocode not recoverable from this extraction ) . ] for a small krylov subspace ( i.e. , @xmath178 is small ) a standard padé approximation can be used to compute @xmath179 , but an efficient way used in @xcite is to recover @xmath180 directly from the padé approximation of the exponential of a matrix related to @xmath181 . notice that this implementation can be done without explicit computation of the jacobian matrix @xmath182 , as the krylov subspace @xmath162 can be formed by using the following approximations @xmath183 for a suitably chosen perturbation of @xmath184 ( see @xcite ) , while solving the ode ( [ extraodes1 ] ) . these approximations show that the exponential rosenbrock - euler scheme with the krylov subspace technique can be implemented using a jacobian - free technique . the implementations in expokit @xcite ( for the function @xmath185 ) and in @xcite use the truncation error in the approximation ( [ approphi ] ) to build the local error estimates ( see theorem 2 of @xcite ) . the time step subdivisions depend on the given tolerance and the local errors . this technique has been successfully applied to the nonlinear advection - diffusion - reaction equation in @xcite , where advection plays a key role . we will use it to solve the temperature equation ( [ spaceheat2 ] ) . the key points of this method are as follows : for a given vector @xmath13 , real fast leja points approximate @xmath186 by @xmath187 , where @xmath188 is an interpolation polynomial of degree @xmath178 of @xmath128 at the sequence of points @xmath189 , called spectral real fast leja points . these points @xmath190 belong to the spectral focal interval @xmath191 of the matrix @xmath192 , i.e. , the focal interval of the smallest ellipse containing all the eigenvalues of @xmath193 . this spectral interval can be estimated by the well - known gershgorin circle theorem @xcite . it has been shown that as the degree of the polynomial increases , and hence the number of leja points increases , superlinear convergence is achieved @xcite ; i.e.
, @xmath194 . setting @xmath195 , the sequence of fast leja points is generated recursively by choosing \xi_{j} to maximize \prod_{k=0}^{j-1} \mid \xi - \xi_{k} \mid over the spectral focal interval , for j = 1 , 2 , 3 , \cdots . given the newton form of the interpolating polynomial , @xmath188 is given by @xmath197 + \sum_{j=1}^{m} \varphi_{i} [ \xi_{0} , \xi_{1} , \cdots , \xi_{j} ] \prod_{k=0}^{j-1} ( z - \xi_{k} ) , where the divided differences @xmath198 are defined recursively by @xmath199 := \varphi_{i}(\xi_{0}) , d_{1} = \varphi_{i} [ \xi_{0} , \xi_{1} ] := ( \varphi_{i} [ \xi_{1} ] - \varphi_{i} [ \xi_{0} ] ) / ( \xi_{1} - \xi_{0} ) , and d_{i} = \varphi_{i} [ \xi_{0} , \xi_{1} , \cdots , \xi_{i} ] = ( \varphi_{i} [ \xi_{0} , \xi_{1} , \cdots , \xi_{i-2} , \xi_{i} ] - \varphi_{i} [ \xi_{0} , \xi_{1} , \cdots , \xi_{i-2} , \xi_{i-1} ] ) / ( \xi_{i} - \xi_{i-1} ) . due to cancellation errors , this standard procedure cannot produce accurate divided differences with magnitude smaller than machine precision . it can be shown @xcite that the divided differences of a function @xmath200 of the independent variable @xmath201 at the points @xmath202 are the first column of the matrix function @xmath203 , where @xmath204 . here @xmath205 is the identity matrix . to compute @xmath206 , where @xmath207 is the first standard basis vector of @xmath208 , we apply a taylor expansion of order @xmath18 with scaling and squaring , or a padé approximation @xcite . the advantage of the newton interpolation comes from the fact that the approximation with a polynomial of degree @xmath178 is directly obtained from the approximation with a polynomial of degree @xmath209 ; in fact , we have @xmath210 . the error estimate for this approximation is given by @xmath211 , where @xmath212 is the weighted and scaled norm defined by @xmath213 , where @xmath214 and @xmath215 denote respectively the desired absolute and relative tolerances , and @xmath216 the size of the matrix @xmath182 . following the work in @xcite , during the evaluation of the @xmath217 functions the stopping criterion is @xmath218 , where @xmath219 is the order of convergence of the method ( @xmath220 for the erem scheme ) . in order to filter possible oscillations in the error estimate , the average of the last five values of the error is used instead of @xmath221 in the stopping condition . in the case of an unaccepted degree @xmath178 , we increase the degree of the polynomial following relation ( [ newt ] ) . when the degree @xmath178 required for convergence is too large , the time step @xmath77 has to be split , as described in @xcite . the algorithm with the function @xmath111 is given in @xcite . for the case where the spectrum of @xmath182 is more spread along the imaginary axis , as for example in some hyperbolic problems , the method has been upgraded in @xcite . the attractive computational features of the method are clear , in the sense that there is no krylov subspace to store and no linear systems to solve , but a drawback is that the method is based on interpolation , which is generally ill - conditioned . a major drawback is that the required degree of the polynomial grows with the norm of the matrix @xmath222 . in the two examples , we deal with temperatures between 0 and @xmath223 . the water thermal expansivity @xmath27 and its compressibility @xmath28 used are from @xcite . these two state functions depend on both pressure and temperature .
as we are dealing with low - enthalpy reservoirs ( @xmath224 ) , some water properties can be well approximated as functions of temperature only . the water density in @xmath225 and the fluid viscosity in @xmath226 ( see @xcite ) used are given respectively by @xmath227 and @xmath228 . the water heat capacity in @xmath229 used is also a function of temperature only in the interval @xmath230 and is given by ( see @xcite ) @xmath231 . the water thermal conductivity used is @xmath232 . we take the heat transfer coefficient @xmath233 sufficiently large to reach local equilibrium . all our tests were performed on a workstation with a 3 ghz intel processor and 8 gb ram . our code was implemented in matlab 7.11 . we also used part of the codes in @xcite for the spatial discretization . the absolute tolerance in the krylov subspace technique , leja point technique , newton iterations and all linear systems is @xmath234 . the dimension of the krylov subspace used is @xmath235 . the initial pressure used is the steady - state pressure with water properties at the initial temperature . in the legends of all of our graphs we use the following notation : * `` implicittheta=1 '' denotes results from the theta - euler scheme with @xmath236 in ( [ spaceheat2 ] ) and ( [ spaceheat3 ] ) . * `` implicittheta=0.5 '' denotes results from the theta - euler scheme with @xmath90 in ( [ spaceheat2 ] ) and ( [ spaceheat3 ] ) . * `` eremkleja '' denotes results from the erem scheme with the krylov subspace for the matrix exponential in the pressure system ( [ spaceheat3 ] ) and real fast leja points for the matrix exponential in the temperature system ( [ spaceheat2 ] ) . * `` eremkrylov '' denotes results from the erem scheme with the krylov subspace for the matrix exponential in ( [ spaceheat2 ] ) and ( [ spaceheat3 ] ) . * `` rosm(1 ) '' denotes results from the scheme rosm with @xmath148 in ( [ spaceheat2 ] ) and ( [ spaceheat3 ] ) . * `` rosm(1/2 ) '' denotes results from the scheme rosm with @xmath115 in ( [ spaceheat2 ] ) and ( [ spaceheat3 ] ) . * `` ros2 '' denotes results from the scheme ros2(1 ) in ( [ spaceheat2 ] ) and ( [ spaceheat3 ] ) . * `` ros3p '' denotes results from the scheme ros3p in ( [ spaceheat2 ] ) and ( [ spaceheat3 ] ) . we use different constant time steps with the goal of studying the convergence of the temperature and pressure equations at the final time , along with the efficiency of the numerical schemes . the reference solutions used in the calculation of the errors are the numerical solutions with the time step size equal to half of the smallest time step in the graphs . we consider a heterogeneous reservoir described by the domain @xmath237 \times [ 0 , 1 ] \times [ 0 , 0.1 ] , where all distances are in km . the upper half of the reservoir is less permeable than the lower half . the injection point is located at the position @xmath238 , injecting with the rate @xmath239 , and the production is at @xmath240 , with rate @xmath241 , with the lowest pressure at the point @xmath242 . homogeneous neumann boundary conditions are applied for both the pressure equation , given by the mass conservation law , and the energy equation . the water temperature at the injection well is @xmath243 . the upper half of the reservoir has rock properties : permeability @xmath244 darcy , porosity @xmath245 , @xmath246 , @xmath247 , @xmath248 , while the rock properties of the lower half are : permeability @xmath249 darcy , porosity @xmath250 , @xmath251 , @xmath252 , @xmath253 . in both parts the bulk vertical compressibility is @xmath254 .
we use a structured parallelepiped mesh . the size of the system is @xmath255 for the odes ( [ spaceheat3 ] ) and @xmath256 for the odes ( [ spaceheat2 ] ) . the initial temperature at @xmath257 is @xmath258 and the temperature increases by @xmath259 every 10 m . the initial temperature field is presented in figure [ fig01a ] , the temperature field at @xmath260 days is shown in figure [ fig01b ] , while figure [ fig01c ] shows the temperature at @xmath261 days . we can observe that the cold water decreases the reservoir temperature at the injection well and that the temperature at the production well increases . figure [ fig01d ] shows how the temperature errors at the final time @xmath260 days decrease with time step size . from this figure we can observe that the schemes with the same order of convergence in time have almost the same errors with our sequential approach . we can also observe that for large time steps , the errors are almost the same for all the schemes . the implicit @xmath75-euler method with @xmath236 and the rosm(1 ) scheme are both of order @xmath262 in time . this order may decrease towards 1 for less smooth solutions @xcite or for relatively small time steps , as for simple problems these schemes are of order 1 . we can observe that the erem scheme and the implicit @xmath75-euler method with @xmath90 are slightly more accurate than the ros2 scheme . the erem scheme and the implicit @xmath75-euler method with @xmath90 are of order @xmath263 in time , the ros2(1 ) scheme is of order @xmath264 and the ros3p scheme is of order @xmath265 . these orders may decrease for less smooth solutions @xcite . we can , however , observe that schemes with high order in time for simple problems are affected by order reduction in the sequential approach . figure [ fig01e ] shows the relative @xmath266 temperature errors as a function of the cpu time , corresponding to figure [ fig01d ] . we can observe the efficiency of the erem scheme compared to the other schemes . this figure also shows that the exponential rosenbrock - euler method and rosenbrock - type methods are very efficient compared to the standard implicit @xmath75-euler methods . figure [ fig01f ] shows the cpu time as a function of time step size , corresponding to figure [ fig01d ] and figure [ fig01e ] . we can clearly observe that the exponential rosenbrock - euler method and rosenbrock - type methods are again very efficient compared to the standard implicit @xmath75-euler methods . from this figure we can observe that the erem , rosm(1 ) and rosm(1/2 ) schemes are at least five times as efficient , the ros2 scheme is at least twice as efficient , and the ros3p scheme at least one and a half times as efficient as the standard methods . while these factors may depend on the particular implementation , and on the availability of good nonlinear solvers for the nonlinear solves in the standard methods , we believe them to be representative . we consider here a 2d fractured reservoir @xcite , with a quasi - structured triangular mesh in the domain @xmath267 \times [ 0 , 100 ] , where all distances are in m . the matrix properties are : permeability @xmath268 darcy , porosity @xmath269 , @xmath252 , @xmath246 , @xmath248 and @xmath270 . the fractures have an aperture of @xmath271 and a permeability of @xmath272 darcy . the injection point is located at the position @xmath273 , with constant pressure @xmath274 mpa , while the production point is located at the point @xmath275 , with constant pressure @xmath276 pa .
homogeneous neumann boundary conditions are applied for both the pressure equation , given by the mass conservation law , and the energy equation . the size of the system is @xmath277 for the odes ( [ spaceheat3 ] ) and @xmath278 for the odes ( [ spaceheat2 ] ) . the initial temperature is @xmath279 , while the water temperature at the injection well is @xmath280 . the 2d grid with fractures is shown in figure [ fig02a ] , the temperature field at time @xmath281 days in figure [ fig02b ] and the temperature field at time @xmath282 days in figure [ fig02c ] . figure [ fig02d ] shows the pressure field at time @xmath281 days . figure [ fig03a ] shows the time convergence of all the schemes as the temperature errors decrease with time step size at the final time @xmath281 days . from this figure we can again observe that the schemes with the same order of convergence in time have almost the same errors . the implicit @xmath75-euler method with @xmath236 and the rosm(1 ) scheme are of order @xmath283 in time . the erem scheme , the @xmath75-euler method with @xmath90 and the ros2(1 ) scheme are of order @xmath284 in time , while the ros3p scheme is of order @xmath265 . these orders may increase for relatively small time steps . again we can observe that schemes with high order in time for simple problems are affected by order reduction in the sequential approach . figure [ fig03b ] shows the relative @xmath266 temperature errors as a function of the cpu time , corresponding to figure [ fig03a ] . as in the first example , we can observe the efficiency of the schemes rosm(1/2 ) and erem with the krylov technique compared to the other schemes . figure [ fig03c ] shows the cpu time as a function of time step size , corresponding to figure [ fig03a ] and figure [ fig03b ] . again we observe that the erem , rosm(1 ) and rosm(1/2 ) schemes are almost four times as efficient as the standard implicit methods , while the ros2 and the ros3p schemes are almost twice as efficient as the standard implicit methods . from these examples , we expect the efficiency gain to increase with the size of the problem . the relative @xmath266 pressure errors are almost the same for all the numerical schemes . we therefore plot only the errors for two numerical schemes , of order 1 and 2 in time respectively . figure [ fig03a ] shows the pressure errors at time @xmath281 days as a function of time step size for the erem scheme and the @xmath75-euler method with @xmath236 . we can observe the convergence of those schemes while solving the pressure equation . the order of convergence in time is @xmath285 and may decrease , according to @xcite , for rough solutions ( less smooth solutions ) . we have proposed a novel approach for the simulation of geothermal processes in heterogeneous porous media . this approach decouples the mass conservation equation from the energy equation and solves the stiff odes arising from the space discretization sequentially , using the exponential rosenbrock - euler method and rosenbrock - type methods for the time integration . numerical simulations in 2d and 3d show that using the krylov subspace technique and the real leja points technique in the computation of the exponential functions @xmath128 in the exponential rosenbrock - euler method , and the matlab function bicgstab with ilu(0 ) preconditioners with no fill - in for solving all linear systems appearing in the rosenbrock - type methods and the implicit theta - euler methods , makes our approach more efficient compared to the sequential standard implicit euler methods .
we thank tor harald sandve for sharing with us the 2d fractured grid and its discretization . this work was funded by the research council of norway ( grant number 190761/s60 ) . l. bergamaschi , m. caliari and m. vianello , the relpm exponential integrator for fe discretizations of advection - diffusion equations , in : m. bubak , g. d. van albada , p. sloot ( eds . ) , lecture notes in computer science , volume 3039 , springer verlag , berlin heidelberg , 2004 , pp . 434 - 442 . d. coumou , t. driesner , s. geiger , c. a. heinrich and s. k. matthäi , the dynamics of mid - ocean ridge hydrothermal systems : splitting plumes and fluctuating vent temperatures , 245 , pp . 218 - 231 , 2006 . s. geiger , t. driesner , c. a. heinrich and s. k. matthäi , multiphase thermohaline convection in the earth 's crust : i. a new finite element - finite volume solution technique combined with a new equation of state for nacl - h2o , ( 63 ) : 399 - 434 , 2006 . d. o. hayba and s. ingebritsen , the computer model hydrotherm , a three - dimensional finite difference model to simulate ground water flow and heat transport in the temperature range of 0 to @xmath286 , 1994 . r. podgorney , h. huang and d. gaston , massively parallel fully coupled implicit modeling of coupled thermal - hydrological - mechanical processes for enhanced geothermal system reservoirs , 2010 , sgp - tr-188 .
simulation of geothermal systems is challenging due to coupled physical processes in highly heterogeneous media . combining the exponential rosenbrock euler and rosenbrock - type methods with control - volume ( two - point flux approximation ) space discretizations leads to efficient numerical techniques for simulating geothermal systems . in terms of efficiency and accuracy , the exponential rosenbrock euler time integrator has advantages over standard time - discretization schemes , which suffer from time - step restrictions or excessive numerical diffusion when advection processes are dominating . based on linearization of the equation at each time step , we make use of matrix exponentials of the jacobian from the spatial discretization , which provide the exact solution in time for the linearized equations . this comes at the expense of computing the matrix exponentials of the stiff jacobian matrix , together with propagating a linearized system . however , using krylov subspace or leja point techniques makes these computations efficient . the rosenbrock - type methods use appropriate rational functions of the jacobian from the spatial discretization . the parameters in these schemes are found in consistency with the required order of convergence in time . as a result , these schemes are a - stable and only a few linear systems are solved at each time step . the efficiency of the methods compared to standard time - discretization techniques is demonstrated in numerical examples . exponential integration , krylov subspace , leja points , rosenbrock - type methods , fast time integrators , geothermal systems
SECTION 1. DEPOSITS IN CAPITAL CONSTRUCTION FUND ACCOUNT EXCLUDED FROM NET EARNINGS FROM SELF-EMPLOYMENT. (a) In General.--Subparagraph (A) of section 607(d)(1) of the Merchant Marine Act, 1936 (46 U.S.C. 1177(d)(1)) is amended by striking ``taxable income (determined without regard to this section and section 7518 of such Code) for the taxable year shall be reduced'' and by inserting ``taxable income and net earnings from self-employment attributable to the operation of the agreement vessels (determined without regard to this section and section 7518 of such Code) for the taxable year shall each be reduced''. (b) Nonqualified Withdrawals.--Section 607(h) of the Merchant Marine Act, 1936 (46 U.S.C. 1177(h)) is amended by adding at the end thereof the following new paragraph: ``(7) Nonqualified withdrawals subject to self-employment tax.-- ``(A) In general.--In the case of any taxable year for which there is a nonqualified withdrawal (including any amount so treated under paragraph (5)), the tax imposed by section 1401 of the Internal Revenue Code of 1986 (at a rate for such taxable year unless otherwise established by the taxpayer to the satisfaction of the Secretary) shall be determined without regard to section 230 of the Social Security Act (42 U.S.C. 430). ``(B) Tax benefit rule.--If any portion of a nonqualified withdrawal is properly attributable to deposits (other than earnings on deposits) made by the taxpayer in any taxable year which did not reduce the taxpayer's liability for tax under section 1401 of such Code for any taxable year preceding the taxable year in which such withdrawal occurs, such portion shall not be taken into account under subparagraph (A).''. (c) Conforming Amendments.-- (1) Subparagraph (A) of section 7518(c)(1) of the Internal Revenue Code of 1986 is amended by striking ``taxable income (determined without regard to this section and section 607 of the Merchant Marine Act, 1936) for the taxable year shall be reduced'' and by inserting ``taxable income and net earnings from self-employment attributable to the operation of the agreement vessels (determined without regard to this section and section 607 of the Merchant Marine Act, 1936) for the taxable year shall each be reduced''. (2) Section 7518(g) of the Internal Revenue Code of 1986 is amended by adding at the end thereof the following new paragraph: ``(7) Nonqualified withdrawals subject to self-employment tax.-- ``(A) In general.--In the case of any taxable year for which there is a nonqualified withdrawal (including any amount so treated under paragraph (5)), the tax imposed by section 1401 (at a rate for such taxable year unless otherwise established by the taxpayer to the satisfaction of the Secretary) shall be determined without regard to section 230 of the Social Security Act (42 U.S.C. 430). ``(B) Tax benefit rule.--If any portion of a nonqualified withdrawal is properly attributable to deposits (other than earnings on deposits) made by the taxpayer in any taxable year which did not reduce the taxpayer's liability for tax under section 1401 for any taxable year preceding the taxable year in which such withdrawal occurs, such portion shall not be taken into account under subparagraph (A).''. (3) Section 1403(b) of the Internal Revenue Code of 1986 is amended by adding the following new paragraph. ``(3) For treatment of earnings of ship contractors deposited in special reserve funds, see subsections (d) and (h) of section 607 of the Merchant Marine Act, 1936 (46 U.S.C. 
1177) and subsections (c) and (g) of section 7518''. (d) Effective Date.-- (1) In general.--The amendments made by this section shall apply to taxable years beginning after December 31, 1992. (2) Waiver of statute of limitations.--If on the date of the enactment of this Act (or at any time within 1 year after such date of enactment) refund or credit of any overpayment of tax resulting from the application of the amendment made by subsection (a) is barred by any law or rule of law, refund or credit of such overpayment shall, nevertheless, be made or allowed if claim therefore is filed before the date 1 year after the date of the enactment of this Act.
Amends the Merchant Marine Act, 1936 and the Internal Revenue Code to permit participants in a capital construction fund to reduce their self-employment income by the amount of contributions to such fund. Makes nonqualified withdrawals subject to the self-employment tax.
TLDEF Files Federal Lawsuit against South Carolina Department of Motor Vehicles on Behalf of Gender Non-conforming Teen who was Forced to Remove Makeup for His Driver's License Photo September 2, 2014 - TLDEF today filed a federal lawsuit against the South Carolina Department of Motor Vehicles on behalf of a 16-year-old gender non-conforming teen who was targeted for discrimination last March. When he attempted to get his first driver’s license, Chase Culpepper was told by the DMV that he could not take his license photo unless he removed the makeup that he wears on a regular basis. The suit – brought by Chase’s mother Teresa Culpepper on his behalf as a minor – asks the court to rule that denying Chase the freedom to wear his everyday makeup in his license photo constitutes sex discrimination and violates his right to free speech and expression under the United States Constitution. It also seeks a ruling under the U.S. and South Carolina Constitutions that the DMV’s photo policy is unconstitutionally vague, too broad, and lets DMV employees arbitrarily decide how a driver's license applicant should look, without regard for the rights of the people they are supposed to serve. Chase wears makeup and androgynous or girls’ clothing on a regular basis. On March 3rd, 2014, he went to the DMV office in Anderson, SC with his mother to get his license. He had already passed his driving test and was dressed as he normally does. DMV employees told Chase that they would not take his license photo while he was wearing makeup and that he did not look the way DMV employees thought that a boy should. He was told that he could not wear a “disguise” and that he needed to “look male” in his license photo. Chase wanted his license and ultimately removed as much makeup as he could and had his photo taken by DMV employees. But he left the office feeling humiliated after changing the way he normally looks. “My clothing and makeup reflect who I am,” Chase said. “The Department of Motor Vehicles should not have forced me to remove my makeup simply because my appearance does not match what they think a boy should look like. I just want the freedom to be who I am without the DMV telling me that I’m somehow not good enough.” On June 9, TLDEF sent a letter to the South Carolina DMV asking that Chase be given the opportunity to retake his license photo while dressed as he normally does, with makeup. The letter explained that forcing Chase to alter his everyday appearance was discriminatory and violated Chase’s constitutional rights. But the department never responded to the letter and TLDEF now brings this lawsuit on Chase’s behalf. “Chase is entitled to be himself and to express his gender non-conformity without interference from the South Carolina DMV,” said TLDEF Executive Director Michael Silverman. “It is not the role of the DMV or any government agency or employee to decide how men and women should look. Chase should be able to get a driver’s license without being subjected to sex discrimination.” Along with TLDEF, Chase’s mother is standing by him. “As a mother, it broke my heart to see Chase being forced to be someone that he isn’t. Every time he pulls out his license, he is reminded of that, and that makes it even worse,” said Teresa Culpepper. “I love my son just the way he is. The DMV should not have treated him this way.” “I want to take my license photo again, with makeup, so I can be myself and express to the world who I truly am,” Chase added. The suit, Teresa Culpepper v. Kevin A. 
Shwedo, et al., is pending in the United States District Court for the District of South Carolina, Columbia Division. Fulbright & Jaworski LLP and Wyche, P.A. are pro bono co-counsel for Chase with TLDEF. We are grateful for their assistance. ||||| 0 Gender non-conforming teen sues over driver's license photo COLUMBIA, S.C. - Channel 9 was in Columbia Tuesday, as 16-year-old Chase Culpepper accused the DMV of forcing him to be something he's not. Culpepper listed himself as male on the driver's license application. However, when it comes to how he dresses and lives his life, he considers himself to be gender non-conforming. Culpepper looks nothing like your typical teen boy. Talking to reporters on Tuesday, he wore lady's jewelry, shoes, clothes and full makeup. When he went to get his driver's license in Anderson County in March, the DMV employees would not take his picture. "They told me that I could not wear disguises and I need to look more like a boy," Culpepper said. The state requires that anything that alters someone's appearance cannot be worn for driver's license photos. Culpepper said that doesn't apply to him because he wears makeup and women's clothes daily and that is his true appearance. "I was horrified and saddened by what happened to him," said his mother, Teresa Culpepper. He was asked to remove his face makeup and he went to bathroom twice to do that before they would take his picture. He said the final picture isn’t who he is. "I just want my license photo to accurately reflect me," Chase Culpepper said. On the steps of the state house in Columbia, Chase Culpepper announced a federal sex-discrimination lawsuit against the DMV. Michael Silverman heads the Transgender Legal Defense and Education Fund. He called the DMV's actions sex stereotyping. "It’s a violation of his personal rights and his right to free speech under the constitution," Silverman said. "It's not the role of the DMV or any government agency or employee to decide how men and women ought to look." Chase Culpepper's mother said she never expected there would be a problem that day at the DMV. "It was heartbreaking to see my son humiliated because he wears makeup," she said. Late Tuesday a DMV spokeswoman told Channel 9 the agency cannot comment on a pending lawsuit. The family is not seeking any damages or money but only that Chase Culpepper be allowed to retake his driver's license photo wearing makeup. The Transgender Legal Defense and Education Fund is also asking the court to require South Carolina to clarify its rules on the issue which they call vague and discriminatory. Silverman said he'd hoped the filing of the lawsuit would cause the DMV to reconsider its position. He said he knew of no other similar lawsuit before the courts in any other state. ||||| A transgender rights group is asking the South Carolina Department of Motor Vehicles to allow a gender non-conforming teen to retake his driver's license photo while wearing makeup. Chase Culpepper -- a 16-year-old who wears makeup and androgynous or girls' clothing on a daily basis -- went to the DMV in Anderson on March 3 with his mother to get his driver's license after passing his driver's test, according to a press release obtained by The Huffington Post. However, he was told he couldn't be photographed while wearing makeup. DMV employees said he did not look the way they thought a boy should, and one individual called his makeup a "disguise," the release notes. 
Culpepper ultimately removed his makeup and got his photo taken, but the experience left a mark. “This is who I am and my clothing and makeup reflect that,” he says in the release. “The Department of Motor Vehicles should not have forced me to remove my makeup simply because my appearance does not meet their expectations of what a boy should look like. I just want the freedom to be who I am without the DMV telling me that I’m somehow not good enough.” On June 9, the Transgender Legal Defense & Education Fund (TLDEF) sent a letter to the South Carolina DMV on behalf of Culpepper. The letter, which alleges the teen's constitutional rights were violated, reads, in part: "In the end, Chase was told that he could not wear makeup simply because boys typically do not wear makeup. It was not because his makeup acted as any type of disguise of his identity. Sex stereotypes like this do not justify a government agency’s restriction of constitutionally protected expression." TLDEF asked that Culpepper be granted the opportunity to retake his license photo. Culpepper says: “I want the DMV to take my picture again, with makeup, so I can put this incident behind me.” However, a representative from the DMV told HuffPost that it is unlikely Culpepper will be able to retake the picture because of a 2009 clause added to the driver's license photo policy. The clause reads: "At no time can an applicant be photographed when it appears that he or she is purposefully altering his or her appearance so that the photo would misrepresent his or her identity." According to the rep, the DMV works with law enforcement on these decisions. "If it says male [on the license], that's what they're gonna look for. They expect the photo to be of a man," she said. "If they stop somebody and they're dressed as a woman, they can straighten that out." (h/t Pink News)
– A 16-year-old in South Carolina wants a driver's license photo redo—but not to look better. Chase Culpepper, who describes himself as "gender nonconforming," says he wears makeup and women's clothing and jewelry every day, and that when he was forced to take off the makeup for his photo in March he was made to look different than he is. "They told me that I could not wear disguises and I need to look more like a boy," Chase told WSOCTV. But Chase, who listed himself as male on his application, says it is not a disguise but "who I am." The Transgender Legal Defense & Education Fund, which filed a federal lawsuit against the South Carolina DMV yesterday to allow Chase to take the photo with his makeup on, also wants the court to require that the state clarify its rule. "It's not the role of the DMV or any government agency or employee to decide how men and women ought to look," the group's head said. The teen's mother, Teresa Culpepper, tells HuffPost Live that Chase is actually a "stickler for policy and procedure," and that he wants to retake the photo so that it is an accurate depiction of who he is. He is not seeking any monetary compensation. (One student wasn't allowed back to middle school after transitioning to a girl.)
as an alternative way to study the higgs boson production at the lhc , the central exclusive diffractive ( ced ) production has recently been analyzed as a new framework for particle production @xcite . indeed , the double pomeron exchange ( dpe ) and the two - photon process offer the opportunity to study the exclusive higgs boson production in proton and nuclei collisions . one way to increase these cross sections is to consider nuclei collisions , especially for the two - photon process , where the photon flux is enhanced by a factor of @xmath3 in @xmath1 collisions , and @xmath4 in @xmath5 ones . for instance , the predicted cross section for the higgs boson production in pbpb collisions is 18 pb @xcite , which is enhanced by five orders of magnitude compared to the prediction for @xmath0 collisions ( 0.18 fb ) . however , in dpe this enhancement is smaller , showing an increase from 3 fb for @xmath0 collisions to 100 fb in auau ones @xcite . in this sense , we compute the cross sections with the photoproduction mechanism for the ced higgs boson production in hadron - hadron and hadron - nucleus collisions at the lhc . considering the @xmath6 interaction in high - energy collisions , we compute the cross section for the ced higgs boson production by dpe in the @xmath6 subprocess @xcite , which is one of the possible subprocesses in ultraperipheral collisions ( upc ) @xcite . the photon fluctuates into a quark - antiquark pair , and then the interaction occurs between the proton and this pair by the exchange of gluons in the @xmath7-channel . the diagram at partonic level is taken into account in order to compute the scattering amplitude , where a quasi - real photon interacts with a quark in the proton . the imaginary part of the scattering amplitude is computed by the use of the cutkosky rules @xmath8 , with @xmath9 and @xmath10 being the amplitudes in the left- and the right - hand sides of the central line that splits the diagram in fig.[fig1 ] in two pieces , and @xmath11 is the volume element of the three - body phase space . this integration results in the following amplitude @xmath12 where @xmath13 is the impact factor for the @xmath14-@xmath14 transition , @xmath15 [ \rho^{2} + ( 1 - \rho)^{2} ] / [ q^{2}\rho(1 - \rho ) + \mathbf{k}^{2}\tau(1 - \tau ) ] ( eq . [ imp - fact ] ) , @xmath16 is the virtuality of the initial photon , @xmath17 = 246 gev is the vacuum expectation value of the electroweak theory , @xmath18 is the charge of the quark in the dipole , @xmath19 and @xmath20 are the electromagnetic and strong coupling constants , respectively , @xmath21 is the feynman parameter , and @xmath22 is the transverse momentum of the gluons . in this calculation , we introduce the sudakov parametrization @xmath23 , with @xmath24 . [ fig1 : diagram that represents the photoproduction mechanism for the higgs boson production . ] in order to include all the partonic content of the proton , one has to replace the contribution of the @xmath25 coupling by the unintegrated parton distribution function @xmath26 , where the function @xmath27 is the integrated gluon distribution function , and the multiplicative factor @xmath28 = 1.2 takes into account the non - diagonality of the distribution @xcite . in this work we apply the mstw2008lo parametrization for such distribution function @xcite . to obtain the event rate , one has to integrate the amplitude squared given by eq.([amp - im ] ) over the momenta of the particles in the final state , including the prescription for @xmath29 .
the result for central rapidity reads @xmath30^{2} ( eq . [ gammap - xsec ] ) , where @xmath31 is taken to be 1.5 , which corresponds to the enhancement of the @xmath32 cross section at nlo accuracy @xcite , and @xmath33 = 5.5 gev@xmath34 is the slope of the gluon - proton form factor . the function @xmath35 is the modified unintegrated gluon distribution function that includes the sudakov form factor @xmath36 computed at leading logarithm accuracy ( lla ) . regarding the phenomenological aspects introduced in this result , the rapidity gap survival probability ( gsp ) depends in particular on the process under consideration . the gsp for the @xmath6 process has not yet been computed , so we use the value of 3% predicted for the pomeron - pomeron mechanism . however , we expect a higher survival factor for the @xmath6 interaction , since the large distances between the two colliding hadrons in upc should decrease the probability of interaction between secondary particles . analyzing the results for central dijet production at hera , one finds that the survival probability is about 10% @xcite , and we make predictions with this probability for the ced higgs boson photoproduction . the initial photon is emitted from one relativistic source object , which can be a proton or a nucleus . particularly , a nucleus has @xmath3 protons , which enhances the photon flux in @xmath1 and @xmath5 collisions . in fact , considering the luminosity and pile - up effects in the collisions at the lhc , the @xmath1 collisions may offer the best experimental conditions compared to @xmath0 and @xmath5 collisions @xcite . additionally , in the photoproduction mechanism , we neglect the contribution from @xmath5 collisions , since the shadowing effects present in the nuclear pdf will decrease the production cross section by a factor of 0.2 - 0.3 . the production cross section in upc is given by @xmath37 where @xmath38 and @xmath39 , and @xmath40 is given by eq.([gammap - xsec ] ) . the functions @xmath41 are the photon fluxes for protons and nuclei , which can be found in ref.@xcite . in this sense , the photon virtuality is decomposed into @xmath42 , with @xmath43 , which is restricted by the coherence condition in upc , depending on the radius of the source object . the hadronic cross section is computed for @xmath0 , @xmath2pb , @xmath2au , @xmath2ar , and @xmath2o collisions at the lhc . actually , collisions involving gold nuclei are not going to be measured at the lhc ; however , we include such a prediction to compare with previous results @xcite . table [ tab1 ] shows the kinematics introduced in this calculation , and the predicted cross sections for the ced higgs boson photoproduction for the two possibilities of the survival factor . [ tab1 ] the predicted cross sections for the ced higgs boson photoproduction at the lhc for @xmath44 = 120 gev , and the kinematics parameters . the cross section is shown for the two possibilities of the gsp : 3% and 10% . as one can see , the production cross section is significantly enhanced in @xmath1 collisions taking nuclei with high @xmath3 . these results are higher than the ones obtained for the two - photon and for the dpe mechanism . in the case of @xmath0 collisions , the cross section is similar to that of the dpe mechanism , but one order of magnitude higher than that for the two - photon mechanism . considering the @xmath5 run that will occur at the end of 2010 , new data may become available for nuclei collisions in the next year .
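as an illustration of the convolution described above , the following self - contained python sketch folds a standard point - like weizsaecker - williams photon flux ( not necessarily the exact flux of the cited reference ) with a placeholder @xmath6 subprocess cross section . the beam energy , nuclear radius , lorentz factor and the flat 1 fb placeholder for the subprocess cross section are illustrative assumptions introduced here , not values taken from the text .

```python
"""
Illustrative sketch (not the authors' code) of the ultraperipheral
convolution  sigma_pA ~ int d(omega) n_A(omega) * sigma_gamma_p(W),
using the standard point-like Weizsaecker-Williams photon flux for a
nucleus of charge Z and Lorentz factor gamma_L.  The gamma p -> H p
subprocess cross section, the beam energy, b_min and gamma_L below are
placeholders chosen for illustration only.
"""
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

HBARC = 0.1973  # GeV fm

def photon_flux(omega, Z=82, gamma_L=2960.0, b_min_fm=7.8):
    """Photons per unit energy from a relativistic nucleus, integrated over
    impact parameters b > b_min (the coherence condition)."""
    alpha = 1.0 / 137.036
    xi = omega * b_min_fm / (gamma_L * HBARC)
    k0, k1 = kn(0, xi), kn(1, xi)
    return (2.0 * Z**2 * alpha / (np.pi * omega)) * (
        xi * k0 * k1 - 0.5 * xi**2 * (k1**2 - k0**2))

def sigma_gamma_p(W, m_higgs=120.0, m_proton=0.938):
    """Placeholder gamma p -> H p cross section in fb: flat above threshold.
    A real calculation would use the expression quoted in the text."""
    return 1.0 if W > m_higgs + m_proton else 0.0

def sigma_pA(E_p=7000.0, Z=82, gamma_L=2960.0, b_min_fm=7.8):
    """Fold the photon flux with the subprocess cross section (result in fb)."""
    def integrand(omega):
        W = np.sqrt(4.0 * omega * E_p)     # gamma-p c.m. energy, head-on beams
        return photon_flux(omega, Z, gamma_L, b_min_fm) * sigma_gamma_p(W)
    omega_min = (120.0 + 0.938) ** 2 / (4.0 * E_p)   # Higgs production threshold
    omega_max = 10.0 * gamma_L * HBARC / b_min_fm    # flux is negligible beyond this
    value, _ = quad(integrand, omega_min, omega_max, limit=200)
    return value

if __name__ == "__main__":
    print(f"toy sigma(p Pb -> p H Pb) ~ {sigma_pA():.2f} fb (placeholder subprocess)")
```

swapping in the actual @xmath40 of eq . ( [ gammap - xsec ] ) and the photon flux of ref.@xcite would be needed to reproduce the entries of table [ tab1 ] ; the sketch only shows how the pieces fit together .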
in this work we applied the photoproduction mechanism for the ced higgs boson production to @xmath0 and @xmath1 collisions at the lhc . the results show enhanced cross sections for collisions involving au and pb nuclei , which open a new way to detect the higgs boson at the lhc . the gsp is a fundamental aspect to be determined with the future data from the lhc , playing an important role in reliable predictions of diffractive processes . therefore , the photoproduction mechanism offers a new approach for the higgs boson production , showing a cross section competitive with other production mechanisms .
we present the current development of the photoproduction approach for the higgs boson with its application to @xmath0 and @xmath1 collisions at the lhc . we perform separate analyses for the gap survival probability , where we consider a probability of 3% and also a more optimistic value of 10% based on the hera data for dijet production . as a result , the cross section for the exclusive higgs boson production is about 2 fb and 6 fb in @xmath0 collisions and 617 and 2056 fb for @xmath2pb collisions , considering the gap survival factor of 3% and 10% , respectively . affiliation : high energy physics phenomenology group , ufrgs , caixa postal 15051 , cep 91501 - 970 , porto alegre , rs , brazil .
the lack of a convincing detection of hydrogen in the spectrum of any type ia supernova has been difficult to reconcile in the single - degenerate scenario where the non - degenerate companion , in most candidate progenitor channels , is hydrogen - rich . however , @xcite recently announced the discovery of h@xmath0 emission associated with the type ia supernova , sn 2002ic . over a time - span of about + 7 to + 48d from maximum light ( @xmath5 = jd 2452601 = 2002 november 22 ) , the optical spectra of sn 2002ic exhibited similar but weaker features to those of ` normal ' type ia sne . however , strong h@xmath0 emission was also apparent : the h@xmath0 feature consisted of a narrow component ( unresolved at 300 ) atop a broad component ( fwhm @xmath11800 ) . while the narrow component could have been due to an underlying hii region ( but see below ) , @xcite argued that the broad component arose from ejecta / circumstellar medium ( csm ) interaction , and that this interaction also provided the continuum source required to dilute the spectral features of sn 2002ic . by day + 48 , they found that the spectrum could be equally well - matched by either a suitably ` diluted ' coeval spectrum of the type ia sn 1990n , or by an unmodified , roughly coeval spectrum of the type iin sn 1997cy . type iin supernovae ( sne iin ) are so called because of the presence at early times of narrow lines in the spectra originating in a relatively undisturbed circumstellar medium ( csm ) @xcite . their progenitors must therefore have undergone one or more mass - loss phases before explosion . in order to investigate the origin of the hydrogen emission and hence the nature of sn 2002ic and its circumstellar environment , we have acquired high resolution optical spectroscopy at + 256 days , and @xmath6-band ir photometry at + 278 and 380 days . the first results of this study are presented here . we obtained optical spectra of sn 2002ic and its purported host galaxy on 2003 august 05 ( = + 256d ) with the eso very large telescope ( vlt ) unit 2 ( kueyen ) and ultra - violet echelle spectrograph ( uves ) . we used a 3 ( pa=90 ) slit which yielded a resolution of @xmath19 . the seeing was @xmath109 . the exposure times for sn 2002ic and the galaxy were 2200s and 1100s respectively . the data were reduced in the figaro 4 environment . wavelength calibration was by means of a thar arc taken at the end of the exposure of each of the targets . flux calibration was carried out with respect to the spectrophotometric standard feige 110 . a portion of the uves spectrum obtained on day 256 is shown in fig . [ fig : ha ] . the spectrum is dominated by a strong h@xmath0 feature . in addition , a weak broad feature around 9000 is present which is the blend of o i 8446 and the ca ii ir triplet @xcite . in fig.[fig : ha]b we show the h@xmath0 profile in more detail : it comprises a narrow , but resolved , p cygni - like profile atop a broad emission feature . there may also be a very broad feature present , but owing to the blue limit of the spectrum , we are unable to give a complete description of this feature . the @xcite + 217d spectrum extends further to the blue , and they attribute the very broad feature to [ oi]6300,6364 ( fwhm@xmath7 ) . we do not give further consideration to this component . however , we note that as a consequence of the presence of the very broad feature , some authors refer to the `` broad emission feature '' mentioned above and shown in fig .
[ fig : ha ] as the `` intermediate component '' . we shall continue to refer to this as the `` broad '' feature . it is about @xmath8 across the base ( fwhm@xmath11550 ) . in order to extract more detailed information about the narrow feature , we generated a model p cygni profile using a homologously expanding csm above a photosphere , with a rest frame wavelength equal to that of the narrow peak and an exponentially - declining density profile . the p cygni profile parameters were adjusted by eye to match the absorption component , yielding a velocity of 100 at the photosphere , and an e - folding velocity of 30 . the maximum detectable extent of the blue wing of the absorption is @xmath1250 . the p cygni model also demonstrated that the narrow component includes additional emission not taken into account by the absorption . both the narrow emission component and the p cygni profile suggest a csm velocity of 80 - 100 . in addition , there is a small but significant shift between the narrow emission component and the 1500 component in the sense that the narrow component peak is @xmath9 further to the red . the h@xmath0 profile parameters are summarised in table [ tab : ha ] . [ tab : ha ] parameters for the h@xmath0 profile of sn 2002ic at + 256 d . we determined the redshift of sn 2002ic from the narrow emission component of the p cygni profile and found it to be @xmath10 . this is consistent with the measurement of @xcite . our redshift implies a distance of @xmath1280 mpc ( @xmath11 = 70mpc@xmath4 ) . according to @xcite , the redshift of the galaxy @xmath15 e of sn 2002ic ( marked a in fig . [ fig : chart ] ) is 0.22 , thereby ruling out association with the supernova . the only other nearby galaxy is the one marked b in fig . [ fig : chart ] , lying @xmath110 s of the supernova ; @xcite suggested an association but do not report a redshift for this galaxy . however , our uves spectrum of galaxy b indicates @xmath12 , i.e. it is unlikely to be the host galaxy . fortuitously , during our uves observation of the supernova , the extreme eastern end of the slit intercepted the nuclear region of galaxy a. we noticed an emission feature at the corresponding spatial position ( along the slit ) in the spectra , and in the same order as the h@xmath0 feature of sn 2002ic , shifted only slightly in wavelength . assuming the feature was also due to h@xmath0 emission ( but from galaxy a ) , we derive @xmath13 . this indicates that galaxy a must be the host of sn 2002ic . hamuy ( priv . comm . ) confirms that , owing to a target acquisition error , the redshift given for the host galaxy in @xcite is incorrect . the narrow component of the h@xmath0 feature suggests an origin in a wind flowing at @xmath14 . the presence and velocity of the p cygni - like absorption immediately rules out an origin in a line - of - sight hii region i.e. the narrow emission / absorption feature is intrinsic to the supernova or its immediate environment . the similarity of the early - time , low - resolution spectra of sn 2002ic to that of the type iin sn 1997cy has been noted by several authors . we find that the similarity also holds at high resolution ; the late - time h@xmath0 profile of sn 2002ic is compared to those of type iin supernovae observed at comparable epochs in fig . [ fig : hires_sne ] . narrow emission / absorption profiles superimposed on broad emission features have been observed in the type iin events sn 1997ab at 425d and sn 1997eg at + 202d @xcite .
for sn 1997ab the narrow absorption blue wing limit yields a velocity of 90 , superimposed on an emission feature of @xmath11800 fwhm , with wings extending to @xmath14000 . for sn 1997eg , the corresponding figures are 160 , 3800 and @xmath111000 . in both cases the narrow feature is displaced redward of the peak of the broad component by @xmath1600 . @xcite attribute the apparent relative shifts to self - absorption in the intermediate component which preferentially attenuates the red wing . as indicated above , the sn 2002ic narrow component also exhibits a ` redshift ' relative to the broad peak , but at 111 the shift is much smaller , suggesting that self - absorption is less important . the luminosities of the narrow and intermediate h@xmath0 features of sn 2002ic are , respectively , @xmath15 ergs@xmath4 and @xmath16 ergs@xmath4 . the 1500 component is probably produced by the supernova ejecta / wind interaction , as suggested by @xcite . we note that the width of the broad component declined from 1800 to 1500 between days + 47 and + 256 . this gives additional weight to the ejecta / wind interaction scenario . the high late - time luminosity of sn 2002ic allows us to rule out radioactive decay of @xmath17ni as the dominant energy source . the @xmath18 light curve @xcite gives a luminosity of @xmath19 ergs@xmath4 at 250d , whereas the total radioactive luminosity of 0.7m@xmath2 @xmath17ni at this epoch is only @xmath20 ergs@xmath4 . moreover , by + 250 , only about @xmath110% of the decay gamma - rays would be deposited in the ejecta of the presumed type ia sn ( e.g. @xcite ) . therefore , the dominant source of energy for the broad component luminosity , @xmath21 , must be due to the ejecta / csm interaction . in this scenario , @xmath21 is proportional to the kinetic energy dissipation rate across the shock front . we can use this luminosity to estimate the mass - loss rate , @xmath22 , @xcite : @xmath23 where @xmath24 is an efficiency factor which peaks at @xmath10.1 . @xmath25 is the shock velocity , and @xmath26 is the velocity of the unshocked wind , assumed to be freely - expanding . from the broad h@xmath0 line , @xmath25= 2900 , while the narrow feature gives @xmath27=100 . substituting into the above equation , and using the broad component luminosity , we obtain a mass - loss rate of @xmath28/0.1)m@xmath29yr@xmath4 ( a back - of - envelope numerical sketch of this estimate is given below ) . @xcite also show that by using the luminosities of both the broad and narrow emission components , the csm density and mass can be estimated . using the component luminosities given above , we find that the number density at the csm inner limit on day 256 is @xmath30@xmath31 . assuming that the csm was created by a steady wind ( density @xmath32 @xmath33 ) and that the inner limit of the csm on day 256 corresponds to the radius reached by the 2900 shock , we find a total csm mass of @xmath34m@xmath2 , where @xmath35 and @xmath36 are , respectively , the inner and outer limits of the csm , and @xmath37 . @xmath35 can be identified with the shock radius . we now consider the ir emission . on day + 380 , @xmath38 0.04 . such a colour corresponds to a 1430 @xmath39 40 k blackbody . we therefore propose that the late - time ir emission is due to thermal emission from hot dust associated with sn 2002ic or its progenitor . however , it is possible that the ir flux contains a component due to hot ( t @xmath1 10,000 k ) residual photospheric emission such as might be produced by the shock / csm interaction .
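the exact equation used for the mass - loss estimate above is hidden behind the @xmath placeholders in this excerpt . the relation commonly used for such csm - interaction estimates is l = ( eps / 2 ) mdot v_shock^3 / v_wind , so that mdot = 2 l v_wind / ( eps v_shock^3 ) ; the minimal python sketch below evaluates this assumed form with the shock and wind velocities quoted in the text ( 2900 and 100 ) , while the broad - component luminosity , which is not recoverable here , is a clearly labelled placeholder .

```python
"""
Back-of-envelope sketch of the mass-loss-rate estimate from the ejecta/CSM
interaction luminosity.  Assumed (standard) relation:
    L_broad = (eps / 2) * Mdot * v_shock**3 / v_wind
=>  Mdot    = 2 * L_broad * v_wind / (eps * v_shock**3).
v_shock = 2900 km/s and v_wind = 100 km/s follow the text; the luminosity
below is a placeholder, since its value is not quoted in this excerpt.
"""
M_SUN_G = 1.989e33   # g
YEAR_S = 3.156e7     # s

def mdot_from_interaction(L_erg_s, v_shock_kms, v_wind_kms, eps=0.1):
    """Implied mass-loss rate in solar masses per year."""
    v_s = v_shock_kms * 1.0e5   # cm/s
    v_w = v_wind_kms * 1.0e5    # cm/s
    mdot_g_per_s = 2.0 * L_erg_s * v_w / (eps * v_s**3)
    return mdot_g_per_s * YEAR_S / M_SUN_G

if __name__ == "__main__":
    L_broad = 1.0e41  # erg/s, placeholder only (not the measured value)
    print(f"Mdot ~ {mdot_from_interaction(L_broad, 2900.0, 100.0):.1e} Msun/yr"
          " (scales as 0.1/eps and linearly with L)")
```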
we therefore measured the continuum level in the vicinity of 0.94@xmath40 m on day + 256 and extrapolated to day + 380 assuming an exponential decline timescale of 170d @xcite . we then extrapolated to the @xmath41 and @xmath42 bands assuming a rayleigh - jeans law . from this we conclude that @xmath130% and @xmath18.5% of the @xmath41 and @xmath42 band fluxes respectively were due to contamination by the hot photosphere . after subtracting the photospheric component and correcting for a galactic extinction of a@xmath43 ( ned ) , we find that the net ir flux on day 380 can be reproduced by a t=1220k blackbody and a luminosity of @xmath20 ergs@xmath4 . the corresponding figure for + 278d ( assuming the same temperature ) is @xmath44 ergs@xmath4 . we note that if the ir emission were due to dust condensation in the ejecta , for the corresponding 1220k blackbody to attain these luminosities it would need to have expanded at 8000 - 9000 since the supernova exploded . we now consider the location and origin of the hot dust . we first note that as with the @xmath18 emission , contemporary radioactive decay can have made only a minor contribution to the late - time ir luminosity of sn 2002ic . we also note that dust is not expected to condense in type ia explosions . this is consistent with the fact that to produce the observed ir luminosity , the 1220k blackbody surface would have to be located as far out as the @xmath18500 region of the ejecta . dust condensation in such circumstances seems unlikely . we conclude that the ir luminosity arises from a pre - existing dusty csm . heating of this dust can be via ( a ) local heating by the ongoing ejecta / csm shock interaction , or ( b ) photon emission from the supernova yielding an ir echo in the unshocked csm . @xcite find velocities exceeding 10000 in c , o , and ca , which they attribute to the supernova ejecta . thus , local heating of a dusty csm by the ejecta / csm shock might seem to be a possibility . however , the initial flash from the supernova would have evaporated any csm dust to at least @xmath13500 au for carbon - rich grains ( t@xmath45 ) and 16,500 au for oxygen - rich grains ( t@xmath46 ) @xcite . yet a 10000 shock would have reached only 1600 au by + 278d . we therefore rule out local heating of the dust by the ejecta / csm shock . we now test the possibility that the ir emission arose from the heating of csm dust by photons from the supernova i.e. the ir - echo scenario . we use the bolometric light curve of @xcite which can be approximately described as having a peak ( t=0 ) luminosity of @xmath47 ergs@xmath4 and an exponential decline timescale of 170d . we attribute this slow decline to the energy released in the interaction of the shock with the csm . the bolometric light curve is based on @xmath18 photometry , and so does not include possible additional energy from an x - ray precursor . however , given the much higher opacity of dust grains to uv - optical light , it is likely that the x - ray contribution to grain heating will be small . @xcite showed that the ir - echo light curve comprises an initial plateau phase , followed by a decline . the transition from plateau to decline corresponds to the passing of the ellipsoid vertex from the dust - free cavity into the region of unevaporated dust . thus , the radius of the dust - free cavity @xmath48 . we noted that the ir flux from sn 2002ic barely changed between days 278 and 380 .
from this we conclude that the vertex was still within the dust - free cavity on day 380 , implying a cavity radius of @xmath49 au . we use eq.17 in @xcite to estimate the optical depth of dust required to yield the observed ir flux from sn 2002ic . the parameters adopted were : @xmath50 au , d = 280 mpc , dust ir emissivity proportional to @xmath51 for wavelengths down to 0.2@xmath40 m , a mean uv - visual absorption efficiency of 1 , and an initial dust temperature at the cavity boundary assumed to be about equal to an evaporation temperature of 1500k . the csm was assumed to have been produced by a steady wind so that the density is proportional to @xmath33 . we find that the ir flux at both epochs is reproduced with a dust optical depth of @xmath52 . for a gas - to - dust mass ratio of 160 , a grain material density of 3g@xmath31 and a grain radius of 0.1@xmath40 m , this translates ( following @xcite ) into a total csm mass ( including the dust - free cavity ) of @xmath10.3@xmath53m@xmath2 , where @xmath36 is the outer limit of the csm and @xmath54 . thus the csm mass exceeds 0.3m@xmath2 . the corresponding mass loss rate ( again for @xmath54 ) , assuming a wind velocity of 100 , is @xmath55 m@xmath2yr@xmath4 @xcite . it is about @xmath565 the value which @xcite derived for the type iil sn 1979c . compared with the derivations from the h@xmath0 line , the ir analysis produces a lower mass - loss rate ( @xmath11% ) . this discrepancy could , in part , be due to an underestimate of the shock velocity leading to an overestimate of the mass - loss rate derived from the broad @xmath57 line . nevertheless , both analyses indicate a csm mass probably exceeding 0.3m@xmath2 produced by a mass loss rate greater than several times @xmath3m@xmath2yr@xmath4 . the mass - loss rate inferred from this work and that of others is higher than expected from traditional mass loss mechanisms . we remark that the high values of @xmath22 are at least partly a consequence of simplifying assumptions , e.g. clumped winds would mimic a high mass - loss rate . the close similarity between sn 2002ic and type iin sne has raised doubts as to whether sn 2002ic is a _ bona fide _ ia event and whether other type iin ( i.e. core - collapse ) sne , only discovered at late epochs , may have been sn 2002ic - like events . we suggest that the type iin phenomenon is predominantly related to the amount of csm around the progenitor system rather than the type of explosion . a type 1.5 scenario , i.e. the explosion of a single , massive agb star , has been invoked as a possible progenitor of sn 2002ic . we point out that an extremely low metallicity would be required to inhibit mass - loss so as to allow the degenerate core to grow to m@xmath58 , e.g. , a 4m@xmath29 star would need to have @xmath59 @xcite . also , a low mass - loss rate is at odds with the large amount of csm inferred for this event , although we cannot rule out the possibility that the csm is due to a binary companion . furthermore , sn 2002ic exhibited an exceptionally high maximum @xmath60-band luminosity and a much slower post-@xmath1 + 25days decline rate in @xmath61 than is seen in normal sne ia . the opposite effect would be expected for type 1.5 events , i.e. the photometric and spectral evolution should be similar to ii events at early times and dominated by the decay of radioactive ni and co at late times as for type i and iip events @xcite .
any progenitor scenario must satisfy all the observational constraints , _ viz : _ the type ia - like behaviour at early times and the type iin behaviour at late times @xcite ; broad profiles of ca and o @xcite ; an aspherical csm @xcite ; a slow - moving outflow at @xmath62 ; and a dusty csm . it must also explain the apparent rarity of sn 2002ic - like events . taking these observational constraints at face value , we currently favour a system involving a post - agb star . there are several known examples of post - agb stars that have high inferred mass - loss rates and dusty discs ( csm ) ( e.g. iras 08544 - 4431 @xcite ) . these objects have typical outflow velocities of the order of 100 . furthermore , the post - agb phase can be relatively short , @xmath63 yrs @xcite . further planned observations will no doubt provide more clues as to the previous evolution of sn 2002ic . we thank f. patat for expert help in setting up the observations . thanks also go to j. deng , k.s . kawabata , k. nomoto , and y. ohyama for kindly providing us with the + 217d spectrum . based on ddt observations obtained with eso telescopes at the paranal observatories under programme id 271.d-5021 and the united kingdom infrared telescope , which is operated by the joint astronomy centre on behalf of the u.k . particle physics and astronomy research council . r.k . acknowledges support from the ec programme ` the physics of type ia sne ' ( hprn - ct-2002 - 00303 ) and interesting discussions with s. sim and j.s . vink .
we present results from the first high - resolution , high - s / n spectrum of sn 2002ic . the resolved h@xmath0 line has a p cygni - type profile , clearly demonstrating the presence of a dense , slow - moving ( @xmath1100 ) outflow . we have additionally found a huge near - ir excess , hitherto unseen in type ia sne . we argue that this is due to an ir light - echo arising from the pre - existing dusty circumstellar medium . we deduce a csm mass probably exceeding 0.3m@xmath2 produced by a mass loss rate greater than several times @xmath3 m@xmath2yr@xmath4 . for the progenitor , we favour a single degenerate system where the companion is a post - agb star . as a by - product of our optical data , we are able to provide a firm identification of the host galaxy of sn 2002ic . keywords : circumstellar matter ; supernovae : general ; supernovae : individual : sn 2002ic ; stars : winds , outflows ; dust
SECTION 1. SHORT TITLE. This Act may be cited as the ``Reduce and Cap the Federal Workforce Act of 2010''. SEC. 2. REDUCTION AND LIMITATION ON THE TOTAL NUMBER OF FEDERAL EMPLOYEES. (a) Definition.--In this Act-- (1) the term ``agency''-- (A) means an executive agency as defined under section 105 of title 5, United States Code; and (B) shall not include-- (i) the Executive Office of the President; (ii) the Central Intelligence Agency; (iii) the Federal Bureau of Investigation; or (iv) the Secret Service; and (2) the term ``employee''-- (A) means an employee of any agency; and (B) shall not include any employee-- (i) employed by a Federal entity described under paragraph (1)(B); or (ii) designated by the Director of National Intelligence for exclusion for purposes of national security. (b) Agencies Other Than the Department of Defense and the Department of Homeland Security.-- (1) Determination of number of employees.--Not later than 90 days after the date of enactment of this Act, the head of each agency (other than the Department of Defense and the Department of Homeland Security) shall collaborate with the Director of the Office of Management and Budget and determine-- (A) the number of full-time employees employed in that agency on February 16, 2009; and (B) the number of full-time employees employed in that agency at the end of that 90-day period. (2) Reductions by attrition.--If the number of full-time employees employed in an agency determined under paragraph (1)(A) is less than the number of full-time employees employed in that agency on the date occurring 90 days after the date of enactment of this Act, the head of that agency shall ensure that no individual is appointed as a full-time employee in that agency until the number of full-time employees employed in that agency is reduced by attrition to that number determined under paragraph (1)(A). (3) Offset in number of employees.-- (A) In general.--After an agency has reached the number of full-time employees to be in compliance with paragraph (2), the head of that agency shall ensure that the number of full-time employees in that agency is offset by a reduction of 1 full-time employee at that agency for each individual who is appointed as a full-time employee in any agency. (B) Offset if reductions unnecessary.--If the number of full-time employees employed in an agency determined under paragraph (1)(A) is more than the number of full-time employees employed in that agency on the date occurring 90 days after the date of enactment of this Act, the head of that agency shall ensure that the number of full-time employees in that agency is offset by a reduction of 1 full-time employee at that agency for each individual who is appointed as a full-time employee in any agency. (c) Department of Defense and the Department of Homeland Security.-- (1) Determination of number of employees.--Not later than 90 days after the date of enactment of this Act, the Secretary of Defense and the Secretary of Homeland Security shall collaborate with the Director of the Office of Management and Budget and determine the number of full-time employees employed in the Department of Defense and the Department of Homeland Security at the end of that 90-day period. 
(2) Offset in number of employees.--After the 90-day period described under paragraph (1), the Secretary of Defense and the Secretary of Homeland Security shall ensure that the number of full-time employees in the Department of Defense and the Department of Homeland Security determined under paragraph (1) is offset by a reduction of 1 full-time employee at the applicable department for each individual who is appointed as a full-time employee in that department. (d) Information on Total Employees.-- (1) In general.--Except as provided under paragraph (2), the Director of the Office of Management and Budget shall-- (A) publicly disclose-- (i) the total number of Federal employees; (ii) the number of Federal employees in each agency; and (iii) the annual rate of pay by title of each Federal employee at each agency; and (B) update the information described under subparagraph (A) not less than once a year. (2) National security exception.--The Director of National Intelligence may exclude any employee from information to be disclosed under paragraph (1) for purposes of national security.
Reduce and Cap the Federal Workforce Act of 2010 - Requires the head of each executive agency: (1) to determine the number of full-time agency employees on February 16, 2009 (2009 number) and the number of full-time agency employees on the date occurring 90 days after enactment of this Act (current number); (2) if the 2009 number is lower, to ensure that no new employee is appointed until the 2009 number is attained through attrition; and (3) if the current number is lower or once the 2009 number is attained, to maintain that number by offsetting each new appointment by a reduction. Excludes the Department of Defense (DOD), the Department of Homeland Security (DHS), the Executive Office of the President, the Central Intelligence Agency (CIA), the Federal Bureau of Investigation (FBI), and the Secret Service. Requires the Secretary of Defense and the Secretary of Homeland Security to: (1) determine the current number of full-time employees of DOD and DHS; and (2) maintain that number by offsetting each new appointment by a reduction. Requires the Director of the Office of Management and Budget (OMB) to: (1) publicly disclose the total number of federal employees, the number of federal employees in each agency, and the annual rate of pay by title of each federal employee at each agency; and (2) update such information at least once a year. Authorizes the Director of National Intelligence to exclude any employee from such information for purposes of national security.
the predominant cause of death from cardiovascular disease is believed to be coronary artery thrombosis.13 thrombotic occlusion of a coronary artery in response to atherosclerotic plaque rupture is considered the ultimate and key step in the pathogenesis of acute myocardial infarction ( ami).4 the propensity to provoke thrombosis depends on a complex cascade of events involving inflammatory pathways , and more importantly , platelet activation with subsequent aggregation.5 guidelines recommend dual antiplatelet therapy with acetylsalicylic acid and adenosine diphosphate ( adp ) receptor antagonists for a period of up to 1 year following the qualifying ami event , to reduce recurrent thrombosis.69 the introduction of more potent oral antiplatelet agents , such as the more recent adp receptor antagonists ticagrelor and prasugrel , has further reduced the risk of recurrent thrombosis.10,11 however , despite modern treatments , many patients remain at increased risk of future thrombotic events . in recent studies , some 10% - 15% of patients went on to have a major adverse cardiac event during the first 12 months after ami , which was attributed predominantly to thrombotic complications.1215 additionally , there has been a growing concern over the safety profile of oral antiplatelet agents in terms of increased bleeding , which is now known to be a marker of an adverse prognosis and has negatively affected their use.10,16 in order to reduce thrombotic risk even further , oral anticoagulant agents were added to dual antiplatelet therapy , but this was found to be associated with increased bleeding.17,18 this has led to the search for novel antiplatelet agents that reduce thrombotic risk while taking into consideration the potential for excess bleeding . among these are the oral protease - activated receptor ( par)-1 antagonists , which represent a new class of oral antiplatelet agents for patients with atherothrombotic disease . a key step in the process of thrombus formation is the role thrombin plays in the activation of platelets by binding to pars , especially par-1.19 targeting this thrombin signaling receptor has led to greater inhibition of platelet activation and aggregation , and in turn of thrombosis . of the par-1 antagonists studied clinically , efficacy appeared to be superior with vorapaxar compared with atopaxar , but this was associated with a higher risk of serious bleeding.20 only vorapaxar has completed phase iii clinical trial investigation to assess its efficacy and safety in the clinical arena.21,22 the present review provides an overview of the role of adjunctive therapy with vorapaxar in the secondary prevention of atherothrombotic disease , particularly ami , and the potential role for vorapaxar in modern practice . hemostasis is considered a protective mechanism that maintains the integrity of blood vessels after vascular injury . thrombin signaling in platelets contributes to hemostasis and thrombosis by converting circulating fibrinogen into fibrin , the fibrous matrix of blood clots . the cellular effects of thrombin are mainly mediated by pars.23 the mechanism of par activation and signaling is complex . pars are g protein - coupled receptors that are expressed in vascular endothelial cells and activated by cleavage of part of their extracellular domain , causing the physiological response.24 they play an important role in thrombosis , coagulation , hemostasis , atherosclerosis , and inflammation.2527 there are four known types of pars , numbered from par-1 to par-4 .
par-1 is activated at a much lower thrombin concentration than par-4 , resulting in rapid platelet activation.28 many of the downstream mediators of the par-1 pathway , such as thromboxane a2 and adp , are involved in platelet activation . in an animal model , administration of the par-1 antagonist vorapaxar caused complete and dose - dependent inhibition of thrombin receptor activating peptide ( trap)-induced platelet aggregation without affecting the coagulation cascade , including activated clotting time , prothrombin time , and activated partial thromboplastin time , a finding that is consistent with the fact that this agent interacts with specific platelet receptors,29 and suggested that it could inhibit thrombosis without undue bleeding risk . vorapaxar ( formerly known as sch 530348 ) is a synthetic tricyclic 3-phenylpyridine derived from the natural product himbacine . it is an oral competitive par-1 antagonist that exerts its action by inhibition of trap - induced platelet aggregation in a dose - dependent manner30 ( figure 1 ) . the loading dose is 20 or 40 mg , with a higher dose achieving greater inhibition of platelet aggregation , and the maintenance dose is 2.5 mg daily.31 vorapaxar is rapidly absorbed via the gastrointestinal tract , with high bioavailability . its peak concentration is reached 1 - 2 hours after oral loading and it has a half - life of 159 - 310 hours , with no antidote available at present.32 vorapaxar is predominantly metabolized via the cytochrome p450 3a4 pathway and is mainly excreted in bile with only minor renal excretion.32,33 a summary of key randomized trials evaluating the use of vorapaxar in the three phases of clinical trial investigation is shown in table 1 . in healthy caucasian subjects , single 20 and 40 mg doses of vorapaxar administered in a randomized , double - blind placebo - controlled fashion inhibited trap - induced platelet aggregation ( > 80% inhibition ) at 1 hour , and this level of inhibition was sustained for up to 72 hours.34 multiple ascending doses for 28 days ( 1 , 3 , or 5 mg / day ) resulted in complete inhibition of platelet aggregation on day 1 ( 5 mg / day ) and day 7 ( 1 and 3 mg / day ) . adverse events were generally mild and unrelated to dose . in another randomized open - label trial in healthy japanese and matched caucasian subjects , complete inhibition of trap - induced platelet aggregation was achieved most rapidly with vorapaxar 40 mg and was sustained with a maintenance dose of 2.5 mg daily.35 no racial difference with regard to the safety , pharmacokinetics , or pharmacodynamics of vorapaxar was found . tra - pci ( thrombin receptor antagonist percutaneous coronary intervention ) was a multicenter , randomized , double - blind , placebo - controlled trial of patients undergoing non - urgent or elective percutaneous coronary intervention ( pci).31 this was a phase ii trial involving 1,030 patients comparing different oral loading doses of 10 , 20 , and 40 mg vorapaxar followed by maintenance doses of 0.5 , 1 , and 2 mg daily against matched placebo in a 3:1 ratio . after the loading doses , the vorapaxar group continued receiving vorapaxar maintenance doses and the placebo group continued placebo for 60 days after pci . patients were continued on standard dual antiplatelet therapy with acetylsalicylic acid and adp receptor antagonists during the study .
the primary endpoint was the incidence of timi ( thrombolysis in myocardial infarction ) major bleeding ( defined as intracranial hemorrhage , or overt bleeding associated with a fall in hemoglobin > 5 g / dl ) , or timi minor bleeding ( defined as overt clinical signs of bleeding associated with a hemoglobin reduction of 3 - 5 g / dl ) in the pci cohort . the secondary endpoints were overt bleeding that did not meet timi criteria ( defined as clinically overt signs of bleeding with a reduction in hemoglobin <3 g / dl ) , or ischemic events ( defined as a composite of death , myocardial infarction [ mi ] , and stroke ) . the study showed no increased risk of timi major or minor bleeding or non - timi bleeding with vorapaxar using any of the dosing regimens compared with placebo . as the study was underpowered for efficacy , there was only a non - significant reduction in ischemic events among pci - treated patients with vorapaxar using any of the dosing regimens compared with placebo ( odds ratio 0.67 ; 95% confidence interval [ ci ] 0.33 - 1.34 ) . the conclusion of the tra - pci study was that , in addition to standard dual antiplatelet therapy , vorapaxar was well tolerated and not associated with an increased risk of bleeding compared with placebo in patients undergoing pci . another phase ii trial was reported by goto et al.36 this was a multicenter , randomized , double - blind , placebo - controlled trial performed to assess the efficacy and safety of vorapaxar in japanese patients with non - st - segment elevation mi planned for pci . the study involved 117 patients and compared two different vorapaxar oral loading doses of 20 mg and 40 mg followed by maintenance doses of 1 mg and 2.5 mg daily against matched placebo in a 4:1 ratio . the patients received loading doses of standard - of - care medication at the time of the study ( acetylsalicylic acid , ticlopidine , and heparin ) as well as a loading dose of the study treatment ; the vorapaxar group continued receiving vorapaxar maintenance doses and the placebo group continued placebo for 60 days after pci . patients were continued on standard dual antiplatelet therapy with acetylsalicylic acid and adp receptor antagonists during the study . the efficacy endpoint was major adverse cardiac event or all - cause death and the safety endpoint was timi major and minor bleeding or non - timi bleeding . the study showed a significant reduction in peri - procedural ami in the group treated with vorapaxar compared with the placebo group ( 16.9% versus 42.9% , respectively ; p=0.013 ) . peri - procedural ami was defined as elevation of cardiac enzymes above three times the upper limit of normal , with at least a 50% increase from the value prior to the procedure . the incidence of combined timi major and minor bleeding was 14% in the vorapaxar group and 10% in the placebo group . timi major bleeding occurred in five patients ( 7% ) in the vorapaxar group versus none in the placebo group . the rates of non - timi bleeding or bleeding of any severity were comparable between the two groups . the authors concluded that in addition to standard dual antiplatelet therapy , vorapaxar significantly reduced the incidence of peri - procedural mi in japanese patients undergoing urgent pci without resulting in excess bleeding .
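the phase ii results above are reported as an odds ratio with a 95% confidence interval ( or 0.67 , 95% ci 0.33 - 1.34 ) . as a reminder of how such figures are derived , the short python sketch below computes an odds ratio and a wald - type confidence interval from a 2x2 table ; the event counts in the example are invented for illustration and are not the tra - pci data .

```python
"""
Illustrative sketch of how an odds ratio and a Wald-type 95% confidence
interval are computed from a 2x2 table of events / non-events.  The counts
below are invented for illustration; they are NOT the TRA-PCI data.
"""
import math

def odds_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Return (OR, lower, upper) comparing group A vs group B.
    Assumes no zero cells (otherwise a continuity correction is needed)."""
    a, b = events_a, total_a - events_a   # group A: events, non-events
    c, d = events_b, total_b - events_b   # group B: events, non-events
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

if __name__ == "__main__":
    # hypothetical counts: 40/600 events with drug versus 18/200 with placebo
    print("OR = %.2f (95%% CI %.2f-%.2f)" % odds_ratio_ci(40, 600, 18, 200))
```

the same logic underlies the hazard ratios quoted for the phase iii trials below , except that those are estimated from time - to - event ( cox ) models rather than from a single 2x2 table .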
to date , two large randomized clinical trials have been conducted , ie , tracer ( thrombin receptor antagonist for clinical event reduction in acute coronary syndrome ) and tra 2p - timi 50 ( thrombin receptor antagonist in secondary prevention of atherothrombotic ischemic events - thrombolysis in myocardial infarction 50).21,22 several hypothesis - generating subgroup analyses were derived from these trials and are discussed in this review . tracer was the first large study to evaluate the efficacy and safety of vorapaxar.21 it was a multicenter , randomized , double - blind , placebo - controlled , phase iii trial , in which patients with non - st - segment elevation mi ( 12,944 patients ) received vorapaxar at a loading dose of 40 mg and a daily maintenance dose of 2.5 mg thereafter , or matched placebo . management of the patients included medical therapy ( 32.2% ) , pci ( 57.7% ) , and coronary artery bypass grafting ( cabg , 10.1% ) , according to usual clinical care . patients were continued on standard dual antiplatelet therapy with acetylsalicylic acid and an adp receptor antagonist during the study . the primary efficacy endpoint was a composite of cardiovascular death , mi , stroke , recurrent ischemia with rehospitalization , and urgent coronary revascularization . the primary safety endpoints were a composite of moderate or severe bleeding according to the gusto ( global use of strategies to open occluded coronary arteries ) classification and clinically significant bleeding according to the timi classification . the trial was terminated early owing to safety concerns . at a median follow - up of 502 days , the study primary composite ischemic endpoint was non - significantly reduced in patients randomized to vorapaxar when compared with placebo ( 18.5% versus 19.9% ; hazard ratio [ hr ] 0.92 ; 95% ci 0.85 - 1.01 ; p=0.07 ) . however , a secondary non - prespecified ischemic endpoint ( defined as a composite of cardiovascular death , mi , or stroke ) was significantly reduced with vorapaxar ( 14.7% versus 16.4% ; hr 0.89 ; 95% ci 0.81 - 0.98 ; p=0.02 ) . the rate of moderate or severe bleeding ( gusto classification ) was higher with vorapaxar compared with placebo ( 7.2% versus 5.2% ; hr 1.35 ; 95% ci 1.16 - 1.58 ; p<0.001 ) . also , the rate of intracranial hemorrhage was higher with vorapaxar ( 1.1% versus 0.2% ; hr 3.39 ; 95% ci 1.78 - 6.45 ; p<0.001 ) compared with placebo . the data and safety monitoring board recommended premature termination of the trial as a result of increased bleeding risk with vorapaxar . the conclusion of the study was that , when added to standard dual antiplatelet therapy , vorapaxar partially reduced ischemic events but significantly increased bleeding , including major and intracranial bleeding . the net clinical benefit , defined as the difference in ischemic and bleeding event rates , was evaluated in a post hoc analysis of the tracer trial.37 the analysis was performed by application of multivariate risk stratification strategies , which were unique to this analysis and not widely validated . the results showed that vorapaxar was associated with an improved net benefit in a large group of patients with acute coronary syndrome ( acs ) at high risk of recurrent ischemic events and at low risk of bleeding ( 26% of patients ; net benefit + 2.8% ) . however , among patients with a high risk of bleeding and irrespective of ischemic risk ( low or high ) , vorapaxar was associated with a worse net benefit ( 11% of patients ; net benefit - 3% ) .
vorapaxar exerted a neutral effect on those with a low risk of bleeding and a low risk of ischemic events ( 63% of patients ; net benefit 0.1% ) . the results of this analysis highlight the potentially beneficial role of vorapaxar in patients at high risk of ischemic events and low risk of bleeding , but similarly , the potential for harm using vorapaxar in patients at high risk of bleeding . although the tracer trial did not meet its primary efficacy endpoint , a significant reduction in the rate of mi was observed with vorapaxar compared with placebo . therefore , the effect of vorapaxar on mi was further explored in a post hoc analysis.38 a blinded , independent central endpoint adjudication committee prospectively defined and classified mi according to the universal mi definition.39 during a median follow - up of 502 days , 1,580 mi events occurred in 1,319 patients . compared with placebo , vorapaxar reduced the hazard of a first mi of any type by 12% ( hr 0.88 ; 95% ci 0.79 - 0.98 ; p=0.021 ) and the hazard of total numbers of mi ( first and subsequent ) by 14% ( hr 0.86 ; 95% ci 0.77 - 0.97 ; p=0.014 ) . also , vorapaxar reduced type 1 mi ( the most common type ) by 17% ( hr 0.83 ; 95% ci 0.73 - 0.95 ; p=0.007 ) , but not type 4a mi ( pci - related ; hr 0.90 ; 95% ci 0.73 - 1.12 ; p=0.35 ) compared with placebo . these findings support the potential role of vorapaxar in the management of acs patients at high risk of future mi events . a post hoc analysis was performed for the subgroup of 1,312 patients who underwent cabg in the tracer trial.40 of these , 78% were on vorapaxar at the time of surgery . compared with placebo , the vorapaxar group had a significant 45% reduction in the incidence of the primary composite ischemic endpoint , ie , a composite of cardiovascular death , mi , stroke , recurrent ischemia with rehospitalization , and urgent coronary revascularization ( hr 0.55 ; 95% ci 0.36 - 0.83 ; p=0.005 ) . these findings differed significantly from the non - cabg group , with a significant interaction ( p=0.012 ) . cabg - related major bleeding was similar with vorapaxar and placebo ( 9.7% versus 7.3% ; hr 1.36 ; 95% ci 0.92 - 2.02 ; p=0.12 ) . although derived from subgroup analysis , these results show promise for use of vorapaxar in acs patients undergoing cabg . another post hoc analysis was performed according to stent type in the patients who underwent pci in the tracer trial ( n=7,479).41 the efficacy and safety of vorapaxar among pci patients were largely consistent with the overall tracer trial results . a trend toward reduction in ischemic events and less bleeding was noted in patients who had bare metal stents compared with drug - eluting stents . in another post hoc analysis of acs patients who were initially managed with medical treatment ( n=4,194 ) , the efficacy and safety of vorapaxar appeared consistent with the overall tracer trial results.42 tra 2p - timi 50 was a secondary prevention study.22 this was a multicenter , randomized , double - blind , placebo - controlled , phase iii trial . patients with a history of mi , ischemic stroke , or peripheral arterial disease ( 26,449 patients ) received vorapaxar ( 2.5 mg daily ) or matching placebo . patients were continued on standard - of - care therapy with acetylsalicylic acid or adp receptor antagonists during the study .
the study excluded patients with a high risk of bleeding , including those who had a history of bleeding diathesis or recent active bleeding , were receiving concurrent anticoagulation therapy , or had active hepatobiliary disease . the primary efficacy endpoint was the composite of cardiovascular death , mi , or stroke . the primary safety endpoints were a composite of moderate or severe bleeding according to the gusto classification and clinically significant bleeding according to the timi classification . the composite of cardiovascular death , mi , stroke , or recurrent ischemia leading to urgent coronary revascularization was the major secondary efficacy endpoint . the trial was terminated early owing to safety concerns . at a median follow - up of 30 months , the primary efficacy endpoint occurred in 9.3% of the vorapaxar group and in 10.5% of the placebo group ( hr 0.87 ; 95% ci 0.80 - 0.94 ; p<0.001 ) . the secondary efficacy endpoint occurred in 11.2% of the vorapaxar group and in 12.4% of the placebo group ( hr 0.88 ; 95% ci 0.82 - 0.95 ; p=0.001 ) . the ischemic benefit observed with vorapaxar was driven mainly by a reduction in mi ( 5.2% versus 6.1% ; hr 0.83 ; 95% ci 0.74 - 0.93 ; p=0.001 ) . also , the incidence of cardiovascular death or mi was lower with vorapaxar ( 7.3% versus 8.2% ; p=0.002 ) . no treatment difference was found in the incidence of stroke or death from any cause . the rate of moderate or severe bleeding ( gusto classification ) was higher with vorapaxar than with placebo ( 4.2% versus 2.5% ; hr 1.66 ; 95% ci 1.43 - 1.93 ; p=0.001 ) . the rate of intracranial hemorrhage was also notably higher with vorapaxar ( 1.0% versus 0.5% ; hr 1.94 ; 95% ci 1.39 - 2.70 ; p<0.001 ) . the data and safety monitoring board recommended premature termination of the trial in patients with a history of stroke or new stroke as a result of an increased risk of intracranial hemorrhage with vorapaxar . the conclusion of the study was that vorapaxar in addition to standard therapy reduced the risk of the composite of cardiovascular death , mi , and stroke , with the most benefit observed in patients with stable atherosclerotic disease , particularly those with a previous history of mi . however , this came with a significant increase in bleeding risk , particularly intracranial hemorrhage in patients with a history of stroke . in a subgroup analysis of patients with a history of ischemic stroke ( n=4,883),43 vorapaxar increased the risk of intracranial hemorrhage compared with placebo ( 2.5% versus 1.0% ; hr 2.52 ; 95% ci 1.46 - 4.36 ; p<0.001 ) . also , vorapaxar increased the risk of moderate and severe bleeding ( 4.2% versus 2.4% ; hr 1.93 ; 95% ci 1.33 - 2.79 ; p<0.001 ) , without any significant effect on the primary ischemic endpoint ( 13.0% versus 11.7% ; p=0.75 ) . this analysis highlighted the potential harm of vorapaxar in patients with a history of stroke . another large subgroup analysis was performed of patients with a prior mi within the previous 2 weeks to 12 months ( 17,779 patients).44 vorapaxar significantly reduced the primary ischemic endpoint compared with placebo ( 8.1% versus 9.7% ; hr 0.80 ; 95% ci 0.72 - 0.89 ; p<0.0001 ) . this benefit was consistent in all key subgroups , including subgroup analyses based on the timing between the qualifying mi event and randomization : <3 months ( hr 0.82 ; 95% ci 0.70 - 0.95 ; p=0.011 ) , 3 - 6 months ( hr 0.79 ; 95% ci 0.65 - 0.97 ; p=0.023 ) , and > 6 months ( hr 0.78 ; 95% ci 0.62 - 0.97 ; p=0.026 ) .
however , the observed benefit occurred at the cost of an excess of moderate or severe bleeding ( vorapaxar group 3.4% versus placebo group 2.1% ; hr 1.61 ; 95% ci 1.31 - 1.97 ; p<0.0001 ) and clinically significant bleeding ( vorapaxar group 15.1% versus placebo group 10.4% ; hr 1.49 ; 95% ci 1.36 - 1.63 ; p<0.0001 ) . in this subgroup analysis , there was no significant risk of intracranial hemorrhage associated with vorapaxar compared with placebo ( 0.6% versus 0.4% ; hr 1.54 ; 95% ci 0.96 - 2.48 ; p=0.076 ) . in a further analysis of patients at low bleeding risk , defined as those < 75 years of age and without a history of stroke or transient ischemic attack , similar results were obtained , with still significantly higher rates of moderate or severe bleeding with vorapaxar than with placebo ( 2.7% versus 1.8% ) , although with fewer bleeds overall . the conclusion of this analysis was that prolonged treatment with vorapaxar when added to standard antiplatelet therapy may be beneficial for long - term secondary prevention in patients with prior mi . the efficacy of vorapaxar in terms of the occurrence of stent thrombosis ( defined using academic research consortium criteria ) was recently investigated.45,46 during a median follow - up of 30 months , there were 152 definite stent thrombosis events , with the majority ( 92% ) occurring late ( 30 days to 1 year ) or very late ( > 1 year ) . vorapaxar consistently reduced stent thrombosis , including very late stent thrombosis ( 1.1% versus 1.4% ; hr 0.71 ; 95% ci 0.51 - 0.98 ; p=0.037 ) , regardless of dual antiplatelet use , stent type , history of diabetes , or time from pci .
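a complementary way to read the efficacy and safety rates reported above ( eg , the tra 2p - timi 50 primary endpoint of 9.3% versus 10.5% and gusto moderate or severe bleeding of 4.2% versus 2.5% ) is through absolute risk differences and the corresponding number needed to treat or harm ; the short sketch below is purely illustrative and ignores censoring and follow - up time , which a formal survival analysis would not .

```python
# illustrative conversion of the event rates quoted above into absolute risk
# differences and number needed to treat ( nnt ) / number needed to harm ( nnh ) .
# rates are taken from the text ( tra 2p - timi 50 , ~30 months median follow - up ) ;
# the calculation ignores censoring , so it is only a back - of - the - envelope summary .

def risk_difference(control_pct, treated_pct):
    return control_pct - treated_pct            # percentage points

def number_needed(risk_diff_pct):
    return 100.0 / abs(risk_diff_pct)           # patients per event prevented / caused

arr_efficacy = risk_difference(10.5, 9.3)       # 1.2 points lower with vorapaxar
ari_bleeding = risk_difference(2.5, 4.2)        # -1.7 points , ie , 1.7 points higher

print(f"nnt to prevent one primary endpoint event : {number_needed(arr_efficacy):.0f}")  # ~83
print(f"nnh to cause one moderate / severe bleed : {number_needed(ari_bleeding):.0f}")   # ~59
```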
patients with atherosclerosis and those with prior thrombotic events such as mi are at increased risk of future thrombotic events . since platelet activation is a key step in thrombus formation , aggressive secondary preventive measures with antiplatelet agents have been devised to reduce thrombotic risk . vorapaxar , a par-1 antagonist , has been tested in addition to dual antiplatelet therapy for prevention of thrombotic events , mostly in patients with a history of mi . although phase ii trials showed a significant reduction in recurrent ischemic events with no increase in bleeding , phase iii trials showed similar reductions in ischemic events but revealed an increase in major bleeding , including intracranial bleeding , associated with vorapaxar .
there are currently no ongoing trials further assessing the efficacy and safety of vorapaxar in the setting of mi . based on these findings , vorapaxar was approved for use only in patients at high risk of thrombosis and low risk of bleeding . subsequently , the us food and drug administration has recommended vorapaxar as an addition to dual antiplatelet therapy for treatment of patients with a history of mi , a low risk of bleeding , and no prior stroke or transient ischemic attack.47 however , given the concerns around bleeding associated with the drug , its use is still restricted due to the challenges of safely identifying patients at high risk of thrombosis and low risk of bleeding who may gain the most from vorapaxar .
acute myocardial infarction ( ami ) is generally attributed to coronary atherothrombotic disease . platelet activation is essential for thrombus formation and is thus an important target for pharmacological intervention to prevent and treat ami . despite contemporary treatment with dual antiplatelet therapy , including acetylsalicylic acid and adenosine diphosphate receptor antagonists , patients with prior ami remain at increased risk of future thrombotic events . this has stimulated the search for more potent antithrombotic agents . among these is the oral protease - activated receptor-1 antagonist vorapaxar , which represents a new oral antiplatelet agent to reduce thrombotic risk in patients with atherothrombotic disease . the tracer and the tra 2p - timi 50 trials concluded that vorapaxar in addition to standard therapy reduced ischemic adverse cardiac events . a remarkable benefit was observed in patients with stable atherosclerotic disease , particularly those with a previous history of ami . although favorable effects were seen in reduction of adverse cardiac events , this was associated with excess major and intracranial bleeding , particularly in patients at high risk of bleeding and those with a history of stroke or transient ischemic attack . currently , the lack of a reliable individualized risk stratification tool to assess patients for thrombotic and bleeding tendencies in order to identify those who might gain most net clinical benefit has led to limited use of vorapaxar in clinical practice . vorapaxar may find a niche as an adjunct to standard care in patients at high risk of thrombotic events and who are at low risk of bleeding .
the development of ovarian follicles begins during fetal life with the transformation of primordial germ cells into oocytes enclosed in structures called follicles [ 1 , 2 ] . some of these follicles are recruited to start a long process of growth and differentiation , during which the proteins required for oocyte maturation are progressively synthesized and accumulated [ 3 , 4 ] . all events related to follicular development are regulated by appropriate signals originating from the growing oocyte itself and from the somatic cells that surround it [ 5 , 6 ] and also by complex interactions between gonadotropin hormones , sex steroids , and diverse growth factors [ 7 , 8 ] . the sex steroids produced by follicular cells are known to play major roles in the regulation of ovarian function . when present in the systemic circulation , these steroids actively participate in the regulation of pituitary gonadotropin secretion , and when present in the ovarian microenvironment , they act as important paracrine factors for the maintenance of follicular development . although much of the information about the role of sex steroids in ovarian functioning has been obtained in studies directed at the action of estrogens [ 10 - 12 ] and progestogens [ 13 - 15 ] , increasing attention is being devoted to the action of androgen hormones because the activation of the androgen receptor located in follicular cells [ 16 , 17 ] modulates the expression and activity of genes important for the maintenance of ovarian follicle development [ 17 - 19 ] . additional evidence of the action of androgens in the regulation of folliculogenesis has arisen from in vitro studies showing that various androgens , including testosterone , androstenedione , and dihydrotestosterone , can stimulate the growth and development of ovarian follicles in mammals [ 20 - 22 ] . the reduction of reproductive function and the development of premature ovarian failure in mice with nonselective deletion of the androgen receptor gene [ 23 - 25 ] support the hypothesis of the involvement of androgens in the regulation of follicular development . reinforcing these findings , mice carrying this deletion show impaired in vitro development of preantral follicles . additionally , a more pronounced expression of androgen receptors has been reported to occur in preantral follicles [ 27 , 28 ] , suggesting a major action of androgens during the initial stages of folliculogenesis . from a clinical point of view , polycystic ovary syndrome ( pcos ) is a nosologic entity that affects approximately 5 - 10% of women of reproductive age and is characterized by increased ovarian androgen production and chronic anovulation . the excessively androgenic microenvironment of the ovary is believed to have a negative impact on follicular development , which , in addition to lh hypersecretion , promotes follicle stagnation in the early stages of development ( initial antral ) , inhibiting the development of a dominant and ovulatory follicle and leading to chronic anovulation and infertility . among the various therapeutic alternatives for infertile patients with this diagnosis , the results obtained are limited in terms of reproductive outcome , likely due to the impaired quality of the oocytes developed in hyperandrogenic environments . another therapeutic possibility in these cases is the in vitro fertilization procedure , in which patients with pcos , despite yielding a larger number of oocytes , have similar pregnancy rates .
this suggests a poorer utilization of the oocytes obtained , again indicating impaired oocyte quality . therefore , there is evidence in the literature of the participation of androgens in follicular development both as essential adjuvants and as harmful agents when present in excessive amounts . this indicates the relevance of understanding the role of androgens in the regulation of folliculogenesis , as well as the possibility of recreating in vitro conditions capable of guaranteeing the full growth of ovarian follicles and mimicking as much as possible the in vivo intrafollicular environment . on this basis , the objective of the current review was to present the most relevant data regarding the involvement of androgens in the regulation of folliculogenesis and to provide information for the design of future culture strategies that can be used to promote the in vitro development of ovarian follicles . the androgens androstenedione , testosterone , and dihydrotestosterone are primarily synthesized from cholesterol and are produced by the ovary in a sequential manner together with the other sex steroids , progestogens and estrogens , with each steroid serving as a substrate for the subsequent one in a cascade of events known as steroidogenesis [ 32 - 34 ] . the classical two - cells - two - hormones model describes the role of follicular cells ( theca and granulosa ) and of gonadotropins ( follicle - stimulating hormone ( fsh ) and luteinizing hormone ( lh ) ) in steroid synthesis and secretion in the ovary , with emphasis on the cooperation of the two cell types that is necessary for androgen production . in general , the androstenedione synthesized from progestogens is converted to testosterone by the action of the enzyme 17-hydroxysteroid dehydrogenase in theca cells under the lh stimulus , and the produced androgen is passively transported to the granulosa cells , where it is converted into estrogen by the action of aromatase under the fsh stimulus ( figure 1 ) . owing to this conversion of testosterone into estradiol in the granulosa , many of the actions of androgens on the growth and differentiation of ovarian follicles can be mediated indirectly , with androgens acting as precursors in the biosynthesis of estrogens . although the actions of estrogens on the ovary are well known in terms of the pattern of expression and function of estrogen receptors , little is known about the direct involvement of androgens , in terms of their interaction with their specific receptor , in the regulation and maintenance of folliculogenesis . the cellular actions of androgens require binding to and activation of a specific receptor , the androgen receptor ( ar ) . both the protein and the messenger rna of ar have been detected in the ovary of various mammalian species , such as rodents [ 38 , 39 ] , cattle [ 40 , 41 ] , sheep , swine [ 43 , 44 ] , non - human primates [ 16 , 17 ] , and humans [ 45 , 46 ] . although most of these studies have indicated that granulosa cells are the predominant sites of ar expression , theca and ovarian stromal cells also express ar [ 20 , 47 , 48 ] . in oocytes , ar expression exhibits an evolutionary profile : it is highly expressed in amphibians , moderately expressed in rodents , weakly expressed in ruminants [ 42 , 44 ] , and incipient or absent in non - human primates and humans [ 17 , 50 - 52 ] .
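returning briefly to the two - cells - two - hormones model summarized above , the cascade can be encoded schematically in code ; the sketch below is purely illustrative , and the simplified enzyme list is an assumption for readability rather than a complete map of ovarian steroidogenesis .

```python
# schematic encoding of the classical two - cells - two - hormones model described above .
# each step lists the cell type , the stimulating gonadotropin , and the enzyme ;
# the cascade is deliberately simplified for illustration .

STEROIDOGENESIS_STEPS = [
    {"from": "cholesterol",     "to": "progestogens",    "cell": "theca",     "stimulus": "lh",
     "enzyme": "cholesterol side - chain cleavage and downstream enzymes"},
    {"from": "progestogens",    "to": "androstenedione", "cell": "theca",     "stimulus": "lh",
     "enzyme": "17 - hydroxylase / lyase"},
    {"from": "androstenedione", "to": "testosterone",    "cell": "theca",     "stimulus": "lh",
     "enzyme": "17 - hydroxysteroid dehydrogenase"},
    {"from": "testosterone",    "to": "estradiol",       "cell": "granulosa", "stimulus": "fsh",
     "enzyme": "aromatase"},
]

def trace(precursor):
    """print the downstream products of a given precursor in the simplified cascade."""
    for step in STEROIDOGENESIS_STEPS:
        if step["from"] == precursor:
            print(f'{step["from"]} -> {step["to"]} '
                  f'({step["cell"]} cell , {step["stimulus"]} , {step["enzyme"]})')
            trace(step["to"])

trace("cholesterol")
```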
in rodents and primates , ar expression appears to be regulated throughout follicular development , being increased in ovaries containing a larger number of preantral and small - diameter antral follicles and reduced in ovaries containing periovulatory follicles [ 53 , 54 ] . analysis of expression in isolated follicles shows that , in these species , follicles in the early stages of development express larger amounts of ar than those in more advanced stages ( table 1 ) [ 20 , 55 ] . additionally , a differential gradient of ar expression is noted in mature follicles , with ar weakly expressed in mural granulosa cells and highly expressed in cumulus cells . the profiles of ar mrna and protein in the different follicular classes are presented in table 1 . like all steroid hormones , the androgens primarily exert their functions by binding and activating specific nuclear receptors that trigger the intracellular events responsible for initiating the transcription of target genes [ 57 , 58 ] . additionally , the androgens can also exert their effects by interacting with receptors located on the cell membrane to perform rapid , non - genomic actions involved in the activation of various transcription factors [ 59 - 61 ] . thus , the activated ars transcriptionally regulate the expression of a selected group of genes via direct or indirect association with the regulatory regions of upstream ( enhancer / promoter ) elements [ 37 , 62 , 63 ] . although several autocrine and/or paracrine factors involved in the regulation of the development of ovarian follicles have been described [ 6 , 64 , 65 ] , only some genes responsible for the transcription of these factors have been tested as ar targets . particularly important among them are the genes related to fsh receptors , insulin - like growth factor-1 ( igf-1 ) , and the aromatase enzyme [ 18 , 19 , 66 , 67 ] . the main findings regarding the direct action of androgen hormones on the in vivo and in vitro control of follicular development in mammals are based on the transcriptional actions of ar in follicular cells . as reported for ar expression in the oocytes , the physiological role of androgens in oocyte maturation appears to have been lost over evolution , being phylogenetically replaced by the action of gonadotropins . although the effects of chronic exposure to high androgen concentrations during the prenatal or postnatal period in different mammalian species are extensively documented and are correlated with irregularities of the reproductive cycle and changes in ovarian morphology in patients with a diagnosis of pcos , few studies have been designed to assess the effects of short - term exposure to low androgen concentrations . the main results obtained in studies evaluating the effects of in vivo administration of androgen hormones on early follicular development in different species are listed in table 2 and described below . subcutaneous implants for the controlled release of low androgen doses have been used as an efficient tool for the study of the effects of short - term androgen exposure in experimental animal models . in non - human primates , subcutaneous implants containing different doses of testosterone promoted a marked increase in follicular recruitment , growth , and survival .
these effects appear to be mediated by a local amplification of the action of both igf-1 and fsh , because exposure to testosterone induced an increase in igf-1 , igf-1 receptor , and fsh receptor mrna in the ovaries of these animals [ 18 , 19 , 66 ] . the increase in follicular recruitment was positively correlated with increases in igf-1 and igf-1 receptor mrna in the oocytes of primordial follicles , suggesting an indirect action of androgens via the igf-1 system on follicular activation . all of the effects on both ovarian morphology and the igf-1 and fsh systems induced via exposure to testosterone were fully replicated when subcutaneous implants containing the nonaromatizable androgen dihydrotestosterone were used , showing that the effects of androgens are mediated by ar and not by conversion to estrogens . similarly , the in vitro treatment of granulosa cells obtained from the antral follicles of swine with dihydrotestosterone increases the action of igf-1 as a stimulus to cell proliferation , as well as the effect of growth differentiation factor 9 ( gdf9 ) in the presence of igf-1 [ 71 , 72 ] , both of which are blocked by the ar antagonist hydroxyflutamide . the positive correlation between expression of the ar gene and the proliferation of granulosa cells and follicular growth further supports the hypothesis of the involvement of androgens in follicular growth via ar ( figure 2 ) . in other mammalian species , short - term exposure to low androgen doses has also been evaluated : the intramuscular administration of dihydrotestosterone during the first 3 days of the early follicular phase and during the last 3 days of the late follicular phase of the reproductive cycle significantly increased the ovulatory rate of the treated animals . the administration of 10 - fold diluted doses from the 13th day of the estrous cycle to the next estrus promoted a significant increase in fsh receptor mrna expression in periovulatory follicles , suggesting that the increased ovulatory response detected in androgenized animals is in fact related to an increased sensitivity of ovarian follicles to gonadotropic action induced by the androgen . in rodents , the administration of subcutaneous implants containing dihydrotestosterone promoted increased expression of fsh receptor mrna in preantral follicles . additionally , the potential for in vitro development of preantral follicles isolated from the ovaries of androgenized animals was superior to that of follicles from nonandrogenized animals , showing that in this species the androgens also promote follicular growth via increased gonadotropic sensitization . taken together , these findings suggest that the in vivo action of androgens via ar may regulate both the expression and the action of one or more of the ovarian growth factors necessary for the regulation of follicular recruitment and growth . it should be noted that no in vivo studies have been conducted in humans because of the side effects that systemic androgen exposure could cause in women . a satisfactory model for these evaluations in humans is represented by the study of oocytes obtained from patients with pcos . in 1991 , cha et al . reported the first pregnancy resulting from in vitro maturation ( ivm ) ; in 1994 , trounson first reported a successful pregnancy using oocytes aspirated from nonstimulated patients with pcos .
since then , various studies have performed ivm without stimulation in patients with pcos , with the objective of obtaining pregnancy rates similar to those of fertile women [ 75 , 77 - 79 ] and of assessing the efficacy of ivm and fertilization in nonstimulated patients . in the studies cited above , immature oocytes were aspirated from infertile patients between days 6 and 14 of the cycle with the aid of transvaginal ultrasound , and those of normal morphology were placed in culture for 24 - 48 hours ; only those with extrusion of the first polar body were submitted to intracytoplasmic sperm injection ( icsi ) , and transfer was performed 2 to 3 days after icsi [ 75 , 77 - 79 ] . in the cited reports , fertilization rates ranged from 62% to 75.3% and cleavage rates ranged from 81.4% to 95% , in agreement with the data reported by trounson et al . the reported pregnancy rates ( 40% ) are similar to those of patients with unknown causes of infertility , although cha et al . and others reported lower pregnancy rates in pcos patients ( 27.1% and 32% , respectively ) . according to zhao et al . , ovarian stimulation is unnecessary ; they obtained pregnancy rates similar to those of patients with stimulated cycles ( 40% ) . the advantages of nonstimulation are many and include the prevention of ovarian hyperstimulation syndrome induced by gonadotropin use , reduction of costs , shorter and continuous treatment cycles , and the prevention of a series of other long - term complications such as hormone - dependent neoplasias [ 75 , 79 ] ( table 3 ) . according to das et al . , the anti - mullerian hormone ( amh ) plays a role in pcos because its values are increased in affected patients compared with control ovulatory women ( 466 ng / ml and 78 ng / ml , respectively ) . it was also observed that early antral and preantral follicles express amh , which is absent in primordial and in atretic follicles . these authors speculated about the influence of high androgen levels on the elevation of amh values in patients with pcos . the relationship between gdf9 and bone morphogenetic protein 15 ( bmp15 ) has been studied in the oocytes and granulosa cells of patients with pcos because these factors play a crucial role in follicle development , ovulation , oocyte maturation , and embryo development . it has been reported that gdf9 and bmp15 are not expressed in patients with pcos , with consequent impairment of cytoplasmic maturation and poor oocyte quality , whereas they are expressed in normal ovulatory women [ 83 , 84 ] . the development of in vitro culture systems able to guarantee the growth and differentiation of isolated ovarian follicles represents a valuable tool for the study of the direct effects of androgens on folliculogenesis . various in vitro systems have been employed for the culture of preantral follicles in various mammalian species , such as cattle , goats , non - human primates , and humans . however , the production of healthy offspring from oocytes of preantral follicles matured in vitro has been reported only in mice thus far . the main results obtained in studies aiming to evaluate the in vitro effects of androgens on early follicular development in different species are listed in table 4 and described below . the in vitro treatment of mouse ovarian follicles with ar antagonists ( hydroxyflutamide and bicalutamide ) reduced follicular growth during the preantral phase , as well as the meiotic maturation of the enclosed oocyte , suggesting the importance of androgen action in follicle maturation .
the inability of preantral follicles to develop in vitro to preovulatory stages in the presence of antiandrogen antibodies supports the hypothesis that the actions mediated by ars are important for the early stages of follicular growth . in the same study , the addition of the ar antagonist casodex inhibited the positive effect of fsh on follicular growth , which was completely reversed when dihydrotestosterone was added , revealing a joint action of androgens and fsh on follicular development . additionally , the increased survival and growth of preantral follicles in the presence of androstenedione represents further evidence of the positive action of androgens on follicular development . in the same study , the addition of antiestrogen antibodies and of estradiol receptor antagonists ( ici 182,780 ) did not modify the positive effects of androstenedione on follicular growth , confirming the direct action of androgens on the maintenance of preantral follicle development . the supplementation of the culture medium with other androgen hormones , that is , dihydrotestosterone , testosterone , dhea , and dhea sulfate , at different concentrations ( 10 to 10 m ) also promoted the growth of preantral follicles in a dose - dependent manner . in this study , the ar antagonist hydroxyflutamide but not the aromatase inhibitor fadrozole hydrochloride inhibited the growth response , indicating that estrogens converted from androgens during culture were not responsible for follicular growth . androgen hormones are able to promote follicular growth not only during the culture of isolated follicles but also during in situ culture . the addition of testosterone ( 10 to 10 m ) to the culture medium of fragments of ovarian cortex from bovine fetuses increased the transition from primary to secondary follicles in a dose - dependent manner . in the same study , the addition of estradiol did not promote the same effect ; also , in the presence of the ar antagonist flutamide , the positive effect of testosterone on follicular development was completely abolished , indicating that the observed effect was due to the direct action of androgens via ar . the addition of androgens ( 10 to 10 m ) to the culture medium of fragments of human ovarian cortex tissue significantly inhibited cell apoptosis . similarly , the addition of estradiol to the culture medium was unable to reproduce the effect of androgen ; this effect was also abolished in the presence of the ar antagonist casodex . these findings suggest a positive effect of androgen hormones on the maintenance of the viability of ovarian tissue during culture , which is exclusively due to a direct androgen action mediated by ar . in general , the results reported here suggest that , at least under in vitro conditions , androgen hormones can promote the growth of ovarian follicles during early stages of development . as observed in vivo , high androgen doses can also have a negative influence on follicular development under in vitro conditions . the in vitro exposure of mouse preantral follicles to androgen concentrations higher than 10 m induced precocious luteinization of granulosa cells and also significantly reduced follicular growth and viability . additionally , under these conditions , the in vitro estradiol and progesterone secretion by developing mouse follicles is exacerbated and is associated with reduced oocyte quality and abnormal chromosome distribution on the metaphase plate .
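the dose dependence described above , with growth promotion at low androgen concentrations and deleterious effects at high concentrations , can be caricatured with a simple biphasic dose - response curve ; the sketch below is a toy model with arbitrary parameters , not a fit to any of the cited experiments .

```python
# toy biphasic dose - response model for the qualitative pattern described above :
# follicular growth increases with androgen concentration up to an optimum and
# declines at higher concentrations . all parameters are arbitrary illustrations .

def relative_growth(conc_molar, ec50=1e-7, ic50=1e-5, hill=1.5):
    """stimulatory hill term multiplied by an inhibitory term that dominates at high doses."""
    stim = conc_molar**hill / (ec50**hill + conc_molar**hill)
    inhib = conc_molar**hill / (ic50**hill + conc_molar**hill)
    return stim * (1.0 - inhib)

for exponent in range(-9, -3):
    c = 10.0**exponent
    print(f"1e{exponent} m : relative growth = {relative_growth(c):.2f}")
```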
in general , the results compiled in the present review indicate that during the early and intermediate stages of follicular maturation , when ar expression is more pronounced , the androgens locally produced by the developing follicles facilitate the transcription of genes involved in the control of follicle transition from the reserve pool to the growth pool and of genes involved in the promotion of subsequent follicle development . additionally , because androgens increase the activities of fsh , especially those related to cell proliferation and differentiation , the fall in ar expression in mature follicles reduces the action of androgens ; this is possibly a critical event during the processes of follicular selection and atresia . under in vitro conditions , submicromolar androgen doses can have a positive influence on the development of preantral follicles , promoting both survival and growth , especially when combined with the addition of fsh .
background . although chronic hyperandrogenism , a typical feature of polycystic ovary syndrome , is often associated with disturbed reproductive performance , androgens have been shown to promote ovarian follicle growth in shorter exposures . here , we review the main effects of androgens on the regulation of early folliculogenesis and the potential of their application in improving follicular in vitro growth . review . androgens may affect folliculogenesis directly via androgen receptors ( ars ) or indirectly through aromatization to estrogen . ars are highly expressed in the granulosa and theca cells of early stage follicles and slightly expressed in mature follicles . short - term androgen exposure augments fsh receptor expression in the granulosa cells of developing follicles and enhances the fsh - induced camp formation necessary for the transcription of genes involved in the control of follicular cell proliferation and differentiation . ar activation also increases insulin - like growth factor ( igf-1 ) and its receptor gene expression in the granulosa and theca cells of growing follicles and in the oocytes of primordial follicles , thus facilitating igf-1 actions in both follicular recruitment and subsequent development . conclusion . during the early and intermediate stages of follicular maturation , locally produced androgens facilitate the transition of follicles from the dormant to the growing pool as well as their further development .
bright , diffuse x - ray and gamma - ray emission has been observed all along the galactic plane , but is particularly bright toward the galactic center ( e.g. , @xcite ) . the origin of this emission is uncertain . unlike the cosmic x - ray background , the galactic ridge emission has not yet been resolved into discrete point sources . on the one hand , _ asca _ observations revealed degree - scale differences in the surface brightness of the diffuse emission that could only be produced by poisson fluctuations in the numbers of undetected point sources if they have a luminosity of @xmath13 erg s^-1 @xcite . on the other hand , _ chandra _ observations clearly demonstrate that there are not enough discrete sources with @xmath14 erg s^-1 to account for more than 10% of the galactic ridge x - ray emission @xcite . however , the strengths of fe lines observed from the diffuse emission are similar to those observed from galactic x - ray point sources , which suggests that they could be one and the same ( wang , gotthelf , & lang 2002 ) . furthermore , discrete yet extended sources could produce the spatial variations in the diffuse emission . several classes of extended features have recently been identified , including : regions of bright iron fluorescence that are ascribed to molecular clouds being illuminated by x - rays from a bright point source @xcite or bombarded by low - energy cosmic - ray electrons @xcite ; arcminute - scale features with hard spectra that resemble supernova shocks @xcite ; and x - ray counterparts to radio features that are thought to be magnetic filaments ( lu , wang , & lang 2003 ) . _ chandra _ and _ xmm - newton _ observations are only beginning to establish how much flux faint point sources and discrete extended features contribute to the diffuse galactic x - ray emission . if the galactic ridge x - ray emission is truly diffuse , then it could be produced either by hot , @xmath15 k plasma or by cosmic - rays interacting with neutral material in the interstellar medium ( ism ) . the spectrum of the diffuse emission is one of the most useful diagnostics of its origin . observations in the 0.5 - 10 kev band reveal lines from h - like and he - like ions of mg , si , s , and fe , which indicates that the diffuse emission cannot originate from a plasma with a single temperature @xcite . as a result , several authors have modeled the diffuse emission as originating from two plasma components ( e.g. , @xcite ) : one with @xmath16 kev ( which we refer to here as the `` cool '' or `` soft '' component ) , and a second with @xmath17 kev ( which we refer to as `` hot '' or `` hard '' ) . the soft plasma is thought to be produced by supernova shock - waves @xcite , which are the largest source of energy for heating the ism @xcite . the origin of the @xmath7 kev component of the galactic ridge emission is far less certain . the temperature of the putative hot plasma is too high for it to be bound to the galactic disk , so that the energy required to sustain it could be equivalent to the release of kinetic energy from one supernova occurring every 30 years ( e.g. , @xcite ) . however , supernovae are not observed to produce thermal plasma with @xmath18 kev , and there is no known alternative source in the galactic disk for that much energy . therefore , several alternative sources of the hot plasma have been proposed .
one possibility is that the @xmath7 kev plasma is heated by magnetic reconnection in the ism , and subsequently confined to the galactic plane by a large - scale toroidal field @xcite . it is also possible that the hot diffuse x - ray emission is a low - energy extension of the emission with a power - law spectrum observed above 10 kev , which suggests that it may result from a non - thermal mechanism ( see @xcite ; but see lebrun 2004 ) . among the proposed mechanisms are charge - exchange interactions between cosmic ray ions and interstellar matter ( tanaka , miyaji , & hasinger 1999 ; valinia 2000 ) , bremsstrahlung radiation from cosmic - ray electrons or protons @xcite , and quasi - thermal emission from plasma that is continuously accelerated by supernova shocks propagating in the @xmath1 kev component of the ism @xcite . these non - thermal processes should produce line emission with energies and flux ratios that differ significantly from those expected from a plasma in thermal equilibrium . unfortunately , previous observations were unable to determine the ionization state of the plasma unambiguously , because the spectrum was contaminated by an instrumental fe line between 6 - 7 kev . clearly , further study of the diffuse x - ray emission is important for understanding stellar life cycles , magnetic fields , and cosmic ray production in the galaxy . in this paper , we study the spectral properties of the diffuse x - ray emission from a 17 by 17 arcmin field around sgr a* that has been observed for over 600 ks with _ chandra _ . these observations have several advantages over previous ones with _ asca _ @xcite and _ bepposax _ @xcite : ( 1 ) the long integration provides a sufficiently large signal - to - noise ratio to study the spectrum of the diffuse emission from arcminute - scale sub - regions of the field , ( 2 ) the 0.5 arcsec angular resolution allows us to resolve the truly diffuse emission from filamentary features and point sources , and ( 3 ) the relative lack of instrumental lines , particularly between 6 - 7 kev , allows us to measure the ionization state of the diffuse emission with greater confidence . the layout of the paper is as follows . in section 2.1 we present images that provide an overview of the diffuse emission from the field . in section 2.2 , we examine how the spectrum of the diffuse emission differs across the field . in section 2.3 , we compare the spectra of the point sources and diffuse emission , and place upper limits on the contribution of undetected point sources to the diffuse emission . in section 3.1 , we derive the properties of the putative plasma responsible for the diffuse emission . these are used in sections 3.2 and 3.3 to examine the likely origins of the diffuse emission . in section 3.4 , we discuss the number of undetected point sources that may be present in the field . finally , in section 4 , we list the contributions of point sources , extended features , and diffuse emission to the x - ray luminosity of the galactic center . the inner 20 pc of the galaxy have been observed on twelve occasions as of june 2002 ( table [ tab : obs ] ) using the advanced ccd imaging spectrometer imaging array ( acis - i ) aboard the _ chandra x - ray observatory _ @xcite . the acis - i is a set of four 1024-by-1024 pixel ccds , covering a field of view of 17 by 17 arcmin . when placed on - axis at the focal plane of the grazing - incidence x - ray mirrors , the imaging resolution is determined primarily by the pixel size of the ccds , 0.492 arcsec .
the ccds also measure the energies of incident photons , with a resolution of 50 - 300 ev ( depending on photon energy and distance from the read - out node ) within a calibrated energy band of 0.5 - 8 kev . we reduced the data from each observation using the methods described in detail in @xcite and @xcite . in brief , we first created an event list for each observation in which we corrected the pulse heights of each event for the position - dependent charge - transfer inefficiency @xcite . next , we excluded events that did not pass the standard asca grade filters and cxc good - time filters . we then removed intervals in which the count rate from the diffuse emission flared to 3-sigma above the mean rate in the 0.5 - 8.0 kev band , presumably due to particles impacting the detector . this removed all excursions larger than @xmath20% , which represented a total time of only 15 ks ( 2% of the total exposure ) . the remaining long - term variations in the particle background should be no larger than 5% ( see @xcite ) . the final total live time was 626 ks . before extracting spectra of the diffuse emission , we needed to remove the point sources from the image . to identify the point sources , we created a composite event list by re - projecting the sky coordinates of the events from each observation onto the plane tangent to the radio position of sgr a* . we excluded the first half of obsid 1561 from the composite event list because a bright @xmath21 erg cm^-2 s^-1 transient dominated the northwest portion of the field ( see @xcite ) . an image based upon this event list is displayed in figure [ fig : rawimg ] . we searched for point sources in images of events in three energy bands ( 0.5 - 8 kev , 0.5 - 1.5 kev , and 4 - 8 kev ) . we used a significance threshold of @xmath22 , which corresponds to the chance of spuriously identifying poisson fluctuations within a beam defined by the instrumental point spread function ( psf ) . we identified a total of 2357 x - ray point sources ( see table 3 in muno et al . 2003a ) , of which 1792 were detected in the full band , 281 in the soft band ( 124 exclusively in the soft band ) , and 1832 in the hard band ( 441 exclusively in the hard band ) . to isolate the diffuse emission in each observation , we excluded events that fell within circles circumscribing the 95% contour of the psf around each point source . the excluded regions are indicated with circles in figure [ fig : rawimg ] , and range in size from 3 arcsec within 4 arcmin of the aim - point to 7 arcsec at an offset of 7.5 arcmin from the aim - point . we found that the 95% contour struck an appropriate balance between removing most of the counts from point sources in the image and leaving diffuse emission to analyze . for instance , in the inner few arcminutes of the image , the density of point sources is so high that the circles circumscribing the 99% contour of the psf cover the image , leaving few counts that are unambiguously diffuse emission . few photons from the observed point sources should contaminate the diffuse emission . there are only @xmath23 net counts from the point sources in the catalog of @xcite , compared to @xmath24 counts in the diffuse emission , so fewer than 0.5% of the counts in the diffuse emission should come from the wings of the psfs for the detected point sources . the flux contributed by the dust - scattering halos of the observed point sources can be estimated using the optical depth toward sources near sgr a* from @xcite
( their figures 1 and 8 ) , and the expected energy - dependent profiles of the halos from @xcite ( their figures 9 and 10 ) . we estimate that at 7 arcmin from the aim - point the scattered component equals the observed point - source flux at 2 kev , but declines to @xmath25% of the point - source flux above 4 kev . within 4 arcmin of the aim - point the exclusion regions for the point sources are smaller , so the contribution from scattered flux is twice as large . however , point sources produce only 4% of the diffuse flux at 2 kev and 15% above 4 kev ( see section [ sec : ps ] ) , so the scattered flux contributes only a few percent to the diffuse emission . for comparison , the systematic uncertainty in the calibration of the cti - corrected acis response is similar , on the order of 3% . finally , we removed the bright , filamentary features identified in the northeast portion of the image by @xcite . these features contributed @xmath20% of the flux from this region . we used the resulting event lists to create images and spectra of the diffuse emission . we combined the event lists from each observation to produce the images of the diffuse emission in the 2 - 4 kev and 4 - 8 kev bands in figures [ fig : softimg ] and [ fig : hardimg ] . the `` holes '' apparent in the images were left by removing the events associated with the point sources and filamentary features . these images provide a qualitative understanding of the diffuse emission . in the soft band ( 2 - 4 kev ) , the inner part of the image is dominated by sgr a* , the nuclear stellar cluster , and sgr a east . beyond these , two lobes of x - ray emission are oriented perpendicular to the galactic plane and centered on sgr a* . it is likely that these represent an outflow from the central parsec @xcite . enhanced x - ray emission is also evident in the northeast portion of the image , between the galactic center and the radio arches region located @xmath26 arcmin to the north of the image @xcite . this emission exhibits prominent he - like lines from si and s , and low - ionization k - alpha emission from fe @xcite . finally , a broad ridge of emission with low surface brightness is evident to the southwest ( lower right ) . the hard band ( 4 - 8 kev ) is also dominated by the sgr a complex at the center of the image . at larger radii from the center , the most prominent features are filaments dominated by low - ionization fe emission at 6.4 kev in the northeast @xcite , and a hard , continuum - dominated feature in the south @xcite . these features have been removed from the image , as they were not included when we modeled the diffuse emission . enhancements in the hard diffuse emission are also observed at the bipolar lobes and in the northeast . in figure [ fig : twocolor ] , we display a smoothed , three - color image of the field . the red band was made using photons between 0.5 - 2.0 kev , the green with 2.0 - 4.0 kev , and the blue with 0.4 - 8.0 kev . after removing the point sources , the image was adaptively smoothed using the algorithm described by @xcite . the red band contributes very little flux to the image , because the galactic absorption column prevents us from receiving photons with energies @xmath27 kev from the galactic center . however , soft photons are received from the sgr a complex , and from a bright , slightly extended feature of uncertain nature about 6 arcmin north - northeast of sgr a* . all of the other features evident in the un - smoothed images are apparent in the smoothed image .
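the removal of point sources from the event lists , as described above , amounts to masking events that fall within an exclusion radius of each catalogued source ; the following sketch illustrates the idea with numpy , using hypothetical source positions and radii rather than the actual psf - based exclusion regions .

```python
# minimal sketch of the point - source exclusion step described above : events that
# fall within an exclusion radius of any catalogued source are dropped , leaving
# only candidate diffuse - emission events . positions and radii are hypothetical .

import numpy as np

def mask_point_sources(event_x, event_y, src_x, src_y, src_radius):
    """return a boolean mask that is true for events outside every exclusion circle."""
    keep = np.ones(event_x.shape, dtype=bool)
    for x0, y0, r in zip(src_x, src_y, src_radius):
        d2 = (event_x - x0)**2 + (event_y - y0)**2
        keep &= d2 > r**2
    return keep

rng = np.random.default_rng(0)
ex, ey = rng.uniform(0, 1024, 10000), rng.uniform(0, 1024, 10000)   # event coordinates (pixels)
sx, sy = rng.uniform(0, 1024, 50), rng.uniform(0, 1024, 50)         # source positions
sr = rng.uniform(6, 14, 50)                                         # exclusion radii (pixels)

diffuse = mask_point_sources(ex, ey, sx, sy, sr)
print(f"kept {diffuse.sum()} of {diffuse.size} events as diffuse emission")
```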
we used the images in figures [ fig : rawimg ] - [ fig : twocolor ] to select regions from which we extracted spectra of the diffuse emission . the regions are displayed with polygons in figures [ fig : softimg ] and [ fig : hardimg ] . five of the regions were chosen because they were particularly dark . the sixth was chosen from the bright region in the northeast to help understand the nature of the surface brightness variations in figure [ fig : twocolor ] . we avoided the bipolar lobes and the ridge to the southwest , as they will be studied elsewhere . in order to model the spectrum , for each region and each observation we computed an effective area function using the ciao tool , which accounts for the vignetting and satellite dither using the distribution of counts received in the extraction region . we then computed a weighted average of the effective area for each region , using as the weighting the number of counts in that region in each observation . similarly , we computed a mean response by averaging the response functions provided by @xcite from the range of detector rows covered by each region , also weighted by the number of counts in each observation . we estimated the background produced by particles impacting the detector using a 50 ks observation taken with the acis - i stowed out of the focal plane of the mirror assembly . in order to account for spatial variations in the background across the acis - i chips , we extracted the background events from regions identical to those we used for the spectra of the diffuse emission . the background observations had the same cti correction and filtering applied as we used for the source events . the area , average offset from the nominal aim point , total counts , and estimated instrumental background for each region are listed in table [ tab : regions ] . the instrumental background represents 20 - 25% of the total counts in the dark regions , but only 10% of the total counts in the bright region to the northeast . we note that the unresolved cosmic x - ray background contributes insignificantly to the diffuse emission . if we account for @xmath28 cm^-2 of absorption through the galaxy , and the fact that sources brighter than @xmath29 erg cm^-2 s^-1 ( 2 - 8 kev ; de - absorbed to match the deep - field surveys ) should have been detected , less than 1% of the observed diffuse emission is from the extra - galactic background ( e.g. , @xcite ) . figure [ fig : diffraw ] displays the source and background spectrum of the southeast region , which has the highest signal - to - noise of the dark regions . many lines are detected in the spectrum : the he - like @xmath30 transitions of si , s , ar , ca , and fe ; the he - like @xmath31 transitions of si and s ; the h - like @xmath30 transitions of si , s , ar , and fe ; low - ionization ( `` neutral '' ) fe k - alpha at 6.4 kev ; and an instrumental ni line at 7.5 kev . background - subtracted spectra are displayed in figure [ fig : diffmod ] ; the absence of the instrumental ni line at 7.5 kev indicates that the background subtraction was successful , whereas the other lines are clearly intrinsic to the diffuse emission . motivated by past investigations , we modeled the spectrum of the diffuse emission in two ways . first , we modeled the spectrum between 1 - 8 kev using two thermal plasma components . second , we analyzed the spectrum between 4.5 - 8.0 kev in order to examine the properties of the iron lines .
* ) , the prominent line emission in the spectrum suggests that it originates from optically thin plasma , whereas the simultaneous presence of fe with si , s , and ar suggests that the plasma has multiple temperature components . the plasma model we used ( see also , e.g. , raymond & smith 1977 ; mewe , lemen , & van den oord 1986 ; borkowski , sarazin , & blondin 1994 ) assumes that the plasma is in collisional ionization equilibrium , and is able to self - consistently account for both the continuum emission and the lines from h - like and he - like species . the neutral fe emission must be included ad hoc as an additional gaussian component . several assumptions were required for the model to reproduce the data . first , we only applied the model between 1.0 and 8.0 kev . below this energy range the photon counts are dominated by foreground emission , while above this range the spectra are background - dominated . second , we examined data taken in 2002 may from the on - board calibration sources ( which produce lines at the k-@xmath0 and k-@xmath33 transitions from al , ti , and mn when the detector is exposed to them ) , and found that even after applying the cti correction the observed line centroids were slightly below their expected values . the shift ranged from @xmath34% near the detector read - out ( at the top and bottom of figure [ fig : rawimg ] ) , up to @xmath35% at the top of each ccd ( the center of the image ) . therefore , we allowed for a @xmath36% shift in the energy scale in each spectrum . when fitting a gaussian to the 6.4 kev line of fe , the line centroids were allowed to vary once , and then were frozen . when using plasma models , the red - shift parameter was used to change the energy scale . third , a 3% systematic uncertainty was added in quadrature to the statistical uncertainty on the count rate in order to account for uncertainties in the acis effective area . finally , we found that we needed to allow the abundances of si , s , ar , ca , and fe to vary independently to obtain an adequate fit . to account for absorption we assumed ( 1 ) that the entire region was affected by one column of material that represents the average galactic absorption ( modeled with in ) and ( 2 ) that a fraction of each region was affected by a second column that represents absorbing material with a smaller filling factor along the line of sight ( modeled with ) . the absorption model roughly accounts for the fact that both the plasma and absorbing material are distributed along the line of sight ( see , e.g. , vollmer , zylka , & duschl 2003 ) . the mathematical form of the model was a(e ) = e^{-\sigma(e ) n_{\rm h } } [ ( 1 - f ) + f e^{-\sigma(e ) n_{\rm pc , h } } ] ( equation [ eq : abs ] ) , where @xmath38 is the energy - dependent absorption cross - section , @xmath39 is the absorption column , @xmath40 is the partial - covering column , and @xmath41 is the partial - covering fraction . the absorption was modeled using a separate factor ( i.e. , equation [ eq : abs ] ) for each plasma component , although , aside from the quality of the fits , the basic results do not change if we use a single absorption factor for both plasma components . we note that for the soft plasma component the best - fit value of @xmath41 approached 1 and was poorly constrained , so we fixed its value to @xmath42 . despite the complexity of the diffuse emission in the image , the spectra from all of the regions are basically described with the same collisional - equilibrium plasma model .
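for concreteness , the partial - covering factor of equation [ eq : abs ] can be evaluated with a few lines of numpy . the cross - section below is a crude power - law stand - in for the tabulated ism cross - sections used in the real fits , so the transmitted fractions it prints are indicative only .

```python
import numpy as np

def pc_absorption(e_kev, nh, nh_pc, f, sigma0=2.0e-22, slope=-2.5):
    """Partial-covering absorption factor of equation [eq:abs].
    nh    : column affecting the whole region (cm^-2)
    nh_pc : extra column covering a fraction f of the region (cm^-2)
    sigma(E) = sigma0 * (E / 1 keV)^slope is a rough stand-in for the
    true ISM absorption cross-section per hydrogen atom."""
    sigma = sigma0 * np.asarray(e_kev, dtype=float) ** slope
    return np.exp(-sigma * nh) * ((1.0 - f) + f * np.exp(-sigma * nh_pc))

# transmitted fraction at 2 and 6 keV for illustrative columns
print(pc_absorption([2.0, 6.0], nh=6e22, nh_pc=1e23, f=0.5))
```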
we list the best - fit parameters for this model in each of the regions in table [ tab : twokt ] , and display them in figure [ fig : diffmod ] . the uncertainties are @xmath43 , derived from a search in chi - squared space with @xmath44 . the value of @xmath45 from some regions is formally poor , but this is not surprising given that the properties of the diffuse emission probably vary somewhat within each of the regions in figures [ fig : softimg ] and [ fig : hardimg ] . we note that two - temperature non - equilibrium plasma models ( and in ; masai 1984 ) did not reproduce the data nearly as well , as they yielded @xmath46 for all combinations of free parameters and absorption components . moreover , the best - fit ionization parameters under those models were larger than @xmath47 s @xmath48 , which reinforces the suggestion that the diffuse emission originates from plasma in thermal equilibrium . the absorption columns derived are consistent with the expected galactic values . the soft , @xmath1 kev component has a total column of approximately @xmath49 @xmath3 , assuming that the partial - covering absorber affects 95% of the field . this indicates that only a small fraction of the soft diffuse emission originates in the foreground . the hot , @xmath7 kev component is absorbed by about @xmath50 @xmath3 over the entire region , with additional absorption of @xmath51 @xmath3 affecting on order half of each region . the additional partial - covering absorption column is comparable to that which could be provided by the typical molecular clouds in the region , @xmath52 @xmath3 ( zylka , mezger , & wink 1990 ) . the higher absorption toward the hard component is probably a selection effect , caused by the fact that no trace of the soft component could emerge from behind @xmath53 @xmath3 of absorption . the differences in the absorption column from field - to - field are not surprising given the non - uniform distribution of molecular clouds near the galactic center . the temperatures of the two plasma components , @xmath54 kev and @xmath55 kev , both differ from region to region . the temperature of the cooler component is well - constrained by the lines of si , s , and ar , and changes by at most 0.1 kev if a different absorption model is assumed . for example , assuming identical absorption for the cool and hot components , the derived temperature decreases to 0.6 kev for some of the regions , but the differences persist . therefore , the dispersion in the values of @xmath56 are significant . on the other hand , the temperature of the hot component is determined both by the ratio of the h - like and he - like lines and by the continuum above 5 kev , and is more uncertain because the effective area of the detector there is smaller . moreover , @xmath57 differs by up to 2 kev when the absorption model is changed . for instance , when a single absorption factor of the form of equation [ eq : abs ] is assumed , @xmath57 increases to 10 kev because the continuum is inferred to be flatter . as a result of the poorer constraint , the differences in @xmath57 are only significant at the @xmath58 level . the fluxes of both spectral components also vary spatially . the emission measures are better correlated with the observed flux than are the temperatures , which suggests that the density or volume of the plasma is the most important factor determining the surface brightness of the diffuse emission . 
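the error bars quoted above come from stepping a parameter away from its best - fit value and refitting until the fit statistic rises by a fixed increment ; the toy example below shows the idea for a single parameter , using a delta chi - squared of 1 ( the conventional 1@xmath19 threshold for one interesting parameter ; the data and model are synthetic , not ours ) .

```python
import numpy as np

def chi2_stat(temp, e, rate, err):
    """Chi-squared for a one-parameter toy model (an exponential spectrum
    with its normalization fit analytically); stands in for the full fit."""
    model = np.exp(-e / temp)
    norm = np.sum(rate * model / err**2) / np.sum(model**2 / err**2)
    return np.sum(((rate - norm * model) / err) ** 2)

rng = np.random.default_rng(1)
e = np.linspace(1.0, 8.0, 50)
rate = np.exp(-e / 0.8) + rng.normal(0.0, 0.01, e.size)   # fake 0.8 keV data
err = np.full(e.size, 0.01)

grid = np.linspace(0.5, 1.2, 701)
chi2 = np.array([chi2_stat(t, e, rate, err) for t in grid])
best = grid[np.argmin(chi2)]
inside = grid[chi2 <= chi2.min() + 1.0]    # delta chi^2 = 1 interval
print(f"kT = {best:.2f} keV ({inside.min():.2f}-{inside.max():.2f} at 1 sigma)")
```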
the flux from both plasma components is highest in the east and northeast , as expected from the images in figures [ fig : softimg ] and [ fig : hardimg ] . the differences in the soft emission is the most dramatic : it is lowest in the northwest , southeast , and southwest , 50% higher in the region close to the galactic center , a factor of 3 higher in the east , and a factor of 9 brighter in the northeast . on the other hand , the observed flux from the hard component differs by less than 10% in the four dark regions located at @xmath59 from , and is only a 60% higher in the close region and in the northeast . the inferred de - absorbed hard fluxes differ even less , as the brightest regions produce only 25% more flux than the darkest . under the two - temperature plasma model , there appear to be significant ranges in the elemental abundances both when comparing individual elements , and when comparing the dark and bright regions . unfortunately , we have found that the abundances of all of the metals can be made consistent with solar values if we add a third thermal plasma component with @xmath60 kev to the model . therefore , we must view the measured abundances with skepticism . it will be necessary to obtain spectra of the diffuse emission with higher energy resolution to draw firm conclusions about the metal abundances in the emitting material . finally , we note that the neutral fe emission at 6.4 kev is a factor of 4 more intense in the northeast than in the rest of the regions examined . this can be seen clearly in the equivalent width maps of @xcite . the strength of the 6.4 kev fe emission clearly increases with that of the soft component of the diffuse emission in figure [ fig : softimg ] , although it tends to appear in more strongly localized regions than the soft plasma . in fact , neutral fe emission is responsible for the bright filamentary features in the northeast of the hard image @xcite . these models are the simplest that we have found that reproduce the data . however , the intrinsic spectrum conceivably could be more complicated . for instance , if the low - ionization fe line is produced as part of a reflection nebula , it should be accompanied by a scattered continuum component ( e.g. , * ? ? ? * ) . it has also been proposed that there is a non - thermal component to the spectrum between 28 kev , which represents a low - energy extension of the power - law emission seen above 10 kev ( e.g. , * ? ? ? * but see lebrun 2004 ) . adding a power - law component with photon index @xmath61 does not significantly improve the fit , although the lower bound on the temperature of the hot plasma decreases to @xmath62 kev . likewise , adding a third plasma component does not improve the fit . none of the additional components change the basic conclusions that we present in section [ sec : disc ] . as we will discuss in section [ sec : disc ] , a thermal plasma containing he - like and h - like ions of fe will inevitably be too hot to be bound to the galactic plane , and will therefore require a very large amount of energy to sustain . this has led several authors to propose that much of the continuum and iron emission are produced by non - thermal processes . if this is the case , then the centroid energies , widths , and line ratios of the iron emission provide the best constraints on these models . therefore , we have modeled the fe k-@xmath0 , he-@xmath0 , and h-@xmath0 line complexes with three gaussians . 
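a schematic version of such a fit is shown below : three gaussians fixed at the 6.40 , 6.70 , and 6.97 kev rest energies plus a power - law continuum , fit to a synthetic 4.5–8.0 kev spectrum with scipy , with the fitted width converted to a velocity dispersion through sigma_v = c sigma_e / e . the real fits are folded through the detector response and allow for the small gain shift described earlier , so this is an illustration only .

```python
import numpy as np
from scipy.optimize import curve_fit

C_KM_S = 2.998e5

def model(e, pl_norm, pl_index, a1, a2, a3, w1, w2, w3):
    """Power-law continuum plus Gaussians at 6.40, 6.70, and 6.97 keV."""
    cont = pl_norm * e ** (-pl_index)
    lines = (a1 * np.exp(-0.5 * ((e - 6.40) / w1) ** 2) +
             a2 * np.exp(-0.5 * ((e - 6.70) / w2) ** 2) +
             a3 * np.exp(-0.5 * ((e - 6.97) / w3) ** 2))
    return cont + lines

rng = np.random.default_rng(2)
e = np.arange(4.5, 8.0, 0.02)
truth = model(e, 1.0, 2.0, 0.3, 0.6, 0.2, 0.04, 0.04, 0.04)
data = truth + rng.normal(0.0, 0.01, e.size)        # synthetic spectrum

p0 = [1.0, 2.0, 0.2, 0.5, 0.2, 0.05, 0.05, 0.05]
popt, _ = curve_fit(model, e, data, p0=p0)
sigma_e = abs(popt[6])                    # fitted He-alpha width (keV)
sigma_v = C_KM_S * sigma_e / 6.70         # implied velocity dispersion
print(f"He-alpha width {1e3 * sigma_e:.0f} eV -> sigma_v ~ {sigma_v:.0f} km/s")
```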
the fe he-@xmath0 line is a combination of forbidden ( 6.63 kev ) , inter - combination ( 6.67 kev ) , and resonance ( 6.70 kev ) transitions that can not be resolved as separate lines with the acis - i . likewise , the fe h-@xmath0 line ( 6.97 kev ) should contain a contribution from the k-@xmath33 transition ( 7.03 kev ) , which has an intensity that is @xmath63% of that of the k-@xmath0 transition ( e.g. , * ? ? ? we modeled the continuum emission between 4.58.0 kev with a power law to facilitate the computation of equivalent widths . the resulting model parameters are listed in table [ tab : iron ] . the uncertainties are 1@xmath19 , derived from a search in chi - squared space with @xmath44 for the centroids , widths , and intensities , and @xmath64 for the ratios of the intensities . one of the best diagnostics of the non - thermal models are the centroid energies of the lines ( e.g. , * ? ? ? * ; * ? ? ? * ) . due to the shift in the gain of the acis - i mentioned above , the observed energies of the fe he-@xmath0 and h-@xmath0 lines are shifted to lower energies by @xmath36% . these shifts are significant at the @xmath65 level . however , the fe k-@xmath0 lines lie , within their uncertainties , right at the expected energy of 6.40 kev . if we account for a 0.5% shift to lower energies , its true centroid could be as high as 6.43 kev . in some cases , non - thermal models for the iron emission also predict that the lines should be broadened ( e.g. , * ? ? ? the widths of the k-@xmath0 and he-@xmath0 lines can only be constrained meaningfully with the two spectra that have the highest signal - to - noise , those from the northeast and southeast . in the northeast , the fe k-@xmath0 line has a width of @xmath66 ev , while in the southeast its width is @xmath67 ev . if the width in the northeast is real , it would imply a velocity dispersion of @xmath68 km s@xmath4 . the fe he-@xmath0 line also appears to be resolved in the northeast , with a width of @xmath69 ev , while in the southeast the upper limit to the width is 40 ev . in both cases , however , the width is consistent with the 70 ev separation between the recombination and forbidden lines . these widths are lower than those reported from asca data by @xcite ( @xmath70 ev ) . the constraints from the remaining spectra are poorer , so we fixed the widths to 40 ev . in all cases except the northeast , the strongest of the iron lines is the he-@xmath0 transition . it is a factor of 1.43.3 stronger than the k-@xmath0 line , and a factor of 2.14.4 stronger than the h-@xmath0 line . however , the uncertainties on the line ratios are large enough that we can not identify significant variations between the dark regions . the northeast region stands out with a very strong k-@xmath0 line with equivalent width of 570 ev . for comparison , the fe k-@xmath0 features studied by @xcite had equivalent widths of 1 kev . overall , the properties of the iron emission are remarkably constant in the dark regions , with the only significant variations found in the bright regions to the northeast , and to a lesser degree in the east . in this section , we examine three aspects of the point sources . first , we determine the amount of flux from detected point sources in each region , to evaluate whether the spatially varying detection threshold affects the amount of flux attributed to diffuse emission . next , we compare the average spectrum of the detected point sources to that of the diffuse emission . 
finally , we assume that undetected point sources have spectra identical to the detected sources , and determine the maximum flux that they could contribute to the diffuse emission . in muno et al . ( 2004 ) , we examine the spectra of the point sources in considerable detail ; the methods for extracting and combining the spectra of the point sources are described there . in brief , we extracted spectra from within the 90% contour of the psf around each point source using the routine from the tools for x - ray analysis ( tara ) . we then summed the resulting spectra for each region . we computed the effective area functions for each source using , corrected them for the fraction of the psf enclosed by each region and for the hydrocarbon build - up on the detectors . we then averaged the effective area weighted by the count rate from each source . finally , we averaged the response functions that accompany the @xcite cti - corrector from the location of each point source , again weighted by the counts from each source . in order to confirm that the point - source spectra were not contaminated by diffuse emission , we re - extracted the spectra for several hundred point sources from regions that enclosed only 50% of the psf , and found that the resulting average spectrum was indistinguishable from that extracted from the 90% contour of the psf . vignetting limits our ability to resolve point sources at large offset angles from the aim - point , so we first established that a failure to resolve point sources does not affect our estimates of the diffuse flux . to do so , we extracted a summed spectrum from the point sources in each region . we used the diffuse emission from each region to estimate the background , and modeled the point source spectra with the same two-@xmath32 model as for the diffuse emission . we list the surface brightness from detected point sources per square arcminute in table [ tab : difflux ] , along with the total , soft , and hard flux from the diffuse emission . the flux from detected point sources is nearly identical in all of the dark regions located @xmath71 from the aim - point . therefore , we have probably been equally successful at resolving point sources in each dark region . a larger flux from point sources is observed in the northeast . although this excess flux from point sources could indicate that there are more stellar x - ray sources in this region , it is more likely that we mistakenly identified small knots in the diffuse emission as point sources . larger knots that could be identified as extended by eye were removed from the point source list . the flux from detected point sources close to the galactic center is higher because ( 1 ) the angular resolution is better within 5 of the aim point ( ) , so that it is possible to detect fainter sources , and ( 2 ) the surface density of point sources increases as @xmath72 toward . next , we compared the spectrum of the point sources from the entire field to that of the diffuse emission . we found in muno et al . ( 2004 ) that the averaged spectrum of the point sources changed only slightly when considering sources with intensities that ranged over a factor of @xmath73 , so we chose to compare the spectrum of point sources with fewer than 80 net counts to that of the diffuse emission . we used the spectrum of the diffuse emission from the entire field to estimate the background for the point sources . 
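the count - weighted averaging of the per - source effective - area curves described above is simple to express ; a minimal sketch , with synthetic curves and weights standing in for the per - source files :

```python
import numpy as np

def weighted_arf(arfs, counts):
    """Average effective-area curves, weighting each by its net counts.
    arfs   : (n_sources, n_energy) array of cm^2 on a common energy grid
    counts : (n_sources,) net counts per source"""
    arfs = np.asarray(arfs, dtype=float)
    w = np.asarray(counts, dtype=float)
    return np.sum(arfs * w[:, None], axis=0) / w.sum()

# illustrative use: three sources with different vignetting corrections
energy = np.linspace(0.5, 8.0, 16)
base = 600.0 * np.exp(-0.5 * ((energy - 1.8) / 1.5) ** 2)   # toy ACIS-like curve
arfs = [0.9 * base, 0.7 * base, 0.5 * base]
counts = [120, 60, 20]
print(np.round(weighted_arf(arfs, counts)[:4], 1))
```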
the average spectrum of the point sources and spectrum of the diffuse emission from the southeast are displayed in figure [ fig : psvdiff ] , while the ratios between count spectra from the dark regions of diffuse emission to that from the point sources is displayed in figure [ fig : pscomp ] . as noted by @xcite , the shapes and relative intensities of the he - like and h - like iron lines appear qualitatively similar in the point sources and diffuse emission . under the two-@xmath32 plasma model for the point sources , we find that the soft 0.8 kev component is heavily absorbed , so that it contributes only 3% of the total observed 28 kev flux . the hot 8 kev plasma component produces the remainder of the observed flux . therefore , the near - absence of a soft component is the main reason that the point source spectra appear much harder . in order to determine an upper limit to the amount of flux that undetected point sources can contribute to the diffuse emission , we have added a spectral component representing the point sources with fewer than 80 net counts to our two-@xmath32 models for the diffuse emission . only a constant normalization for the fiducial point source spectrum was allowed to vary , while the parameters of the spectral components representing the diffuse emission were allowed to vary as in section [ sec : twokt ] . we increased the value of the constant normalization until @xmath45 exceeded a threshold that corresponded to a 10% chance that the model and data were consistent . if the initial model was acceptable , the threshold was @xmath74 for 456 degrees of freedom . however , in a couple of cases the initial model including the point source spectrum was unacceptable at the 90% confidence level ( see table [ tab : twokt ] ) , so we varied the normalization until @xmath45 increased by @xmath75 ( this is equivalent to inflating the assumed uncertainties to force @xmath76 to equal 1 for the best - fit model ) . the surface brightness of the diffuse emission that can be accounted for by undetected point sources is listed by region in table [ tab : difflux ] . undetected point sources may produce @xmath77 erg @xmath3 s@xmath4 arcmin@xmath5 in the regions 7 from the galactic center , which is 3580% of the total observed diffuse flux . the upper limits on the fluxes from undetected point sources are 25 times larger than the total flux from the observed point sources . we also list in table [ tab : difflux ] the flux from the soft and hard components that remains when an undetected point source contribution is subtracted . the reduction in inferred diffuse flux is most dramatic in the hard band : in the southeast , southwest , and close dark regions , a hard diffuse component is not necessary if one includes a contribution of undetected point sources in the spectrum . moreover , the remaining three regions in which undetected point sources can not replace the hard component of the diffuse emission are also those regions in which the initial model for the diffuse emission ( without the point source component ) was formally unacceptable . the similarity between the spectra of the point sources and diffuse emission suggests that all of the hard diffuse emission could be produced by point sources . however , in this case the larger hard flux in the northeast would require that there are @xmath78% more stellar x - ray sources per solid angle than in the dark regions at a similar offset from the galactic center . 
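to make the acceptance threshold explicit : for 456 degrees of freedom the chi - squared value above which there is only a 10% chance that the model and data are consistent follows directly from the chi - squared distribution , and the limit is found by raising the point - source normalization until the statistic crosses it . the sketch below shows that logic with the full spectral refit replaced by a toy chi - squared curve .

```python
import numpy as np
from scipy.stats import chi2

dof = 456
threshold = chi2.isf(0.10, dof)     # 10% null-hypothesis probability
print(f"chi^2 threshold for {dof} dof: {threshold:.1f}")

def chi2_of_norm(norm, chi2_min=430.0, curvature=120.0):
    """Stand-in for refitting the diffuse model with a frozen point-source
    component of the given normalization (a toy parabola here)."""
    return chi2_min + curvature * norm ** 2

norms = np.linspace(0.0, 1.5, 3001)
vals = chi2_of_norm(norms)
allowed = norms[vals <= threshold]
print(f"maximum allowed point-source normalization: {allowed.max():.2f}")
```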
we address this issue further in section [ sec : point ] , where we estimate the number of undetected sources that would be required to produce the observed @xmath79 kev diffuse emission . as mentioned in the introduction , these _ chandra _ observations of the diffuse emission within the 17 by 17 field around provide three advantages over previous observations with _ asca _ @xcite and _ bepposax _ @xcite : ( 1 ) the instrument does not produce line emission between 2–7 kev that could contaminate the spectrum and thereby produce ambiguities in the measured ionization state of the emitting medium ( compare figure [ fig : diffraw ] and , e.g. , * ? ? ? * ) , ( 2 ) the long integration time allows us to measure the spectrum of the diffuse emission with high signal - to - noise on arcminute spatial scales ( figure [ fig : diffmod ] ) , and ( 3 ) the high angular resolution of _ chandra _ allows us to distinguish the truly diffuse emission from point sources , supernova remnants , and an apparent outflow from ( figure [ fig : rawimg ] ) . the spectrum of the diffuse emission from the galactic center is dominated by line emission from he - like and h - like si , s , ar , ca , and fe ( figure [ fig : diffraw ] ; see also * ? ? ? * ; * ? ? ? * ) . these lines indicate that the diffuse emission results from a two - component collisionally - ionized plasma , with temperatures of @xmath16 kev and @xmath17 kev . the line energies and ratios are consistent with those expected from plasma in thermal equilibrium ( compare * ? ? ? * ; * ? ? ? * ) . therefore , in section [ sec : prop ] , we adopt the assumption of thermal equilibrium to examine the physical properties of the putative plasma responsible for the diffuse emission . the main parameters of our spectral models , the temperature and emission measure of the putative plasma components , vary on scales of @xmath80 pc . the properties of the cool , @xmath16 kev component were determined primarily by the ratio of fluxes in the he - like and h - like transitions of si and s ( figure [ fig : diffmod ] ) , and were well - constrained . the temperature of the soft component of the plasma ranges from 0.7 to 0.9 kev , and its emission measure ranges between 1 and 24 @xmath81 pc ( table [ tab : twokt ] ) . the properties of the hot , @xmath17 kev component were derived from the ratio of fluxes in the he - like and h - like transitions of fe , and by the shape of the continuum . this hard component is less well - constrained , because the instrumental effective area is much smaller near the fe transitions at 6–7 kev than it is near the si and s transitions at 1.5–2.5 kev . the temperature of the hard component ranges between 6 and 9 kev , but is consistent with a mean value of 8 kev at the 2@xmath19 level . the emission measure ranges between 1.5 and 3.0 @xmath81 pc ( table [ tab : twokt ] ) . these spatial variations in the diffuse emission provide new insight into the origin of the putative plasma components that produce the diffuse emission , as we discuss in sections [ sec : disc : soft ] and [ sec : disc : hard ] . finally , we were able to separate the diffuse emission from point sources as faint as @xmath82 erg @xmath3 s@xmath4 . we have already noted that the flux from the point sources detected in our _ chandra _ image accounts for less than 10% of the diffuse emission from the galactic center @xcite .
however , in the current paper we find that the average spectrum of the faintest point sources detected near is remarkably similar to that of the hard component of the diffuse emission ( see also * ? ? ? the similarity is particularly striking between 6.57.0 kev , where there is strong emission from he - like and h - like fe ( figure [ fig : psvdiff ] ) . as a result , if point sources that have not been detected have the same spectra as the detected ones , they could contribute up to 80% of the total 28 kev flux , and up to 100% of the hard component of the flux ( table [ tab : difflux ] ) . we address the plausibility that undetected point sources produce a significant fraction of the apparently diffuse flux in section [ sec : point ] . the two-@xmath32 plasma model provides as free parameters the temperature ( @xmath32 ) and emission measure ( @xmath83 ) of the plasma components . by assuming a depth for the emitting region , we can use these parameters to derive energies , densities , and time scales that can be used to understand the origin of the putative plasma . we list the properties for the putative soft and hard plasma components in table [ tab : prop ] . we assume a distance of 8 kpc to the galactic center throughout the rest of the paper @xcite . the luminosity of the diffuse emission provides a lower limit to the amount of energy required to sustain it . we have computed the luminosity from the de - absorbed 28 kev fluxes in table [ tab : twokt ] by applying a bolometric correction that we determined by integrating the total flux from the model in ; this is equivalent to using the cooling function in , for example , @xcite . the bolometric correction for the soft component is @xmath84 , and for the hard component is @xmath85 . most of the extra luminosity lies between 0.12 kev , and would be easily detectable with _ chandra _ were it not for the absorption toward the galactic center . these corrections are uncertain by about 50% , as they depend upon the assumed elemental abundances . with these caveats in mind , we find that the luminosities of the soft component range between @xmath86 erg s@xmath4 arcmin@xmath5 in the darkest regions , up to @xmath87 erg s@xmath4 arcmin@xmath5 in the northeast bright region . the luminosity of the hard component spans a smaller range , from @xmath88 erg s@xmath4 arcmin@xmath5 . the variations in the luminosity of the plasma are correlated with those in the emission measure , @xmath89 , and therefore either the depths of the emitting regions or the densities of the plasmas vary spatially over the image in figure [ fig : twocolor ] . the depths of the emitting regions are probably somewhere between 10 pc , which corresponds to the approximate diameter of the extraction regions we used , and 250 pc , which corresponds to the @xmath90 major axis of the elliptical region of 6.7 kev fe emission region observed with _ we will take the geometric mean of these extreme values , and assume that the depth is 50 pc . we report each quantity in table [ tab : prop ] with a scale factor @xmath91 to account for possible variations in the depth of up to a factor of 5 . this term can also account for the possibility that the filling factor of the plasma is less than unity . because we are only interested in order - of - magnitude estimates , we assume that the plasma is pure hydrogen . the density of the plasma is then related to the emission measure by @xmath92 . 
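the conversion used here is simply n = ( em / depth )^{1/2 } for a pure - hydrogen plasma , with the mass following as n m_h times the assumed volume ; a worked sketch with placeholder numbers chosen to give n of order 0.1 @xmath48 for a 50 pc depth , roughly the values discussed below .

```python
import numpy as np

PC_CM = 3.086e18      # cm per parsec
MSUN_G = 1.989e33     # g per solar mass
M_H = 1.673e-24       # g, hydrogen mass

def density_and_mass(em_cm6_pc, area_pc2, depth_pc):
    """n = sqrt(EM / depth) for EM in cm^-6 pc, then mass = n * m_H * volume."""
    n_cm3 = np.sqrt(em_cm6_pc / depth_pc)
    volume_cm3 = area_pc2 * depth_pc * PC_CM ** 3
    mass_msun = n_cm3 * M_H * volume_cm3 / MSUN_G
    return n_cm3, mass_msun

# illustrative: EM ~ 0.5 cm^-6 pc over 1 arcmin^2 (about 2.3 pc on a side
# at 8 kpc), with an assumed 50 pc depth
n, m = density_and_mass(em_cm6_pc=0.5, area_pc2=2.3 ** 2, depth_pc=50.0)
print(f"n ~ {n:.2f} cm^-3, mass ~ {m:.1f} Msun per arcmin^2")
```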
the mean density of the soft plasma is 0.1 @xmath48 in the darkest regions , and 0.5 @xmath48 in the bright region to the northeast . the density of the hard plasma is near 0.1 @xmath48 over most of the image , and increases by a factor of 2 in the northeast and within 4 of . for comparison , @xcite derive densities of 0.3–0.4 @xmath48 from their analysis of a larger , 1 square degree field observed with _ asca_. this is probably because the larger field is dominated by bright regions similar to the northeast region in our image , such as the radio arches region in the survey of @xcite . from the density and volume of the plasma , we can compute its total mass . the masses of both the soft and hard components in the dark regions are about @xmath93 @xmath94 arcmin@xmath5 . the density of soft plasma in the bright region is higher , so its mass is @xmath95 @xmath94 arcmin@xmath5 . the total mass of the plasma in the 17 by 17 field is about 500 @xmath96 @xmath94 . for comparison , in the 1 square degree _ asca _ field , @xcite derive a plasma mass of 2000–4000 @xmath94 , which is only slightly lower than our mass if one takes into account the difference in field - of - view . the energy density of the plasma is @xmath97 , and so is proportional to @xmath98 . in the dark regions , the soft component has an energy density of @xmath99 erg @xmath48 , or 200 ev @xmath48 . the hard component has an energy density of @xmath100 erg @xmath48 , or 1 kev @xmath48 . the total energy per arcmin@xmath101 is found by multiplying @xmath102 by the plasma volume , and so is proportional to @xmath96 . in the dark regions , the soft component has @xmath103 erg arcmin@xmath5 , while the hard component has @xmath104 erg arcmin@xmath5 . these values are nearly identical to those derived from the _ asca _ observations of @xcite . in the bright regions , the soft component has three times as much energy as it does in the dark regions , while the hard component contains only 50% more energy . the total thermal energy of the plasma in the image is over @xmath105 erg . dividing the total energy of the plasma by its luminosity yields the cooling time scale , @xmath106 . in the dark regions , the soft component cools in @xmath107 y. the bright region should cool more rapidly , in @xmath108 y , because @xmath109 . the hard component cools in @xmath110 y. however , it has been previously noted that an 8 kev plasma is too hot to be bound to the galactic plane ( e.g. , * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) , so it is important to consider the amount of energy that could be lost as the plasma expands . if we use the galactic potential from breitschwerdt , mckenzie , & völk ( 1991 ) , we find that the escape velocity from the galactic center ( @xmath111 pc ) is approximately 900 km s@xmath4 . this can be compared to the sound speed of the plasma @xmath112 . if we assume @xmath113 ( for a monatomic , adiabatic gas ) and @xmath114 ( electrons and protons ) , the sound speed for the 0.8 kev plasma is @xmath115 km s@xmath4 , and for the 8 kev plasma is @xmath116 km s@xmath4 . therefore , only the cooler plasma is gravitationally bound to the galaxy . however , in the absence of other confining forces , even the soft plasma could expand significantly ; the contribution of the @xmath117 @xmath94 within the central 10 pc of the galaxy ( launhardt , zylka , & mezger 2002 ) is negligible when considering whether the plasma can expand , as the corresponding escape velocity from the central parsecs is only 150 km s@xmath4 .
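the comparison being made is c_s = ( gamma k t / mu m_h )^{1/2 } against the roughly 900 km s@xmath4 escape velocity ; a quick numerical check with gamma = 5/3 and mu = 0.5 , as assumed in the text :

```python
import numpy as np

KEV_ERG = 1.602e-9     # erg per keV
M_H = 1.673e-24        # g
GAMMA, MU = 5.0 / 3.0, 0.5
V_ESC = 900.0          # km/s, escape velocity quoted for the Galactic center

def sound_speed_km_s(kt_kev):
    return np.sqrt(GAMMA * kt_kev * KEV_ERG / (MU * M_H)) / 1e5

for kt in (0.8, 8.0):
    cs = sound_speed_km_s(kt)
    print(f"kT = {kt} keV: c_s ~ {cs:.0f} km/s "
          f"({'bound' if cs < V_ESC else 'unbound'})")
```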
using the potential in @xcite , the 0.8 kev plasma could expand to a height of several hundred parsecs . upper limits to the energy required to sustain the plasma components can be obtained by assuming that they expand adiabatically and form a wind from the galactic center . although computing the energy loss rate rigorously would require a full solution of wind equations such as those in @xcite , we can make a rough estimate of it by assuming @xmath118 . for the soft component of the plasma , the energy loss rate is @xmath119 erg s@xmath4 arcmin@xmath5 , 300 times higher than the x - ray luminosity of the plasma . for the hot component of the plasma , this energy loss rate is @xmath120 erg s@xmath4 arcmin@xmath5 , or 4 orders of magnitude higher than its x - ray luminosity . over the entire image , the upper limit to the power required is @xmath10 erg s@xmath4 . the corresponding cooling time if the plasma is expanding is @xmath121 years for the @xmath1 kev plasma , and @xmath122 years for the @xmath7 kev plasma . these are much shorter than the radiative cooling time scales . even if external forces are on average sufficient to confine the diffuse plasma , any over - density in the plasma should be smoothed out in a short time by the differential rotation at the galactic center . at 10 pc , and for an enclosed mass of @xmath123 @xmath94 , the orbital time scale is @xmath124 years . therefore , differential rotation would smooth out any variations in the plasma properties with a radial extent @xmath125 on a time scale of @xmath126 . for parsec - scale features , @xmath127 years , which is comparable to the cooling time of the soft plasma , but significantly shorter than the cooling time of the hard plasma . the pronounced variations in the surface brightness of the soft , @xmath1 kev component of the diffuse emission ( figure [ fig : softimg ] ) have important implications for the spatial distribution and age of the putative plasma that produces it . the fact that the spatial variations are much more pronounced in the soft emission than in the hard ( table [ tab : prop ] ) suggests that the soft plasma occupies a much smaller volume . indeed , if we assume that the soft and hard plasma are in pressure equilibrium , then the filling factor of the soft plasma would have to be roughly 1–10% of that of the hard plasma . at the same time , any over - density in the soft plasma is unlikely to survive very long , because the differential rotation of the galactic center should shear apart any coherent features . for example , the bright diffuse emission in the northeast has an angular extent that corresponds to a size of @xmath84 pc , and so differential rotation should dissipate it within the orbital time scale of @xmath128 y. the youth and small filling factor of the @xmath1 kev plasma are easily understandable if it is produced by supernova remnants . this is the common explanation for the origin of similar soft diffuse emission that is observed throughout the galactic disk , because supernova remnants are often observed to have spectra consistent with @xmath129 kev plasma ( e.g. , * ? ? ? * ) . supernovae are also the largest known source of energy for heating the ism @xcite , and can easily provide enough energy to heat the diffuse soft plasma . if we assume that the dominant cooling mechanism for the soft plasma is radiative , then the energy input required to sustain it is only @xmath130 erg s@xmath4 for the inner 20 pc of the galaxy .
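the supernova - rate argument made next is simple bookkeeping : if a fraction epsilon of the 10^51 erg of kinetic energy per supernova heats the soft plasma , balancing a radiative loss rate l requires one supernova every epsilon e_sn / l . the luminosity below is an assumed round number , not our measured value .

```python
E_SN = 1e51       # erg of kinetic energy per supernova
EPSILON = 0.01    # fraction assumed to heat the soft plasma
L_SOFT = 1e37     # erg/s; placeholder for the soft-plasma radiative losses
YEAR_S = 3.156e7  # seconds per year

interval_yr = EPSILON * E_SN / L_SOFT / YEAR_S
print(f"one supernova every ~{interval_yr:.0e} yr balances the assumed losses")
```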
if , by analogy with the conclusions of @xcite for the galactic disk , we assume that @xmath129% of the @xmath131 erg of kinetic energy per supernova heats the soft plasma in our image , then it could be sustained by supernovae occurring at a rate of one every @xmath132 y. this is not unreasonable , because the total galactic supernova rate is thought to be on the order of 1 per 100 y , and the inner 20 pc of the galaxy contains approximately 0.1% of the galactic mass @xcite , so the expected rate in this field is also about one supernova per @xmath132 y. moreover , sgr a east @xcite and the radio wisp e @xcite are already thought to be remnants of recent supernovae , so it seems likely that the inner 20 pc of the galaxy has experienced a supernova rate at least this high . it is also possible that the winds from young , massive wolf - rayet and early o stars could contribute to the @xmath133 kev plasma . a typical wr star can lose mass at a rate of @xmath134 y@xmath4 with a velocity of @xmath135 km s@xmath4 ( e.g. , * ? ? ? * ; * ? ? ? * ) . the kinetic energy of such a wind is @xmath136 erg s@xmath4 . x - rays are produced by internal shocks in the winds of individual stars , but diffuse x - rays are only produced by the large shocks that occur when winds from clusters of these stars encounter the interstellar medium . in observations of massive star clusters , about 10% of the wind kinetic energy is converted into x - rays @xcite . therefore , a single wr and/or early o star could in principle produce the soft component of the diffuse emission . however , extended x - ray emission from massive stars is usually only associated with young stellar clusters that contain several massive stars within a core radius of @xmath137 pc , which typically produce x - rays in a region only @xmath138 pc in radius @xcite . therefore , the x - ray emission from the winds of massive stars is likely to only be important within an arcminute of any as - yet - undiscovered star clusters . the detection of strong @xmath1 kev emission to the northeast of the galactic center is also consistent with our assumption that the soft plasma originates from supernovae and winds from massive stars . the northeast region possesses several interesting properties that may be related to the enhanced diffuse x - ray emission : it contains large amounts of molecular gas ( * ? ? ? * tsuboi , handa , & ukita 1999 ; mezger , duschl , & zylka 1996 ) , it is located between young , massive star clusters at the galactic center @xcite and the radio arches region @xcite , and it is the site of the strongest low - ionization fe emission @xcite . the mere presence of molecular clouds is clearly not sufficient for forming the soft diffuse and 6.4 kev fe emission , as molecular clouds are also evident in the southeast , without any corresponding enhancement in the iron emission @xcite . given the arguments for the origin of the soft plasma above , it is more likely that both the soft diffuse emission and the neutral fe emission are associated with recent star formation . for instance , a type ii supernova in the northeast could produce both the bright , soft diffuse emission and the neutral fe emission ( e.g. , * ? ? ? * ; * ? ? ? * ) . outside of the galactic center , the most similar region may well be the carina nebula , which exhibits @xmath139 erg s@xmath4 of x - rays from the outflow around @xmath140 car , the wr and o stars in the cluster trumpler 14 , and diffuse emission that may have resulted from recent supernovae @xcite .
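for reference , the wind kinetic power invoked above is just l_w = ( 1/2 ) mdot v^2 ; a quick check with generic wr - like numbers ( the mass - loss rate and wind speed below are typical literature values , not measurements from this work ) , together with the 10% x - ray conversion efficiency cited for cluster winds :

```python
MSUN_G = 1.989e33   # g per solar mass
YEAR_S = 3.156e7    # seconds per year

mdot = 1e-5 * MSUN_G / YEAR_S    # g/s, assumed 1e-5 Msun/yr mass-loss rate
v_wind = 1.0e8                   # cm/s, assumed 1000 km/s terminal velocity

l_kin = 0.5 * mdot * v_wind ** 2
print(f"wind kinetic power ~ {l_kin:.1e} erg/s; "
      f"~10% in X-rays -> {0.1 * l_kin:.1e} erg/s")
```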
the hard , @xmath7 kev emission is distributed much more uniformly than the soft , but its intensity is still correlated with that of the soft emission . the correlation between the hard and soft emission suggests that they are produced by related physical processes . the relative uniformity of the hard emission may result from its higher sound speed , which would cause over - dense regions of @xmath7 kev plasma to expand on a time scale of @xmath141 y. however , the @xmath7 kev plasma is somewhat hotter than is usually observed from either supernova remnants or clusters of wr and early o stars ( see , e.g. , * ? ? ? * ; * ? ? ? * ) . moreover , the sound speed of the hot plasma is larger than the escape velocity from the galactic center , and therefore the energy required to sustain the expanding @xmath7 kev plasma within the image in figure [ fig : twocolor ] is @xmath10 erg s@xmath4 . this energy is four orders of magnitude larger than that required to sustain the @xmath1 kev plasma ( which probably does not cool by expanding ) , and is equivalent to the entire kinetic energy of one supernova occurring every 3000 y. this makes it difficult to understand the origin of this putative hot plasma , because supernovae are assumed to be the largest source of heat for the ism . moreover , using the values in table [ tab : prop ] , the plasma flowing out from within the inner @xmath142 pc of the galaxy would carry away a mass of roughly @xmath143 @xmath94 y@xmath4 . this mass loss rate is also equivalent to that from one supernova occurring every 3000 y. finally , this @xmath7 kev diffuse emission is observed throughout the galactic disk , so the mechanism producing it must be widespread . in particular , the hard diffuse emission is probably not produced by the super - massive black hole at the galactic center . several mechanisms have been proposed to explain the origin of the @xmath7 kev component of the galactic diffuse emission . one possibility is that the hard plasma is heated by magnetic reconnection that is driven by the turbulence that supernovae generate in the ism @xcite . magnetic reconnection could heat the plasma to @xmath144 , or , for @xmath145 @xmath48 and @xmath146 mg ( table [ tab : prop ] ) , @xmath147 kev . fields of comparable strength are inferred to exist near the galactic center ( e.g. , * ? ? ? * ) . moreover , @xcite also pointed out that with the right geometry , the same fields would also be strong enough to confine the @xmath7 kev plasma , thus possibly reducing the amount of energy required to sustain it . unfortunately , the magnetic fields toward the galactic center appear unlikely to confine the hot plasma . individual magnetic flux tubes are observed as synchrotron - emitting radio filaments oriented perpendicular to the galactic plane @xcite . these filaments seem to be interacting with molecular clouds in the region , and are therefore thought to have pressures comparable to those of the turbulent molecular clouds , so that @xmath148 mg ( compare * ? ? ? * ; * ? ? ? * ) . however , it is not clear whether the filaments represent only a small fraction of vertical fields that pervade the galactic center ( serabyn & morris 1996 ; chandran , cowley , & morris 2000 ) , or whether they represent purely local magnetic features @xcite . there is also a toroidal component observed through polarization measurements that dominates within molecular clouds @xcite .
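stepping back to the reconnection estimate above : it amounts to asking when the magnetic energy density b^2 / 8 pi matches the thermal energy density of the plasma . the sketch below evaluates that equipartition temperature for an assumed field strength and density ( both placeholders , chosen only to be of the order of the values discussed in the text ) .

```python
import numpy as np

K_B = 1.381e-16    # erg/K
KEV_K = 1.160e7    # kelvin per keV

def equipartition_kt_kev(b_gauss, n_cm3):
    """kT at which (3/2) n k T equals the magnetic energy density B^2/8pi."""
    u_b = b_gauss ** 2 / (8.0 * np.pi)          # erg/cm^3
    return u_b / (1.5 * n_cm3 * K_B) / KEV_K

# assumed values: a 0.2 mG field and n = 0.1 cm^-3
print(f"equipartition kT ~ {equipartition_kt_kev(2e-4, 0.1):.1f} keV")
```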
however , the zeeman - splitting measurements of @xcite placed upper limits of @xmath149 mg on the strength of any arcminute - scale ordered fields along the lines of sight toward most of the molecular clouds near the galactic center ; the only zeeman measurements that reveal fields @xmath150 mg are toward the circum - nuclear disk @xcite . taken together , these observations suggest that the magnetic fields are predominantly vertical at the galactic center , with a toroidal component produced by orbital shear in the molecular clouds @xcite . therefore , any hot plasma at the galactic center would be able to escape vertically away from the plane , thus forming a wind or fountain of plasma ( e.g. , * ? ? ? * ) . another class of hypotheses assumes that the hard component of the galactic diffuse emission is produced by non - thermal processes associated with supernova shocks . for instance , low - energy ( @xmath151 mev ) cosmic rays could interact with the neutral ism to produce continuum emission through bremsstrahlung radiation and line emission through charge - exchange interactions ( e.g. , * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) . the cosmic - ray electrons could be accelerated by supernovae or young pulsars ( e.g. , * ? ? ? * ) . originally , this model was attractive because it could also explain the non - thermal x - ray emission observed above 10 kev . however , observations with _ integral _ have resolved @xmath152% of the emission above 20 kev into discrete point sources @xcite . moreover , neither non - thermal electrons nor protons are efficient at producing bremsstrahlung radiation , so the energy required to produce the observed diffuse emission is nearly as large as that required to replenish a continually - expanding thermal plasma @xcite . finally , the model presented by @xcite does not predict the presence of the fe h-@xmath0 line at 6.9 kev , which is clearly evident in our _ chandra _ spectra ( figure [ fig : diffraw ] and table [ tab : iron ] ) . alternatively , the hard diffuse emission could originate from a quasi - thermal plasma that is generated by supernova shocks that propagate through the cooler 0.8 kev plasma @xcite . this process is a factor of @xmath153 more efficient at generating x - rays than is an expanding thermal plasma . therefore , an energy input of @xmath154 erg s@xmath4 , or the kinetic energy of one supernova occurring every @xmath155 y , is required to produce the observed hard diffuse emission . however , neither of the two model spectra presented by @xcite is entirely consistent with the data in table [ tab : iron ] . their models predict that the fe k-@xmath0 line should be of comparable strength to the he-@xmath0 line at 6.7 kev , whereas in all of the dark regions the fe k-@xmath0 line is a factor of 2–3 weaker , and in the northeast bright region it is a factor of 1.8 stronger . their models also predict that the fe he-@xmath0 lines should be a factor of 1.7–2.5 stronger than the fe h-@xmath0 lines ( depending upon whether the background plasma has a temperature of 0.6 or 0.3 kev ) , whereas we find values that are generally higher , albeit only at the 1–2@xmath19 level in each case . finally , they predict that if the shocks form in a cool ( 0.3 kev ) background plasma , then the low - ionization fe line should have a centroid near 6.5 kev . the observed centroids are consistent with 6.4 kev , and values as high as 6.5 kev are excluded at the @xmath156 level in the southwest and northeast .
therefore , the non - thermal models that have been published to date are challenged by the remarkable similarity between the observed spectrum and that expected from a @xmath7 kev plasma in thermal equilibrium . however , further exploration of the parameter space of the non - thermal models is needed to confirm or refute them definitively . the spectrum of the @xmath7 kev plasma is also very similar to that of the faintest detected point sources . therefore , in this section we use the @xmath157 distribution from equation 5 in @xcite to compute the number of point sources that would be required to produce the hard diffuse emission in the darkest region of the field , which lies in the northwest . the @xmath157 distribution allows us to compute the surface density of sources down to a limiting flux @xmath158 . adapting the results of @xcite , we find : @xmath159 we have made several modifications to this equation from the original version . first , we have included a factor @xmath160 for the offset from in arcminutes , which accounts for the decrease in surface density of the point sources within 9 from the galactic center . second , we have converted the photon fluxes in @xcite to energy fluxes by assuming 1 photon @xmath3 s@xmath4 @xmath161 erg @xmath3 s@xmath4 ( 2–8 kev ) , which is appropriate for a @xmath162 power - law spectrum absorbed by a @xmath163 @xmath3 column of gas and dust . finally , we have normalized the distribution in equation [ eq : mod ] to the surface density at @xmath164 , whereas equation 5 from @xcite was normalized to the density at 4.5 . we can then obtain the total flux from point sources ( @xmath165 ) in any given region by integrating the power laws in equation [ eq : mod ] over flux and then integrating the offset factor , \int \theta^{-1 } da , over the area of the region : @xmath166 ( equation [ eq : flux ] ) . here , @xmath0 is the power - law slope , @xmath167 is the scale factor for the flux , and @xmath168 is the normalization of the power law , all from equation [ eq : mod ] . @xmath169 and @xmath170 are the bounds on the point - source flux over which the total flux @xmath165 is computed . for the bright end of the distribution in equation [ eq : mod ] , we assume @xmath170 is @xmath171 erg @xmath3 s@xmath4 , which is the flux of the brightest point source that we observe in the image ; the values of @xmath165 are not very sensitive to this assumed upper bound . the number of point sources in a region is then found by inserting the assumed value of @xmath165 and solving for @xmath169 in equation [ eq : flux ] , and then inserting @xmath169 into equation [ eq : mod ] . based on the observed hard diffuse and point source flux in the northwest ( tables [ tab : regions ] and [ tab : difflux ] ) , we take @xmath172 erg @xmath3 s@xmath4 , and find @xmath173 erg @xmath3 s@xmath4 . using this limiting flux in the @xmath174 distribution , we would predict a surface density of 800 undetected sources arcmin@xmath5 at an offset of 4.5 from , or a total of @xmath175 sources within 20 pc ( 9 ) of . there is no known class of object that could account for such a large number of hard x - ray sources at the galactic center . in fact , the total stellar mass within the inner 20 pc of the galaxy is only @xmath176 @xmath94 @xcite , so in order to account for the diffuse emission , @xmath177% of all stellar sources would have to be hard x - ray sources with @xmath178 erg s@xmath4 .
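the sketch below illustrates the bookkeeping of equations [ eq : mod ] and [ eq : flux ] for a single power - law segment : integrate s dn/ds down to a trial limiting flux and lower that limit until the summed flux reaches the target surface brightness , then count the sources . the slope , normalization , brightest - source flux , and target are placeholders , not the values from @xcite .

```python
import numpy as np

S0 = 1e-14       # flux pivot (erg/cm^2/s); slope and norm are placeholders
SLOPE = -2.5     # assumed faint-end slope of dN/dS
NORM = 2.0       # assumed sources arcmin^-2 per unit (S/S0) at S0

def dn_ds(s):
    return (NORM / S0) * (s / S0) ** SLOPE

def trapezoid(y, x):
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def flux_and_number(s_min, s_max):
    s = np.logspace(np.log10(s_min), np.log10(s_max), 4000)
    return trapezoid(s * dn_ds(s), s), trapezoid(dn_ds(s), s)

target = 5e-13    # "diffuse" surface brightness to explain (placeholder)
s_max = 2e-13     # brightest detected point source (placeholder)
for s_min in np.logspace(-14, -18, 400):     # walk the limiting flux downward
    flux, number = flux_and_number(s_min, s_max)
    if flux >= target:
        print(f"S_min ~ {s_min:.1e} erg/cm^2/s -> ~{number:.0f} sources/arcmin^2")
        break
```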
the only plausible candidates for such a large population of hard x - ray sources are cataclysmic variables ( cvs ) and young stellar objects ( ysos ) . we can estimate the number of cvs within 20 pc of the galactic center by scaling their local density by the relative stellar mass density . based upon the mass model of @xcite , the stellar density in the inner 20 pc is 1000 @xmath94 pc@xmath179 , compared to a local density of 0.1 @xmath94 pc@xmath179 . the local space density of cvs is at most @xmath180 pc@xmath179 @xcite , so the space density at the galactic center should be less than @xmath181 pc@xmath179 . therefore , we expect only @xmath141 cvs there , which is an order of magnitude smaller than the number of point sources needed to explain the hard component of the diffuse emission . the number of ysos would depend on the recent formation rate for low - mass stars . if stars have formed steadily over the @xmath182 gyr lifetime of the galaxy , then the @xmath183 @xmath94 of stars within the inner 20 pc would have formed at a rate of @xmath184 y@xmath4 , which would imply that there should be @xmath141 stars younger than 1 myr old . this number is comparable to the limit obtained from the fact that only @xmath185 hard x - ray sources in our field exhibit flares with luminosities and durations that are similar to one seen from ysos @xcite . about 0.1% of ysos exhibit flares that would have been detectable from the galactic center , which suggests that @xmath186 ysos lie near the galactic center @xcite , which is still far too few to produce the hard diffuse emission . other x - ray sources that are similarly abundant are unlikely to contribute to the hard component of the diffuse emission because their luminosities are too low or their spectra are too soft . for instance , although rs cvns are at least as numerous as cvs @xcite , they have thermal spectra with @xmath187 kev @xcite . likewise , while isolated neutron stars accreting from the interstellar medium could make up @xmath129% of the mass of the galactic center @xcite , they must be less luminous than @xmath188 erg s@xmath4 , or else many thousands of similar systems would have been detected in the local galaxy in the _ rosat _ all - sky survey @xcite . in contrast , less than a dozen candidate isolated neutron stars were identified with _ rosat_. therefore , if the diffuse emission is produced by undetected point sources , they would have to belong to a population of sources that has not yet been identified . using a 600 ks exposure with the acis - i aboard _ chandra _ , we have studied the spectrum of diffuse x - ray emission from several regions within a projected distance of 20 pc of . the spectrum of the diffuse emission exhibits he - like and h - like lines from si , s , ar , ca , and fe , as well as a prominent low - ionization fe line . if the spectrum is modeled as originating from diffuse plasma , two components with temperatures of 0.8 kev and 8 kev are required , along with line emission from low - ionization fe at 6.4 kev . the energies and flux ratios of the lines from both temperature components are consistent with emission from plasmas in collisional ionization equilibrium . in table [ tab : summary ] , we provide a summary of the origins of the x - ray emission between 28 kev from the inner 20 pc of the galaxy . by far the largest contribution to the luminosity of the galactic center is from diffuse emission . 
in comparison , detected point sources contribute only 10% to the luminosity of the galactic center , while discrete filamentary features contribute less than 5% of the total luminosity of the inner 20 pc of the galaxy . these results are potentially useful for understanding the origin of diffuse x - ray emission from distant galaxies with quiescent central black holes . however , it is important to note that these observations of the galactic center are strongly affected by interstellar absorption with a column density of at least @xmath189 @xmath3 . therefore , the cool emission with @xmath190 kev that produces most of the 0.58.0 kev flux from distant galaxies is obscured at the galactic center . at the same time , _ chandra _ observations of other galaxies are not sensitive to the @xmath7 kev plasma that dominates the flux we observe from the galactic center , because this hard emission has a much lower surface brightness than the @xmath191 kev emission where the _ chandra _ effective area is largest ( 0.53 kev ) , and it is difficult to resolve from bright x - ray binaries . thus , these observations of the galactic center provide a unique view of the hottest components of the ism of galaxies . the properties of the soft , @xmath1 kev plasma component of the diffuse emission vary significantly across the image , both in temperature between 0.7 and 0.9 kev , and in surface brightness between @xmath192 and @xmath193 erg @xmath3 s@xmath4 arcmin@xmath5 . the variation in the properties of the soft plasma suggest that it is relatively young , because differential rotation at the galactic center should destroy any coherent features within @xmath194 y. supernovae probably supply most of this energy , although the winds from wr and early o stars could also contribute . within the inner 20 pc of the galaxy , the @xmath130 erg s@xmath4 lost by the plasma through radiative cooling could be replaced by 1% of the kinetic energy of one supernova occurring every @xmath195 y. the inner 20 pc of our galaxy contains about 0.1% of its total mass , so assuming that one supernova occurs every 100 y in the galaxy , this rate is roughly consistent with that expected near the galactic center . the hard component of the diffuse emission is more spatially uniform than the soft , but the intensities of the two components are still correlated . although this might suggest a common origin for the two plasma components , supernovae and massive stars are not usually observed to produce plasma with @xmath18 kev . this hard emission is distributed throughout the galactic plane , so it is not likely to be associated with an outburst from . instead , the hard emission could result from a @xmath196 kev plasma that is heated indirectly by massive stars and supernova remnants , which , for example , could drive reconnection in the magnetic fields near the galactic center @xcite . however , a 8 kev thermal plasma would freely expand away from the galactic center , and would require @xmath197 erg s@xmath4 to sustain . this is equivalent to the entire kinetic energy of one supernova every 3000 years , which is a much larger rate than usually assumed for supernova . supernova are the most energetic source of heat for the ism , so if the hard diffuse emission is produced by a @xmath7 kev plasma , it would imply that there is a significant shortcoming in our understanding of heating mechanisms for the ism . 
alternative explanations for the hard diffuse emission that were intended to lessen the energy required are equally unsatisfying . the suggestion that the hard diffuse emission originates from undetected stellar x - ray sources is unlikely because there is no known class of source that is numerous enough , bright enough , and hot enough to produce the observed flux of @xmath7 kev diffuse emission . likewise , if the hard diffuse emission originates from non - thermal processes , such as the shocks that accelerate cosmic rays , the energies and ratios of the intensities of the line emission should deviate measurably from the values expected for a plasma in thermal equilibrium ( e.g. , * ? ? ? * ) . these deviations are not observed in our _ chandra _ observations , which presents a challenge to the current non - thermal models . further observations should clarify the nature of the diffuse x - ray emission from the galactic center . x - ray missions with higher spectral resolution , such as astro - e2 , will be able to better constrain the properties of the putative diffuse plasma by resolving the individual transitions of he - like and h - like fe , and possibly measuring the velocity dispersions of the fe ions themselves . alternatively , a future hard x - ray survey , such as exist , could identify a heretofore unknown population of numerous , faint , hard x - ray sources that may be responsible for producing the @xmath7 kev diffuse emission .

baganoff , f. k. 2003 , , 591 , 891
bamba , a. , ueno , m. , koyama , k. , & yamauchi , s. 2003 , , 589 , 253
breitschwerdt , d. , mckenzie , j. f. , & völk , h. j. 1991 , , 245 , 79
borkowski , k. j. , sarazin , c. l. , & blondin , j. m. 1994 , , 429 , 710
bykov , a. m. 2002 , , 390 , 327
chandran , b. d. g. , cowley , s. c. , & morris , m. 2000 , , 528 , 723
chevalier , r. a. 1992 , , 397 , l39
chevalier , r. a. & clegg , a. w. 1985 , , 317 , 44
dahmen , g. , hüttemeister , s. , wilson , t. l. , & mauersberger , r. 1998 , , 331 , 959
dogiel , v. a. , inoue , h. , masai , k. , schönfelder , v. , & strong , a. w. 2002 , , 581 , 1061
draine , b. t. 2003 , , 598 , 1026
ebisawa , k. , maeda , y. , kaneda , h. , & yamauchi , s. 2001 , science , 293 , 1633
favata , f. , micela , g. , & sciortino , s. 1995 , , 298 , 482
feigelson , e. d. 2004 , in stars as suns : activity , evolution , and planets , a. benz & a. dupree ( eds . ) , iau symposium 219 , in press
figer , d. f. , kim , s. s. , morris , m. , serabyn , e. , rich , r. m. , & mclean , i. s. 1999 , , 525 , 750
grosso , n. , montmerle , t. , feigelson , e. d. , & forbes , t. g. 2004 , submitted to
ho , p. t. p. , jackson , j. m. , barret , a. h. , & armstrong , j. t. 1985 , , 288 , 575
kaneda , h. , makishima , k. , yamauchi , s. , koyama , k. , matsuzaki , k. , & yamasaki , n. y. 1997 , , 491 , 638
killeen , n. e. b. , lo , k. y. , & crutcher , r. 1992 , , 385 , 585
krabbe , a. 1995 , , 447 , l95
koyama , k. , ikeuchi , s. , & tomisaka , k. 1986a , , 38 , 503
koyama , k. , maeda , y. , sonobe , t. , takeshima , t. , tanaka , y. , & yamauchi , s. 1996 , , 48 , 249
koyama , k. , makishima , k. , tanaka , y. , & tsunemi , h. 1986b , , 38 , 121
larosa , t. n. , kassim , n. e. , lazio , t. j. w. , & hyman , s. d. 2000a , , 119 , 207
larosa , t. n. , lazio , t. j. w. , & kassim , n. e. 2001 , , 563 , 163
launhardt , r. , zylka , r. , & mezger , p. g. 2002 , , 384 , 112
lebrun , f. 2004 , , 428 , 293
leitherer , c. , robert , c. , & drissen , l. 1992 , , 401 , 596
lu , f. j. , wang , q. d. , & lang , c. c. 2003 , , 126 , 319
maeda , y. 2002 , , 570 , 671
masai , k. 1984 , ap&ss , 98 , 367
masai , k. , dogiel , v. a. , inoue , h. , schönfelder , v. , & strong , a. w. 2002 , , 581 , 1071
markevitch , m. 2003 , , 583 , 70
mcnamara , d. h. , madsen , j. b. , barnes , j. , & ericksen , b. f. 2000 , , 112 , 202
mewe , r. , lemen , j. r. , & van den oord , g. h. j. 1986 , , 65 , 511
mezger , p. g. , duschl , w. j. , & zylka , r. 1996 , aarev , 7 , 289
morris , m. , baganoff , f. , muno , m. , howard , c. , maeda , y. , feigelson , e. , bautz , m. , brandt , w. n. , chartas , g. , garmire , g. , & townsley , l. 2003 , astronomische nachrichten , 324 , s1 , 167
muno , m. p. , baganoff , f. k. , bautz , m. w. , brandt , w. n. , broos , p. s. , feigelson , e. d. , garmire , g. p. , morris , m. , ricker , g. r. , & townsley , l. k. 2003a , , 589 , 225
muno , m. p. , baganoff , f. k. , & arabdjis , j. s. 2003b , , 598 , 474
muno , m. p. , baganoff , f. k. , bautz , m. w. , brandt , w. n. , feigelson , e. d. , garmire , g. p. , morris , m. , & ricker , g. r. , in preparation for
murakami , h. , koyama , k. , sakano , m. , & tsujimoto , m. 2000 , , 534 , 283
novak , g. , chuss , d. t. , renebarger , t. , griffin , g. s. , newcomb , m. g. , peterson , j. b. , loewenstein , r. f. , pernic , d. , & dotson , j. l. 2003 , , 583 , l83
park , s. , baganoff , f. k. , morris , m. , maeda , y. , muno , m. p. , howard , c. , bautz , m. w. , & garmire , g. p. 2004 , , 603 , 548
paumard , t. , maillard , j. p. , morris , m. , & rigaut , f. 2001 , , 366 , 466
perna , r. , narayan , r. , rybicki , g. , stella , l. , & treves , a. 2003 , , 594 , 936
raymond , j. c. & smith , b. w. 1977 , , 35 , 419
raymond , j. c. , cox , d. p. , & smith , b. w. 1976 , , 204 , 290
rosati , p. 2002 , , 566 , 667
seward , f. d. & chlebowski , t. 1982 , , 256 , 530
sakano , m. , warwick , r. s. , decourchelle , a. , & predehl , p. 2003 , , 340 , 747
schwope , a. d. , brunner , h. , buckley , d. , greiner , j. , heyden , k. v. d. , neizvestny , s. , potter , s. , & schwarz , r. 2002 , , 396 , 895
serabyn , e. , & morris , m. 1996 , ara&a , 34 , 645
sidoli , l. & mereghetti , s. 1999 , , 349 , l49
singh , k. p. , drake , s. a. , & white , n. e. 1996 , , 111 , 2415
skibo , j. g. , johnson , w. n. , kurfess , j. d. , kinzer , r. l. , jung , g. , grove , j. e. , purcell , w. r. , ulmer , m. p. , gehrels , n. , & tueller , j. 1997 , , 483 , l95
schlickeiser , r. 2002 , `` cosmic ray astrophysics '' , springer - verlag , berlin
stevens , i. r. , & hartwell , j. m. 2003 , , 339 , 280
sunyaev , r. , markevitch , m. , & pavlinsky , m. 1993 , , 407 , 606
tan , j. d. & draine , b. t. 2003 , astro - ph/0310442
tanaka , y. , miyaji , t. , & hasinger , g. 1999 , astron . nachr . , 320 , 181
tanaka , y. , koyama , k. , maeda , y. , & sonobe , t. 2000 , , 52 , l25
tanaka , y. 2002 , , 382 , 1052
tanuma , s. , yokoyama , t. , kudoh , t. , matsumoto , r. , shibata , k. , & makishima , k. 1999 , , 51 , 161
townsley , l. k. 2002a , nim - a , 486 , 716
townsley , l. k. 2002b , nim - a , 486 , 751
townsley , l. k. , feigelson , e. d. , montmerle , t. , broos , p. s. , chu , y.-h. , & garmire , g. p. 2003 , , 593 , 874
tsuboi , m. , handa , t. , & ukita , n. 1999 , , 120 , 1
uchida , k. i. & güsten 1995 , , 298 , 473
valinia , a. & marshall , f. e. 1998 , , 505 , 134
valinia , a. , tatischeff , v. , arnaud , k. , ebisawa , k. , & ramaty , r. 2000 , , 543 , 733
vollmer , b. , zylka , r. , & duschl , w. j. 2003 , submitted , astro - ph/0306200
wang , q. d.
, gotthelf , e. v. , & lang , c. c. 2002 , , 415 , 148 , b. 1995 , _ cataclysmic variable stars _ , cambridge university press weisskopf , m. c. , brinkman , b. , canizares , c. , garmire , g. , murray , s. , & van speybroeck , l. p. 2002 , , 114 , 1 worrall , d. m. , marshall , f. e. , boldt , e. a. , & swank , j. h. 1982 , , 255 , 111 yamauchi , s. , kaneda , h. , koyama , k. , makishima , k. , matsuzaki , k. , sonobe , t. , tanaka , y. , & yamasaki , n. 1996 , , 48 , l15 yamauchi , s. , kawada , m. , koyama , k. , kunieda , h. , tawara , y. , & hatsukade , i. 1990 , , 365 , 532 yamasaki , s. 1997 , , 481 , 821 yusef - zadeh , f. 2003 , astro - ph/0308008 yusef - zadeh , f. & morris , m. 1987 , , 322 , 721 yusef - zadeh , f. , law , c. , & wardle , m. 2002 , , 568 , l121 zane , s. , turolla , r. , & treves , a. 1996 , , 471 , 248 zylka , r. , mezger , p. g. , & wink , j. e. 1990 , , 234 , 133 lccccc 1999 sep 21 02:43:00 & 0242 & 40,872 & 266.41382 & -29.0130 & 268 + 2000 oct 26 18:15:11 & 1561 & 35,705 & 266.41344 & -29.0128 & 265 + 2001 jul 14 01:51:10 & 1561 & 13,504 & 266.41344 & -29.0128 & 265 + 2002 feb 19 14:27:32 & 2951 & 12,370 & 266.41867 & -29.0033 & 91 + 2002 mar 23 12:25:04 & 2952 & 11,859 & 266.41897 & -29.0034 & 88 + 2002 apr 19 10:39:01 & 2953 & 11,632 & 266.41923 & -29.0034 & 85 + 2002 may 07 09:25:07 & 2954 & 12,455 & 266.41938 & -29.0037 & 82 + 2002 may 22 22:59:15 & 2943 & 34,651 & 266.41991 & -29.0041 & 76 + 2002 may 24 11:50:13 & 3663 & 37,959 & 266.41993 & -29.0041 & 76 + 2002 may 25 15:16:03 & 3392 & 166,690 & 266.41992 & -29.0041 & 76 + 2002 may 28 05:34:44 & 3393 & 158,026 & 266.41992 & -29.0041 & 76 + 2002 jun 03 01:24:37 & 3665 & 89,928 & 266.41992 & -29.0041 & 76 llccc southeast & 34.7 & 7.5 & @xmath198 & @xmath199 + southwest & 14.0 & 7.7 & @xmath200 & @xmath201 + northwest & 23.9 & 7.8 & @xmath202 & @xmath203 + east & 12.0 & 7.8 & @xmath204 & @xmath205 + close & 9.0 & 3.9 & @xmath206 & @xmath207 + [ 5pt ] northeast & 46.0 & 7.6 & @xmath208 & @xmath209 lcccccc @xmath210 ( @xmath211 @xmath3 ) & 1.4@xmath212 & 1.0@xmath213 & 1.7@xmath214 & 2.2@xmath213 & 1.6@xmath213 & 7.2@xmath212 + @xmath215 ( @xmath211 @xmath3 ) & 4.8@xmath216 & 4.4@xmath217 & 4.9@xmath218 & 4.7@xmath219 & 4.8@xmath220 & 0.1@xmath221 + @xmath222 ( fixed ) & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 & 0.94 + @xmath223 ( kev ) & 0.82@xmath224 & 0.71@xmath225 & 0.70@xmath224 & 0.92@xmath226 & 0.80@xmath227 & 0.88@xmath228 + @xmath229 ( @xmath230 @xmath81 pc ) & 2.1@xmath231 & 1.3@xmath232 & 2.2@xmath218 & 5.9@xmath233 & 3.0@xmath234 & 22.0@xmath235 + @xmath236 ( @xmath237 erg @xmath3 s@xmath4 arcmin@xmath5 ) & 0.3 & 0.2 & 0.2 & 0.6 & 0.4 & 1.7 + @xmath238 ( @xmath237 erg @xmath3 s@xmath4 arcmin@xmath5 ) & 0.8 & 0.4 & 0.7 & 2.1 & 1.1 & 5.6 + @xmath239 ( @xmath211 @xmath3 ) & 4.8@xmath240 & 4.0@xmath241 & 6.1@xmath242 & 4.7@xmath243 & 4.7@xmath244 & 1.16@xmath245 + @xmath246 ( @xmath211 @xmath3 ) & 83@xmath247 & 29@xmath248 & 72@xmath249 & 70@xmath250 & 28@xmath251 & 14.7@xmath252 + @xmath253 & 0.50@xmath254 & 0.52@xmath255 & 0.60@xmath256 & 0.58@xmath257 & 0.48@xmath255 & 0.95@xmath258 + @xmath259 ( kev ) & 7.1@xmath260 & 8.1@xmath261 & 8.8@xmath262 & 5.9@xmath263 & 8.6@xmath264 & 7.9@xmath265 + @xmath266 ( @xmath230 @xmath81 pc ) & 1.8@xmath267 & 1.5@xmath268 & 2.2@xmath269 & 2.5@xmath270 & 2.2@xmath268 & 2.7@xmath271 + @xmath272 ( @xmath237 erg @xmath3 s@xmath4 arcmin@xmath5 ) & 1.5 & 1.6 & 1.5 & 1.6 & 2.5 & 2.4 + @xmath273 ( @xmath237 erg @xmath3 s@xmath4 arcmin@xmath5 ) & 4.0 & 3.5 & 5.1 & 5.9 & 5.3 
& 5.6 + @xmath274 & 1.0@xmath275 & 1.5@xmath241 & 1.5@xmath240 & 0.7@xmath276 & 1.3@xmath277 & 0.80@xmath245 + @xmath278 & 2.5@xmath279 & 2.7@xmath280 & 2.2@xmath241 & 1.6@xmath277 & 2.3@xmath281 & 1.03@xmath282 + @xmath283 & 3.6@xmath284 & 4.1@xmath285 & 2.8@xmath280 & 1.8@xmath242 & 3.3@xmath286 & 1.08@xmath287 + @xmath288 & 2.4@xmath289 & 2.3@xmath290 & 1.3@xmath291 & 2.0@xmath292 & 2.4@xmath291 & 1.1@xmath275 + @xmath293 & 0.7@xmath294 & 0.8@xmath275 & 0.7@xmath275 & 0.7@xmath295 & 0.8@xmath275 & 0.57@xmath296 + fe k-@xmath0 ( @xmath22 ph @xmath3 s@xmath4 arcmin@xmath5 ) & 11@xmath297 & 9@xmath298 & 15@xmath299 & 14@xmath300 & 14@xmath301 & 42@xmath298 + @xmath302 & 494.0/462 & 484.4/462 & 519.1/462 & 621.8/462 & 428.5/462 & 519.9/462 lcccccc @xmath303 & 1.1@xmath275 & 0.5@xmath295 & 0.5@xmath294 & 1.1@xmath295 & 0.8@xmath294 & 1.4@xmath304 + @xmath305 ( @xmath306 ph @xmath3 s@xmath4 kev@xmath4 arcmin@xmath5 ) & 1.9@xmath307 & 0.7@xmath265 & 0.7@xmath214 & 2.2@xmath308 & 2.1@xmath218 & 7.3@xmath261 + @xmath309 ( kev ) & 6.39@xmath310 & 6.37@xmath224 & 6.443@xmath311 & 6.38@xmath312 & 6.40@xmath313 & 6.395@xmath314 + @xmath315 ( ev ) & @xmath67 & 40 & 40 & 40 & 40 & 37@xmath316 + @xmath317 ( 10@xmath318 ph @xmath3 s@xmath4 arcmin@xmath5 ) & 4.0@xmath319 & 3.9@xmath234 & 7.6@xmath320 & 5.9@xmath321 & 8.4@xmath322 & 30.9@xmath323 + @xmath324 ( kev ) & 6.637@xmath325 & 6.665@xmath326 & 6.73@xmath313 & 6.670@xmath327 & 6.658@xmath328 & 6.668@xmath329 + @xmath330 ( ev ) & @xmath331 & 40 & 40 & 40 & 40 & @xmath332 + @xmath333 ( 10@xmath318 ph @xmath3 s@xmath4 arcmin@xmath5 ) & 12.2@xmath334 & 12@xmath298 & 10.3@xmath335 & 16@xmath336 & 17@xmath336 & 16.8@xmath261 + @xmath337 ( kev ) & 6.86@xmath225 & 6.98@xmath338 & 6.94@xmath245 & 6.97 & 6.94@xmath339 & 6.948@xmath340 + @xmath341 ( ev ) & 0 & 0 & 0 & 0 & 0 & 0 + @xmath342 ( 10@xmath318 ph @xmath3 s@xmath4 arcmin@xmath5 ) & 2.8@xmath343 & 4.7@xmath233 & 4.2@xmath344 & @xmath345 & 8.2@xmath346 & 6.7@xmath347 + @xmath348 & 0.3@xmath349 & 0.3@xmath212 & 0.7@xmath350 & 0.4@xmath212 & 0.5@xmath212 & 1.8@xmath212 + @xmath351 & 4.4@xmath352 & 2.5@xmath353 & 2.5@xmath354 & @xmath355 & 2.1@xmath262 & 2.5@xmath356 + @xmath302 & 204.1/230 & 245.6/230 & 236.8/230 & 315.0/231 & 219.9/230 & 240.6/230 lcccccc @xmath357 & 1.8 & 1.8 & 1.7 & 2.3 & 3.0 & 4.5 + @xmath358 & 0.2 & 0.3 & 0.2 & 0.2 & 1.0 & 0.4 + @xmath236 & 0.3 & 0.2 & 0.2 & 0.6 & 0.4 & 1.8 + @xmath272 & 1.5 & 1.6 & 1.5 & 1.6 & 2.5 & 2.6 + @xmath236 & 0.5 & 0.4 & 0.2 & 0.5 & 0.5 & 1.7 + @xmath272 & 0.0 & 0.0 & 0.8 & 0.9 & 0.1 & 1.9 + @xmath359 & 1.3 & 1.4 & 0.7 & 0.8 & 2.4 & 0.9 lccccccc @xmath360 ( @xmath361 erg s@xmath4 arcmin@xmath5 ) & & 11.8 & 6.3 & 10.1 & 32.8 & 16.4 & 87.1 + @xmath32 ( kev ) & & 0.8 & 0.7 & 0.7 & 0.9 & 0.8 & 0.9 + @xmath362 ( @xmath48 ) & @xmath98 & 0.1 & 0.1 & 0.2 & 0.2 & 0.2 & 0.5 + @xmath363 ( @xmath94 arcmin@xmath5 ) & @xmath96 & 1.0 & 0.8 & 1.0 & 1.6 & 1.2 & 3.1 + @xmath102 ( @xmath364 erg @xmath48 ) & @xmath98 & 0.3 & 0.2 & 0.3 & 0.5 & 0.3 & 1.0 + @xmath365 ( @xmath366 erg arcmin@xmath5 ) & @xmath96 & 2 & 2 & 2 & 4 & 3 & 8 + @xmath367 ( @xmath368 yr ) & @xmath369 & 0.6 & 0.8 & 0.6 & 0.4 & 0.5 & 0.3 + @xmath370 ( km s@xmath4 ) & & 510 & 470 & 470 & 540 & 500 & 530 + @xmath371 ( @xmath132 yr ) & @xmath91 & 0.9 & 1.0 & 1.0 & 0.9 & 1.0 & 0.9 + @xmath372 ( @xmath373 erg s@xmath4 ) & @xmath374 & 4 & 2 & 3 & 8 & 4 & 14 + @xmath375 ( @xmath376 yr ) & @xmath377 & 2 & 2 & 2 & 2 & 2 & 2 + @xmath360 ( @xmath361 erg s@xmath4 arcmin@xmath5 ) & & 6.2 & 5.4 & 7.9 & 9.1 & 8.2 & 8.7 + 
@xmath32 ( kev ) & & 7.1 & 8.1 & 8.8 & 5.9 & 8.6 & 7.9 + @xmath362 ( @xmath48 ) & @xmath98 & 0.1 & 0.1 & 0.2 & 0.2 & 0.2 & 0.2 + @xmath363 ( @xmath94 arcmin@xmath5 ) & @xmath96 & 0.9 & 0.8 & 1.0 & 1.1 & 1.0 & 1.1 + @xmath102 ( @xmath364 erg @xmath48 ) & @xmath98 & 2 & 2 & 3 & 2 & 3 & 3 + @xmath365 ( @xmath366 erg arcmin@xmath5 ) & @xmath96 & 18 & 19 & 24 & 18 & 24 & 25 + @xmath367 ( @xmath368 yr ) & @xmath369 & 10 & 11 & 10 & 6 & 10 & 9 + @xmath370 ( km s@xmath4 ) & & 1500 & 1600 & 1700 & 1400 & 1600 & 1600 + @xmath371 ( @xmath132 yr ) & @xmath91 & 0.3 & 0.3 & 0.3 & 0.4 & 0.3 & 0.3 + @xmath372 ( @xmath373 erg s@xmath4 ) & @xmath374 & 90 & 100 & 136 & 80 & 134 & 130 + @xmath375 ( @xmath376 yr ) & @xmath377 & 0.6 & 0.6 & 0.6 & 0.7 & 0.6 & 0.6 + lccc & @xmath378 & @xmath379 & [ 1 ] + central stellar cluster & 0.08 & @xmath380 & [ 1 ] + sgr a east & 1.4 & @xmath381 & [ 2 ] + bipolar outflow & 30 & @xmath382 & [ 3 ] + non - thermal filaments & 0.04 & @xmath383 & [ 4 ] + neutral iron filaments & 0.5 & @xmath384 & [ 5 ] + detected point sources & 290 & @xmath385 & [ 6 ] + diffuse emission & 290 & @xmath386 & [ 7 ] +
we examine the spectrum of diffuse emission detected in the 17 by 17 arcmin field around sgr a* during 625 ks of _ chandra _ observations . the spectrum exhibits he - like and h - like lines from si , s , ar , ca , and fe that are consistent with originating in a two - temperature plasma , as well as a prominent low - ionization fe k-@xmath0 line . the cooler , @xmath1 kev plasma varies in surface brightness across the image between @xmath2 erg @xmath3 s@xmath4 arcmin@xmath5 ( observed , 2 - 8 kev ) . this soft plasma is probably heated by supernovae , along with a small contribution from the winds of massive wolf - rayet and o stars . the radiative cooling rate of the soft plasma within the inner 20 pc of the galaxy could be balanced by 1% of the kinetic energy of one supernova every @xmath6 y. the hotter , @xmath7 kev component is more spatially uniform , with a surface brightness of @xmath8 erg @xmath3 s@xmath4 arcmin@xmath5 ( observed , 2 - 8 kev ) . the intensity of the hard plasma is correlated with that of the soft , but they are probably only indirectly related , because neither supernova remnants nor wr / o stars are observed to produce thermal plasma hotter than @xmath9 kev . moreover , a @xmath7 kev plasma would be too hot to be bound to the galactic center , and therefore would form a slow wind or fountain of plasma . the energy required to sustain such a freely - expanding plasma within the inner 20 pc of the galaxy is @xmath10 erg s@xmath4 . this corresponds to the entire kinetic energy of one supernova every 3000 y , which is unreasonably high . however , alternative explanations for the @xmath7 kev diffuse emission are equally unsatisfying . the hard x - rays are unlikely to result from undetected point sources , because no known population of stellar objects is numerous enough to produce the observed surface brightness . there is also no evidence that non - thermal mechanisms for producing the hard emission are operating , as the expected shifts in the line energies and ratios from their collisional equilibrium values are not observed . we are left to conclude that either there is a significant shortcoming in our understanding of the mechanisms that heat the interstellar medium , or that a population of faint ( @xmath11 erg s@xmath4 ) , hard x - ray sources that are a factor of 10 more numerous than cvs remains to be discovered .
rna polymerase ( rnap ) is a molecular motor @xcite . it moves on a stretch of dna , utilizing chemical energy input , while polymerizing a messenger rna ( mrna ) @xcite . the sequence of monomeric subunits of the mrna is dictated by the corresponding sequence on the template dna . this process of template - dictated polymerization of rna is usually referred to as _ transcription _ . it comprises three stages , namely , initiation , elongation of the mrna and termination . we first report analytical results on the characteristic properties of single rnap motors . in our approach @xcite , each rnap is represented by a hard rod while the dna track is modelled as a one - dimensional lattice whose sites represent the nucleotides , the monomeric subunits of the dna . the mechano - chemistry of individual rnap motors is captured in this model by assigning @xmath0 distinct `` chemical '' states to each rnap and postulating the nature of the transitions between these states . the dwell time of an rnap at successive monomers of the dna template is a random variable ; its distribution characterizes the stochastic nature of the movement of rnap motors . we derive the _ exact _ analytical expression for the dwell - time distribution of the rnaps in this model . we also report results on the collective movements of the rnaps . often many rnaps move simultaneously on the same dna track ; because of superficial similarities with vehicular traffic @xcite , we refer to such collective movements of rnaps as rnap traffic @xcite . our model of rnap traffic can be regarded as an extension of the totally asymmetric simple exclusion process ( tasep ) @xcite for hard rods where each rod can exist at a location in one of its @xmath0 possible chemical states . the movement of an rnap on its dna track is coupled to the elongation of the mrna chain that it synthesizes . naturally , the rate of its forward movement depends on the availability of the monomeric subunits of the mrna and the associated `` chemical '' transitions on the dominant pathway in its mechano - chemical cycle . because of the incorporation of the mechano - chemical cycles of individual rnap motors , the number of rate constants in this model is higher than that in a tasep for hard rods . consequently , we plot the phase diagrams of our model not in a two - dimensional plane ( as is customary for the tasep ) , but in a 3-dimensional space where the additional dimension corresponds to the concentration of the monomeric subunits of the mrna . we take the dna template as a one - dimensional lattice of length @xmath1 and each rnap is taken as a hard rod of length @xmath2 in units of the length of a nucleotide . although an rnap covers @xmath2 nucleotides , its position is denoted by the nucleotide covered by it . transcription initiation and termination steps are taken into account by the rate constants @xmath3 and @xmath4 , respectively . a hard rod , representing an rnap , attaches to the first site @xmath5 on the lattice with rate @xmath3 if the first @xmath6 sites are not covered by any other rnap at that instant of time . similarly , an rnap bound to the rightmost site @xmath7 is released from the system with rate @xmath4 . we have assumed hard core steric interaction among the rnaps ; therefore , no site can be simultaneously covered by more than one rnap .
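the lattice bookkeeping described above can be sketched in a few lines of python . the random - sequential update scheme , the rate values and the convention of labelling each rod by the rightmost site it covers are choices made only for this illustration ; the paper does not prescribe them .

    import random

    # Minimal sketch (not the authors' implementation) of the lattice model set
    # up above: a one-dimensional lattice of L sites on which hard rods of
    # length ell move to the right under mutual exclusion, entering at the left
    # end with rate omega_alpha and leaving from site L with rate omega_beta.
    # Each rod is labelled by the RIGHTMOST site it covers (a convention chosen
    # only for this sketch); all rate values are illustrative.

    L, ell = 600, 35
    omega_alpha, omega_beta, hop = 0.3, 0.3, 1.0
    rods = []                      # rightmost covered site of each rod, kept sorted

    def occupied(site):
        """True if `site` (1..L) is covered by any rod."""
        return any(r - ell < site <= r for r in rods)

    def sweep():
        # termination: the rod touching site L detaches with probability omega_beta
        if rods and rods[-1] == L and random.random() < omega_beta:
            rods.pop()
        # bulk hopping: a rod advances one site if the site in front of it is empty
        for i in reversed(range(len(rods))):
            r = rods[i]
            if r < L and not occupied(r + 1) and random.random() < hop:
                rods[i] = r + 1
        # initiation: a new rod covering sites 1..ell enters if they are all empty
        if not any(occupied(s) for s in range(1, ell + 1)) and random.random() < omega_alpha:
            rods.insert(0, ell)

    for _ in range(20000):
        sweep()
    print("fraction of lattice sites covered:", len(rods) * ell / L)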
at every lattice site @xmath8 , an rnap can exist in one of two possible chemical states : in one of these it is bound with a pyrophosphate ( which is one of the byproducts of the rna elongation reaction and is denoted by the symbol @xmath9 ) , whereas no @xmath9 is bound to it in the other chemical state ( see fig.[fig - model ] ) . for plotting our results , we have used throughout this paper @xmath10 , @xmath11 and @xmath12 \tilde{\omega}_{21}^{f} s^{-1} , where @xmath13 is the concentration of nucleotide triphosphate monomers ( fuel for transcription elongation ) and @xmath14 . for every rnap , the dwell time is measured by an imaginary `` stop watch '' which is reset to zero whenever the rnap reaches the chemical state @xmath15 , _ for the first time _ , after arriving at a new site ( say , the @xmath16-th from the @xmath8-th ) . let @xmath17 be the probability of finding an rnap in the chemical state @xmath18 at time @xmath19 . the time evolution of the probabilities @xmath17 is given by @xmath20 and @xmath21 . there is a close formal similarity between the mechano - chemical cycle of an rnap in our model ( see fig.[fig - model ] ) and the catalytic cycle of an enzyme in the michaelis - menten scenario @xcite . the states @xmath15 and @xmath22 in the former correspond to the states @xmath23 and @xmath24 in the latter , where @xmath23 represents the free enzyme while @xmath24 represents the enzyme - substrate complex . following the steps of calculation used earlier by kou et al . @xcite for the kinetics of single - molecule enzymatic reactions , we obtain the dwell time distribution @xmath25 ( [ eq - ftgen ] ) , where @xmath26 and @xmath27 . [ figure caption : the dwell time distribution ( [ eq - ftgen ] ) , plotted ( a ) for several ntp concentrations with the rate of pp@xmath38 release ( and , hence , @xmath28 ) fixed , and ( b ) for two different values of @xmath28 , keeping the ntp concentration fixed . ] the dwell time distribution ( [ eq - ftgen ] ) is plotted in fig.[fig - ft ] . depending on the magnitudes of the rate constants , the peak of the distribution may appear at such a small @xmath19 that it may not be possible to detect the existence of this maximum in a laboratory experiment . in that case , the dwell time distribution would appear to be purely a single exponential @xcite . it is worth pointing out that our model does not incorporate backtracking of rnap motors , which has been observed in _ in - vitro _ experiments @xcite . it has been argued by some groups @xcite that short transcriptional pausing is distinct from the long pauses which arise from backtracking . in contrast , some other groups @xcite claim that polymerase backtracking can account for both the short and long pauses . thus , the role of backtracking in the pause distribution remains controversial . moreover , it has been demonstrated that a polymerase stalled by backtracking can be re - activated by the `` push '' of another closely following it from behind @xcite . therefore , in the crowded molecular environment of intracellular space , the occurrence of backtracking may be far less frequent than that observed under _ in - vitro _ conditions . our model , which does not allow backtracking , predicts a dwell time distribution which is qualitatively very similar to that of the short pauses , provided the most probable dwell time is shorter than 1 s.
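as a concrete illustration of the dwell - time statistics just described , the sketch below assumes an unbranched two - step cycle : an exponential ntp - binding step with pseudo - first - order rate w12 followed by an exponential forward step with rate w21f . the resulting hypoexponential density is the standard result for two sequential exponential steps and is used here only as a stand - in for ( [ eq - ftgen ] ) ; the rate values are placeholders , since the paper 's numbers are hidden behind @xmath markers .

    import numpy as np

    # Hedged sketch of the dwell-time statistics for an unbranched two-step
    # cycle: an exponential NTP-binding step with rate w12 followed by an
    # exponential forward step with rate w21f.  The density below is the
    # standard hypoexponential form for two sequential exponential steps
    # (it requires w12 != w21f); the rates are illustrative placeholders.

    w12, w21f = 10.0, 25.0          # s^-1, illustrative

    def f(t):
        """Dwell-time density for two sequential exponential steps."""
        return w12 * w21f / (w21f - w12) * (np.exp(-w12 * t) - np.exp(-w21f * t))

    # Monte Carlo check: a dwell time is the sum of the two waiting times
    rng = np.random.default_rng(0)
    samples = rng.exponential(1 / w12, 100_000) + rng.exponential(1 / w21f, 100_000)

    t_peak = np.log(w21f / w12) / (w21f - w12)   # location of the maximum of f(t)
    print("most probable dwell time:", t_peak)
    print("mean (analytic vs sampled):", 1 / w12 + 1 / w21f, samples.mean())

when one of the two rates is much larger than the other , the peak sits at a very small time and the sampled histogram is essentially a single exponential , which is the limit discussed in the text .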
from equation ( [ eq - ftgen ] ) we get the inverse mean dwell time @xmath29 ( [ eq - avtgen ] ) , where @xmath30 and @xmath31 . the form of the expression ( [ eq - avtgen ] ) is identical to the michaelis - menten formula for the average rate of an enzymatic reaction . it describes the slowing down of the `` bare '' elongation progress of an rnap due to the ntp reaction cycle that it has to undergo . the unit of velocity is @xmath32 . the fluctuations of the dwell time can be computed from the second moment \[ \langle t^{2 } \rangle = \frac{2\left[\omega_{12}^{2 } + \omega_{12}\,\omega_{21}^{f } + \left(\omega_{21}^{f}\right)^{2}\right]}{\left(\omega_{12}~\omega_{21}^f \right)^2 } \] of the dwell time distribution . we find the randomness parameter @xcite @xmath34 ( [ eq - ranpar ] ) . note that , for a one - step poisson process @xmath35 , @xmath36 . the randomness parameter @xmath37 , given by ( [ eq - ranpar ] ) , is plotted against the ntp concentration in fig.[fig - ranpar ] for three different values of @xmath28 . at sufficiently low ntp concentration , @xmath37 is unity because ntp binding with the rnap is the rate - limiting step . as the ntp concentration increases , @xmath37 exhibits a nonmonotonic variation . at sufficiently high ntp concentration , pp@xmath38 release ( which occurs with the rate @xmath28 ) is the rate - limiting step and , therefore , @xmath37 is unity also in this limit . this interpretation is consistent with the fact that the smaller the magnitude of @xmath28 , the quicker the crossover to the value @xmath36 as the ntp concentration is increased . the randomness parameter yields the diffusion coefficient @xcite @xmath39 ( [ eqn - diffusion ] ) . the expression ( [ eqn - diffusion ] ) is in agreement with the general expression for the effective diffusion constant of a molecular motor with an unbranched mechano - chemical cycle , which was first reported by fisher and kolomeisky @xcite . now we take into account the hard core steric interaction among the rnaps which are simultaneously moving on the same dna track . equations ( [ eq - masterp1 ] ) and ( [ eq - masterp2 ] ) are then modified to @xmath40 and @xmath41 , where @xmath42 is the conditional probability @xcite of finding site @xmath43 ( @xmath44 for backward motion ) vacant , given that there is a particle at site @xmath8 . due to the steric interactions between rnaps , their stationary flux @xmath45 ( and hence the transcription rate ) is no longer limited solely by the initiation and release at the terminal sites of the template dna . we calculate the resulting phase diagram utilizing the extremum current hypothesis ( ech ) @xcite . the ech relates the flux in the system under open boundary conditions ( obc ) to that under periodic boundary conditions ( pbc ) with the same bulk dynamics . in this approach , one imagines that the initiation and termination sites are connected to two separate reservoirs where the number densities of particles are @xmath46 and @xmath47 , respectively , and where the particles follow the same dynamics as in the bulk of the real physical system . then @xmath48 . the actual rates @xmath3 and @xmath4 of initiation and termination of mrna polymerization are incorporated by an appropriate choice of @xmath46 and @xmath47 , respectively . [ figure caption : the three - dimensional phase diagram ; the surfaces @xmath49 and @xmath50 separate the mc phase from the hd and ld phases , respectively . ] [ figure caption : projections of the phase diagram on the @xmath3 - @xmath4 plane for several values of @xmath51 ; the numbers on the phase boundary lines represent the value of @xmath51 , the inclined lines have ld and hd above and below , respectively , and the mc phase lies in the upper right corner . ]
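the moment relations quoted above can be illustrated for the same assumed two - step cycle , with w12 proportional to the ntp concentration . the rate constant k_on , the forward rate and the step size d below are illustrative placeholders ; the point of the sketch is only that 1/<t> has the michaelis - menten form , that the randomness parameter returns to unity in both the low- and high - ntp limits , and that the diffusion coefficient follows from the standard relation r = 2d_eff/(v d) .

    # Hedged sketch of the moment relations for the assumed two-step cycle:
    # w12 = k_on * [NTP] is the pseudo-first-order NTP-binding rate and w21f
    # the forward-stepping rate.  k_on, w21f and the step size d are
    # illustrative placeholders, not values from the paper.

    k_on, w21f, d = 1.0, 25.0, 1.0      # (uM s)^-1, s^-1, nucleotides

    def moments(ntp):
        w12 = k_on * ntp
        mean = 1 / w12 + 1 / w21f                  # Michaelis-Menten-like 1/<t>
        var = 1 / w12**2 + 1 / w21f**2
        r = var / mean**2                          # randomness parameter
        v = d / mean                               # mean velocity
        D = 0.5 * r * v * d                        # effective diffusion constant
        return mean, r, v, D

    for ntp in (0.1, 25.0, 1e4):                   # low, comparable and high [NTP]
        mean, r, v, D = moments(ntp)
        print(f"[NTP]={ntp:8g}  r={r:.2f}  v={v:.2f}  D={D:.2f}")

    # r -> 1 when either step is rate-limiting and dips to 1/2 when the two
    # rates are equal, reproducing the nonmonotonic behaviour described above.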
an expression for @xmath52 was reported by us in ref . @xcite . in the special case where the dominant pathway is that shown in fig . [ fig - model ] ( we set @xmath53 = 0 in the further calculation , as @xmath54 ) , we have @xmath55 ( [ eq : effj ] ) . the number density @xmath56 that corresponds to the maximum flux is given by the expression @xmath57 . by comparing ( [ eq : effj ] ) with the exact current - density relation of the usual tasep for extended particles of size @xmath2 @xcite , which have no internal states ( formally obtained by taking the limit @xmath58 in the present model ) , we predict that the stationary current ( i.e. , the collective average rate of transcription ) is reduced by the occurrence of the intermediate state 1 through which the rnaps have to pass . from ( [ ech ] ) one expects three phases , viz . , a maximal - current ( mc ) phase with bulk density @xmath56 , a low - density ( ld ) phase with bulk density @xmath46 , and a high - density ( hd ) phase with bulk density @xmath47 . using arguments similar to those used in ref . @xcite in a similar context , we get @xcite @xmath59 and @xmath60 . [ figure caption : projections of the phase diagram on the @xmath51 - @xmath4 plane for several values of @xmath3 ; the inclined lines have ld and hd above and below , respectively , and each vertical line separates the ld phase on the left from the mc phase on its right . ] the condition for the coexistence of the high - density ( hd ) and low - density ( ld ) phases is @xmath61 with @xmath62 . using the expression ( [ eq : effj ] ) for @xmath45 in ( [ eq - ldhd ] ) , we get @xmath63 . substituting ( [ eq - rhom ] ) and ( [ eq - rhop ] ) into ( [ eq - rhomp ] ) , we get the equation for the plane of coexistence of ld and hd to be @xmath64 , where @xmath65 ( [ eq : phasec ] ) . in order to compare our result with the 2-d phase diagram of the tasep in the @xmath66-plane , we project 2-d cross sections of the 3-d phase diagram , for several different values of @xmath67 , onto the @xmath66-plane . the lines of coexistence of the ld and hd phases on this projected two - dimensional plane are curved ; a similar curvature was also reported by antal and schütz @xcite . this is in contrast to the straight coexistence line between the ld and hd phases of the tasep . the bulk density of the system is governed by the following conditions :

\[
\bar{\rho } = \left\{ \begin{array}{ll}
\rho_{- } & \mbox{if}~\omega_{\beta } > f(\omega_{\alpha},\omega_{21}^f ) ~{\rm and}~ \omega_{\alpha } < \biggl[\dfrac{\rho_*}{1-\rho_*(\ell-1)}\biggr ] \biggl[\dfrac{\omega_{12}\omega_{21}^f}{\omega_{12}+\omega_{21}^f}\biggr]~~\mbox{low density}\\
\rho_{+ } & \mbox{if}~\omega_{\beta } < f(\omega_{\alpha},\omega_{21}^f ) ~{\rm and}~ \omega_{\beta } < \biggl[\dfrac{1-\rho_*\ell}{1-\rho_*(\ell-1)}\biggr ] \biggl[\dfrac{\omega_{12}\omega_{21}^f}{\omega_{12}+\omega_{21}^f}\biggr]~~\mbox{high density}\\
\rho_{* } & \mbox{if}~\omega_{\beta } > \biggl[\dfrac{1-\rho_*\ell}{1-\rho_*(\ell-1)}\biggr ] \biggl[\dfrac{\omega_{12}\omega_{21}^f}{\omega_{12}+\omega_{21}^f}\biggr ] ~{\rm and}~ \omega_{\alpha } > \biggl[\dfrac{\rho_*}{1-\rho_*(\ell-1)}\biggr ] \biggl[\dfrac{\omega_{12}\omega_{21}^f}{\omega_{12}+\omega_{21}^f}\biggr]~~\mbox{maximal current} .
\end{array } \right .
\]

in fig . [ fig:3d ] , we plot the 3d phase diagram . [ figure caption : projections of the phase diagram on the @xmath51 - @xmath3 plane for several values of @xmath4 ; here the inclined lines have hd and ld , respectively , above and below , and each vertical line separates the hd phase on the left from the mc phase on its right . ]
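to make the comparison with the ordinary tasep for extended particles concrete , the sketch below assumes that the two - step internal cycle simply renormalises the hopping rate to w_eff = w12 w21f / ( w12 + w21f ) , so that the current takes the standard mean - field form for rods of length ell and the optimal density reduces to the classic 1 / ( ell + sqrt(ell) ) . the paper 's exact expressions ( [ eq : effj ] ) and @xmath57 may differ in detail , so this is only an illustration of the bookkeeping , not a reproduction of them . the small classifier at the end is the extremum - current - hypothesis rule for a current with a single maximum .

    from math import sqrt

    # Hedged sketch, not the paper's exact formulas: assume the two-step cycle
    # only renormalises the hopping rate, and use the standard mean-field
    # current for hard rods of length ell.  Densities must lie below 1/ell.

    def current(rho, ell, w_eff):
        """Stationary flux of ell-mers at number density rho (mean field)."""
        return w_eff * rho * (1 - ell * rho) / (1 - (ell - 1) * rho)

    def rho_star(ell):
        """Density maximising the flux for the standard ell-mer exclusion process."""
        return 1.0 / (ell + sqrt(ell))

    def phase(rho_minus, rho_plus, ell, w_eff):
        """Extremum-current-hypothesis bookkeeping: rho_minus / rho_plus are the
        effective left / right reservoir densities set by initiation / release."""
        rs = rho_star(ell)
        if rho_minus < rho_plus:                   # minimal-current branch
            return ("low density" if current(rho_minus, ell, w_eff)
                    <= current(rho_plus, ell, w_eff) else "high density")
        if rho_plus <= rs <= rho_minus:            # maximal-current branch
            return "maximal current"
        return "low density" if rho_minus < rs else "high density"

    w12, w21f, ell = 10.0, 25.0, 35                # illustrative rates, rod length
    w_eff = w12 * w21f / (w12 + w21f)
    print("rho* =", rho_star(ell), " J(rho*) =", current(rho_star(ell), ell, w_eff))
    print(phase(0.005, 0.020, ell, w_eff))         # entry-limited -> low density
    print(phase(0.028, 0.027, ell, w_eff))         # exit-limited  -> high density
    print(phase(0.027, 0.010, ell, w_eff))         # both reservoirs pass rho* -> maximal current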
in general , a plane @xmath69=constant intersects the surfaces i , ii and iii , thereby generating the phase transition lines between the ld , hd and mc phases in the @xmath3-@xmath4 plane . we have projected several of these 2d phase diagrams , each for one constant value of @xmath69 , in figure [ fig : xyplane ] . in the inset , we have shown the value of @xmath51 for the different lines . we have also projected several 2d phase diagrams in the @xmath70-@xmath4 plane and @xmath51-@xmath4 plane , respectively , in figures [ fig : yzplane ] and [ fig : xzplane ] . in this paper we have reported the exact dwell time distribution for a simple 2-state model of rnap motors . from this distribution we have also computed the average velocity and the fluctuations of position and dwell time of rnaps on the dna nucleotides . these expressions are consistent with a general formula derived earlier by fisher and kolomeisky for a generic model of molecular motors with unbranched mechano - chemical cycles . taking into account the presence of steric interactions between different rnaps moving along the same dna template , we have plotted the full 3d phase diagram of a model for multiple rnap traffic . this model is a biologically motivated extension of the tasep , the novel feature being the incorporation of the mechano - chemical cycle of the rnap into the dynamics of the transcription process . this leads to a hopping process with a dwell time distribution that is not a simple exponential . nevertheless , the phase diagram is demonstrated to follow the extremal - current hypothesis @xcite for driven diffusive systems . using mean field theory we have computed the effective boundary densities that enter the ech from the reaction constants of our model . we observe that the collective average rate of transcription , as given by the stationary rnap current ( [ eq : effj ] ) , is reduced by the need of the rnap to pass through the pyrophosphate - bound state . this is a prediction that is open to experimental test . the 2d cross sections of this phase diagram have been compared and contrasted with the phase diagram for the tasep . unlike in the tasep , the coexistence line between the low- and high - density phases is curved for all parameter values . this is a signature of broken particle - vacancy symmetry of the rnap dynamics . the presence of this coexistence line suggests the occurrence of rnap `` traffic jams '' , which our model predicts to appear when stationary initiation and release of rnap at the terminal sites of the dna track are able to balance each other . this traffic jam would perform an unbiased random motion , as argued earlier on general theoretical grounds in the context of protein synthesis by ribosomes from mrna templates @xcite . * acknowledgments * : this work is supported by a grant from csir ( india ) . gms thanks iit kanpur for kind hospitality and dfg for partial financial support . m. schliwa , ( ed . ) _ molecular motors _ , ( wiley - vch , 2003 ) . j. gelles and r. landick , cell , * 93 * , 13 ( 1998 ) . b. alberts et al . _ essential cell biology _ , 2nd ed . ( garland science , taylor and francis , 2004 ) . t. tripathi and d. chowdhury , phys . e * 77 * , 011921 ( 2008 ) . d. chowdhury , l. santen and a. schadschneider , phys . rep . * 329 * , 199 ( 2000 ) . s. klumpp and r. lipowsky , j. stat . phys . * 113 * , 233 ( 2003 ) . d. chowdhury , a. schadschneider and k. nishinari , phys . of life rev . * 2 * , 318 ( 2005 ) . m. voliotis , n. cohen , c. molina - paris and t.b .
liverpool , biophys . j. * 94 * , 334 ( 2007 ) . s. klumpp and t. hwa , pnas * 105 * , 18159 ( 2008 ) . g. m. schtz , in : _ phase transitions and critical phenomena _ , vol . 19 ( acad . press , 2001 ) . m. dixon and e.c . webb , _ enzymes _ ( academic press , 1979 ) . kou , b.j . cherayil , w. min , b.p . english and x.s . xie , j. phys . b * 109 * , 19068 - 19081 ( 2005 ) . k. adelman , a. la porta , t.j . santangelo , j.t . lis , j.w . roberts and m.d . wang , pnas * 99 * , 13538 ( 2002 ) . a. shundrovsky , t.j . santangelo , j.w . roberts and m.d . wang , biophys . j. * 87 * , 3945 ( 2004 ) . abbondanzieri , w.j . greenleaf , j.w . shaevitz , r. landick and s.m . block , nature * 438 * , 460 ( 2005 ) . shaevitz , e.a . abbondanzieri , r. landick and s.m . block , nature * 426 * , 684 ( 2003 ) . e. galburt , s.w . grill , a. wiedmann , l. lubhowska , j. choy , e. nogales , m. kashlev and c. bustamante , nature * 446 * , 820 ( 2007 ) . neuman , e.a . abbondanzieri , r. landick , j. gelles and s.m . block , cell * 115 * , 437 ( 2003 ) . m. depken , e. galburt and s.w . grill , biophys . j. * 96 * , 2189 ( 2009 ) . v. epshtein and e. nudler , science * 300 * , 801 ( 2003 ) . m. schnitzer and s. block , cold spring harbor symp . biol . * 60 * , 793 ( 1995 ) t. tripathi , ph.d . thesis , iit kanpur ( 2009 ) . fisher and a.b . kolomeisky , pnas * 96 * , 6597 ( 1999 ) . v. popkov and g. m. schtz , europhys . lett . * 48 * , 257 ( 1999 ) . j. krug , phys . rev . lett . * 67 * , 1882 ( 1991 ) . c. macdonald , j. gibbs and a. pipkin , biopolymers , * 6 * , 1 ( 1968 ) shaw , r.k.p . zia and k.h . lee , phys . e * 68 * , 021910 ( 2003 ) . g. schnherr and g.m . schtz , j. phys . a * 37 * , 8215 ( 2004 ) . a. basu and d. chowdhury , phys . e * 75 * , 021902 ( 2007 ) . t. antal and g. m. schtz , phys . e * 62 * , 83 ( 2000 ) . schtz , int . j. mod b * 11 * , 197 ( 1997 ) .
polymerization of rna from a template dna is carried out by a molecular machine called rna polymerase ( rnap ) . it also uses the template as a track on which it moves as a motor utilizing chemical energy input . the time it spends at each successive monomer of dna is random ; we derive the exact distribution of these `` dwell times '' in our model . the inverse of the mean dwell time satisfies a michaelis - menten - like equation and is also consistent with a general formula derived earlier by fisher and kolomeisky for molecular motors with unbranched mechano - chemical cycles . often many rnap motors move simultaneously on the same track . incorporating the steric interactions among the rnaps , we also plot the three - dimensional phase diagram of our model for rnap traffic using the extremum current hypothesis .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Lackawanna Valley Heritage Area Act of 1998''. SEC. 2. FINDINGS AND PURPOSE. (a) Findings.--The Congress finds the following: (1) The industrial and cultural heritage of northeastern Pennsylvania inclusive of Lackawanna, Luzerne, Wayne, and Susquehanna counties, related directly to anthracite and anthracite-related industries, is nationally significant, as documented in the United States Department of the Interior- National Parks Service, National Register of Historic Places, Multiple Property Documentation submittal of the Pennsylvania Historic and Museum Commission (1996). (2) These industries include anthracite mining, ironmaking, textiles, and rail transportation. (3) The industrial and cultural heritage of the anthracite and related industries in this region includes the social history and living cultural traditions of the people of the region. (4) The labor movement of the region played a significant role in the development of the Nation including the formation of many key unions such as the United Mine Workers of America, and crucial struggles to improve wages and working conditions, such as the 1900 and 1902 anthracite strikes. (5) The Department of the Interior is responsible for protecting the Nation's cultural and historic resources, and there are significant examples of these resources within this 4-county region to merit the involvement of the Federal Government to develop programs and projects, in cooperation with the Lackawanna Heritage Valley Authority, the Commonwealth of Pennsylvania, and other local and governmental bodies, to adequately conserve, protect, and interpret this heritage for future generations, while providing opportunities for education and revitalization. (6) The Lackawanna Heritage Valley Authority would be an appropriate management entity for a Heritage Area established in the region. (b) Purpose.--The objectives of the Lackawanna Heritage Valley American Heritage Area are as follows: (1) To foster a close working relationship with all levels of government, the private sector, and the local communities in the anthracite coal region of northeastern Pennsylvania and empower the communities to conserve their heritage while continuing to pursue economic opportunities. (2) To conserve, interpret, and develop the historical, cultural, natural, and recreational resources related to the industrial and cultural heritage of the 4-county region of northeastern Pennsylvania. SEC. 3. LACKAWANNA HERITAGE VALLEY AMERICAN HERITAGE AREA. (a) Establishment.--There is hereby established the Lackawanna Heritage Valley American Heritage Area (in this Act referred to as the ``Heritage Area''). (b) Boundaries.--The Heritage Area shall be comprised of all or parts of the counties of Lackawanna, Luzerne, Wayne, and Susquehanna in Pennsylvania, determined pursuant to the compact under section 4. (c) Management Entity.--The management entity for the Heritage Area shall be the Lackawanna Heritage Valley Authority. SEC. 4. COMPACT. To carry out the purposes of this Act, the Secretary of the Interior (in this Act referred to as the ``Secretary'') shall enter into a compact with the management entity. The compact shall include information relating to the objectives and management of the area, including each of the following: (1) A delineation of the boundaries of the Heritage Area. 
(2) A discussion of the goals and objectives of the Heritage Area, including an explanation of the proposed approach to conservation and interpretation and a general outline of the protection measures committed to by the partners. SEC. 5. AUTHORITIES AND DUTIES OF MANAGEMENT ENTITY. (a) Authorities of the Management Entity.--The management entity may, for purposes of preparing and implementing the management plan developed under subsection (b), use funds made available through this Act for the following: (1) To make loans and grants to, and enter into cooperative agreements with States and their political subdivisions, private organizations, or any person. (2) To hire and compensate staff. (b) Management Plan.--The management entity shall develop a management plan for the Heritage Area that presents comprehensive recommendations for the Heritage Area's conservation, funding, management, and development. Such plan shall take into consideration existing State, county, and local plans and involve residents, public agencies, and private organizations working in the Heritage Area. It shall include actions to be undertaken by units of government and private organizations to protect the resources of the Heritage Area. It shall specify the existing and potential sources of funding to protect, manage, and develop the Heritage Area. Such plan shall include, as appropriate, the following: (1) An inventory of the resources contained in the Heritage Area, including a list of any property in the Heritage Area that is related to the themes of the Heritage Area and that should be preserved, restored, managed, developed, or maintained because of its natural, cultural, historic, recreational, or scenic significance. (2) A recommendation of policies for resource management which considers and details application of appropriate land and water management techniques, including, but not limited to, the development of intergovernmental cooperative agreements to protect the Heritage Area's historical, cultural, recreational, and natural resources in a manner consistent with supporting appropriate and compatible economic viability. (3) A program for implementation of the management plan by the management entity, including plans for restoration and construction, and specific commitments of the identified partners for the first 5 years of operation. (4) An analysis of ways in which local, State, and Federal programs may best be coordinated to promote the purposes of this Act. (5) An interpretation plan for the Heritage Area. The management entity shall submit the management plan to the Secretary for approval within 3 years after the date of enactment of this Act. If a management plan is not submitted to the Secretary as required within the specified time, the Heritage Area shall no longer qualify for Federal funding. 
(c) Duties of Management Entity.--The management entity shall-- (1) give priority to implementing actions set forth in the compact and management plan, including steps to assist units of government, regional planning organizations, and nonprofit organizations in preserving the Heritage Area; (2) assist units of government, regional planning organizations, and nonprofit organizations in establishing and maintaining interpretive exhibits in the Heritage Area; assist units of government, regional planning organizations, and nonprofit organizations in developing recreational resources in the Heritage Area; (3) assist units of government, regional planning organizations, and nonprofit organizations in increasing public awareness of and appreciation for the natural, historical, and architectural resources and sites in the Heritage Area; assist units of government, regional planning organizations and nonprofit organizations in the restoration of any historic building relating to the themes of the Heritage Area; (4) encourage by appropriate means economic viability in the Heritage Area consistent with the goals of the plan; encourage local governments to adopt land use policies consistent with the management of the Heritage Area and the goals of the plan; (5) assist units of government, regional planning organizations, and nonprofit organizations to ensure that clear, consistent, and environmentally appropriate signs identifying access points and sites of interest are put in place throughout the Heritage Area; (6) consider the interests of diverse governmental, business, and nonprofit groups within the Heritage Area; (7) conduct public meetings at least quarterly regarding the implementation of the management plan; (8) submit substantial changes (including any increase of more than 20 percent in the cost estimates for implementation) to the management plan to the Secretary for the Secretary's approval; for any year in which Federal funds have been received under this Act, submit an annual report to the Secretary setting forth its accomplishments, its expenses and income, and the entity to which any loans and grants were made during the year for which the report is made; and (9) for any year in which Federal funds have been received under this Act, make available for audit all records pertaining to the expenditure of such funds and any matching funds, and require, for all agreements authorizing expenditure of Federal funds by other organizations, that the receiving organizations make available for audit all records pertaining to the expenditure of such funds. (d) Prohibition on the Acquisition of Real Property.--The management entity may not use Federal funds received under this Act to acquire real property or an interest in real property. Nothing in this Act shall preclude any management entity from using Federal funds from other sources for their permitted purposes. SEC. 6. DUTIES AND AUTHORITIES OF FEDERAL AGENCIES. (a) Technical and Financial Assistance.-- (1) In general.--The Secretary may, upon request of the management entity, provide technical and financial assistance to the management entity to develop and implement the management plan. In assisting the management entity, the Secretary shall give priority to actions that in general assist in-- (A) conserving the significant natural, historic, and cultural resources which support its themes; and (B) providing educational, interpretive, and recreational opportunities consistent with its resources and associated values. 
(2) Spending for non-federally owned property.--The Secretary may spend Federal funds directly on non-federally owned property to further the purposes of this Act, especially in assisting units of government in appropriate treatment of districts, sites, buildings, structures, and objects listed or eligible for listing on the National Register of Historic Places. The Historic American Building Survey/Historic American Engineering Record shall conduct those studies necessary to document the industrial, engineering, building, and architectural history of the region. (b) Approval and Disapproval of Compacts and Management Plans.--The Secretary, in consultation with the Governor of Pennsylvania, shall approve or disapprove a compact or management plan submitted under this Act not later than 90 days after receiving such compact or management plan. (c) Action Following Disapproval.--If the Secretary disapproves a submitted compact or management plan, the Secretary shall advise the management entity in writing of the reasons therefore and shall make recommendations for revisions in the compact or plan. The Secretary shall approve or disapprove a proposed revision within 90 days after the date it is submitted. (d) Approving Amendments.--The Secretary shall review substantial amendments to the management plan for the Heritage Area. Funds appropriated pursuant to this Act may not be expended to implement the changes made by such amendments until the Secretary approves the amendments. SEC. 7. SUNSET. The Secretary may not make any grant or provide any assistance under this Act after September 30, 2012. SEC. 8. AUTHORIZATION OF APPROPRIATIONS. (a) In General.--There is authorized to be appropriated under this Act not more than $1,000,000 for any fiscal year. Not more than a total of $10,000,000 may be appropriated for the Heritage Area under this Act. (b) 50 Percent Match.--Federal funding provided under this Act, after the designation of the Heritage Area, may not exceed 50 percent of the total cost of any assistance or grant provided or authorized under this Act.
Lackawanna Valley Heritage Area Act of 1998 - Establishes the Lackawanna Heritage Valley American Heritage Area, comprised of all or parts of four coal-producing counties in northeast Pennsylvania, to be managed by the Lackawanna Heritage Valley Authority. Directs the Secretary of the Interior to enter into a management compact with the Authority to determine Area goals and objectives. Directs the Authority to develop an Area management plan that presents comprehensive recommendations for the Area's conservation, funding, management, and development. Requires the plan to be submitted to the Secretary for approval within three years after the enactment of this Act. Outlines related management duties. Prohibits the Authority from using Federal funds to acquire real property under this Act. Provides for: (1) technical and financial assistance from the Secretary to the Authority to develop and implement the plan; (2) approval or disapproval of compacts and management plans; and (3) termination on September 30, 2012, of the Secretary's authority to make a grant or provide assistance under this Act. Authorizes appropriations. Prohibits Federal funding for the Area from exceeding 50 percent of total costs.
congenital central hypoventilation syndrome ( cchs ) is a rare autosomal dominant disorder of the autonomic nervous system ( ans ) characterized by an abnormal autonomic ventilatory response to progressive hypercapnia and sustained hypoxemia . most patients present with hypoventilation or apnea during the neonatal period , typically while asleep and/or awake , without any other associated diseases such as cardiac , pulmonary , neuromuscular or brain stem abnormalities ( 1 , 2 ) . a rare association with hirschsprung disease ( haddad syndrome ) was first described in 1978 ; five cases of haddad syndrome have been reported in korea ( 3 - 7 ) . it is because of the association with aganglionosis of the bowel that a number of candidate genes have been considered . a common pathogenesis involving neural crest - derived cell lineages has been suggested ( 8) . studies of genes pertinent to the early embryologic development of the ans include the mammalian achaete - scute homolog-1 ( mash1 ) , bone morphogenic protein-2 ( bmp2 ) , engrailed-1 ( en1 ) , tlx3 , endothelin converting enzyme-1 ( ece1 ) , endothelin-1 ( edn1 ) , phox2a , and phox2b among 67 probands with cchs . no disease - defining mutations were identified in mash1 , bmp2 , en1 , tlx3 , ece1 , edn1 , or phox2a . however , 97% of patients with cchs have been found to be heterozygous for the exon 3 polyalanine expansion mutation identified previously in phox2b ( 9 ) . it encodes a highly conserved paired - like homeo box transcription factor of 314 amino acids linked to the ret - gdnf signaling pathway . the dna binding homeo domain is encoded by exon 2 , whereas exon 3 encodes two short and stable polyalanine tracts of 9 and 20 residues ( 2 ) . expression studies performed in mice and humans have shown that this is a master regulatory gene , crucial for the normal development of the peripheral and central ans ( 10 - 12 ) . we herein report a case of cchs in a korean patient that was confirmed during the neonatal period by the identification of a phox2b mutation . a 3,070 g male at 41 weeks gestation age was born in our hospital via cesarean section because of cephalopelvic disproportion on july 26 , 2007 . the boy was well at birth , with apgar scores of 7 and 9 at 1 and 5 min , respectively . he repeatedly had apnea with cyanosis and desaturation , noticed from 15 hr after birth , which improved on awakening . he sometimes developed seizures when paco2 accumulated , which required supportive care with mechanical ventilation . investigations including mri of the brain , echocardiography and screening blood tests for metabolic disease ( fig . 1 ) showed no other abnormality . symptoms of respiratory failure with slow and irregular respiratory effort , cyanosis and seizures due to hypercapnia appeared during sleep but not during wakefulness ( fig . peripheral blood samples for the gene study were obtained from the patient and his mother at the age of 33 days , with the consent of the family . he had a tracheostomy and was discharged with a home mechanical ventilator . at the age of 27 months , he is healthy except that he is still ventilator - dependent during sleep . blood ( 4 ml ) was collected into an edta tube from the patient and his mother . genomic dna was obtained using a puregene reagent kit ( gentra , minneapolis , mn , usa ) according to the manufacturer 's instructions . the phox2b exon 3 region coding for the polyalanine repeat was amplified with primer pair 5'-ccaggtcccaatcccaac-3 ' ( forward ) and 5'-gagcccagccttgtccag-3 ' ( reverse ) ( fig .
the pcr reactions were carried out using 0.25 u amplitaq gold polymerase ( applied biosystems , foster city , ca , usa ) in a total volume of 25 µl , containing 50 ng genomic dna , 0.3 µm primers , 2.5 mm mgcl2 , and 0.2 mm dntps . the amplification was performed with an initial denaturation at 95 for 10 min followed by 35 cycles of denaturation at 94 for 30 sec , annealing at 57 for 30 sec , and extension at 72 for 30 sec . after electrophoresis of the pcr products on a 4% denaturing polyacrylamide gel , the allele repeat number was determined by comparison of bands to known size standards ( 232 bp for the normal 20 - repeat allele ) ( 9 ) . the amplification of exon 2 was carried out using 1.25 u amplitaq gold polymerase ( applied biosystems ) in a total volume of 25 µl , containing 50 ng of genomic dna , 0.5 µm of primers , 1 mm mgcl2 , and 0.2 mm dntps . the amplification of exon 3 was performed using the gc - rich system ( roche molecular biochemicals , indianapolis , in , usa ) because of its high gc content . the amplification for both exons was performed with an initial denaturation at 95 for 8 min , followed by 35 cycles of denaturation at 95 for 1 min , annealing at 62 for 1 min , and extension at 72 for 45 sec . the pcr products were column - purified and sequenced on an applied biosystems 3130 genetic analyzer ( applied biosystems ) . an extra band with a size > 232 bp was observed in the patient , indicating expansion of the polyalanine tract .
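the fragment - size arithmetic behind this genotyping step is simple enough to spell out : with the normal 20 - repeat allele giving a 232 bp product and each extra alanine codon adding 3 bp , band size and repeat number can be interconverted . the sketch below is only an illustration ; the 25 - 27 repeat range echoes the expansions mentioned later in the discussion , and the example band size is hypothetical , not the patient 's measured value .

    # Illustration of the repeat-sizing arithmetic: 232 bp corresponds to the
    # normal 20-repeat allele (stated above) and each additional alanine codon
    # adds 3 bp.  The example expansions and band size are hypothetical.

    NORMAL_REPEATS = 20
    NORMAL_SIZE_BP = 232
    BP_PER_CODON = 3

    def amplicon_size(repeats):
        return NORMAL_SIZE_BP + BP_PER_CODON * (repeats - NORMAL_REPEATS)

    def repeats_from_size(size_bp):
        # assumes the band size differs from 232 bp by a whole number of codons
        return NORMAL_REPEATS + (size_bp - NORMAL_SIZE_BP) // BP_PER_CODON

    for n in (20, 25, 27):                    # 25-27 repeats: the common expansions
        print(n, "repeats ->", amplicon_size(n), "bp")
    print("e.g. a 247 bp band ->", repeats_from_size(247), "repeat allele")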
cchs was suspected in the present case based on the following findings : 1 ) hypoventilation during sleep with no increase of respiratory frequency following severe desaturation , 2 ) onset of symptoms during the first day of life and a history of recurrent hypercapnic respiratory failure , and 3 ) absence of primary neuromuscular , lung or cardiac disease , or an identifiable brainstem lesion . cchs occurs in association with hirschsprung disease ( haddad syndrome ) with a frequency of 15 - 20% , with tumors of neural crest origin ( neuroblastoma , ganglioneuroblastoma , ganglioneuroma ) in 3% , and with growth hormone deficiency in < 1% ( 13 ) . korean patients with cchs combined with hirschsprung disease have been reported based on clinical symptoms without genetic analysis ( 3 - 7 ) . the occurrence of cchs and hirschsprung disease suggests a common molecular pathogenesis involving defects of one or more genes that control the development of neural crest - derived cell lineages . this patient did not have any other abnormalities but will be monitored for a malignancy of the ans in the future . infants with cchs may present with hypoventilation of variable severity , ranging from complete apnea during sleep and severe hypoventilation during wakefulness to mild hypoventilation during sleep ( 9 , 14 ) . the patient reported here required a tracheostomy and used a home mechanical ventilator in the pressure - support mode . several studies have shown that a heterozygous mutation in phox2b is sufficient to cause cchs , and this has confirmed its autosomal dominant inheritance pattern with incomplete penetrance ( 9 , 10 , 15 - 19 ) . this gene is known to code for a highly conserved transcription factor and plays a key role in the development of ans reflex circuits in mice , hence the association with cchs . the phox2b mutation , mapped to chromosome 4p12 , has been detected in approximately 97% of patients with cchs , and the majority of expanded alleles contained 25 - 27 polyalanine repeats ( 9 ) . weese - mayer et al . ( 9 ) suggested that the length of the phox2b polyalanine repeat mutation is associated with the number of ans symptoms and that there is also a significant association between the number of repeats in the mutations and the ventilation support required . the current patient had a relatively small number of polyalanine repeats compared to previous cases and required intermittent ventilator support during sleep , which was for less than 12 hr a day .
clinically , evaluation of the phox2b expansion could be used as a predictive test for cchs ; it might therefore be particularly useful for the parents of a child with cchs and for grown children with cchs . in addition , prenatal testing can be used to predict the severity of the disease in affected individuals in subsequent pregnancies . the identification of mutations in phox2b is important for the differential diagnosis in children with confusing symptoms such as asphyxia , prematurity , bronchopulmonary dysplasia , and neonatal seizures , and may lead to improved treatment options for children with cchs ( 9 ) .
congenital central hypoventilation syndrome ( cchs ) is a life - threatening disorder with apnea and cyanosis during sleep requiring immediate endotracheal intubation during the first day of life . the phox2b gene has been identified as the major gene involved in cchs . this is the first report of a korean neonate with cchs confirmed to have a phox2b mutation with expanded alleles containing 20 polyalanine repeats , which is a relatively small number compared to previous cases . the patient required intermittent ventilator support during sleep only and did not suffer from any other disorders of the autonomic nervous system . he continues to need ventilator support during sleep and remains alive . analysis of the phox2b gene is useful for the diagnosis and appropriate therapeutic intervention of cchs patients .
Khloe Kardashian Puts Relationship with James Harden on Hold Khloe Kardashian has put the brakes on her relationship with James Harden, in the wake of Lamar Odom's medical crisis ... sources tell TMZ. We're told Khloe is putting her relationship with the NBA star on ice, and as one source puts it, it doesn't take a rocket scientist to see she still has deep feelings for Lamar. TMZ broke the story ... Khloe will be by Lamar's side during his rehab, which will take many months. As we reported, Khloe and Lamar both signed the final divorce docs, but it takes anywhere from 2 to 4 months to process. Lamar Odom's Kidneys Failing ... Transplant in Play Lamar Odom's kidneys are failing and he may need a transplant ... TMZ has learned. Sources familiar with Lamar's medical condition tell TMZ, the organs that have failed have all bounced back in significant degree, except the kidneys. As one source put it, "His kidneys are shot." Lamar is at Cedars-Sinai Medical Center in L.A., and we're told he will undergo 6 hours of dialysis a day, but the end game may well be a transplant. As for his move from Vegas to L.A., our sources say the main reason for the change is so Lamar can get better specialized care. His condition has improved, but the move is not a signal he's out of the woods. We're told doctors at Cedars will be performing brain tests, which will help determine the extent of damage the strokes had on Lamar. Jim Harrick Says Lamar Odom Hoped He and Khloe Kardashian Could Still Get Back Together Lamar Odom's college coach and mentor, Jim Harrick, tells ET that the ailing 35-year-old basketball star still loves his ex, 31-year-old reality star Khloe Kardashian. Harrick, who spoke with Odom three weeks ago, says that Odom "harbored a hope" that he and Khloe "could at some point get back together." Khloe filed for divorce in December 2013, although the two are still legally married. "I know Lamar did wrong by Khloe, but at heart, he's a good guy and he never meant to hurt her," Harrick, who has known Odom since he was 14 years old, says. "Lamar felt like the Kardashians were the first real family he had ever had, and they completed his circle of life." Though, of course, there were also pressures brought on by being associated with the Kardashians. Harrick says Odom moved to Las Vegas, in part, to get away from "the media circus" surrounding his life since marrying Khloe. "When he can't face things, Lamar runs and hides to cope with the pressure," Harrick says. Still, the 77-year-old coach says Odom was "really happy and upbeat" when he talked with him less than a month ago. "He was laughing and joking, and I had no idea anything was up," he says. "I didn't think anything was wrong and he seemed fine." As for Odom's two children with his ex-girlfriend, Liza Morales -- Destiny and Lamar Jr. – Harrick says Odom "completely loved them" and "would talk about them all the time." He also notes how the death of his son, Jayden, in 2006 due to Sudden Infant Death Syndrome when he was just six months old, greatly affected him. "He became lost and deeply down," Harrick says about Odom's state of mind after losing his youngest son. "Lamar has always provided for his kids and wanted to be a good dad, but admitted he hadn't always been there for them."
A source previously told ET that the Kardashian family "still loves Lamar," though he and Khloe had split. Almost the entire Kardashian-Jenner family -- Khloe, Kris, Kourtney, Kim, and Kylie -- has been spotted in Vegas supporting Odom, though everyone except Khloe left on a private jet on Thursday. "Lamar is not a bad person, he just has very bad demons," the source told ET, referring to speculation that Odom had allegedly been facing substance abuse problems in recent years. "When Lamar is good, he's good. But when he's bad, there's no way of getting any worse." "Khloe is absolutely devastated and beyond upset," another source told ET on Thursday. "She has not left Lamar's bedside since she arrived in Las Vegas." But there is some good news on Odom's current condition. On Friday, ET exclusively learned that Odom had opened his eyes and was able to communicate.
– Khloe Kardashian really is sticking by Lamar Odom through his health crisis, even though their divorce is all but finalized. Not only did she accompany him back to Los Angeles on Monday for more specialized treatment, but TMZ reports she's taking a break from her current romance in order to focus on Odom. She had been dating another NBA star, James Harden, but sources tell TMZ Kardashian clearly still has feelings for Odom. She's expected to remain by his side while he undergoes months of rehab. As for Odom's prognosis, sources say his family has been warned there will be some permanent damage, but the extent is not yet clear. He's currently at Cedars-Sinai Medical Center, where doctors will do brain tests to find out how much damage the strokes did. And, while most of his organs have improved, a source tells TMZ Lamar's "kidneys are shot." He's set to get six hours of dialysis per day and may ultimately need a kidney transplant. (Odom apparently still has strong feelings for Kardashian as well.)
By Courtney Kube, Robert Windrem, William M. Arkin and Phil Helsel. North Korea carried out another banned ballistic missile test on Friday, U.S. officials confirmed to NBC News, but the missile exploded just after launch. "The launch was unsuccessful and exploded in midair," a South Korean military official told NBC News. South Korea is "totally ready to meet any and all kinds of provocation," the official added. Two U.S. officials said the missile was short-range, capable of reaching the South Korean capital of Seoul, but not Japan. U.S. Pacific Command said the missile was launched at 5:33 a.m. Saturday Seoul time (4:33 p.m. Friday ET) from near Pukchang airfield, and the missile did not leave North Korean territory. The move is the latest provocation amid heightened tensions on the peninsula. The White House said it is aware of the missile test and President Donald Trump has been briefed. Trump responded on Twitter about three hours after the missile launch: "North Korea disrespected the wishes of China & its highly respected President when it launched, though unsuccessfully, a missile today. Bad!" The test comes a day after Trump told Reuters in an interview that the U.S. and North Korea could be headed toward a "major, major conflict," and as South Korea announced it had installed key parts of a U.S. missile defense system. In the Reuters interview, Trump said: "We'd love to solve things diplomatically but it's very difficult." Earlier this week, North Korea conducted what were described as major live-fire drills as it marked the founding of its military. A spokesperson for South Korean presidential candidate Moon Jae-in warned that if North Korea continues with provocations, "it will be met by tough punishment from the international community." Japanese Chief Cabinet Secretary Yoshihide Suga said Japan was taking every precaution after the missile test. "The Japanese government will closely coordinate with the related countries such as the U.S. and South Korea at the United Nations Security Council and so on to urge North Korea to restrain itself while we are taking every precaution to face any contingency," Suga said. North Korea has conducted several ballistic missile launches this year. The country is barred by United Nations resolutions from carrying out ballistic missile tests. North Korea on April 15 launched a missile but it failed "almost immediately," according to U.S. and South Korean military officials. The country has conducted five nuclear tests since 2006, including two last year. North Korea has warned it is ready to carry out the test of an intercontinental ballistic missile at "any time," but it has never successfully launched such a missile.
SEOUL, South Korea (AP) — North Korea test-fired a mid-range ballistic missile from the western part of its country Saturday, but the launch apparently failed, South Korea and the United States said Saturday. The test will be condemned by outsiders as yet another step in the North's push for a nuclear-tipped missile that can strike the U.S. mainland. South Korea's Joint Chiefs of Staff said in a statement that the North fired the unidentified missile from around Pukchang, which is near the capital Pyongyang, but provided no other details. A U.S. official, speaking on condition of anonymity to discuss sensitive matters, said the missile was likely a medium-range KN-17 ballistic missile. It broke up a couple minutes after the launch and the pieces fell into the Sea of Japan. A South Korean military official also said without elaborating that the launch was believed to be a failure. He didn't want to be named, citing office rules. The official couldn't immediately confirm how far the missile flew or whether it had exploded shortly after launch. North Korea routinely test-fires a variety of ballistic missiles, despite United Nations prohibitions, as part of its weapons development. While shorter-range missiles are somewhat routine, there is strong outside worry about each longer-range North Korean ballistic test. Saturday's launch comes at a point of particularly high tension. U.S. President Donald Trump took an initial hard line with Pyongyang and sent a U.S. aircraft supercarrier to Korean waters. His diplomats are now taking a softer tone. On Friday, the United States and China offered starkly different strategies for addressing North Korea's escalating nuclear threat as Trump's top diplomat demanded full enforcement of economic sanctions on Pyongyang and urged new penalties. Stepping back from suggestions of U.S. military action, he even offered aid to North Korea if it ends its nuclear weapons program. The range of Secretary of State Rex Tillerson's suggestions, which over a span of 24 hours also included restarting negotiations, reflected America's failure to halt North Korea's nuclear advances despite decades of U.S.-led sanctions, military threats and stop-and-go rounds of diplomatic engagement. As the North approaches the capability to hit the U.S. mainland with a nuclear-tipped missile, the Trump administration feels it is running out of time. North Korea test-fired a ballistic missile Saturday in apparent defiance of a concerted US push for tougher international sanctions to curb Pyongyang's nuclear weapons ambitions. The latest launch, which South Korea said was a failure, came just hours after US Secretary of State Rex Tillerson warned the UN Security Council of "catastrophic consequences" if the international community -- most notably China -- failed to pressure the North into abandoning its weapons programme. Military options for dealing with the North were still "on the table", Tillerson warned in his first address to the UN body. The launch ratchets up tensions on the Korean peninsula, with Washington and Pyongyang locked in an ever-tighter spiral of threat, counter-threat and escalating military preparedness.
US President Donald Trump, who has warned of a "major conflict" with North Korean leader Kim Jong-Un's regime, said the latest test was a pointed snub to China -- the North's main ally and economic lifeline. "North Korea disrespected the wishes of China & its highly respected President when it launched, though unsuccessfully, a missile today. Bad!" Trump tweeted. The US is deploying a naval strike group led by an aircraft carrier to the Korean peninsula, and a missile-defence system called Terminal High Altitude Area Defense (THAAD) that officials say will be operational "within days". North Korea recently conducted its biggest-ever firing drill and has threatened to "bury at sea" the US aircraft carrier, amid signs it could be preparing for a sixth nuclear test. South Korea's defence ministry said it suspected Saturday's missile test had failed after a brief flight, while the US military's Pacific Command confirmed the rocket did not leave North Korean territory. South Korea condemned the launch, with foreign ministry spokesman Cho June-Hyuck saying that if the North continued to "play with fire", it would "face strong punitive steps in various levels", including from the UN Security Council. Japan has lodged a "serious protest and criticism" to the North, Chief Cabinet Secretary Yoshihide Suga told reporters after a national security council meeting. - Risk of nuclear attack 'real' - China pushed back at Tillerson's call at the UN Security Council for it to do more to rein in Pyongyang, arguing that it was unrealistic to expect one country to solve the conflict. "The use of force does not solve differences and will only lead to bigger disasters," Chinese Foreign Minister Wang Yi said. His country, he said, should not be "a focal point of the problem on the peninsula" and stressed that "the key to solving the nuclear issue on the peninsula does not lie in the hands of the Chinese side". Russia joined China in appealing for a return to talks and de-escalation. Military action was "completely unacceptable", Russian Deputy Foreign Minister Gennady Gatilov told the council, and a miscalculation could have "frightening consequences". But Tillerson argued that diplomacy had to be backed with credible muscle. "Diplomatic and financial levers of power will be backed up by willingness to counteract North Korean aggression with military action, if necessary," he said. "The threat of a North Korean nuclear attack on Seoul or Tokyo is real, and it is likely only a matter of time before North Korea develops the capability to strike the US mainland." - Key powers divided - The meeting of the top UN body on Friday laid bare major differences among key powers over the way to address the North Korea crisis. Over the past 11 years, the Security Council has imposed six sets of sanctions on Pyongyang -- two adopted last year -- to significantly ramp up pressure and deny the North Korean regime the hard currency revenue needed for its military programmes. But UN sanctions experts have repeatedly told the council the measures have had little impact because they have been poorly implemented. Tillerson called on all countries to downgrade or sever diplomatic relations with North Korea and impose targeted sanctions on entities and individuals supporting its missile and nuclear program. The United States is ready to impose sanctions on third countries where companies or individuals are found to have helped North Korea's military programmes, he said.
– North Korea launched a ballistic missile test around 5pm ET Friday (6am Saturday Seoul time), NBC News reports. The launch—an apparent failure—was confirmed by US and South Korean officials. One US official tells the AP the projectile was likely a medium-range KN-17 missile. The official says the missile broke up and fell into the Sea of Japan minutes after it launched. According to AFP, the South Korean military says the missile exploded seconds after launch. "The launch was unsuccessful and exploded in midair," a South Korean military official tells NBC. South Korean officials are still trying to figure out how far the missile traveled before exploding. It was launched from an area north of Pyongyang. The test comes amid a growing war of words between North Korea and the Trump administration. On Thursday, President Trump said the US and North Korea were on the path toward a "major, major conflict." UN resolutions bar North Korea from conducting ballistic missile tests, though it's already launched several this year.
when designing protocols of quantum information processing , one usually deals with some particular initial states . one is then interested in describing the evolution of such a concrete quantum state and its properties in time . for instance , one studies the time dependence of the degree of quantum entanglement , which characterizes the non classical correlations between subsystems and is treated as a crucial resource in the theory of quantum information @xcite . as a reference point one may compare the degree of entanglement of the analyzed state with the analogous properties of a typical , random state . such random states are also of direct physical interest , since they arise under the action of a typical quantum chaotic system , see e.g. @xcite . in this work we investigate mean values of certain measures of quantum entanglement , averaged over the entire space of pure states of a hilbert space of a given size . there exist several measures of quantum entanglement which do not increase under local operations and satisfy the required properties listed in @xcite , but it is hardly possible to single out the `` best '' universal quantity . on the contrary , different entanglement measures turned out to be optimal for various tasks , so it is likely we will have to learn to live with quite a few of them @xcite . the measures of quantum entanglement for a pure state of a bipartite system , @xmath10 , rely on its schmidt coefficients @xcite , equivalent to the spectrum @xmath11 of the reduced system , @xmath12 . by construction the sum of all schmidt coefficients equals unity , @xmath13 , so just @xmath14 of them are independent . to quantify entanglement of a pure state one uses entanglement monotones @xcite , defined as quantities which do not increase under local operations and classical communication ( the so - called locc operations ) . entanglement of a pure state of a @xmath0 system is therefore completely described by a suitable set of @xmath14 independent entanglement monotones . it is convenient to work with the ordered set of coefficients , @xmath15 . the first example of such a set of entanglement monotones , found by vidal , consists of the sums of the @xmath16 largest coefficients , @xmath17 with @xmath18 @xcite . alternatively , one can use rényi entropies of @xmath14 different orders . another set of monotones may be constructed out of symmetric polynomials of the schmidt coefficients of order @xmath19 @xcite , @xmath20 . for large @xmath21 these polynomials become small , so it is advantageous to consider the cognate quantities @xmath22 . gour noted that taking the @xmath21-th root of the polynomials does not spoil the monotonicity and proposed to use the normalized quantities @xmath23 as alternative measures of quantum entanglement @xcite . in particular he found unique properties of the last polynomial @xmath24 , equal to the determinant of the reduced matrix , @xmath25 . its rescaled @xmath21th root , @xmath26 , proportional to the geometric mean of all schmidt coefficients , was called the @xmath4concurrence in @xcite , where its operational interpretation as a type of entanglement capacity was suggested . this quantity , extended by the convex roof construction to mixed states , played a crucial role in the demonstration of an asymmetry of quantum correlations @xcite and was used to characterize the entanglement of assistance @xcite .
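to make the quantities defined above concrete , here is a small numpy sketch ( added for illustration , not taken from the paper ) that computes the schmidt coefficients of a bipartite pure state as the squared singular values of its coefficient matrix and evaluates the g - concurrence as the number of coefficients times their geometric mean ; the dimensions and the random test vector are arbitrary .

import numpy as np

def schmidt_coefficients(psi, n, k):
    """Schmidt coefficients of a pure state of an n x k system:
    squared singular values of its n x k coefficient matrix."""
    c = psi.reshape(n, k)
    s = np.linalg.svd(c, compute_uv=False)
    lam = s ** 2
    return lam / lam.sum()              # enforce sum(lam) = 1

def g_concurrence(lam):
    """G-concurrence: N times the geometric mean of the Schmidt coefficients."""
    n = len(lam)
    return n * np.prod(lam) ** (1.0 / n)

if __name__ == "__main__":
    n = k = 3
    rng = np.random.default_rng(0)
    psi = rng.normal(size=n * k) + 1j * rng.normal(size=n * k)   # example random vector
    psi = psi / np.linalg.norm(psi)
    lam = schmidt_coefficients(psi, n, k)
    print("schmidt coefficients:", np.sort(lam)[::-1])
    print("g-concurrence       :", g_concurrence(lam))

for a maximally entangled state all schmidt coefficients are equal and the g - concurrence reaches its maximal value 1 , while for a product state it vanishes .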
the aim of this work is to compute mean values and to describe probability distributions for the determinant @xmath1 and its root @xmath4 of random pure states of a bipartite system , generated with respect to the natural , unitary invariant measure on the space of pure states , also called the fubini - study ( fs ) measure . our analysis is performed for a bipartite system of an arbitrary size @xmath21 , and in particular we treat in detail the interesting limiting case @xmath5 . although our study directly concerns bipartite systems , one may infer some statements valid also in the general case of multipartite systems . the paper is organized as follows . in section ii we review the concept of a random pure state and describe certain probability measures on this set . average values of the @xmath4concurrence are computed in section iii , while the subsequent section concerns the probability distribution of this measure of quantum entanglement . the paper is concluded with some final remarks , while the discussion of the asymptotics of the probability distributions is postponed to an appendix . consider a pure state of a bipartite @xmath27 system represented in a product basis @xmath28 ; its schmidt coefficients coincide with the eigenvalues of a positive matrix @xmath29 , equal to the density matrix obtained by a partial trace over the @xmath30 - dimensional space . the matrix @xmath31 need not be hermitian , the only constraint being the trace condition , @xmath32 . furthermore , the natural unitarily invariant measure on the space of pure states corresponds to taking @xmath31 as a matrix from the ginibre ensemble @xcite . thus our problem consists in analyzing the distribution of determinants of random wishart matrices @xmath33 normalized by fixing their trace . the distributions of the schmidt coefficients are given by @xcite [ gen_meas ] @xmath34}/2 } \theta(\lambda_i)\prod_{i < j}|\lambda_i-\lambda_j|^{\beta}\ , \ ] ] in which the cases of real or complex @xmath31 are distinguished by the _ repulsion exponent _ @xmath35 @xcite , equal to @xmath36 and @xmath37 , respectively , and the normalization @xmath38 reads @xcite @xmath39}}^n } { \prod_{j=0}^{n-1 } \gamma{\left [ ( k - j)\beta/2 \right ] } \gamma{\left [ 1+(n - j)\beta/2 \right ] } } \ .\label{b_coeff}\ ] ] formulae describe a family of probability measures in the simplex of eigenvalues of a density matrix of size @xmath21 . the integer number @xmath30 , determining the size of the ancilla , can be treated as a free parameter . another important probability measure in the space of mixed quantum states is induced by the euclidean geometry and the hilbert - schmidt ( hs ) distance . assuming that each ball of a certain radius contains the same volume , one arrives at the measure @xcite [ consths ] \[ p^{(\beta)}_{\text{hs}}(\lambda_1,\ldots,\lambda_n ) := h_{n}^{(\beta ) } \, \delta{\left ( \sum_{i=1}^n \lambda_i - 1 \right ) } \prod_{i=1}^n \theta(\lambda_i ) \prod_{i < j } |\lambda_i-\lambda_j|^{\beta } \ , \] where the parameter @xmath35 distinguishes as before between the real and the complex cases .
the above normalization constant @xmath41 reads @xmath42 } \prod_{j=1}^n \biggl [ \frac { \gamma(1+j\beta/2 ) \gamma[1 + ( j-1)\beta/2 ] } { \gamma(1+\beta/2 ) } \biggr]\quad \cdot\label{consths2}\ ] ] we observe that the distribution , normalization constants included , can be recast into the form , provided that we choose @xmath43 , that is @xmath44 . using this observation , one can get a useful procedure for generating random density matrices distributed according to the hs measure : it suffices to take normalized wishart matrices @xmath33 , with @xmath31 belonging to the ginibre ensemble of non - hermitian matrices of appropriate dimension . aiming to derive the averaged moments needed in section [ ttrree ] , it is convenient to change variable in by putting @xmath45 , obtaining : [ constab ] @xmath46 } \prod_{j=1}^n \biggl [ \frac { \gamma(1+j\beta/2 ) \gamma[\alpha + ( j-1)\beta/2 ] } { \gamma(1+\beta/2 ) } \biggr]\quad \cdot\label{constab2}\ ] ] in the above formula the real variable @xmath47 can be used as a free parameter instead of the integer @xmath30 . in this section we are going to compute averages over an ensemble of random density matrices distributed according to the hs measure , which is induced by the euclidean geometry . this corresponds to fixing the size @xmath30 of the ancilla according to , depending on whether the real or the complex case is concerned . denoting the eigenvalues of the density matrix @xmath48 by @xmath49 , the moments of the determinants @xmath50 read @xmath51 . the product of heaviside step functions , present in the definition of @xmath52 , allows us to extend the domain of integration over the entire real axis . the integrand of coincides with the factor present in the right hand side of equation , provided that the parameter @xmath47 is set there to @xmath53 . using this , the integral can be computed from , and reads @xmath54 . for the sake of clarity , from now on the sub- and superscripts @xmath55 and @xmath56 will often be replaced by @xmath57 and @xmath58 , respectively . making use of equation , one obtains the moments of the @xmath4concurrence by imposing @xmath59 in the ratios @xmath60 , rescaled by a factor @xmath61 . thus we now get @xmath62 . in fig . [ d_mean ] the mean values @xmath63 and the variance @xmath64 are represented as functions of @xmath21 for both the complex and the real cases . [ caption of fig . [ d_mean ] : @xmath4concurrence for ( a ) complex and ( b ) real random mixed states of a @xmath65 system distributed according to the hs measure ; the average is computed by means of equation , error bars represent the variance of @xmath66 , and the dashed line represents the asymptote @xmath67 , whose explanation is given in section [ asymptote ] . ] this section is devoted to the study of probability distributions . we shall start with the simplest problem of determining the distribution of the determinant @xmath1 of a @xmath69 density matrix @xmath70 distributed according to the hs measure . in this case an explicit solution is easily obtained by integrating the dirac delta @xmath71 over the distribution @xmath72 of , that is @xmath73 . it is a very simple distribution since @xmath74 , thus @xmath75 \label{p2(d)1}\ \cdot\ ] ] the @xmath4concurrence distribution @xmath76 can be computed either by integrating @xmath77 over @xmath72 , or simply by using the latter result together with @xmath78 ; in both cases ( see fig . [ littog ] ) @xmath79 \label{p2(d)11}\ \cdot\ ] ] note that , due to @xmath80 , ( only ) for the case @xmath81 the @xmath4concurrence given by reduces to the standard concurrence @xcite , @xmath82 .
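as an illustration of the sampling recipe mentioned above ( a sketch added here , not taken from the paper ) , the following numpy fragment draws @xmath31 from the complex ginibre ensemble , forms the normalized wishart matrix and returns a random density matrix ; a square @xmath31 corresponds to the hilbert - schmidt case , and the small monte carlo loop at the bottom merely estimates the mean determinant and the mean g - concurrence for an example dimension .

import numpy as np

def random_density_matrix(n, k, rng):
    """rho = G G^dagger / tr(G G^dagger), with G an n x k complex Ginibre matrix.
    For k = n this samples the Hilbert-Schmidt measure (complex case, beta = 2)."""
    g = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))
    w = g @ g.conj().T
    return w / np.trace(w).real

if __name__ == "__main__":
    n = 4
    rng = np.random.default_rng(1)
    dets = np.array([np.linalg.det(random_density_matrix(n, n, rng)).real
                     for _ in range(20000)])
    print("mean determinant  :", dets.mean())
    print("mean g-concurrence:", (n * dets ** (1.0 / n)).mean())

choosing a rectangular @xmath31 with more columns than rows would instead sample the asymmetric induced measures discussed further below .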
thus formula for the complex case coincides with the distribution of concurrence @xmath83 obtained in @xcite . for higher @xmath21 we will construct the distribution @xmath84 from all moments @xmath85 given by equation ; indeed @xmath86 with @xmath87 and @xmath88 , and so we can obtain @xmath89 by inverse laplace transform or inverse mellin transform as integral along the imaginary @xmath90axis : @xmath91 although equations and allow us to compute the @xmath92 probabilities , the cognate quantities @xmath68 can be determined as well by using @xmath93 by taking from the explicit expression for @xmath94 , one can indeed get the simple expression @xmath95 from now on formulae and figures will be given indifferently for both @xmath4 and @xmath1 distribution , being clear their mutual relation . in particular the @xmath1distribution is more indicated in showing details of calculation , for its simpler form , whereas the @xmath4distribution better shows features in the pictures , for its domain being independent of @xmath21 . concurrence s distributions @xmath66 are compared for different @xmath21 in the case of * ( a ) * complex and * ( b ) * real random pure states . the distributions are obtained by performing numerically @xcite the inverse laplace transform of equation . dashed vertical line centered in @xmath67 denotes the position of the dirac delta corresponding to @xmath96 , as it is shown in section [ asymptote ] . ] important is the asymptotic behavior of the gamma function for large argument ( stirling s formula ) @xmath97 } \nonumber\ ] ] for @xmath98 and @xmath99 . this implies the asymptotic behavior of for large @xmath100 : @xmath101{0pt}{5ex}\langle d^m_{\mathds{c } } \rangle_n & \simeq d^{\text{{\textsf{s}}}}_{\mathds{c}}{\left ( m , n \right ) } : = a_n^{\mathds{c}}\cdot\frac{\textrm{e}^{-mn\log n}}{m^{(n^2 - 1)/2}}\\ \rule[-2ex]{0pt}{5ex}\langle d^m_{\mathds{r } } \rangle_n & \simeq d^{\text{{\textsf{s}}}}_{\mathds{r}}{\left ( m , n \right ) } : = a_n^{\mathds{r}}\cdot\frac{\textrm{e}^{-mn\log n}}{m^{(n^2+n-2)/4 } } \end{cases } \quad,\quad\text{with}\quad \begin{cases } \rule[-3ex]{0pt}{5ex } a_n^{\mathds{c}}:= \frac{(2\pi)^{(n-1)/2 } \gamma(n^2)}{n^{n^2 - 1/2 } \prod_{j=1}^n \gamma(j)}\\ \rule[-2ex]{0pt}{5ex}a_n^{\mathds{r}}:= \frac{(2\pi)^{(n-1)/2 } \gamma{\left [ { \left ( n^2+n \right)}/{2 } \right]}}{n^{(n^2+n-1)/2 } \prod_{j=1}^n \gamma{\left [ { \left ( j+1 \right)}/{2 } \right ] } } \end{cases } \label{momasympt}\quad\cdot\ ] ] as a consequence the integral converges and moreover it vanishes if @xmath102 or @xmath103 , because in that case we can close the contour in ( [ p_n ( d ) ] ) in the right @xmath90halfplane according to the jordan s lemma @xcite . physically this means that there are no density matrices with determinants greater than the one with maximal entropy . in the rest of this section we will give the asymptotic behavior of distributions @xmath92 for the two edges of the domain , that is @xmath104 and @xmath105 . the details of calculation , together with the explicit @xmath21dependence of all coefficients listed here in the following , are collected in appendix [ appendice ] . in particular , when very close to the completely mixed state , that is @xmath106 , we have the result ( see fig . [ asy1 ] ) @xmath101{0pt}{5ex}p_n^{\mathds{c}}(d ) \simeq a_n^{\mathds{c}}\cdot\frac{(-\log d -n\log n)^{(n^2 - 3)/2}}{d\ { \big[{\left ( n^2 - 3 \right)}/{2}\big]}{\displaystyle ! 
} } \\ \rule[-2ex]{0pt}{5ex}p_n^{\mathds{r}}(d ) \simeq a_n^{\mathds{r}}\cdot\frac{(-\log d -n\log n)^{(n^2+n-6)/4}}{d\ { \big[{\left ( n^2+n-6 \right)}/{4}\big]}{\displaystyle ! } } \end{cases } \label{dasdx}\ \cdot\ ] ] bins histogram of @xmath107 determinants of @xmath108 complex density matrices distributed accordingly to the measure is compared with the right asymptote given by equation ( plotted in solid line ) . same analysis is depicted in panel * ( b ) * , but for @xmath108 real density matrices . ] moreover , using together with @xmath109 we simply find @xmath101{0pt}{5ex}p_n^{\mathds{c}}(g ) \simeq { \widetilde{a}}_n^{\mathds{c}}\cdot\frac{(1-g^n)^{(n^2 - 3)/2}}{g}\\ \rule[-2ex]{0pt}{5ex}p_n^{\mathds{r}}(g ) \simeq { \widetilde{a}}_n^{\mathds{r}}\cdot\frac{(1-g^n)^{(n^2+n-6)/4}}{g } \end{cases } \quad,\quad\text{with}\quad \begin{cases } \rule[-3ex]{0pt}{5ex } { \widetilde{a}}_n^{\mathds{c}}:= a_n^{\mathds{c}}\cdot \frac{n}{\gamma{\big[{\left ( n^2 - 1 \right)}/{2}\big]}}\\ \rule[-2ex]{0pt}{5ex } { \widetilde{a}}_n^{\mathds{r}}:= a_n^{\mathds{r}}\cdot \frac{n}{\gamma{\big[{\left ( n^2+n-2 \right)}/{4}\big ] } } \end{cases } \nonumber\quad\cdot\ ] ] for the other part of the spectrum , that is for very small @xmath1 , the probability @xmath110 can be expanded in a power series with some logarithmic corrections , as follows : @xmath111 in particular , coefficients @xmath112 are computed in appendix [ appendice ] for all @xmath113 , whereas for @xmath114 and @xmath115 we limit ourself to explicitly solve the case @xmath116 ( the case @xmath81 is simply given by formula ) . bins histogram of @xmath117 @xmath4concurrence of @xmath118 complex density matrices distributed according to the measure . the other panels * ( b ) * * ( c ) * and * ( d ) * shows histograms ( for different @xmath21 ) together with the distribution of @xmath4concurrence obtained by inverse laplace transforming as in equation ( plotted in solid lines ) . the left asymptote given by eq . , computed up to @xmath119 , is also plotted in dashed line for comparison ; in panel * ( b ) * we also add the contribution given by @xmath120 coefficients , using a dotted line . ] the situation is similar when we do consider , in the same region of the domain , the probability @xmath121 , corresponding to small determinants of reduced @xmath122 real density matrices distributed . 
the expansion is still a power series ( plus logarithmic corrections ) but the exponents are now semi integer , according to the mechanism described in appendix [ appendice ] , thus the probability reads : @xmath123 iterating the recursion relation for the gamma function @xmath124 , we can recast expression of the @xmath90moment of the @xmath4concurrence of complex random pure state as @xmath125^{n } \right\}}{\left\ { \prod_{k=1}^{n-1}\left(1+\frac{m}{kn}\right)^{n - k } \right\}}\quad,\nonumber\ ] ] with the asymptotics characterized with help of the euler constant @xmath126 , @xmath127{}\frac{1}{n^m}\quad , \nonumber\\\left[\frac{m}{n}\;\gamma\left(\frac{m}{n}\right)\right]^{n}&\xrightarrow[\ n\to\infty\ ] { } \textrm{e}^{-\gamma m}\nonumber \intertext{and } \prod_{k=1}^{n-1}\left(1+\frac{m}{kn}\right)^{n - k}&\xrightarrow[\ n\to\infty\ ] { } n^{m}\textrm{e}^{m(\gamma-1)}\quad,\nonumber \intertext{so that finally}\left < g_{\mathds{c}}^m\right>_n & \xrightarrow[\ n\to\infty\ ] { } \textrm{e}^{-m}\quad\cdot\label{cplxas}\end{aligned}\ ] ] for the analogue moments of @xmath4concurrence of real random pure state , some technicality requires that the sequence of odd and even @xmath21 has to be analyzed separately , although it is not hard to prove that the limit is the same . for that reason , we will simply illustrate the case @xmath128 , @xmath129 , for which gives @xmath130^{p } \right\ } } { \left\ { \left[{\left ( \frac{m}{2p}+\frac{1}{2 } \right)}\;\gamma\left(\frac{m}{2p}+\frac{1}{2}\right)\right]^{p } \right\}}\times\\ \times { \left\ { \prod_{k=1}^{p-1}\left(1+\frac{m}{2pk}\right)^{p - k } \right\}}{\left\ { \prod_{k=\frac{3}{2}}^{p-\frac{1}{2}}\left(1+\frac{m}{2pk}\right)^{p - k-\frac{1}{2 } } \right\}}\quad,\nonumber\end{gathered}\ ] ] with @xmath131{}{{\left ( \frac{2}{2p+1 } \right)}}^m \quad , \nonumber\\\left[\frac{m}{2p}\;\gamma\left(\frac{m}{2p}\right)\right]^{p } & \xrightarrow[\ p\to\infty\ ] { } \textrm{e}^{-\gamma \frac{m}{2 } } \quad , \nonumber\\\left[{\left ( \frac{m}{2p}+\frac{1}{2 } \right)}\;\gamma\left(\frac{m}{2p}+\frac{1}{2}\right)\right]^{p } & \xrightarrow[\ p\to\infty\ ] { } \frac{\pi^{\frac{p}{2}}}{2^{{\left ( p+m \right)}}}\:\textrm{e}^{m{\left ( 1-\frac{\gamma}{2 } \right ) } } \quad , \nonumber\\\prod_{k=1}^{p-1}\left(1+\frac{m}{2pk}\right)^{p - k } & \xrightarrow[\ p\to\infty\ ] { } p^{\frac{m}{2}}\textrm{e}^{\frac{m}{2}{\left ( \gamma-1 \right)}}\nonumber \intertext{and } \prod_{k=\frac{3}{2}}^{p-\frac{1}{2}}\left(1+\frac{m}{2pk}\right)^{p - k-\frac{1}{2 } } & \xrightarrow[\ p\to\infty\ ] { } \textrm{e}^{\frac{m}{2}{\left ( \gamma-1 \right)}}\textrm{e}^{-m}\:2^m{{\left ( p+\frac{1}{2 } \right)}}^{\frac{m}{2 } } \quad\cdot \nonumber\ ] ] putting all factors together we arrive at the general result ( compare with ) @xmath132 the above expression , valid for both @xmath133 , is useful to derive the limiting distribution @xmath134 we see from that its average is @xmath135 and its variance is @xmath136 ; such behavior can be recognized in fig [ d_mean ] . moreover , by fixing @xmath137 , one can see that @xmath138 of is nothing but the laplace transform of the function @xmath139 so that , by inverse laplace transforming , we obtain @xmath140 rewriting the argument of the dirac delta we finally arrive at @xmath141 in other words , we have shown that for large systems the g concurrence of random states is localized arbitrarily close to the averaged value . a similar concentration effect has recently been quantified @xcite for bipartite @xmath27 systems . 
in particular the von neumann entropy of the reduced density matrix of the first subsystem concentrates around the entropy of the maximally mixed state , @xmath142 , if we let the dimension @xmath30 of the auxiliary subsystem to go to infinity faster than @xmath21 . when @xmath143 , so that the induced distribution coincides with the ilbert chmidt distribution , and @xmath5 , then von neumann entropy concentrate around @xmath144 @xcite . remarkably , @xmath4concurrence displays a similar concentration effect ; moreover , we are in position to prove the convergence of its distribution to a dirac delta centered at a non trivial value @xmath145 . the determinants and @xmath4concurrence may be also averaged in the general case of asymmetric induced measure . consider an interesting case @xmath146 . as for the distribution discussed in section [ sezione2 ] the expectation value and the higher moments may be expressed as a ratio of normalization constants and . for instance , the moments read @xmath147{0pt}{4.5ex}\ \begin{cases}\displaystyle \rule[-5.5ex]{0pt}{2.5ex}\langle g^m_{\mathds{c } } \rangle_{n , k } & = n^m \frac{b^{(2)}_{n , k } } { c_n^{(m / n+k - n+1\:,\:2 ) } } = \ \displaystyle n^m \ \frac{\gamma\left(n k\right ) } { \gamma\left(n k+m\right ) } \ \prod_{j=1}^{n}\ \frac{\gamma\left(k\,-\,n\,+\,j\;+\;{m}/{n}\right ) } { \gamma\left(k\,-\,n\,+\,j\right)}\\ \langle g^m_{\mathds{r } } \rangle_{n , k}\displaystyle & = n^m \frac{b^{(1)}_{n , k } } { c_n^{(m / n+(k - n+1)/2\:,\:1 ) } } = \ \displaystyle n^m\ \frac{\gamma\left({n k}/{2}\right ) } { \gamma\left({n k}/{2}+m\right ) } \ \prod_{j=1}^{n}\ \frac{\gamma{\left [ { \left ( k\,-\,n\,+\,j \right)}/{2}+{m}/{n } \right ] } } { \gamma{\left [ { \left ( k\,-\,n\,+\,j \right)}/{2 } \right ] } } \end{cases}\quad\cdot\ } \label{mom_on_ind}\ ] ] let us now study a particular case of the induced measure , for which we consider bipartite systems of arbitrarily large dimension , with the only constraint that the ratio between the size @xmath30 of the ancilla and the size @xmath21 of the principal subsystem are fixed and greater than one . let this ratio be expressed by the rational number @xmath148 , with the @xmath149 and @xmath150 integers ; this means that we are considering systems with @xmath151 . with the same tools used in computing , one can let @xmath152 go to infinity and obtain @xmath153{0pt}{4.5ex}\ g(m)&\coloneqq\lim_{j\to\infty}\left < g_{{\left ( \beta \right)}}^m\right>_{j\ell_1,j\ell_2}= x_q^{-m}\quad,\quad\forall\ \beta\in{\left\ { 1,2 \right\}}\quad,\label{genas2}\intertext{with}x_q&\coloneqq \;\frac{1}{\textrm{e}}\ { { \left ( \frac{q}{q-1 } \right)}}^{q-1}\label{pos_q}\quad,\quad q>1\quad\cdot\end{aligned}\ ] ] the limiting distribution @xmath154 , can be earned as before and reads @xmath155{0pt}{4.5ex}\ p^{(\beta)}_q(g)\coloneqq\lim_{j\to\infty}p_{j\ell_1,j\ell_2}^{(\beta)}(g)=\delta(g - x_q)\quad , \,\ \nonumber\ ] ] for the complex as well as for the real case . although the accumulation point @xmath156 is not defined for the case @xmath157 ( that is the case in which states in the principal system are distributed ) , we find however @xmath158 , confirming our previous result . moreover such values represent an infimum for @xmath156 , whereas it attains the supremum on the other part of the domain , that is for @xmath159 . such case correspond an extremely large environment , for which @xmath160 , that is in turn the @xmath4concurrence of the completely mixed state . 
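the concentration described above is easy to observe numerically ; the sketch below ( an illustration , not the paper 's derivation ) estimates the mean g - concurrence of induced random states for growing dimensions and compares it with 1/e for the square ( hilbert - schmidt ) case and with the limiting point x_q = ( 1/e ) ( q/(q-1) )^( q-1 ) , read off the formula quoted above , for a fixed ancilla - to - system ratio q ; the sample sizes and dimensions are arbitrary .

import numpy as np

def mean_g_concurrence(n, k, samples, rng):
    """Monte Carlo estimate of <G> for the measure induced by an n x k ancilla (complex case)."""
    total = 0.0
    for _ in range(samples):
        g = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))
        lam = np.linalg.eigvalsh(g @ g.conj().T)
        lam = lam / lam.sum()
        total += n * np.exp(np.mean(np.log(lam)))   # N times the geometric mean
    return total / samples

def x_q(q):
    """Limiting concentration point for a fixed ratio q = K/N > 1 (formula quoted in the text)."""
    return (1.0 / np.e) * (q / (q - 1.0)) ** (q - 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    for n in (4, 16, 64):
        print("k = n ,  n =", n, ":", mean_g_concurrence(n, n, 400, rng), " vs 1/e =", 1.0 / np.e)
    for n in (4, 16, 64):
        print("k = 2n , n =", n, ":", mean_g_concurrence(n, 2 * n, 400, rng), " vs x_2 =", x_q(2.0))

as the dimension grows , the square - case estimates should drift toward 1/e and the q = 2 estimates toward 2/e , in line with the delta - function limits quoted above .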
thus we find further evidence that a large environment concentrates reduced density matrices around the maximally mixed state @xcite . the generalized @xmath4concurrence is likely to be the first measure of pure state entanglement for which one could find not only the mean value over the set of random pure states , but also compute explicitly all moments and describe its probability distribution , deriving an analytic expression in the large @xmath21 limit . this offers various potential applications for our work . on the one hand , analyzing a concrete quantum state and its entanglement , we may check to what extent its properties are non typical . in practice this can be done by a comparison of its @xmath4concurrence @xmath4 with the mean value @xmath161 , and by comparing its deviation from the average , @xmath162 , with the square root of the variance of the distribution . on the other hand , if one needs a quantum state with some particular properties , one may estimate how difficult it is to obtain such a state at random . for instance , looking for a state with a large degree of entanglement , with concurrence greater than a given value @xmath163 , one can make use of the derived probability distribution by integrating it from @xmath163 to unity in order to evaluate the probability of generating the desired state by a fully uncontrolled , chaotic quantum evolution . although in this work we have concentrated our attention on pure states of bipartite systems , the averages obtained for the asymmetric induced measures with @xmath146 may easily be applied to the more general , multipartite case . consider a system containing @xmath164 qudits ( particles described in a @xmath165 - dimensional hilbert space ) . this system may be divided by an arbitrary bipartite splitting into @xmath166 and @xmath167 particles , and one can study entanglement between the two subsystems , see e.g. @xcite . the partial trace over @xmath166 qudits is equivalent to the partial trace performed over a single ancilla of size @xmath168 , so , setting the size of the system @xmath169 , one may read out the average concurrence from eq . . in particular , if @xmath164 is even and we put @xmath170 , then the ratio @xmath171 is equal to @xmath172 , and in the asymptotic limit @xmath173 the concurrence concentrates around a mean which depends only on the asymmetry @xmath16 of the splitting . our research may also be considered a contribution to random matrix theory : we have found the distribution of the determinants of random wishart matrices @xmath33 , normalized by fixing their trace . furthermore , the analysis of the distribution of @xmath4-concurrence in the limit of large system sizes provides an illustrative example of the geometric concentration effect , since in high dimensions the distribution of the determinant is well localized around the mean value . this observation can also be related to the central limit theorem applied to the logarithms of the eigenvalues of a density matrix , the sum of which is equal to the logarithm of the determinant . it is a pleasure to thank p. hayden and p. horodecki for stimulating discussions . this work was financed by the /transregio12 project financed by . we also acknowledge support provided by the eu research project and the grant @xmath36 @xmath174 @xmath175 @xmath176 of the polish ministry of science and information technology . the starting point is integral . since all the poles of the integrand are in the left half plane ( see it in ) , the contour integration along the imaginary axis can be modified into one along the right asymptotic half plane , that is , over a very large semicircle connecting @xmath177 to @xmath178 ; this allows us to use stirling 's formula to replace @xmath179 with @xmath180 ( see formula ) in the integrand of . of course we made an approximation , but we know that the formula we ended up with matches the correct result ( @xmath181 for @xmath182 ) at the point @xmath183 , so that such an approximation should hold close to that point . now we observe that @xmath180 has poles only at @xmath184 , so that our contour of integration can be modified provided that we do not cross the origin , and by doing so we obtain @xmath185 , where @xmath186 is now the contour that , starting from @xmath177 , gets close to the negative real axis in the asymptotic lower left quarter plane , winds around @xmath187 in the counterclockwise direction , and then approaches @xmath178 in the asymptotic upper left quarter plane . we now apply jordan 's lemma once more and remove the asymptotic semicircle from @xmath186 . after rescaling @xmath188 , with the latter defined by @xmath189 and close to @xmath36 , we arrive at the well known hankel contour integral for the inverse of the gamma function ( @xmath190 ) @xcite , which leads to and gives the asymptotic behavior for @xmath191 .
since all the poles of the integrand are in the left half plane ( see it in ) , the contour integration along the imaginary axis can be modified into the one along the right asymptotic half plane , that is on a very large semicircle connecting @xmath177 to @xmath178 ; this allow us to use the stirling s formula for replacing @xmath179 with @xmath180 ( see formula ) in the integrand of . of course we made an approximation , but we know that the formula we ended up matches the correct result ( @xmath181 for @xmath182 ) in the point @xmath183 , so that such approximation would hold close to that point . now we observe that @xmath180 has poles only in @xmath184 , so that our contour of integration can be modified provided that we do not cross the origin , and we do so obtaining @xmath185 where @xmath186 is now the contour that , starting from @xmath177 get close to the negative real axis on the asymptotic left lower quarter plane , winds around @xmath187 in the counterclockwise direction , and then approaches @xmath178 on the asymptotic left upper quarter plane . but now we apply once more jordan s lemma and we remove the asymptotic semi circle from @xmath186 . after rescaling @xmath188 , with the latter defined by @xmath189 and close to @xmath36 , we arrive at the well known hankel s contour integral for the inverse of the gamma function ( @xmath190 ) @xcite , that leads to and gives the asymptotic behavior for @xmath191 . [ [ left - asymptote - of - p_nmathdsc - d - for - complex - random - pure - states ] ] left asymptote of * @xmath192 * for complex random pure states ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ now let us consider the behavior of @xmath193 at the lower edge of the spectrum @xmath194 . in that case one can close the integral in the left halfplane obtaining contributions from all the poles of the gamma functions in @xmath195 ( see ) . such poles are located at each of the negative integers @xmath196 ; fortunately there is the factor @xmath197 such that we obtain a series in powers of @xmath1 . because of the multiple gamma functions in , most of the poles are degenerate and the general feature ( for an arbitrary large @xmath21 ) is that the pole in @xmath198 is of order @xmath199 : due to this fact the @xmath1powers in the expansion get in general a logarithmic correction . the first pole at @xmath200 is non degenerate and yields @xmath201 including the next order2 pole ( @xmath202 ) contribution we find the asymptotic expansion for @xmath194 @xmath203 with @xmath204{0pt}{5ex}\\{\widetilde{x}}_n^{\mathds{c}}&={\textstyle x_n^{\mathds{c}}\big(n+n\psi(n^2 - 2n)-4 - 2\psi(1)-(n-2)\psi(n-2)\big)}\end{cases } \label{cdto0 } \quad\cdot\ ] ] here @xmath205 is the digamma function . ] , or polygamma function of order @xmath136 , with @xmath206 note that the euler constant @xmath186 cancels everywhere . by adding the next order3 pole ( @xmath207 ) contribution one gets in general the terms in corresponding to the @xmath114 and @xmath115 coefficients , although the latter are in general rather complicated , involving polygamma function of order higher than @xmath136 . 
this is not the case when @xmath116 , for which a cancelation makes @xmath207 a pole of order @xmath37 , and the coefficients read : @xmath208 [ [ left - asymptote - of - p_nmathdsr - d - for - real - random - pure - states ] ] left asymptote of * @xmath209 * for real random pure states ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ we will apply the same reasoning of the previous case , just now differing for the fact that , when @xmath210 , the @xmath211 pole @xmath212 of the integrand of is @xmath213 ; in general , for arbitrarily large @xmath21 , its corresponding order is given by @xmath214 , where @xmath215 means the larger integer not exceeding @xmath216 . in particular , the firsts two poles @xmath200 and @xmath217 are non degenerate and yield , @xmath218 . ] @xmath219 including the next two @xmath37order poles contributions ( @xmath220 and @xmath221 ) we determine , for @xmath222 @xmath223{0pt}{5ex}\\ { \widetilde{x}}_n^{\mathds{r}}&={\textstyle x_n^{\mathds{r}}{\left\ { n+n\psi{\left ( \frac{n^2 - 3n}{2 } \right)}-8-\frac{3}{2}\:\psi{\left ( \frac{1}{2 } \right)}-2\:\psi{\left ( 1 \right)}-\frac{n-3}{2}\:\psi{\left ( \frac{n-3}{2 } \right)}-\frac{n-4}{2}\:\psi{\left ( \frac{n-4}{2 } \right ) } \right\}}}\end{cases}\label{pn(1)_r1}\quad , \intertext{and for $ n>4 $ } \,&\begin{cases}w_n^{\mathds{r}}&=-\frac{\sqrt{\pi}}{3}\frac{2^{2n-3}\;\gamma\big(\frac{n^2+n}{2}\big)}{\gamma{\left ( \frac{n^2 - 4n}{2 } \right)}\;\gamma{\left ( \frac{n+1}{2 } \right)}\;\gamma{\left ( n-1 \right)}\;\gamma{\left ( n-3 \right)}}\rule[-3ex]{0pt}{5ex}\\{\widetilde{w}}_n^{\mathds{r}}&={\textstyle w_n^{\mathds{r}}{\left\ { n+n\psi{\left ( \frac{n^2 - 4n}{2 } \right)}-\frac{35}{3}-\frac{5}{2}\:\psi{\left ( \frac{1}{2 } \right)}-2\:\psi{\left ( 1 \right)}-\frac{n-4}{2}\:\psi{\left ( \frac{n-4}{2 } \right)}-\frac{n-5}{2}\:\psi{\left ( \frac{n-5}{2 } \right ) } \right\ } } } \end{cases}\label{pn(1)_r2}\quad,\end{aligned}\ ] ] where we made use once more of the @xmath224digamma function ; moreover , the notation @xmath225 is understood . the case @xmath116 constitutes an exception for @xmath226 and @xmath227 s coefficients , because of the lowering of the order of @xmath228 and @xmath229 poles ; moreover , for the latter pole , the same happens also for @xmath230 . all these coefficients need separate calculations and read @xmath231
average entanglement of random pure states of an @xmath0 composite system is analyzed . we compute the average value of the determinant @xmath1 of the reduced state , which forms an entanglement monotone . calculating higher moments of the determinant we characterize the probability distribution @xmath2 . similar results are obtained for the rescaled @xmath3 root of the determinant , called the @xmath4concurrence . we show that in the limit @xmath5 this quantity becomes concentrated at a single point @xmath6 . the position of the concentration point changes if one considers an arbitrary @xmath7 bipartite system , in the joint limit @xmath8 , with @xmath9 fixed .
OKLAHOMA CITY – A business owner who made national headlines earlier this year is now asking for help. The owner of P.B. Jams, in Warr Acres, noticed someone had been rummaging through the trash, specifically food containers. She says it broke her heart to know someone was so down on their luck they would be digging through her trash. So, rather than ignore it, she decided to help. "That really, it hurt me that someone had to do that," said Ashley Jiron, owner of P.B. Jams. So she put a sign on the dumpster and at the front door, telling whomever was eating from her trash that they're "a human being, and worth more than a meal from a dumpster." The sign goes on to say the person is welcome to come in to the sandwich shop for a meal, free of charge. "I think we've all been in that position where we needed someone's help and we just needed someone to extend that hand and if I can be that one person to extend that hand to another human being then I will definitely do it," Ashley said. Now, Jiron says she is the one in need of help. Jiron says she is fighting to keep the doors open, but is not ready to give up yet. She has created a GoFundMe account in hopes of raising $5,000 to pay for expenses related to P.B. Jams. Hello to all of you. I am Ashley Jiron, the owner of P.B. Jams in Warr Acres, Oklahoma - a small, local restaurant that allows our guests to prepay a meal for persons in need. Not only this, but we give back to a different charity/organization each month using all monetary funds that have been donated. Why? Because I believe that humans are worth it. When I began this business venture I studied demographics, areas, traffic, and everything I thought I needed to successfully run a business, but no studying prepared me for the harsh realities of business ownership. Now I am coming to you because I need your help to stay afloat so that I have time to grow my business and continue my efforts to help our fellow neighbors in need. As I always say: let's work with each other, rather than against each other, because together we can make a difference. Thank you for your time and support of my journey. - Ashley
– As Ashley Jiron explains in a post on her Oklahoma restaurant's Facebook page, she's spent the past few weeks mulling over words that she really didn't want to say: "It is with a heavy heart that I have to announce P.B. Jams' permanent closing after the holidays." Jiron, who gained national attention earlier this year by kindly offering to feed an anonymous person who was rummaging through her eatery's dumpster, is financially struggling and was afraid she'll have to shut her business down and stop not only serving her customers, but also her community: Since her generous note to the dumpster diver went viral, she's also set up a #ShareTheNuts campaign that allows people to come into her restaurant and prepay for meals for people in need. Jiron uses any money left over from the campaign at the end of the month to make purchases for other organizations that help the community. All this while she's been struggling on her own raising two young daughters. "We've had good months and we've had bad months," she writes on Facebook. "We've had electricity and we've had no electricity. We've had a roof and we've lost a roof." But Jiron says she's been inspired to keep fighting for PB Jams because of the outpouring of both financial and emotional support from the public, KFOR reports. "You've given me the motivation I needed to not give up," she says on Facebook, noting that despite her initial reluctance to crowdfund, she's set up a GoFundMe page to help pay for some of her restaurant's expenses so that she can "continue my efforts to help our fellow neighbors in need." She adds on Facebook, "I'll keep fighting for my business until I cannot fight anymore." (A Utah man committed "random acts of pasta" to feed the homeless last winter.)
replacement of missing teeth with dental implant procedures is one of the greatest advances in dentistry . the problem of resorbed ridges , and the ways to add hard and soft tissue to defective sites in order to provide adequate height and width for appropriate implant insertion , has remained challenging . for the correction of defective ridges several solutions have been presented , including onlay lateral ridge bone grafting , horizontal osteodistraction , and guided bone regeneration techniques . the lateral ridge split technique is a way to solve the problem of width in narrow ridges with adequate height . dental implant placement in atrophic , deficient ridges using onlay bone grafting techniques ( autografts / allografts ) requires some time between bone grafting and dental implant insertion ( 3 - 6 months ) , and there is always the possibility of bone graft failure . crest split augmentation with simultaneous implant insertion reduces the treatment time as well as the number of surgical procedures . the survival rate of implants inserted in ridge - split alveolar ridges is reported to be between 86% and 97% . the patients ' acceptance rate for this technique is very high due to its low morbidity and shorter time intervals in comparison with autologous onlay bone grafting . this study was conducted on 25 patients in 38 locations that received 82 dental implants . after clinical and radiographic examinations of edentulous regions in both jaws , anterior or posterior segments with 3 - 4 mm width at the crest region were chosen ( the minimal accepted length of remaining bone was 10 mm ) [ figure 1 ] . patients had good general health conditions without active periodontal disease . patients who had any of the following conditions were excluded from the study : systemic diseases that influence wound healing , such as diabetes mellitus ; the need for simultaneous sinus lifting or inferior alveolar nerve lateralization ; a thin ridge that does not widen apically ; and an enlarged maxillary incisive foramen . a total number of 25 patients with the above conditions participated in the study . all the data were analyzed by statistical package for social sciences ( spss ) software version 11.5 ( spss inc , chicago , illinois , usa ) . this study was approved by the research deputyship of mashhad university of medical sciences regarding methodological and ethical issues . a written consent was obtained from each individual after introducing the aims and procedures of the study and answering their questions . figure 1 : ridge conditions suitable for ridge splitting : buccolingual width between 3 - 4 mm , gradual increase from the ridge crest toward the basal bone , and sufficient height of the alveolar ridge . under local anesthesia and after full thickness reflection of the mucoperiosteal flap , a trapezoid flap ( crestal incision and two vertical releases ) was reflected and the width of the bone was directly measured with a collis . the ridge split was applied with an osteotome ( 8 mm / obwegeser ) after the crest was prepared with a surgical fissure bur in a straight high speed handpiece [ figures 2 and 3 ] . one centimeter of penetration of the osteotome blade into the ridge crest would automatically expand the ridge . since the osteotome thickness increases from tip toward shaft , the further the osteotome penetrates , the more the ridge will expand . slight buccolingual movement of the osteotome increases the expansion . after obtaining adequate width , a paralleling device is inserted in the osteotomy site to prevent collapse of the expanded cortical plates .
with an implant insertion contra - angle at low speed ( slower than the usual speed for the specific region ) , the bur is inserted between the cortical plates ; the rotary movement then begins while the bur is in the bone between the cortical plates . this prevents damage to the cut edge of the cortical plates ( a technical note that is more important when drilling with larger diameter implant burs ) . it is preferable that implants of similar diameter be inserted in the prepared sites [ figure 4 ] . fixtures were selected from bone level systems and inserted level with the ridge crest . the space between the cortical plates was then filled with biomaterial ( cerasorb ) [ figure 5 ] . in single fixture insertions , there was no need for biomaterial . finally , the cover screw was tightened and primary soft tissue closure was obtained . a control radiograph ( opg or periapical ) was taken before the second phase of surgery . the patients were followed up for at least 6 months after prosthetic treatment . in three patients in whom cortical plate fracture occurred during surgery , biomaterial was inserted , the fractured cortical plate was fixed with fine wire , and dental fixture insertion was attempted 3 months later . [ figure 2 : fissure bur marking before beginning of osteotomy . figure 3 : osteotome obwegesser ( 8 mm width ) was used in this study . figure 4 : the same diameter implants are inserted at bone level . figure 5 : intercortical space is filled with cerasorb ] 
the patients ' ages ranged from 16 to 78 years , and 10.5% of them had an edentulous space in the anterior maxilla . the other quadrants ( left lower , left upper , right lower , and right upper ) had nearly equal shares ( 21.1 - 23.7% ) . the presplit mean width was 3.2 ± 0.34 mm ( min 2.8 mm , max 4.2 mm ) and the post - split mean width was 5.57 ± 0.49 mm ( min 3.7 mm , max 6.3 mm ) . the mean gain in crest width after ridge splitting was 2 ± 0.3 mm . statistical analysis showed a significant difference in width before and after the operation ( p < 0.05 ) . after at least 6 months of follow up , all 82 implants survived and were functional . the ridge split technique in implant dentistry was introduced for the first time by simion et al . in 1992 . greenstick fracture of the cortical plates ( buccal in the maxilla and lingual in the mandible ) occurs in some patients [ figure 6 ] . placement of bone substitutes in the intercortical space ( interposition bone grafting ) has the advantages of internal perfusion , prevention of particle migration and displacement , elimination of the need for a donor site and fixation screws , and a reduced probability of graft resorption . simultaneous insertion of dental implants has advantages such as reducing the waiting time from surgery to the beginning of prosthetic treatment , requiring a smaller amount of biomaterial , and preventing collapse of the distended buccal and lingual / palatal walls . for creating the split between the cortical plates , different osseous surgical tools such as hand instruments ( chisel and osteotome ) , rotary instruments ( surgical burs in high speed handpieces ) , and piezosurgery instruments have been used successfully . the bone apical to the ridge split helps to achieve primary stability of the inserted implants ; therefore , a simultaneous need for sinus lifting ( open or closed ) , insufficient space between the inferior dental canal and the ridge crest , or a deep submandibular fossa would prohibit the application of this technique . since ridge splitting increases the width of the alveolar ridge crest , primary soft tissue closure over the submerged implants and the grafted biomaterial between them is the last and most important step in this technique , and it should be planned before beginning the surgery . the soft tissue problem almost always occurs in the upper jaw because of the limited elasticity of the palatal mucosa . in such cases a pediculated connective tissue flap of the palate ( vip - ct ) can be used to cover the expanded ridge [ figure 7 ] . this flap , which has a random pattern vascularity , has other advantages beyond providing tensionless closure of the soft tissue over the grafted region . these include vertical augmentation of the soft tissue , provision of keratinized tissue over the split ridge , a color similar to the adjacent gingiva after epithelialization , and a donor site with minimal morbidity [ figure 8 ] . [ figure 6 : greenstick fracture of the buccal cortical plate . figure 7 : pediculated connective tissue flap of the palate ( vip - ct ) covering the expanded alveolar ridge in the anterior maxillary region . figure 8 : histologic features of the epithelialized vip - ct flap after 3 months ( h and e , original magnification 100 ) ] modifications of the lateral ridge split technique : a problem mostly occurring in the lower jaw is that the cortical expansion is obtained by lingual displacement of the lingual plate while the buccal cortical plate expands minimally , which can place the inserted implants in a more lingual position relative to the previous ridge crest [ figure 9 ] . 
corticotomy of a rectangular buccal segment and a staged ridge splitting technique are two ways to overcome this problem . another consideration with this technique is the proximity of the osteotomy site to an adjacent natural tooth . close proximity increases the possibility of injury to the tooth root ; therefore , dental fixtures are usually placed in a more distal position from natural teeth , which could create prosthetic problems . the anterior maxillary region sometimes presents the problem of an enlarged incisive foramen , which in some patients will prevent simultaneous application of the ridge split technique along with implant insertion . [ figure 9 : lingual position of inserted fixtures in the mandibular posterior region in comparison with the lower dental arch ] the recommended ridge width for ridge splitting is 3 - 4 mm . in our study , there were four patients with a ridge width lower than this amount ; the difference between the widths measured on cbct and the direct bone measurements after flap reflection was the reason . however , the technique worked successfully in these patients . the success rate of implants in the present study was 100% , which may be due to good patient selection and the automatic exclusion of patients in whom this technique was not appropriate ( three patients with cortical bone fracture ) ; this is consistent with other studies . it indicates that if this technique is used properly and in the right situation , the result will be predictable . in older patients , the elasticity of the bone is reduced and the expansion requires more detailed attention to the technical notes ; however , old age is not a contraindication , and this technique was used successfully in these patients . in three young patients aged from 16 to 24 years , this technique was used to reconstruct the anterior maxillary region after traumatic loss of anterior teeth . there was no case of mandibular incisor tooth loss replacement . this study showed that the time interval between ridge splitting with simultaneous implant insertion and the beginning of prosthodontic treatment could be reduced to as little as 3 months , which is shorter than in other studies . the ridge splitting technique in both jaws will have predictable outcomes if appropriate cases are selected and special attention is paid to details ; the waiting time between surgery and the beginning of prosthodontic treatment can then be reduced to 3 months .
background : the lateral ridge split technique is a way to solve the problem of width in narrow ridges with adequate height . simultaneous insertion of dental implants considerably reduces the edentulism time . materials and methods : twenty - five patients who were managed with the ridge splitting technique were enrolled . thirty - eight locations in both jaws , with a near equal distribution among quadrants , received 82 dental fixtures . beta tricalcium phosphate ( cerasorb ) was used as the biomaterial to fill the intercortical space . submerged implants were used and healing caps were placed 3 months later . direct bone measurements before and after the split were done with a collis . patients were clinically re - evaluated at least 6 months after implant loading . all data were analyzed with statistical package for social sciences ( spss ) software version 11.5 ( spss inc , chicago illinois , usa ) . the frequency of edentulous spaces and the pre / post operative bone width were analyzed . a paired t - test was used for statistical analysis , and a difference was considered significant if the p value was less than 0.05 . results : the mean presplit width was 3.2 ± 0.34 mm , while the post - split mean width was 5.57 ± 0.49 mm . the mean gain in crest width after ridge splitting was 2 ± 0.3 mm . statistical analysis showed a significant difference in width before and after the operation ( p < 0.05 ) . all implants ( n = 82 ) survived and were in full function at follow up ( at least 6 months after implant loading ) . conclusion : the ridge splitting technique in both jaws showed predictable outcomes when appropriate cases were selected and special attention was paid to details ; the waiting time between surgery and the beginning of prosthodontic treatment can then be reduced to 3 months .
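a minimal python sketch of the paired comparison reported above is given below. the width arrays are hypothetical placeholders, not the study's per-patient measurements, which are not available here; only the summary statistics (3.2 ± 0.34 mm vs 5.57 ± 0.49 mm) and the 0.05 significance threshold come from the paper. scipy's `ttest_rel` is a standard implementation of the paired t-test.

```python
# minimal sketch: paired t-test on pre- vs post-split ridge widths.
# the measurements below are made-up placeholders chosen to resemble the
# reported summary statistics; they are NOT the study's raw data.
import numpy as np
from scipy import stats

pre_split = np.array([3.0, 3.2, 3.4, 2.9, 3.1, 3.5, 3.3, 3.2])   # mm, hypothetical
post_split = np.array([5.2, 5.5, 5.9, 5.1, 5.4, 6.0, 5.8, 5.6])  # mm, hypothetical

gain = post_split - pre_split
t_stat, p_value = stats.ttest_rel(post_split, pre_split)  # paired t-test

print(f"mean pre-split width : {pre_split.mean():.2f} +/- {pre_split.std(ddof=1):.2f} mm")
print(f"mean post-split width: {post_split.mean():.2f} +/- {post_split.std(ddof=1):.2f} mm")
print(f"mean gain            : {gain.mean():.2f} +/- {gain.std(ddof=1):.2f} mm")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at the 0.05 level" if p_value < 0.05 else "not significant")
```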
in past years , the spectroscopy of heavy flavor quarkonium has seen great progress , particularly the charmonium spectrum . many charmonium - like states ( such as @xmath5 , @xmath6 and so on ) with remarkable and unexpected properties have been reported . these exotic states present great challenges to our understanding of the structure of heavy flavor quarkonium and quantum chromodynamics ( qcd ) at low energy , for a review , see refs.@xcite . on the other hand , many bottomonium states have been reported as well . in 2008 , the spin - singlet pseudoscalar partner @xmath7 was found by the babar collaboration with mass @xmath8 mev @xcite . the @xmath9 was discovered in 2010 in the @xmath10 final state with mass @xmath11 mev @xcite . in addition , the babar collaboration reported the p - wave spin - singlet @xmath12 via its radiative decay into @xmath13 with mass @xmath14 mev @xcite . this state is confirmed by the belle collaboration @xcite , and its mass is measured to be @xmath15 mev . meanwhile , the radial excitation state @xmath16 was also found by the belle collaboration with mass @xmath17 mev @xcite . recently the atlas collaboration has reported the discovery of the @xmath4 state through reconstruction of the radiative decay modes of @xmath18 , and its mass barycenter is measured to be @xmath19 gev @xcite . in particular , the belle collaboration has observed an enhancement in the production process @xmath20 @xcite . the fit using a breit - wigner resonance shape yields a peak mass of @xmath21 $ ] mev and a width of @xmath22 $ ] mev . in the following , we shall denote this state as @xmath23 . moreover , the babar collaboration measured the @xmath24 cross section between 10.54 gev and 11.20 gev @xcite , the @xmath25 and the @xmath26 states , which are candidates of @xmath27 and @xmath28 respectively , were observed . their masses and widths are fitted to be @xmath29 gev , @xmath30 mev , @xmath31 gev and @xmath32 mev , which are different from the previously measured values . in particular , two charged narrow structures at 10610 mev and 10650 mev in the @xmath33 and @xmath34 have been reported recently @xcite . in summary , the current experimental data indicate that there may be exotic bottomonium - like structures similar to the charmonium sector . furthermore , lhcb has begun to run @xcite , belle will be updated to belle ii and a new superb factory will be built in italy @xcite , we expect that more heavy bottomonium states including the possible exotic extensions will be observed in the future . motivated by the above exciting experimental progress in @xmath0 states , we shall carry out a careful , detailed study of bottomonium spectroscopy in this work , notably the poorly understood higher - mass @xmath0 levels . thus , we can determine whether future observed bottomonium - like states could be accommodated as canonical @xmath0 states by comparing their masses with the mass spectrum predicted in this work . it is well - known that simple potential models , which incorporate a color coulomb term at short distances , a linear scalar confining term at large distances , and a gaussian - smeared one - gluon exchange spin - spin hyperfine interactions , have been frequently used to describe both the charmonium and bottomonium spectrums . generally , the mixture between the quark model @xmath0 basis states and the two - meson continuum has been neglected in these models , which are called quenched " quark models . 
the effects of the " unquenched quark model " , including virtual hadronic loops , have been studied extensively in the framework of the coupled - channel method @xcite . the hadronic loop has turned out to be highly non - trivial : it can give rise to mass shifts of the bare hadron states and contribute continuum components to the physical hadron states . the possibility that loop effects may be responsible for the anomalously low masses of the new narrow charm - strange states @xmath35 and @xmath36 has been suggested by several groups @xcite . the hadronic loop in charmonium has been explored as well , and the mass shifts and continuum mixing due to loops of @xmath37 , @xmath38 , @xmath39 and @xmath40 meson pairs have been studied extensively @xcite . both the mass shifts and the two - meson continuum components of the physical charmonium states were found to be rather large . in particular , a @xmath41 state with mass about 3872 mev could possibly be generated dynamically . inspired by the large physical effects of hadronic loops in both the @xmath42 and charmonium states , we expect that virtual hadronic loops should also play an important role in bottomonium spectroscopy . in this work , we shall study the bottomonium spectrum in detail and take the hadronic loop effects into account . this paper is organized as follows . we present the framework of the coupled - channel analysis in section ii . the non - relativistic potential model is outlined in section iii . section iv is devoted to the numerical results for the masses of the bottomonium states with and without hadronic loop effects , as well as phenomenological implications . we present our conclusions and discussion in section v. [ figure : coupling of @xmath43 states to the @xmath44 meson loop ] in bottomonium , the process @xmath45 via light quark pair @xmath46 creation would induce the hadronic loop shown in fig.[figcc ] , where the initial bottomonium decays into intermediate virtual @xmath44 states and then reforms the original bottomonium state . here @xmath47 denotes a general @xmath47 meson ; it can be @xmath47 , @xmath48 , @xmath49 or @xmath50 , and the same convention will be used henceforth without specification . since the open - flavor decay couplings of bottomonium states to two - body @xmath44 final states are large , the resulting loop effects should be important . this kind of virtual hadronic loop is universal , but it is not usually included in quark potential models and is only partially present in quenched lattice qcd . the coupled - channel model is an appropriate framework for analyzing these hadronic loop effects @xcite . in the simplest version of the coupled - channel model @xcite , the full hadronic state is represented as @xmath51 . @xmath52 denotes the bare confined @xmath0 states with probability amplitude @xmath53 , @xmath54 ( @xmath55 ) is the @xmath56(@xmath57 ) eigenstate describing the @xmath58(@xmath59 ) meson , and @xmath60 is the wavefunction in the two - meson channel @xmath61 . the wavefunction @xmath62 obeys the equation @xmath63 , where @xmath64 is the hamiltonian for the valence @xmath0 system , with the eigenstates determined by @xmath65 . the hamiltonian @xmath66 acts between the constituents of @xmath58 and @xmath59 separately , where the interactions between @xmath58 and @xmath59 are neglected . 
the continuum two - meson state @xmath61 is the eigenstate of @xmath66 , @xmath67 , where @xmath68 , @xmath69 , @xmath70 and @xmath71 are the masses of @xmath58 and @xmath59 respectively , and @xmath72 is the reduced mass of the two - meson system . @xmath73 couples the bare state @xmath52 with the two - body continuum @xmath61 . let us consider one bare state @xmath74 ; the matrix element of @xmath73 is of the following form : @xmath75 . substituting eq.([radd1 ] ) and eq.([radd2 ] ) into eq.([cce ] ) , we get the system of coupled equations for @xmath76 and @xmath60 , @xmath77 . this coupled - channel equation can be solved straightforwardly , and we finally obtain the master equation @xmath78 . here @xmath79 is the self - energy function for the hadronic loop induced by the intermediate states @xmath58 and @xmath59 ; it is explicitly given by @xmath80 . using the relation between the helicity amplitude @xmath81 and the partial wave amplitude @xmath82 @xcite , we obtain the corresponding partial wave expression , where @xmath210 , and @xmath211 denotes the created quark mass @xmath212 , @xmath213 or @xmath214 . @xmath215 is the wavefunction of the initial meson @xmath216 in momentum space , and @xmath217 and @xmath218 are the wavefunctions of the final state mesons @xmath58 and @xmath206 respectively . taking into account the phase space , we get the differential decay rate @xmath219 , where @xmath220 is the momentum of the final state mesons in the rest frame of meson @xmath216 , $ p=\sqrt{[m^2_a-(m_b+m_c)^2][m^2_a-(m_b - m_c)^2]}\big/(2m_a)$ . to compare with the experiments , we transform the amplitude @xmath202 into the partial wave amplitude @xmath82 by the recoupling calculation @xcite , and the decay width is then @xmath222 . since we neglect mass splitting within the same isospin multiplet , to sum over all channels one should multiply the mass shift due to a specific hadronic loop by the flavor factor @xmath223 , which is listed in table [ tab : flavor ] .
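to make the coupled-channel machinery above concrete, the sketch below solves a master equation of the assumed form m = m0 + re sigma(m) for a single open-flavor channel, with the real part of the self-energy written as an integral over the two-meson continuum. the bare mass, channel masses, coupling strength and gaussian form factor are illustrative placeholders, not the paper's potential-model parameters or its actual pair-creation amplitudes; the point is only to show how a self-consistent mass shift is obtained numerically.

```python
# minimal sketch: mass shift of a bare state coupled to one two-meson channel,
# obtained by solving the master equation  M = M0 + Re Sigma(M)  self-consistently.
# all numbers (bare mass, channel masses, coupling strength, form factor) are
# illustrative placeholders, not the model parameters used in the paper.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

M0 = 10.75            # bare mass of the b-bbar level (GeV), hypothetical
mB, mC = 5.28, 5.28   # intermediate open-flavor meson masses (GeV), e.g. a B Bbar pair
g, beta = 0.4, 0.4    # coupling strength and gaussian form-factor scale, hypothetical

def coupling_sq(p):
    """|<BC,p|H_I|bare>|^2 with a gaussian form factor (placeholder shape)."""
    return g**2 * p**2 * np.exp(-(p / beta) ** 2)

def E_BC(p):
    """energy of the intermediate two-meson state with relative momentum p."""
    return np.sqrt(mB**2 + p**2) + np.sqrt(mC**2 + p**2)

def re_sigma(M):
    """real part of the self-energy; below threshold the integrand is regular."""
    integrand = lambda p: coupling_sq(p) / (M - E_BC(p))
    val, _ = quad(integrand, 0.0, 10.0, limit=200)
    return val

# solve M - M0 - Re Sigma(M) = 0 below the two-meson threshold
f = lambda M: M - M0 - re_sigma(M)
M_phys = brentq(f, 9.5, mB + mC - 1e-4)
print(f"bare mass     : {M0:.3f} GeV")
print(f"physical mass : {M_phys:.3f} GeV")
print(f"mass shift    : {(M_phys - M0) * 1000:.1f} MeV")
```

a full calculation would sum such self-energy terms over all open-flavor channels, weighted by the flavor factors mentioned above, and would use the model's own pair-creation amplitudes in place of the gaussian form factor assumed here.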
we study the bottomonium spectrum in the nonrelativistic quark model with the coupled - channel effects . the mass shifts and valence @xmath0 component are evaluated to be rather large . we find that the hadronic loop effects can be partially absorbed into a reselection of the model parameters . no bottomonium state except @xmath1 and @xmath2 with mass around 10890 mev is found in the quark models both with and without coupled - channel effects , so we suggest that @xmath3 is an exotic state beyond the quark model , if it is confirmed to be a new resonance . the predictions for the @xmath4 masses are consistent with the atlas measurements . if some new bottomonium - like states are observed at lhcb or superb in the future , we can determine whether they are conventional bottomonium or exotic states by comparing their masses with the mass spectrum predicted in our work . pacs numbers : 12.39.jh , 12.40.yx , 14.40.pq , 14.40.rt
SECTION 1. SHORT TITLE. This Act may be cited as the ``Increasing the Department of Veterans Affairs Accountability to Veterans Act of 2015''. SEC. 2. REDUCTION OF BENEFITS FOR MEMBERS OF THE SENIOR EXECUTIVE SERVICE WITHIN THE DEPARTMENT OF VETERANS AFFAIRS CONVICTED OF CERTAIN CRIMES. (a) In General.--Chapter 7 of title 38, United States Code, is amended by adding at the end the following: ``Sec. 715. Senior executives: reduction of benefits of individuals convicted of certain crimes ``(a) Reduction of Annuity for Removed Employee.--The covered service of an individual removed from a senior executive position under section 713 shall not be taken into account for purposes of calculating an annuity with respect to such individual under chapter 83 or chapter 84 of title 5, if the individual is convicted of a felony that influenced the individual's performance while employed in the senior executive position. ``(b) Reduction of Annuity for Retired Employee.--(1) The Secretary may order that the covered service of an individual who is subject to a removal or transfer action under section 713 but who leaves employment at the Department prior to the issuance of a final decision with respect to such action shall not be taken into account for purposes of calculating an annuity with respect to such individual under chapter 83 or chapter 84 of title 5, if the individual is convicted of a felony that influenced the individual's performance while employed in the senior executive position. ``(2) The Secretary shall make such an order not later than 7 days after the date on which such individual is convicted of such felony. ``(3) Not later than 30 days after the Secretary issues any order with respect to an individual under paragraph (1), the Director of the Office of Personnel Management shall recalculate the annuity of the individual. ``(c) Lump-Sum Annuity Credit.--Any individual with respect to whom an annuity is reduced under subsection (a) or (b) shall be entitled to be paid so much of such individual's lump-sum credit as is attributable to the period of covered service. ``(d) Definitions.--In this section: ``(1) The term `covered service' means, with respect to an individual subject to a removal or transfer action under section 713, the period of service beginning on the date that the Secretary determines under such section that such individual engaged in activity that gave rise to such action and ending on the date that such individual is removed from the civil service or leaves employment at the Department prior to the issuance of a final decision with respect to such action, as the case may be. ``(2) The term `lump-sum credit' has the meaning given such term in section 8331(8) or section 8401(19) of title 5, as the case may be. ``(3) The term `senior executive position' has the meaning given such term in section 713(g)(3). ``(4) The term `service' has the meaning given such term in section 8331(12) or section 8401(26) of title 5, as the case may be.''. (b) Application.--The amendment made by subsection (a) shall apply to any action of removal or transfer under section 713 of title 38, United States Code, commencing on or after the date of enactment of this section. (c) Clerical Amendment.--The table of sections at the beginning of such chapter is amended by adding at the end the following new item: ``715. Senior executives: reduction of benefits of individuals convicted of certain crimes.''. SEC. 3. 
REFORM OF PERFORMANCE APPRAISAL SYSTEM FOR SENIOR EXECUTIVE SERVICE EMPLOYEES OF THE DEPARTMENT OF VETERANS AFFAIRS. (a) Performance Appraisal System.-- (1) In general.--Chapter 7 of title 38, United States Code, as amended by section 2, is further amended by adding at the end the following new section: ``Sec. 717. Senior executives: performance appraisal ``(a) Performance Appraisal System.--(1) The performance appraisal system for individuals employed in senior executive positions in the Department required by section 4312 of title 5 shall provide, in addition to the requirements of such section, for five annual summary ratings of levels of performance as follows: ``(A) One outstanding level. ``(B) One exceeds fully successful level. ``(C) One fully successful level. ``(D) One minimally satisfactory level. ``(E) One unsatisfactory level. ``(2) The following limitations apply to the rating of the performance of such individuals: ``(A) For any year, not more than 10 percent of such individuals who receive a performance rating during that year may receive the outstanding level under paragraph (1)(A). ``(B) For any year, not more than 20 percent of such individuals who receive a performance rating during that year may receive the exceeds fully successful level under paragraph (1)(B). ``(3) In evaluating the performance of an individual under the performance appraisal system, the Secretary shall take into consideration any complaint or report (including any pending or published report) submitted by the Inspector General of the Department, the Comptroller General of the United States, the Equal Employment Opportunity Commission, or any other appropriate person or entity, related to any facility or program managed by the individual. ``(b) Change of Position.--(1) At least once every five years, the Secretary shall reassign each individual employed in a senior executive position to a position at a different location that does not include the supervision of the same personnel or programs. ``(2) The Secretary may waive the requirement under paragraph (1) for any such individual, if the Secretary submits to the Committees on Veterans' Affairs of the Senate and House of Representatives notice of the waiver and an explanation of the reasons for the waiver. ``(c) Report.--Not later than March 1 of each year, the Secretary shall submit to the Committees on Veterans' Affairs of the Senate and House of Representatives a report on the performance appraisal system of the Department under subsection (a). Each such report shall include, for the year preceding the year during which the report is submitted, all documentation concerning each of the following for each individual employed in a senior executive position in the Department: ``(1) The initial performance appraisal. ``(2) The higher level review, if requested. ``(3) The recommendations of the performance review board. ``(4) The final summary review. ``(5) The review of the Inspector General of the Department of the information described in paragraphs (1) through (4). ``(d) Definition of Senior Executive Position.--In this section, the term `senior executive position' has the meaning given that term in section 713(g)(3) of this title.''. (2) Clerical amendment.--The table of sections at the beginning of such chapter is further amended by adding at the end the following new item: ``717. Senior executives: performance appraisal.''. 
(3) Conforming amendment.--Section 4312(b) of title 5, United States Code, is amended-- (A) in paragraph (2), by striking ``and'' at the end; (B) in paragraph (3), by striking the period at the end and inserting ``; and''; and (C) by adding at the end the following: ``(4) that, in the case of the Department of Veterans Affairs, the performance appraisal system meets the requirements of section 716 of title 38.''. (b) Review of SES Management Training.-- (1) Review.--Not later than 180 days after the date of the enactment of this Act, the Secretary of Veterans Affairs shall enter into a contract with a nongovernmental entity to review the management training program for individuals employed in senior executive positions (as such term is defined in section 713(g)(3) of title 38, United States Code) of the Department of Veterans Affairs that is being provided as of the date of the enactment of this Act. Such review shall include a comparison of the training provided by the Department of Veterans Affairs to the management training provided for senior executives of other Federal departments and agencies and to the management training provided to senior executives in the private sector. The contract shall provide that the nongovernmental entity must complete and submit to the Secretary a report containing the findings and conclusions of the review by not later than 180 days after the date on which the Secretary and the nongovernmental entity enter into the contract. (2) Report to congress.--Not later than 60 days after the date on which the Secretary receives the report under paragraph (1), the Secretary shall submit to the Committees on Veterans' Affairs of the Senate and House of Representatives the report together with a plan for carrying out the recommendations contained in the report. SEC. 4. LIMITATION ON ADMINISTRATIVE LEAVE FOR MEMBERS OF THE SENIOR EXECUTIVE SERVICE WITHIN THE DEPARTMENT OF VETERANS AFFAIRS. (a) In General.--Chapter 7 of title 38, United States Code, is further amended by adding after section 717 (as added by section 3) the following new section: ``Sec. 719. Administrative leave limitation and report ``(a) Limitation Applicable to Members of the Senior Executive Service Within the Department of Veterans Affairs.--(1) The Secretary may not place any covered individual on administrative leave, or any other type of paid non-duty status, for more than a total of 14 days during any 365-day period. ``(2) The Secretary may waive the limitation under paragraph (1) and extend the administrative leave or other paid non-duty status of a covered individual placed on such leave or status under paragraph (1) if the Secretary submits to the Committees on Veterans' Affairs of the Senate and House of Representatives a detailed explanation of the reasons the individual was placed on administrative leave or other paid non-duty status and the reasons for the extension of such leave or status. Such explanation shall include the name of the covered individual, the location where the individual is employed, and the individual's job title. ``(3) In this subsection, the term `covered individual' means an individual (as defined in section 713(g)(1)) occupying a senior executive position (as defined in section 714(g)(3))-- ``(A) who is subject to an investigation for purposes of determining whether such individual should be subject to any disciplinary action under this title or title 5; or ``(B) against whom any disciplinary action is proposed or initiated under this title or title 5. 
``(b) Report on Administrative Leave.--(1) Not later than 30 days after the end of each quarter of any calendar year, the Secretary shall submit to the Committees on Veterans' Affairs of the House of Representatives and the Senate a report listing the name of any employee of the Department (if any) who has been placed on administrative leave, or any other type of paid non-duty status, for a period longer than 7 days during such quarter. ``(2) Any report submitted under subsection (a) shall include, with respect to any employee listed in such report, the position occupied by the employee, the number of days of such leave, and the reason that such employee was placed on such leave.''. (b) Application.-- (1) Administrative leave limitation.--Section 719(a) of title 38, United States Code (as added by subsection (a)), shall apply to any action of removal or transfer under section 713 of such title or title 5, United States Code, commencing on or after the date of enactment of this section. (2) Report.--The report under section 719(b) of such title (as added by subsection (a)) shall begin to apply in the quarter that ends after the date that is 6 months after the date of enactment of this section. (c) Clerical Amendment.--The table of sections at the beginning of such chapter is amended by adding at the end the following new item: ``719. Administrative leave limitation and report.''.
Increasing the Department of Veterans Affairs Accountability to Veterans Act of 2015 Requires the reduction of the federal annuities of individuals removed from the Department of Veterans Affairs (VA) Senior Executive Service (SES) if they are convicted of a felony that influenced their performance while employed in such position. Authorizes the VA Secretary to order the reduction of the federal annuities of individuals who were convicted of such a felony and were subject to removal or transfer from the VA SES, but who left the VA before final action was taken. Reduces such annuities by excluding the covered service performed after the activity that subjects such an individual to transfer or removal occurs. Requires the performance appraisal system for VA SES employees to provide for five specified annual summary ratings of levels of performance. Provides that in any given year no more than: (1) 10% of such employees may receive the outstanding level of performance, and (2) 20% of such employees may receive the exceeds-fully-successful level of performance. Requires the Secretary to take any complaint or report from an appropriate person or entity related to any facility or program managed by an SES employee into account in evaluating that employee's performance. Directs the Secretary, at least once every five years, to reassign each SES employee to a position at a different location that does not include the supervision of the same personnel or programs. Allows the Secretary to waive such requirement if the Secretary submits to Congress notice of, and the reasons for, such waiver. Directs the Secretary to contract with a nongovernmental entity for a review of the management training program for VA SES employees. Prohibits the Secretary from placing a VA SES employee on administrative leave, or any other type of paid non-duty status, for more than a total of 14 days during any 365-day period. Allows the Secretary to waive such prohibition with respect to such an employee if the Secretary provides Congress with a detailed explanation of the reasons the employee was placed on such leave or status and the reasons for extending that placement.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Universal Student Nutrition Act of 1993''. SEC. 2. FINDINGS. The Congress finds that-- (1) the national school lunch and breakfast programs are vital to protecting the health and well-being of the Nation's children; (2) these essential child nutrition programs help prepare children to learn and to combat childhood hunger; (3) the national school lunch program serves approximately 25,000,000 per day, and the school breakfast program serves approximately 4,000,000 children per day; (4) there are approximately 4,000,000 eligible low-income students who are not participating in the free and reduced price school meal programs; (5) in the last decade-- (A) Federal subsidies for school meal programs have been reduced; (B) bonus commodities from the Department of Agriculture for such programs have almost vanished; (C) the administrative complexity and cost of administering such programs have increased; and (D) indirect cost assessments are draining the financial resources of such programs; and (6) many schools, mostly high schools, are dropping out of the school lunch program as a result of the trends described in paragraph (5). SEC. 3. ESTABLISHMENT OF OPTIONAL UNIVERSAL SCHOOL LUNCH AND BREAKFAST PROGRAM. (a) In General.--The National School Lunch Act (42 U.S.C. 1751 et seq.) is amended by inserting after section 11 the following new section: ``SEC. 11A. OPTIONAL UNIVERSAL SCHOOL LUNCH AND BREAKFAST PROGRAM. ``(a) In General.-- ``(1) Establishment.--The Secretary shall establish an optional universal school lunch and breakfast program (in this section referred to as the ``universal program''). ``(2) Description.--The universal program shall consist of school lunch and breakfast service offered without cost at school to all students in attendance at the participating schools who wish to participate in a manner consistent with the requirements otherwise applicable to the school lunch program under this Act and to the school breakfast program under section 4 of the Child Nutrition Act of 1966. ``(3) Eligibility.--Any school participating in the school lunch program under this Act or the school breakfast program under the Child Nutrition Act of 1966 may elect to participate in the universal program. ``(b) Universal Payment Rate.-- ``(1) In general.--Subject to paragraph (3), in lieu of receiving the national average payment per lunch determined under section 4 and section 11, and the national average payment per breakfast determined under section 4 of the Child Nutrition Act of 1966, each school participating in the universal program shall receive the universal payment rates determined under paragraph (2) for each lunch and breakfast served under the program. ``(2) Establishment.--Subject to paragraph (3), the Secretary shall establish the universal payment rates for purposes of this section. Such rates shall be equal to the national average cost of producing a school lunch, and the national average cost of producing a school breakfast, respectively, as determined by the Secretary. In making the determination required by the preceding sentence, the Secretary shall establish a maximum amount that can be charged to a participating school food service authority for indirect expenses. 
``(3) Commodities.--Schools participating in the universal program shall receive the same level of commodities that they would receive under the school lunch program under this Act and under the school breakfast program under section 4 of the Child Nutrition Act of 1966. ``(c) Competitive Foods Policy.--Schools participating in the universal program may sell competitive foods under regulations issued by the Secretary.''. (b) Effective Date.--The Secretary of Agriculture shall issue regulations to carry out section 11A of the National School Lunch Act (as added by subsection (a) of this section) that provide for the implementation of such section not later than July 1, 2000. SEC. 4. DIETARY GUIDELINES. (a) School Lunch Program.--Section 9(a)(1) of the National School Lunch Act (42 U.S.C. 1758(a)(1)) is amended by striking ``on the basis of tested nutritional research'' and inserting ``in accordance with the Dietary Guidelines for Americans developed by the Department of Agriculture''. (b) School Breakfast Program.--Section 4(e)(1) of the Child Nutrition Act of 1966 (42 U.S.C. 1773(e)(1)) is amended by striking ``on the basis of tested nutritional research'' and inserting ``in accordance with the Dietary Guidelines for Americans developed by the Department of Agriculture''. SEC. 5. NUTRITION EDUCATION. Section 19(i)(1) of the Child Nutrition Act of 1966 (42 U.S.C. 1788(i)(1)) is amended by inserting ``and each fiscal year beginning on or after October 1, 1995,'' after ``October 1, 1978,''.
Universal Student Nutrition Act of 1993 - Amends the National School Lunch Act to establish an optional universal school lunch and breakfast program. Requires that the Secretary of Agriculture's minimum nutritional requirements for the current school lunch and school breakfast programs be prescribed in accordance with the Dietary Guidelines for Americans developed by the Department of Agriculture. Amends the Child Nutrition Act of 1966 to require that grants to States for nutrition education and information be based on a rate of 50 cents for each child enrolled in schools or institutions in the State.
centaurus a ( ngc 5128 , cen a ) is the nearest ( [email protected] mpc , @xmath61 kpc ) active galaxy to the milky way , and because of its proximity has been well studied across the electromagnetic spectrum @xcite . optically , cen a is an elliptical galaxy crossed by a dust lane , thought to be the result of a merger with a small spiral galaxy @xcite . radio observations of cen a show a bright nucleus , a milliarcsecond - scale jet and counter jet , a one - sided kiloparsec scale jet ne of the nucleus , two radio lobes ( the inner radio lobes ) ne and sw of the nucleus , and extended , diffuse emission spanning an 8@xmath74@xmath8 region on the sky @xcite . earlier x - ray observations of cen a found a complex morphology with emission from several distinct components including the active nucleus , the jet , the hot ism , and a population of x - ray binaries @xcite . rosat hri and exosat le observations detected x - ray emission from the vicinity of the southwest radio lobe @xcite , but the limited spectral resolution of these observations and possible confusion with a bright foreground star made interpretation uncertain . the _ einstein _ observatory first detected extended x - ray emission from elliptical and s0 galaxies , and since then the spectra and morphology of these objects has been extensively studied with rosat and asca @xcite . for the most massive , luminous , early - type galaxies , this emission originates in a hot , low - density corona that is bound by the gravitating dark - matter potential . the x - ray spectra of these halos are generally well described by a single - temperature optically thin plasma model with depleted abundance values with respect to solar of [email protected] . typical temperatures and central densities of this gas are @xmath51 kev and 0.1 @xmath9 , respectively @xcite . the cooling time of the gas in the central regions of these galaxies is @xmath510@xmath10 yrs , implying that in the absence of any energy input , the gas will cool and condense onto the central black hole . for less massive early - type galaxies , such as cen a , the situation is somewhat more complex as the contribution of a population of lmxbs becomes increasingly more important to the overall x - ray emission @xcite . _ chandra _ and _ xmm - newton _ have greatly increased our understanding of these objects through spatially resolved measurements of the temperatures and elemental abundances from which the characteristics of dark matter halos can be deduced @xcite , and by resolving out the lmxb population in nearby objects @xcite . recent _ chandra _ observations of the environments of several radio galaxies , notably ngc 4636 @xcite , m84 @xcite , hydra a @xcite , perseus a @xcite , 3c 317 @xcite , and m87 @xcite , have demonstrated a complex interaction between the relativistic plasma of the radio lobes and the thermal , x - ray emitting gas of the interstellar or intracluster medium . there has also been considerable theoretical interest in this subject @xcite . in several cases , the thermal gas appears to have been displaced by the expansion of the radio lobes , creating a cavity or hole in the x - ray emission from the interstellar or intracluster medium . in some cases , x - ray emitting shells have been observed surrounding this cavity . 
in most cases where such shells have been detected , they are cooler than the surrounding medium and are thought to the result of entrainment of cool material from the central regions of the galaxy or cluster by the inflation of the lobe . in a few cases ( ngc 4636 and cyg a ) , there is some evidence that the shells are hotter than the ambient medium and have been shock heated by the supersonic inflation of the radio lobe . one important consequence of this interaction , particularly if the expansion is supersonic , is that the kinetic energy of the particles in the radio lobes could be transferred to the thermal medium thereby heating it . the energy in shock - heated gas would be transferred to the ism via conduction , perhaps even in the presence of magnetic fields @xcite . this process could be particularly important in understanding the dynamics of the central regions of cooling - flow clusters , as the thermal lifetime of the centrally condensed material present in these objects is less than the hubble time and heat conduction is less effective in the cores . the gas must therefore be occasionally reenergized during its lifetime . radio galaxies are one possible source for this reheating . unfortunately , there has been little direct evidence for heating of the interstellar or intracluster medium by the radio plasma for any of the sources listed above , and the relationship between nuclear outflows , nuclear activity , and cooling flows is uncertain . it has been suggested that there is a cyclical relationship between the cooling of the x - ray emitting corona and galaxy activity @xcite . clearly the relationship between the hot ism , radiation from the central black hole , and outflows of relativistic plasma generated by nuclear activity is very complex . in this paper , we present the results from two _ chandra_/acis - i observations and an _ xmm - newton _ observation of x - ray emission from the ism and from the radio lobes of cen a. the primary goals of this work are to determine the thermodynamic state of the hot ism in cen a and to better understand the energetics and dynamics of the interaction between the relativistic plasma of the radio lobes and the x - ray emitting corona . in this paper , we present the temperature and surface brightness profile of the ism , and discuss the nature of the x - ray enhancement associated with the southwest radio lobe . we demonstrate that it is unlikely that this enhancement is non - thermal in nature , but most likely originates from compression and shock - heating of the hot ism via the supersonic expansion / inflation of the radio lobe . this is the fifth paper in our series on _ chandra _ and _ xmm - newton _ observations of cen a. in four previous papers on cen a , we presented results of a _ chandra_/hrc observation @xcite , and results of the two deeper _ chandra_/acis - i observations on the x - ray point source population @xcite , the x - ray jet @xcite , and complex morphology of the ism @xcite . future publications will present an analysis and discussion of the spectrum of the active nucleus ( kraft _ et al . _ , in preparation ) , a detailed comparison of high - resolution x - ray and radio observations of the jet ( hardcastle _ et al . _ , in preparation ) , and a more detailed analysis of the morphology and dynamics of the hot ism ( karovska _ et al . _ , in preparation ) . this paper is organized as follows . section 2 contains a brief summary of the observations and the instrumentation . 
spectra and images of the ism and the radio lobes are presented and discussed in sections 3 and 4 , respectively . we end with a summary and conclusions in section 5 . we use j2000 coordinates throughout the paper . cen a was observed twice with the _ chandra_/acis - i instrument , on december 5 , 1999 and may 17 , 2000 , and once with _ xmm - newton _ on february 2 , 2001 . the observation times were 35856 @xmath11 and 36510 @xmath11 for the two _ chandra _ , and 23060 @xmath11 with _ xmm - newton _ with the medium optical blocking filter inserted . a summary of the observational parameters is contained in table [ obslog ] . descriptions of the instrumental capabilities of the two observatories are presented elsewhere @xcite . the absolute position on the sky of the _ chandra _ observations was determined by comparison of the x - ray point sources at the edge of the fov with stars in the usno catalog ( see @xcite for details ) . then the absolute position of the _ xmm - newton _ observation was determined by comparing the positions of x - ray point sources with _ chandra _ positions . a considerable ( @xmath12 ) correction was applied to the _ xmm - newton _ set as a result . after correction , both data sets are aligned on the sky and with respect to each other to better than @xmath13 . we have generally relied on the _ xmm - newton _ data for spectral analysis of extended low surface brightness features , but we have used the _ chandra_/acis - i data to determine the positions of point sources . the _ chandra _ raw event table was filtered to include only grades 0,2,3,4 , and 6 . all events below 0.4 kev and above 5 kev were removed . the response of the acis - i drops rapidly below 0.4 kev , so that most of the events below this are background . above 5 kev , most of the events are either particle background events or events in the psf wings from the bright nucleus . all events at node boundaries were removed because of uncertainties in grade reconstruction . all events with pulse - height invariant channel ( pi ) equal to 0 , 1 , or 1024 were removed as they represent unphysical signals ; hot ccd columns and pixels were removed . short - term transients due to cosmic rays which produced events in three or more consecutive frames that could mimic a point source ( van speybroeck 2000 , private communication ) also were removed . data from the two observations were co - added to create images , but not for spectral analysis for two reasons . first , the focal - plane temperatures differed ( -110@xmath8c during the first observation , and -120@xmath8c during the second ) . second , differences in roll angles of the satellite and pointing directions between the observations resulted in the southwest lobe being placed along the edge of the i3 chip for the first observation and on the i0 chip close to the best focus of the telescope for the second . all images presented below were exposure corrected . data from all three ccd imaging spectroscopy instruments ( epic / mos1 , mos2 , and the pn camera ) from the _ xmm - newton _ observation were used in this analysis . the epic / mos events tables were filtered to include only events with patterns 0 through 12 , and with the flag parameter less than or equal to 1 . the epic / pn events tables were filtered to include only events with pattern 0 , and with the flag parameter less than or equal to 1 . all events with pi value greater than 12000 ( i.e. 
energy greater than 12 kev ) were also removed as the telescope has no response to x - rays above this energy . the data from mos1 and mos2 , but not from the pn camera , were combined to create images as shown below . all spectral fits were done on the three data sets independently . response matrices and ancillary response files were generated using the _ arfgen _ and _ rmfgen _ tools in the sas software ( @xmath14 ) . all spectral fitting ( both _ chandra _ and _ xmm - newton _ ) was performed using the xspec ( v 10.0 ) software package . all exposure corrections made to the _ xmm - newton _ were done using the exposure maps provided with the pipeline processed data . co - added , exposure corrected , adaptively smoothed x - ray images of cen a from the _ chandra _ and from the pn camera/_xmm - newton _ observations are shown in figures [ chdimg ] and [ xmmrad ] , respectively . radio contours ( 13 cm - gaussian width of beam @xmath15(ra ) @xmath16 @xmath17(dec ) taken with the australia telescope compact array ( atca ) ) have been overlaid onto figure [ xmmrad ] . the nucleus , the jet ne of the nucleus and the two inner lobes of cen a are clearly visible in the radio contours . the bright active nucleus , the jet , the diffuse emission from the ism , and many point sources ( mostly xrbs within cen a ) are clearly visible in the x - ray images . there is also an x - ray enhancement along the edge of the southwest radio lobe , and an excess of diffuse emission ( above that of the ism ) in the interior of the lobe . the close alignment of the x - ray enhancement just at the edge of the lobe with the lowest radio contours strongly argues that they are related . no significant excess or deficit of emission ( i.e. above or below that expected from the hot ism ) is detected in the vicinity of the ne radio lobe . to determine the temperature and radial surface - brightness profile of the coronal x - ray emission as a function of distance from the nucleus , it is necessary to exclude all of the emission from the point sources and to carefully estimate the background . the bulk of the emission from the ism of cen a is below 1 kev where _ xmm - newton _ has considerably more effective area than _ chandra_/acis - i . for this and another reason described below , we rely primarily on the _ xmm - newton _ data for determination of the parameters of the ism . we used the _ chandra _ data to determine the positions of the point sources , and then excluded a region of @xmath18 radius around these positions in the _ xmm - newton _ data . the jet and southwest radio lobe also were excluded from all analysis of the ism . careful background subtraction is critical to the accurate determination of spectral parameters and to the radial surface - brightness profile . in the central regions of the galaxy , the emission of the ism is seen clearly above the background ( see figures [ chdimg ] and [ xmmrad ] ) . in our initial analysis , we attempted to use the standard background files that are available online for both _ chandra _ and _ xmm - newton_. these turned out to be entirely inappropriate for two reasons . first , they are generated from several high galactic latitude observations and cen a is located in the north polar spur ( nps ) ( l=309.52 , b=+19.42 ) . the intensity and spectrum of the xrb of the nps region is considerably different than that at high galactic latitude @xcite . 
second , the standard background files for _ xmm - newton _ were created from observations using the thin optical blocking filter , and our observation was made with the medium optical blocking filter to avoid contamination from cen a. it is therefore necessary to use a locally determined background at the edge of the fov . unfortunately cen a is so close to us that diffuse x - ray emission fills the entire fov of both observatories . at the edge of the fov , the surface brightness of the ism is still a significant fraction ( @xmath19% ) of the total ( background+source ) surface brightness . fortunately , the background of both observatories below 1 kev is dominated by emission from the diffuse x - ray background ( xrb ) , not charged particles , so that we could model the spatial variation of the background by the telescope vignetting function . the measured spatial variation of the background at low energies is somewhat flatter than what one would expect based on the vignetting . we have used the results of analysis of the low energy background in the pn camera to model this effect . ] any region along the edge of the fov contains both source and background , so it is therefore necessary to iteratively estimate the contributions from source and background at the edge of the fov . we selected regions in the pn and mos data @xmath514@xmath20 nw of the nucleus and fit a four component model with two mekal plasmas plus a power - law with galactic absorption ( n@xmath21=7@xmath1610@xmath22 @xmath23 ) and a neutral al k@xmath24 line at 1.49 kev ( an instrumental artifact ) , to model the background . the best - fit temperatures for the thermal components are consistent with previous measurements of the nps @xcite . this model is an overestimate of the true background because it contains some flux from cen a as well . using this background , we then determined the temperature of the corona as a function of distance in five annular bins each @xmath25 wide centered on the nucleus . the parameterized background model , appropriately scaled for solid angle , was included in the spectral fitting as a fixed component . for the coronal emission from cen a , we assumed a mekal model with depleted ( z=0.4 ) abundances and galactic absorption . we performed the fits with the abundance as a free parameter as well , but at the temperature of the ism ( @xmath5 0.3 kev ) , the emission is line dominated and the continuum level poorly constrained . the abundance can be traded against the normalization and is thus poorly constrained . except in the central @xmath25 bin where variable absorption from the dust lane hardens the spectrum , all fits indicate a temperature between 0.27 and 0.32 kev , with a small decrease as a function of distance from the nucleus . we note that the temperature probably does rise toward the center , but this is difficult to quantify because of the complex morphology due to the variable absorption of the dust lane . we then computed the radial surface brightness profile of the coronal emission in the 0.4 to 1.0 kev band excluding all point sources , the jet , the southwest radio lobe , and subtracted the ( overestimated ) background described above . we fit a @xmath0-model of the form @xmath26 to the profile where @xmath27 is the surface brightness at the center of the galaxy , @xmath28 is the distance from the center of the galaxy , and @xmath29 and @xmath0 are parameters determined by fitting . the electron density is then given by @xmath30 where @xmath31 is the central density . 
the central density can be computed from the surface brightness profile and the integrated luminosity . the luminosity within a radius @xmath32 is then given by @xcite @xmath33 where @xmath34 is the radiative cooling coefficient ( ergs @xmath35 s@xmath4 ) , @xmath36 and @xmath37 are the central electron and proton densities , respectively , and @xmath11 is the distance along the line of sight in units of @xmath29 . the parameter @xmath0 is determined by a least squares fit to the radial surface brightness profile , and the normalization of the density profile , @xmath31 , is determined by converting the observed x - ray flux to luminosity using xspec and solving the integral in equation [ lumeq ] . throughout this paper , we have assumed that @xmath38 , which is appropriate for the sub - solar abundance plasma . because of the complex structure and variable absorption by the dust lane in the central @xmath25 of the galaxy , the parameter @xmath29 , the core radius from equation 1 , is not constrained . there is evidence that the gas in the central @xmath25 of the galaxy is somewhat hotter than in the outer regions , so the isothermal approximation is not appropriate there . even detailed deprojection analysis might not fully account for the complex three dimensional structure of the x - ray emission . we therefore decided to fix @xmath29 at 0.5 kpc ( @xmath39 ) and fit the profile for @xmath0 between @xmath25 and @xmath40 from the nucleus . the parameters @xmath29 and @xmath0 are generally strongly coupled @xcite , but we are fitting the surface brightness profile in the region where it is well modeled by a power - law ( i.e. @xmath41 ) , so that @xmath0 is well constrained , and so is @xmath42 far from the nucleus . this last point is of particular importance in the discussion of the southwest radio lobe below . the best - fit @xmath0 model was extrapolated to the position of the background region to determine the contribution of emission from cen a in the background estimate . approximately 30% of the flux in this region originates from the galaxy and is not true background . using an appropriately reduced background , the radial surface brightness profile was recomputed . the radial surface brightness profile and the temperature profile using the iteratively determined background are shown in figures [ sbp ] ( pn camera only ) and [ tprof ] ( pn and mos cameras , respectively ) . implicit in the interpretation of the radial profile is the assumption that the temperature of the ism in the background region is not significantly different from that closer to the galaxy . beyond the central region of the galaxy , the temperature of the ism slowly decreases with increasing radius , but we will assume a constant temperature of @xmath50.29 kev in the analysis below . the best - fit values of @xmath0 are [email protected] and [email protected] for the pn and mos images , respectively . the fit was made between @xmath25 and @xmath40 from the nucleus to avoid complications due to variable absorption by the dust lane and the possible temperature rise in the central regions of the galaxy . the uncertainty in @xmath0 is dominated by uncertainty in the estimate of the background . the reduced @xmath43 values of the fits are 2.1 and 1.8 for the pn and mos cameras , respectively , indicating marginal fits . this is due to the azimuthal structure in both the surface brightness ( see figure [ xmmrad ] ) and temperature ( see discussion below ) .
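the normalization step can be sketched as follows , written here as a volume integral rather than the line - of - sight form of equation [ lumeq ] : given the band luminosity , the fitted @xmath0-model shape , and an assumed cooling coefficient , the central electron density follows from l = @xmath34 n_e0 n_p0 @xmath16 ( emission integral ) . the cooling coefficient , the 10 kpc integration radius , and n_p = 0.82 n_e below are illustrative assumptions , so the printed density is only an order - of - magnitude sketch of the procedure , not the published value .

```python
import numpy as np
from scipy.integrate import quad

KPC = 3.086e21           # cm
LAMBDA_T = 1.5e-24       # erg cm^3 s^-1; assumed band cooling coefficient for ~0.3 keV gas
BETA, RC = 0.40, 0.5 * KPC
L_X = 7.7e38             # erg s^-1 within 10 kpc (0.4-1.0 keV value quoted in the text)
NP_OVER_NE = 0.82        # assumed proton-to-electron density ratio

# Emission integral: integral of [1 + (r/rc)^2]^(-3*beta) * 4*pi*r^2 dr out to 10 kpc.
shape = lambda r: 4.0 * np.pi * r**2 * (1.0 + (r / RC) ** 2) ** (-3.0 * BETA)
em_integral, _ = quad(shape, 0.0, 10.0 * KPC)

# L = Lambda * n_e0 * n_p0 * integral  ->  solve for the central electron density.
n_e0 = np.sqrt(L_X / (LAMBDA_T * NP_OVER_NE * em_integral))
print(f"central electron density ~ {n_e0:.1e} cm^-3 (sensitive to the assumed Lambda)")
```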
for the purposes of understanding the dynamics of the radio lobe discussed below , these variations are fairly large ( up to a factor of 2 in surface brightness or 40% in particle density ) , but we will use the azimuthally averaged @xmath0-model for estimates of the density and pressure of the ism . the central density of the ism for the assumed value of @xmath29=0.5 kpc is @xmath44=(3.7@xmath4510@xmath46 cm @xmath47 . we have performed a similar analysis on the _ chandra _ data , but derive a higher temperature of about @xmath50.4 - 0.45 kev . we determined the background in a similar manner to that described above using data from the s2 chip , and , other than normalization , we derive spectral parameters from this background data set similar to those derived from the _ xmm - newton _ data . we believe that this temperature difference is caused by a combination of instrumental effects , including : the low sensitivity of the acis - i instrument below 0.6 kev relative to the _ xmm - newton _ cameras , the complex temperature structure in the ism , systematic uncertainties in the low - energy quantum efficiency due to the build up of contamination ( a. vikhlinin , private communication ) , the complexities in the spatial non - uniformity of the gain of the acis - i instrument at low energies , and systematic uncertainties in the spectrum and normalization of the background . we have adjusted the arfs of the _ chandra _ spectra of the ism using the _ apply_acisbs _ script provided by the cxc , and we still find temperatures which are systematically higher than those from _ xmm - newton _ . we conclude that the acis - i instrument is not particularly well suited to temperature determination for plasmas with temperatures below about 0.5 kev . we rely on the results of the analysis of the cen a ism from the _ xmm - newton _ data exclusively in the remainder of this paper . a 50% higher temperature for the cen a ism would change some of our final estimates of pressures , mach numbers , etc . , for the dynamics of the radio lobe , but none of our basic conclusions . motivated partly by the complex x - ray morphology apparent in figure [ xmmrad ] and partly by the desire to confirm independently the results of the spectral fitting of the _ xmm - newton _ data , we have created a temperature map of the mos data using the technique described in @xcite . in this technique , images in five energy bands are created , and the bin size of each cell is adaptively chosen so that the signal - to - noise ratio is constant . simulated spectra are created using xspec , and the temperature of each cell is determined via least squares fitting of the data to these spectra . the temperature map is shown in figure [ tmap ] . as can be seen from the figure , there is clearly azimuthal temperature structure in the ism , but the average temperature of the ism is about 0.3 kev . this independent analysis is consistent with the temperature derived on the basis of spectral fitting of the _ xmm - newton _ data . much of the complex structure of the ism , particularly in the central @xmath25 of the galaxy and to the nw and se of the nucleus ( i.e. perpendicular to the axis of the radio components ) , is probably related to a recent galaxy merger and not due to interaction with the relativistic plasma of the jet and lobes . a preliminary discussion of this phenomenon has been presented elsewhere @xcite .
for our analysis below , we will use the azimuthally averaged radial surface - brightness and temperature profiles . the best - fit index to the surface brightness profile ( see table [ bmodtab ] ) , @[email protected] ( the average of the mos and pn values ) , is rather flat , but not inconsistent with recent _ chandra _ observations of the hot ism of ngc 4697 @xcite , an isolated elliptical about twice as massive as cen a , and other early - type galaxies observed with _ einstein _ . there is clearly some azimuthal structure in the surface brightness profile , particularly to the nw of the nucleus , which is most likely related to a merger event with a small spiral galaxy @xcite . ignoring the surface brightness variations and assuming the gas is in hydrostatic equilibrium with the dark matter potential of the galaxy , the total gravitating mass within a radius @xmath28 of the nucleus is given by @xmath48 where @xmath49 and @xmath50 are the radial distributions of the particle density and temperature , respectively @xcite . assuming that @xmath50=const=0.29 kev , and the particle density is given by the @xmath0 model profile described in the previous section , the total gravitating mass as a function of distance from the nucleus is shown in figure [ gravmass ] . for comparison , two measurements of the gravitating matter using planetary nebulae are shown as well @xcite . these two different measurements were made from the same data with different dynamical models . we find that within 15 kpc of the nucleus , the total mass of cen a is @xmath52@xmath1610@xmath51 m@xmath52 . the unabsorbed x - ray luminosity of the ism within 10 kpc of the nucleus in cen a is 7.71@xmath5310@xmath54 ergs s@xmath4 ( 0.4 - 1.0 kev band ) . this corresponds to an unabsorbed luminosity of 1.26@xmath5510@xmath56 ergs s@xmath4 ( 0.1 - 10 kev band ) . as with the @xmath0-model fit above , the uncertainty in these numbers is dominated by the uncertainty in the background subtraction . these numbers include an estimate of the effect of the variable absorption of the dust lane , and the slightly higher temperature of the central @xmath25 . the correction for the variable dust absorption was made by estimating an average absorption of the dust lane using the extinction map of @xcite . this was converted to an equivalent column density using the @xmath57/@xmath58 ratio determined from dust scattering haloes by rosat @xcite . a scale factor was then computed using this value of @xmath57 in pimms , the measured temperature of the central region , and the observed fraction of this inner region that is obscured by the dust lane . this x - ray luminosity is at the low end of the range of x - ray luminosities for galaxies with the optical luminosity of cen a ( @xmath59=-20.4 ) ( see figure 2 of @xcite ) . if the radial surface - brightness profile does not steepen for a large distance beyond 10 kpc from the nucleus , this x - ray luminosity would only represent a lower limit , as the total x - ray flux would be dominated by the outer regions of the galaxy . the integrated x - ray luminosity of all xrbs within @xmath60 of the nucleus of cen a with @xmath6110@xmath62 ergs s@xmath4 ( 0.4 - 10.0 kev band ) is 4.6@xmath1610@xmath54 ergs s@xmath4 @xcite . assuming a 5 kev thermal bremsstrahlung spectrum for the xrb population with galactic absorption , this corresponds to an ( absorbed ) x - ray luminosity of 6.4@xmath1610@xmath63 ergs s@xmath4 in the 0.4 - 1.0 kev band .
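for an isothermal @xmath0-model atmosphere the hydrostatic equation reduces to a closed form , m(<r) = 3 @xmath0 k t r^3 / [ g @xmath38 m_p ( r^2 + r_c^2 ) ] ; a minimal sketch using the temperature and @xmath0 quoted above , with a mean molecular weight of 0.6 assumed , is given below . with these inputs it returns roughly 2@xmath1610^11 solar masses at 15 kpc .

```python
import numpy as np

G = 6.674e-8          # cm^3 g^-1 s^-2
MP = 1.673e-24        # g
KEV = 1.602e-9        # erg
KPC = 3.086e21        # cm
MSUN = 1.989e33       # g

def hydrostatic_mass(r_kpc, kT_keV=0.29, beta=0.40, rc_kpc=0.5, mu=0.6):
    """Gravitating mass within r for an isothermal beta-model atmosphere:
    M(<r) = 3*beta*kT*r^3 / (G*mu*m_p*(r^2 + rc^2))."""
    r, rc = r_kpc * KPC, rc_kpc * KPC
    return 3.0 * beta * kT_keV * KEV * r**3 / (G * mu * MP * (r**2 + rc**2))

print(f"M(<15 kpc) ~ {hydrostatic_mass(15.0) / MSUN:.1e} Msun")
```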
the shape of the x - ray point - source luminosity function ( lf ) below 10@xmath62 ergs s@xmath4 is somewhat uncertain because this is the approximate flux above which the observations are complete and unbiased , but unless the lf again steepens below 10@xmath64 ergs s@xmath4 , the integrated x - ray luminosity of sources with @xmath6510@xmath62 ergs s@xmath4 contributes at most an additional few tens of percent to the total x - ray luminosity of the point sources . below 1 kev , the xrbs contribute only @xmath510% of the integrated x - ray flux . at higher energies , the xrbs contribute a significant fraction of the total x - ray emission . one of the remarkable features shown in figures [ chdimg ] and [ xmmrad ] is the x - ray enhancement coincident with the southwest radio lobe . the relationship between the x - ray and radio emission of the sw lobe is more clearly shown in figure [ rawlobe ] , which contains a raw _ chandra _ image in the 0.5 - 2.0 kev band with 13 cm contours overlaid . the boundary of the x - ray shell is highlighted by arrows . the bright x - ray enhancement is clearly visible along the edge of the radio lobe and appears to be contained within the lobe , although the x - ray emission clearly lies beyond the nw and se radio contours . we argue that most or all of the x - ray enhancement actually lies beyond the boundaries of the radio emission for two reasons . first , the lobe is not in the plane of the sky ( see below ) and we are seeing the features in projection . second , the radio map has been considerably broadened by the @xmath66 restoring beam , and the bulk of the radio emission is therefore interior to the x - ray emission . the diffuse structures of the x - ray shell are more clearly shown in figure [ swlobe ] , which contains an adaptively smoothed , exposure - corrected _ chandra _ image of the southwest radio lobe in the 0.5 - 2 kev band with the 13 cm radio contours overlaid . there are significant differences between the x - ray and radio morphologies for the emission that is clearly interior to the lobe ( figure [ swlobe ] ) . the surface brightness of the x - ray emission in this region varies by a factor of four . the radio contours decrease monotonically from the center to the edge of the lobe , whereas the x - ray emission shows several significant peaks and valleys not seen in the radio . we have divided the southwest lobe into two regions for spectral analysis to determine if there is any spectral difference between the emission along the southwest edge of the radio lobe and the emission that lies within the boundaries of the lobe . the first region , which we refer to as the enhancement , is a rectangular region along the southwest edge of the radio lobe . the second region , which we refer to as the diffuse region , is a rectangular region in the interior of the lobe . the details of these regions are summarized in table [ regtab ] . region 1 is the region along / beyond the southwest radio lobe and is referred to as the enhancement region in the text . region 2 is a region covering lobe emission but excluding the bright foreground object , and is referred to as the diffuse region in the text . the ra and dec are the coordinates of the center of the box , the roll is the rotation angle anti - clockwise from north , and the dimensions of the box are given by the width and length . the regions are shown graphically on the temperature map in figure [ tmap ] .
unfortunately , the _ xmm - newton _ observations were aligned such that the mos chip gaps were placed directly along the enhancement , and one of the pn chip gaps was placed along the western edge of the lobe . the diffuse region does not intersect any of the gaps and provides a consistency test among all the data sets , which avoids uncertainties due to the filtering of events near chip boundaries or in the computation of the appropriate response matrices . the shapes and positions of these regions were chosen to avoid contamination from the bright source cxou j132507.5 - 430401 , which is believed to be a foreground star @xcite . a second point source , cxou j132509.6 - 430530 , located within the enhancement region , was also excluded . two background regions for each data region were chosen to the w and the nw of the nucleus at similar distances from the core as the source regions . these background regions were selected because they are devoid of point sources in the _ chandra _ images . for both regions in the lobe , all five data sets ( two _ chandra _ and three _ xmm - newton _ ) were fit with an absorbed power - law , and one- and two - temperature mekal thermal plasma models in the 0.3 to 5.0 kev band . the lower limit was fixed at 0.3 kev because below this energy the _ xmm - newton _ response is somewhat uncertain , and the acis - i instrument has little response . above 5.0 kev , the source flux falls , and the instrumental background and contamination from the bright , heavily absorbed nucleus become increasingly important . we initially performed the fits with the column density , @xmath57 , and the elemental abundance ( for thermal models ) as free parameters . it was found that the best - fit column density was always consistent with the galactic value ( 7@xmath1610@xmath22 @xmath23 @xcite ) . the elemental abundance was poorly constrained because of the relatively high temperature . we therefore decided to fix the column at the galactic value and the elemental abundance at 0.4 times solar . the results of the spectral fits are summarized in table [ specfit ] . all fits were done independently , and all data were binned to a minimum of 30 counts per bin . all errors are 90% confidence for one free parameter . the rate is the background - subtracted count rate in units of 10@xmath46 cts s@xmath4 . the indication @xmath67 in the error column signifies that the parameter is not meaningfully constrained . the parameter @xmath68 in the two temperature mekal model fits is the ratio of the best - fit emission measures . this parameter demonstrates that even in the cases where a second temperature component improves the fit , the emission measure of the lower - temperature component is small and that this second component contributes negligibly to the pressure . no parameters of the _ chandra _ spectral fits of region 2 are included as the parameters are not constrained in a useful way due to the low surface brightness . the _ xmm - newton _ spectra have several times the number of counts that the _ chandra _ spectra have , and the _ xmm - newton _ fits are generally better ( lower @xmath69 ) and have smaller error bars . we therefore use the _ xmm - newton _ fits to determine the spectral parameters ( i.e. the temperature or power - law index ) , and use the _ chandra _ result as an independent confirmation . as can be seen from table [ specfit ] , the thermal and power - law models both provide an adequate description of the data .
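the binning step mentioned above ( a minimum of 30 counts per bin so that @xmath43 statistics are applicable ) can be sketched as follows ; this toy grouping function operates on a plain counts array rather than an ogip spectral file , and the poisson test spectrum is purely illustrative .

```python
import numpy as np

def group_min_counts(counts, min_counts=30):
    """Group adjacent spectral channels until each bin holds at least `min_counts` counts.
    Returns a list of (start_channel, stop_channel, summed_counts); a trailing run of
    channels that never reaches the threshold is folded into the previous group."""
    groups, start, running = [], 0, 0
    for i, c in enumerate(counts):
        running += int(c)
        if running >= min_counts:
            groups.append((start, i, running))
            start, running = i + 1, 0
    if running > 0 and groups:                      # fold leftover channels into last bin
        s, _, tot = groups[-1]
        groups[-1] = (s, len(counts) - 1, tot + running)
    return groups

channels = np.random.poisson(8, size=200)           # toy spectrum
print(group_min_counts(channels)[:3])
```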
there is no reason to favor one model over the other on the basis of spectral fitting alone . for reasons we outline below , however , the thermal model is more physically plausible . the addition of a second thermal component to the mekal fits generally improves the quality . we attribute this to a complex temperature distribution within this feature . it could indicate some spatial non - uniformities in the distribution of the hot ism around cen a , but we consider this less likely . the temperature of this second component is generally poorly constrained . although the addition of a second temperature component improves the fits , this component is roughly an order of magnitude lower in temperature and a factor of a few lower in emission measure than the hotter component . this second component contributes little to the pressure of this feature . given the spectral similarity between the ` enhancement ' region and the ` diffuse ' region , we will treat the entire lobe as a single feature and make no distinction between the emission interior to the lobe and the enhancement along the edge . there is some marginal evidence ( see table [ specfit ] ) that the temperature of the material that appears to be interior to the lobe is somewhat cooler ( @xmath5 10% ) than that along the edge enhancement . this is what one would expect for the supersonic lobe expansion model , our preferred model , but for simplicity we will ignore this small difference in the discussion below . we will use the results of the single - temperature fits of the enhancement region for all estimates of density and pressure when discussing the thermal model . because of complications in the point - source removal and the unfortunate alignment of the chip gaps in the _ xmm - newton _ data described above , the conversion of x - ray flux to particle density was determined by using the spectral parameters of the _ xmm - newton _ data , but the count rate from the second ( obsid 00962 ) _ chandra_/acis - i observation for the entire lobe . we determined the count rate in the _ chandra _ data set in a @xmath70 radius circle centered on the x - ray lobe in the 0.5 to 2.0 kev band with all point sources removed . background was estimated from an identical region approximately the same distance from the nucleus with all point sources removed as well . assuming a temperature of 2.88 kev ( the average of the eight single - temperature mekal fits in table [ specfit ] ) , the observed _ chandra_/acis - i count rate ( 7.54@xmath1610@xmath46 cts s@xmath4 in the 0.5 to 2.0 kev band ) corresponds to a flux of 9.6@xmath1610@xmath71 ergs @xmath23 s@xmath4 ( unabsorbed ) in the 0.1 to 10.0 kev band , and an x - ray luminosity of 1.41@xmath1610@xmath54 ergs s@xmath4 at the distance of cen a. x - ray emission from the jets , hotspots , and radio lobes of radio galaxies has been detected from several dozen sources with _ chandra _ @xcite . in most of these cases , the x - ray emission has been attributed to non - thermal process ( i.e. inverse - compton scattering of a variety of seed photons or synchrotron radiation ) from a population of relativistic electrons @xcite . although we can not formally reject a non - thermal hypothesis for cen a on the basis of the spectral analysis alone , physical arguments outlined below lead us to the conclusion that the x - ray emission from the southwest radio lobe of cen a is most likely thermal in origin . 
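the count - rate - to - flux step above depends on the fitted spectral shape and the instrument response ( handled with xspec in the analysis ) , but the final flux - to - luminosity conversion is simply l = 4@xmath35 d^2 f . a minimal sketch , assuming a distance to cen a of about 3.5 mpc ( the distance is not stated in this section ) , reproduces the quoted luminosity from the quoted unabsorbed flux :

```python
import numpy as np

MPC = 3.086e24                      # cm
d_cen_a = 3.5 * MPC                 # assumed distance to Cen A (not given in this section)
flux_unabs = 9.6e-13                # erg cm^-2 s^-1, 0.1-10 keV unabsorbed flux quoted above

luminosity = 4.0 * np.pi * d_cen_a**2 * flux_unabs
print(f"L_X ~ {luminosity:.2e} erg/s")   # ~1.4e39 erg/s, matching the value in the text
```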
the interaction between the southwest radio lobe and the x - ray enhancement of cen a is most likely to be more closely analogous to the radio plasma / icm interactions recently observed by _ chandra _ in early - type galaxies @xcite and clusters of galaxies @xcite than to non - thermal sources of x - ray emission . cavities and x - ray shells created by the inflation / expansion of ` bubbles ' of radio plasma in the icm of galaxy clusters have been investigated theoretically by several authors @xcite . these bubbles are thought to be the backflow material from the propagation of the powerful jets of radio galaxies through the ism / icm . the shells of enhanced x - ray emission are due to the compression and shock heating of the ambient ism / icm after it passes through the bow shock . the cen a x - ray enhancement around the southwest radio lobe appears to be the visible result of a smaller - scale , lower - power ( fr i jet and galactic ism vs. fr ii jet and icm ) example of this radio plasma / thermal gas interaction . we consider in detail three possible models for the origin of the x - ray enhancement around the southwest radio lobe , two non - thermal and one thermal . first , we address the possibility that this emission is due to inverse - compton scattering from the relativistic electrons of the radio lobe . the most significant source of seed photons is the cmb , but unless the lobe is far from equipartition , it is unlikely that a significant fraction of the x - ray emission originates from this mechanism . second , we investigate the possibility that the emission is synchrotron radiation from a population of ultra - relativistic electrons in the radio lobe . because of the short lifetime of these particles , we reject this hypothesis as well . finally , we explore the possibility that the emission originates from a thermal plasma that surrounds the radio lobe . in this model , which we consider the most plausible , the expansion of the lobe , energized by the counter jet , has compressed and shock - heated the ambient hot ism . a partial shell or cap of plasma surrounding the lobe and rotated with respect to our line of sight would naturally give the edge - brightened appearance . first , we consider the possibility that the x - ray emission is due to inverse - compton scattering of cmb photons off radio - synchrotron - emitting relativistic electrons . there are several significant problems with this model , given the observed morphology and spectrum , that make it implausible . first , the spatial morphology of the x - ray emission argues strongly against this model . the most prominent part of the detected x - ray emission , the enhancement ahead of the lobe , has no detected radio counterpart down to at least a frequency of 327 mhz @xcite . the ic - scattering hypothesis would require a large population of low - energy ( @xmath722000 ) electrons without any extension to higher , observable energies . the ratios of x - ray to radio flux in the interior of the lobe and along the enhancement are significantly different , therefore implying a significant difference in the non - equipartition conditions or the energy spectral index of the relativistic electrons in these regions . the synchrotron frequency for the electrons responsible for the ic scattering of the cmb photons into the x - ray band is 80 mhz , assuming the equipartition magnetic field of 13 @xmath73 @xcite .
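the connection quoted above , between the electrons that would inverse - compton scatter cmb photons to @xmath51 kev and the radio band where those same electrons would radiate , can be checked with the standard order - of - magnitude relations e_ic = (4/3) @xmath16 gamma^2 <e_cmb> and nu_sync = 4.2 gamma^2 b_microgauss hz ; the mean cmb photon energy adopted below is an assumption , so the result is only indicative .

```python
import numpy as np

E_X = 1000.0                 # eV, target X-ray photon energy
E_CMB = 2.7 * 2.35e-4        # eV, assumed mean CMB photon energy ~ 2.7 * k * T_cmb
B_MICROGAUSS = 13.0          # equipartition field quoted in the text

# Lorentz factor needed to upscatter a typical CMB photon to ~1 keV.
gamma = np.sqrt(E_X / (4.0 / 3.0 * E_CMB))

# Characteristic synchrotron frequency of those same electrons in a 13 uG field.
nu_sync = 4.2 * gamma**2 * B_MICROGAUSS     # Hz

print(f"gamma ~ {gamma:.0f}, nu_sync ~ {nu_sync/1e6:.0f} MHz")
# gives gamma of order 10^3 and a synchrotron frequency of a few tens of MHz, the same
# order as the ~80 MHz quoted above; the exact number depends on the adopted <E_CMB>.
```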
emission at this frequency from the narrow shell would be difficult to detect , but there is no plausible reason to expect that there is a large population of @xmath722000 electrons while the presence of @xmath7210@xmath74 electrons is ruled out by the lack of detection of the shell by the vla or the atca . it is conceivable that a significant fraction of the diffuse emission in the interior of the lobe is due to ic scattering of cmb photons , and that the enhancement along the edge of the lobe originates in a different mechanism . such ic scattered x - ray emission has been observed in the radio lobes of several powerful radio galaxies @xcite . to investigate the importance of this mechanism , we used the code of @xcite to determine the magnetic field strength that would be necessary to produce the observed x - ray flux density given the known radio source properties . we modeled the electron spectrum in the lobe as a power - law of the form @xmath75 with @xmath76 , @xmath77 and a value of @xmath78 determined by the radio data at 2.3 and 8.4 ghz . we measured the x - ray flux density ( 82 njy at 1 kev ) from a circular region @xmath79 in radius covering most of the outer or southern half of the radio lobe , but excluding the enhancement along the edge of the lobe . the x - ray background was estimated from a region of the observation away from the lobe but at approximately the same distance from the nucleus . we find that the magnetic field must be an order of magnitude below the minimum - energy value ( determined assuming a tangled field in the lobes ) of 20 @xmath80 g , if the ic process is to produce all of the observed x - ray emission from this region . this is a much larger departure from equipartition than is observed in other sources . if instead the magnetic field strength has its equipartition value , the ic process can produce only @xmath81% of the observed x - rays . other sources of seed photons are even more implausible than the cmb . the synchrotron - self compton ( ssc ) mechanism can be rejected immediately as the density of radio photons is lower than that from the cmb . optical photons from either the nucleus or the stellar component are two other possibilities . both are unlikely . in the unified model of agn , cen a , as an fr i galaxy , is considered to be a misdirected bl lac @xcite . in this case , there may be a considerable flux of beamed optical photons emitted from the nuclear region @xcite . to upscatter optical photons into the x - ray band , the electrons must have @xmath82 . the typical optical luminosity of beamed photons from an fr i radio galaxy / bl lac is @xmath510@xmath83 ergs s@xmath4 @xcite . if all of these photons are emitted from the nucleus into the solid angle subtended by the southwest radio lobe , the energy density at the edge of the lobe is @xmath510@xmath71 ergs @xmath9 . it would require approximately 3@xmath1610@xmath84 electrons with @xmath85100 to produce the observed x - ray luminosity via ic scattering . the total energy of these electrons , @xmath86 , is @xmath510@xmath87 ergs , several orders of magnitude larger than the equipartition energy of the lobe . assuming they are distributed uniformly throughout the outer or southern half of the radio lobe , the pressure of these relativistic electrons would be @xmath53@xmath1610@xmath88 dynes @xmath23 , several orders of magnitude larger than the equipartition pressure of the lobe . the existence of such a large number of relativistic electrons is improbable . 
the energy density of optical starlight at the edge of the lobe is of the same order of magnitude as that estimated for the beamed nuclear source @xcite . we therefore reject the ic scattering hypothesis . x - ray synchrotron emission has been detected from a number of low - power jets by _ chandra _ , and may be a common feature of these jets @xcite . such a model was invoked to explain the x - ray emission associated with the forward jet of cen a @xcite . the main difficulty with a synchrotron model in the context of the cen a radio lobe is , however , the short lifetime of the particles . defining the lifetime of the particles as the timescale for the upper synchrotron cutoff frequency to become equal to the observing frequency , the particles will lose their energy on the order of tens of years in the equipartition magnetic field @xcite . for comparison , the light travel time across the lobe is approximately ten thousand years . this would imply that the particles must be re - energized hundreds or even thousands of times in the process of traveling through the lobe . the x - ray and radio knots in jets are commonly thought to be the sites of shocks where particle reacceleration occurs . no such knotty structure is seen in the x - ray surface brightness of the radio lobe ( figure [ chdimg ] ) . at the resolution of _ chandra _ , the forward jet has a complex morphology on scales from tens of parsecs to kiloparsecs @xcite . no such structure is seen in the x - ray morphology of the southwest lobe . it is conceivable that there are thousands of small - scale knots which are unresolvable with _ chandra _ but give the appearance of more or less uniform emission in the interior of the lobe . we consider this unlikely in the absence of a distribution which includes large knots . the x - ray morphology strengthens this argument . the projected length of the x - ray enhancement along the edge of the lobe is quite small ( @xmath5100 pc ) compared with the size of the lobe . it is difficult to see how a synchrotron model with thousands of unresolved knots would produce such a sharp , well defined feature exactly along the edge of the radio lobe . finally , if the x - ray emission were due to synchrotron radiation , the enhancement at the edge of the lobe would be marginally detectable in the optical unless the spectrum flattens significantly between the x - ray and optical . such a feature has not been seen . as a third scenario , we consider the possibility that the emission originates in a partial shell or cap of thermal plasma that surrounds the radio lobe . we suggest that little or none of the emission originates within the lobe , and that the lobe is surrounded by a shell of plasma . all of the x - ray emission that appears to be within the lobe actually originates in a sheath or cap of hot plasma that surrounds the lobe . that is , the emission that appears to be within the lobe is actually in front of and behind the lobe along our line of sight . emission from a thin cap of plasma rotated to our line of sight would appear as an edge - brightened shell and thus could take on the morphology of the detected emission . it is reasonable to expect that the southwest radio lobe and the surrounding plasma shell are rotated to our line of sight at a complementary angle to that of the forward jet . various estimates have been made of the angle to the line of sight of the forward jet , and all are highly uncertain . 
the most recent estimate of 50@xmath89 - 80@xmath89 is based on the vlbi brightness ratio of the milliarcsecond forward jet and the counter jet @xcite . other estimates typically range from 55@xmath89 to 70@xmath89 @xcite . in this work , we will assume a value of @xmath90=60@xmath89 for the angle between the forward jet and the line of sight ; the angle between the counterjet / southwest radio lobe and the line of sight is therefore @xmath91=120@xmath89 . none of our results or conclusions is sensitive to this choice . we calculated the surface brightness from an optically thin plasma shell of uniform density rotated at an angle @xmath91=115@xmath89 to our line of sight with inner radius @xmath92 and outer radius @xmath93 , where @xmath94 is the radius of the lobe , and @xmath95 is the thickness of the x - ray emitting shell . this calculation is important for estimating the volume , and therefore the density , of the shell as described below . the parameter @xmath94 can be approximately measured from the width of the radio lobe in a direction perpendicular to the counter jet direction . the parameter @xmath95 can be estimated by measuring the thickness of the enhancement along the direction of the counter jet . the _ chandra _ obsid 00962 data were used for this measurement because the lobe is close to the best focus of the telescope so that broadening of this feature by the telescope psf is insignificant . the image of the lobe in this observation is considerably sharper than in the obsid 00316 observation , where the lobe is positioned at the very edge of the fov . we find the enhancement width to be @xmath96 ( fwhm ) on the sky by fitting a gaussian plus a second - order polynomial ( to account for background and/or more extended emission in the interior of the lobe ) to the peak . the effect of projection on the sky was then removed by comparing the measured width with the simulated width for a range of parameters @xmath32 . for our assumed counterjet angle ( @xmath91=120@xmath89 ) , we estimate a compression of approximately 8:1 ( i.e. @xmath32=0.875 ) based on a comparison of the measured width of the projected enhancement with that in our calculated images . the radius of the radio lobe is @xmath97 , so the deprojected thickness of the shell is @xmath516.6@xmath98 or @xmath5281 pc . note that this observed compression ratio is consistent with the expected distance between the bow shock and the contact discontinuity if the lobe is expanding supersonically @xcite . we have based our estimate of the volume primarily on the width of the enhancement along the edge of the lobe . the sides of the shell in our preferred scenario will probably be thicker than at the enhancement along the leading edge of the lobe . systematic uncertainty in the thickness of the shell or oversimplifications of the assumed geometry will not change any of our general conclusions about the nature of the shell . the particle density as a function of the compression parameter @xmath32 is given by @xmath99 and is therefore not a strong function of the assumed thickness of the shell ( unless the shell is much thinner than we have assumed , in which case the density would be even higher ) . if the material were uniformly distributed throughout the entire southern half of the lobe , the shell density would only be a factor of two smaller . given the uncertainty in @xmath100 , we estimate the uncertainty on the volume to be on the order of 30% .
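a minimal sketch of the width measurement described above : fit a gaussian plus a second - order polynomial ( representing background and interior lobe emission ) to a one - dimensional cut across the rim , and convert the gaussian sigma to a fwhm before the projection correction . the profile values below are synthetic .

```python
import numpy as np
from scipy.optimize import curve_fit

def rim_model(x, amp, x0, sigma, c0, c1, c2):
    """Gaussian peak on top of a quadratic baseline (background + interior emission)."""
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + c0 + c1 * x + c2 * x**2

# Synthetic cross-rim profile: position in arcsec, surface brightness in counts/pixel.
x = np.linspace(-15.0, 15.0, 61)
true = rim_model(x, 12.0, 0.0, 2.0, 3.0, 0.05, 0.01)
rng = np.random.default_rng(1)
y = rng.poisson(true).astype(float)

popt, _ = curve_fit(rim_model, x, y, p0=[10.0, 0.0, 3.0, 2.0, 0.0, 0.0])
fwhm = 2.3548 * abs(popt[2])        # FWHM = 2*sqrt(2*ln 2) * sigma
print(f"projected rim width ~ {fwhm:.1f} arcsec (then corrected for projection)")
```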
in table [ presstab ] we summarize the temperatures , the densities , and the pressures of the emission in the shell , the ism , and of the radio lobe ( equipartition pressure from the middle of the sw lobe taken from @xcite ) . as can be seen from the table , the shell is greatly overpressurized relative to both the ambient ism and the radio lobe , and the lobe is greatly overpressurized relative to the ism . we infer that both the lobe and the shell must be expanding supersonically ( relative to the ism ) and that the shell is confined by the ram pressure of the expansion . we hypothesize that the shell of hot plasma around the lobe is the result of the advance of an unseen , supersonic ( and possibly now extinct ) counter jet and the displacement and compression of the ism as the radio lobe inflates . the current existence of a counter jet is not necessary , only that it existed sometime in the past to initiate the flow . the overpressurization and thin extent of the shell require the expansion to be supersonic because otherwise the shell would have dissipated on a time scale smaller than the expansion of the lobe . in particular , the sharp boundary of the shell would dissipate in a timescale much smaller than the expansion of the lobe if the expansion were subsonic or transonic . if this shell is indeed due to the supersonic expansion of the radio lobe into the ism , the simplest interpretation of the thermodynamic properties of the gas would be that we are directly observing the bow shock across which the rankine - hugoniot ( rh ) conditions are formally met @xcite . in our case , however , we can not be directly observing the bow shock because the density contrast between the shell and the ism is considerably larger than the factor of @xmath54 ( for @xmath101=5/3 ) expected for a strong shock . our scenario is a bit different than a shock wave propagating into a uniform density medium however . both the gas density and pressure of the ism are falling rapidly ( @xmath102 ) as the lobe expands away from the nucleus , and the lobe / shell system may be expanding self - similarly . such a scenario has been investigated theoretically by several authors in the context of much more powerful frii jets propagating through the icm of clusters of galaxies @xcite . if the density gradient of the icm is steep enough , the density contrast between the shocked shell and the ambient ism / icm will be considerably larger than that expected on the basis of the rh conditions @xcite . it will appear as if the gas was further compressed as it crossed the bow shock . in fact , the thermodynamic state of the gas in the shell depends on its past history as it expanded through the denser regions of the icm . we suggest that such a model can explain the observed x - ray emission around the southwest radio lobe of cen a. we note that this problem has some similarities with that of a supernova explosion expanding into the ism @xcite . a detailed hydrodynamic simulation is required to fully interpret and understand this phenomenon and will be the subject of a future paper , but we can make some general quantitative statements about the evolution and energetics of the lobe . we have created a four - region model of this system as shown in figure [ shellmod ] in the rest frame of the lobe . region 1 is the radio lobe , region 2 is the x - ray shell , region 3 is a thin , hot boundary layer between the ism and the x - ray shell where the rh conditions are met , and region 4 is the ism . 
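before applying the shock conditions below , the overpressure that motivates this picture can be made concrete : the thermal pressure of a fully ionized , roughly sub - solar plasma is p = ( n_e + n_ion ) k t = 1.9 n_e k t . the sketch below uses the approximate shell and ism values quoted in the text and in table [ presstab ] ; the density exponents are inferred from the factor - of - 11.8 density contrast quoted in the conclusions , so the absolute pressures are indicative only .

```python
KEV = 1.602e-9      # erg

def thermal_pressure(n_e, kT_keV, ion_factor=1.92):
    """P = (n_e + n_ion) * kT ~ 1.92 * n_e * kT for a fully ionized, sub-solar plasma."""
    return ion_factor * n_e * kT_keV * KEV      # dyn cm^-2

p_shell = thermal_pressure(2.0e-2, 2.88)        # shell values quoted in the text (assumed exponent)
p_ism   = thermal_pressure(1.7e-3, 0.29)        # ambient ISM values near the lobe (assumed exponent)
print(f"P_shell ~ {p_shell:.1e}, P_ism ~ {p_ism:.1e}, ratio ~ {p_shell/p_ism:.0f}")
# a pressure ratio of order 10^2 is what drives the supersonic-expansion argument.
```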
the subscripts 1 , 2 , 3 , and 4 will be used to denote the thermodynamic variables and the thicknesses of each region . the lobe is expanding into the ism with velocity v@xmath103=v@xmath104 . we assume that the lobe is expanding supersonically into the ism so that the rh shock conditions apply across the interface between regions 3 and 4 ( i.e. v@xmath103=v@xmath105 where @xmath106 is the sound speed in region 4 ) . we make the further assumption that region 3 is thin compared to the other regions ( i.e. @xmath107 ) so that the observable emission from the shell is dominated by region 2 . under these assumptions , we can determine the thermodynamic state of the gas in region 3 and estimate the expansion velocity of the lobe . the temperatures , pressures , and densities of the gas in regions 2 and 4 are known on the basis of the x - ray spectral analysis presented above . since the rh conditions apply across the interface between regions 3 and 4 , the density and velocity in region 3 are given by @xmath1086.8@xmath1610@xmath47 @xmath9 and v@xmath109v@xmath110 . balancing the thermal pressure of the gas in region 2 with the thermal and ram pressure of the gas in region 3 , @xmath1116.5 kev . region 3 therefore represents a region of high temperature between the visible x - ray shell and the ism . given the high temperature and low density ( relative to region 2 ) , region 3 is in practice indistinguishable and/or unobservable in the _ chandra_/_xmm - newton _ band . the ratio of the temperature of the ism to that of the boundary region is given by @xcite @xmath112 where @xmath113 is the mach number of the lobe expansion into the ism ( i.e. v@xmath103/@xmath106 ) . we find an expansion velocity for the lobe of approximately mach 8.5 , or @xmath52400 km / s . this confirms our assumption in the previous paragraph that the lobe is expanding supersonically into the ism and that the rh shock conditions are appropriate . this analysis implicitly assumes that the lobe and the bow shock are expanding at the same velocity . if we add an additional constraint and force the lobe / bow shock expansion to be self - similar , the derived shock temperature would be somewhat higher ( @xmath58 kev ) . the pressure of the plasma in the shell ( region 2 ) is an order of magnitude larger than the equipartition pressure of the radio lobe ( region 1 ) . it is reasonable to assume that these two components are in approximate pressure equilibrium . this implies that there must be an additional component providing pressure support in the radio lobe , perhaps protons or lower energy electrons , for the shell not to destroy the lobe on the sound crossing time . such large deviations from the equipartition conditions have been inferred for other radio sources @xcite , but not on such small scales as we see in cen a. the difference between the shock temperature ( 6.5 kev ) and the measured temperature of the gas in the shell ( 2.9 kev ) indicates that there must be significant cooling in the shell as the material flows away from the bow shock . neither radiation nor thermal conduction is likely to be important . the radiative timescale for the material in the shell is on the order of 10@xmath114 yrs , whereas the dynamical timescale of the lobe is on the order of 10@xmath115 yrs . therefore radiative cooling can not be significant . likewise , the timescale for thermal conduction to the ism must be longer than the dynamical timescale because the boundaries of the shell are sharp .
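the mach number quoted above follows from inverting the standard rankine - hugoniot temperature jump for a monatomic gas ( adiabatic index 5/3 ) , given the pre - shock ism temperature and the inferred post - shock temperature of region 3 . a minimal sketch , with an assumed mean molecular weight of 0.6 for the sound speed :

```python
import numpy as np
from scipy.optimize import brentq

GAMMA = 5.0 / 3.0
MU, MP, KEV = 0.6, 1.673e-24, 1.602e-9

def rh_temperature_ratio(mach, g=GAMMA):
    """Rankine-Hugoniot post-/pre-shock temperature ratio for Mach number `mach`."""
    m2 = mach**2
    return (2.0 * g * m2 - (g - 1.0)) * ((g - 1.0) * m2 + 2.0) / ((g + 1.0) ** 2 * m2)

kT_ism, kT_shock = 0.29, 6.5                      # keV, values used in the text
mach = brentq(lambda m: rh_temperature_ratio(m) - kT_shock / kT_ism, 1.01, 100.0)

c_s = np.sqrt(GAMMA * kT_ism * KEV / (MU * MP))   # sound speed of the 0.29 keV ISM
print(f"Mach ~ {mach:.1f}, v ~ {mach * c_s / 1e5:.0f} km/s")
# gives Mach ~ 8-8.5 and ~2300-2400 km/s, consistent with the estimate in the text.
```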
this , and the inferred adiabatic evolution of the shell described below , imply that the thermal conductivity in the shell and between shell and the ism is considerably lower than the canonical value of @xcite . a similar suppression of transport processes has been observed in the icm of galaxy clusters @xcite . thermal conduction between the shell and the relativistic plasma in the radio lobe would heat the shell , not cool it . the only other possibility is adiabatic expansion . assuming that the lobe and shell expand self - similarly , the volume of the shell increases as the lobe inflates so that the temperature of the gas must decrease . the adiabatic evolution of a shell of shocked gas created by the supersonic inflation of a radio lobe in a @xmath0-model atmosphere has been studied by @xcite ; see also @xcite . as the lobe and shell expand self - similarly , new material is constantly added to the shell , but the material currently in the shell cools adiabatically as the volume of the shell increases . @xcite finds that if the density of the atmosphere into which the lobe / shell is expanding falls faster than @xmath116 , a temperature and density gradient between the bow shock and the contact discontinuity will be formed with the coolest material lying just above the contact discontinuity . this material will have the largest x - ray emissivity , so that the spectral parameters we derive are representative of this region of the shell . this scenario is therefore qualitatively consistent with our inferred temperature gradient between the shock temperature ( @xmath117 ) and the temperature of the shell ( @xmath118 ) . using the formalism of @xcite and assuming @xmath119 ( see section 3.1 ) , their model predicts a factor of @xmath51.4 temperature difference between the material along the contact discontinuity and that just behind the bow shock . a larger temperature gradient would be created if the value of @xmath0 were larger , but the lobe must have progressed through an atmosphere with a steeper density gradient ( @xmath120 with @xmath1211.7 - 1.8 ) to account for the observed temperature gradient . given the formal uncertainties in the thermodynamic parameters and assumptions ( i.e. uniform density and temperature of the shell , using the average @xmath0-model to describe the ism when there are large azimuthal asymmetries in the surface brightness , the unknown temporal history of the jet powering the lobe , the complex environment within 2 kpc of the nucleus and possible complications due to cold or warm gas remaining from the merger , etc . ) we do not consider the discrepancy between the predicted and measured density gradient to be significant . if the material in the shell is to behave adiabatically , the thermal conduction must be effectively suppressed . the heating timescale , @xmath122 , of the material in the shell is given by @xcite @xmath123 where @xmath124 is the sound crossing time of the shell , @xmath125 is the thickness of the shell , @xmath126 is the electron mean free path , and @xmath127 is the suppression factor . for the shell around the sw radio lobe , the canonical heating timescale ( @xmath127=1 ) is @xmath52@xmath1610@xmath128 yrs , approximately an order of magnitude less than the dynamical timescale of the lobe . therefore , the thermal conductivity must be suppressed by a factor of 100 or more for the shell to remain adiabatic and support the inferred temperature gradient . 
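returning to the adiabatic - cooling argument above , the amount of expansion needed for adiabatic losses alone to take newly shocked gas from the @xmath56.5 kev post - shock temperature down to the @xmath52.9 kev measured in the shell follows from t v^(gamma-1) = const . the sketch below ignores the continuous addition of freshly shocked material , so it is only indicative of the required volume change .

```python
GAMMA = 5.0 / 3.0

kT_shock, kT_shell = 6.5, 2.9          # keV, values discussed in the text
# Adiabatic relation T * V^(gamma-1) = const  ->  V2/V1 = (T1/T2)^(1/(gamma-1)).
volume_factor = (kT_shock / kT_shell) ** (1.0 / (GAMMA - 1.0))
print(f"required specific-volume increase ~ {volume_factor:.1f}x")
# ~3.4x, i.e. a modest expansion of the shocked gas as the lobe inflates.
```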
the swept up magnetic field , while not dynamically important in the shock for reasonable assumptions of the ambient field strength , could effectively suppress thermal conduction in the shell if shear flows are present in the shell ( e.g. if the shell is advancing more rapidly radially than laterally ) . we have made a consistency check on this analysis by comparing the mass of material in the shell with the mass of the ism in the region swept out and currently occupied by the lobe . the total mass of material in the shell is @[email protected]@xmath1610@xmath115 @xmath131 . we estimate the mass of the ism displaced by the expansion of the lobe by integrating the @xmath0-model in a piecewise fashion ( i.e. @xmath132= const for @xmath133 and @xmath134 for @xmath135 ) over the conical region currently occupied by the lobe and find @xmath1362@xmath1610@xmath115 @xmath131 . to order of magnitude , the mass in the hot shell is consistent with material swept up from the ism . one parameter commonly used to distinguish shock - heated , supersonically compressed gas from adiabatically , subsonically compressed gas is the specific entropy , @xmath137 . for shock - heated gas , the specific entropy will increase whereas there will be no change for adiabatically compressed gas @xcite . using the values from table [ presstab ] , the ratio of specific entropies of the shell and the ism at the edge of the shell is approximately unity . as described above , however , the current thermodynamic state of the shell depends on the past history in a complex way . it is probably more relevant to compare the current state of the shell with the density ( and temperature ) of the ism at a smaller distance from the nucleus . this would imply that the specific entropy of the shell is indeed larger than that of the gas and provides additional support for shock heating of the shell . as an alternative to the shock - heating of the hot phase of the ism , we consider the possibility that much cooler gas has been shock - heated by the supersonic expansion of the lobe . considerable amounts of cold gas ( i.e. neutral or molecular , @xmath138k ) are present in cen a , likely because of the merger with the spiral galaxy , although this gas is distributed unevenly . the hi observations of @xcite and @xcite detected @xmath58@xmath1610@xmath114 @xmath131 of neutral hydrogen aligned along the dust lane ( i.e. perpendicular to the jet ) . this mass estimate is actually a lower limit because the mass in the central 2.5 kpc is uncertain due to absorption . there is enough cold gas present in cen a to account for the mass in the shell if only a small fraction ( @xmath51% ) of it was swept up and heated by the lobe . the total mass of warmer ( 10@xmath139k ) gas in cen a is more uncertain . uv observations of more massive early - type galaxies have placed a lower limit on the mass of ionized gas of @xmath510@xmath115 @xmath131 @xcite . this estimate relies on a highly uncertain filling fraction and is only an order of magnitude estimate at best . unless this is a considerable underestimate , however , this mass of ionized material is probably not sufficient to account for the material in the shell in cen a. the spiral galaxy may have had a considerable warm ism component if it were similar to the milky way , so that the amount of warm gas in cen a may be much larger than is typical for massive , early - type galaxies . 
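the consistency check made above ( mass in the shell versus ism originally occupying the lobe volume ) amounts to integrating the @xmath0-model density over the region now filled by the lobe . the sketch below does this for a cone ; the half - opening angle , radial extent , central density exponent , and mean mass per electron are all illustrative assumptions , not the exact values used by the authors , so the printed mass is only an order - of - magnitude check of the procedure .

```python
import numpy as np
from scipy.integrate import quad

KPC, MP, MSUN = 3.086e21, 1.673e-24, 1.989e33
MU_E = 1.18                 # assumed mean mass per electron in units of m_p

def n_e(r_cm, n0=3.7e-2, rc=0.5 * KPC, beta=0.40):
    """Beta-model electron density profile (central density exponent assumed)."""
    return n0 * (1.0 + (r_cm / rc) ** 2) ** (-1.5 * beta)

# Assumed lobe geometry: a cone of half-opening angle ~25 deg extending 2.5-6.5 kpc
# from the nucleus (purely illustrative numbers).
half_angle = np.radians(25.0)
solid_angle = 2.0 * np.pi * (1.0 - np.cos(half_angle))

mass_integrand = lambda r: MU_E * MP * n_e(r) * solid_angle * r**2
m_swept, _ = quad(mass_integrand, 2.5 * KPC, 6.5 * KPC)
print(f"swept-up ISM mass ~ {m_swept / MSUN:.1e} Msun")
```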
in either case , the final temperature of the gas in the shell depends only on the expansion velocity of the lobe , so that whether the shocked material originates in the coronal gas or in cooler gas , the implied expansion velocity of the lobe remains unchanged . the conventional paradigm suggests that the jets of fr i galaxies like cen a are expanding subsonically into the surrounding medium , whereas the lobes of the more powerful fr ii galaxies are expanding supersonically @xcite . the existence of the shock - heated shell around the southwest radio lobe implies supersonic expansion , which runs counter to this prevailing view . this suggests either that cen a is different from most of the other well - studied fr i galaxies , or that its proximity allows us to observe details that are not readily observable in more distant objects , and that the standard paradigm is therefore incorrect . the radio power of cen a ( @xmath140=1.85@xmath1610@xmath141 w hz@xmath4 @xcite ) is comparable to those of well - studied fr i radio galaxies from the 3c sample . cen a is in some ways , however , not a typical fr i object even though it is often referred to as the prototypical object of the class . the jet / lobe morphology of cen a is clearly different from the ` tailed twin jet ' morphology @xcite commonly associated with objects like 3c 31 @xcite and 3c 449 @xcite . the multiscale structure of cen a also makes it distinct from the more common bridged twin jet fr i sources such as 3c 296 @xcite . on the other hand , if cen a were at a distance comparable to these other fr i galaxies ( @xmath5100 mpc ) , the x - ray cap around the sw radio lobe would be virtually undetectable . the larger scale radio components of cen a ( e.g. the northern middle lobe ) are likely to be evolving subsonically @xcite . it is interesting that the x - ray morphology of the ne radio lobe is so different from that of the sw lobe . based on a preliminary examination of our ao-3 acis - s observation of cen a , there is some evidence for a partial shell around the ne lobe , but with an x - ray luminosity more than two orders of magnitude less than that of the shell around the sw lobe . these x - ray morphological differences are not too surprising given the very different radio morphologies of the ne ( edge brightened on one side ) and southwest ( center filled with filamentary structure ) lobes . this argues that the nature of the flows on kpc scales to the ne and southwest is fundamentally different , and in particular that the environment must play a key role in the appearance of the jets and lobes and in the overall dynamics of the flow . there are larger scale asymmetries in the radio emission from cen a as well @xcite , so that the past history of the radio source may play a key role in its current appearance . prior to the launch of _ chandra _ , x - ray shells associated with the lobes of radio galaxies had been detected in only two objects , perseus a @xcite and cygnus a @xcite . since the launch of _ chandra _ , many additional examples of the interactions between the icm and the jets of radio galaxies have been detected @xcite . in all of these cases the shells are cooler than the surrounding ism , in contrast to what we have observed with cen a. in these other objects , radiative cooling may be important , but it is more likely that the cool shells are due to entrainment of lower temperature gas from the central regions of the galaxy or cluster by the inflation of the lobe .
it is clear that the nuclear activity can have an important effect on the galactic or cluster environment , and it is quite likely that the reverse is true as well . this observation of the southwest radio lobe of cen a provides the first clear demonstration of the complex relationship between the hot ism of early galaxies and nuclear outflows from supermassive black holes at their centers . other recent _ chandra _ observations have hinted at such a relationship @xcite , but we believe that this is the first definitive example where the nuclear outflow is providing sufficient energy to heat the ism . the total thermal energy of the gas within 15 kpc of the nucleus is @xmath51.8@xmath1610@xmath142 ergs if one assumes the gas is isothermal with a temperature of 0.3 kev . this is actually an underestimate because the temperature rises somewhat in the central 2 kpc , but is accurate enough for our order of magnitude comparison . the total thermal energy of the gas in the shell around the southwest radio lobe is @xmath54.2@xmath1610@xmath143 ergs , a significant fraction of the energy in the ism . in its current configuration , the energy of the shell is a few tens of percent of the total energy in the hot ism , and additional energy will be deposited in the ism as the lobe continues expanding . the ultimate fate of the gas ( and energy ) in the shell is not clear . it is possible that plasma in the shell will eventually come back into thermal equilibrium with the ism and settle back into hydrostatic equilibrium with the gravitating dark - matter potential , effectively heating the ism . on the other hand , the highly supersonic expansion of the lobe could drive the heated plasma completely out of the galaxy into the cen a group , or even out of the group into the igm . the temperature of the gas in the shell is also much too large to be gravitationally bound to a galaxy as small as cen a ( @xmath59=-20.4 @xcite , @xmath113=2.2@xmath1610@xmath144 from figure [ gravmass ] ) . the ratio of the kinetic energy of the shell to the thermal energy in the shell is @xmath56.5 . currently the kinetic energy of expansion is a factor of a few larger than the thermal energy of the shell , but it is likely that the shell is decelerating . it is clear that the expansion of the southwest radio lobe _ can _ provide enough energy to reheat the hot ism , although it is unclear if it actually does . the cooling time for the hot ism in the central regions of elliptical galaxies is @xmath510@xmath145 yrs , much less than a hubble time . prior to the _ chandra_/_xmm - newton _ era , it was therefore expected that significant amounts of cool gas would be found in their central regions . recent _ xmm - newton _ rgs observations of abell 1835 @xcite , m87 @xcite and ngc 4636 @xcite have not detected the spectral signatures of this cool gas and have cast serious doubts on the existence of large amounts of cool gas expected on the basis of observations with an earlier generation of x - ray observatories ( see @xcite for a detailed discussion ) . a similar situation exists for cooling flows in clusters of galaxies in that the large amount of cool gas ( @xmath146kev ) inferred from rosat , _ einstein _ , and asca observations is not being found with _ chandra _ or _ xmm - newton _ @xcite grating observations . in the absence of this cool gas , the ism must be occasionally reheated because of the relatively short radiative lifetime . 
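the statement above that the shocked gas is too hot to remain bound to a galaxy of the quoted mass can be checked by comparing the typical thermal speed of the 2.88 kev shell gas with the escape velocity ; the 15 kpc radius , the mass exponent , and the mean molecular weight used below are assumptions .

```python
import numpy as np

G, MP, KEV = 6.674e-8, 1.673e-24, 1.602e-9
KPC, MSUN = 3.086e21, 1.989e33

M_gal = 2.2e11 * MSUN            # gravitating mass quoted in the text (exponent assumed)
r = 15.0 * KPC                   # radius at which to evaluate escape (assumed)
kT_shell, mu = 2.88, 0.6         # keV, shell temperature from the spectral fits

v_esc = np.sqrt(2.0 * G * M_gal / r)
v_th = np.sqrt(3.0 * kT_shell * KEV / (mu * MP))     # rms particle speed of the hot gas

print(f"v_esc ~ {v_esc/1e5:.0f} km/s, v_thermal ~ {v_th/1e5:.0f} km/s")
# the thermal speed exceeds the escape velocity by a factor of a few, so the
# shocked gas is not bound to the galaxy, as argued in the text.
```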
it has been suggested that there is a cyclical relationship between the hot ism of elliptical galaxies ( and clusters of galaxies ) and nuclear activity @xcite . in these models , supermassive black holes ( smbh ) at the center of elliptical galaxies undergo intermittent outbursts as the hot ism radiatively cools and accretes . during the outburst , energy is transferred from the active nucleus to the hot ism via outflows that are observed as jets and radio lobes in galaxies . after the epoch of nuclear activity ends , the host galaxies undergo a long period of quiescence where the ism slowly cools . once a significant amount of energy is lost from the inner regions of the galaxy , material flows onto the smbh and initiates another epoch of nuclear activity . the implication is that all cooling - flow elliptical galaxies are radio galaxies , or that they have been radio galaxies in their past . this is still an open question . it has been shown that the host galaxies of fri and frii radio galaxies are drawn from a random population of otherwise normal elliptical galaxies @xcite . the probability that an elliptical galaxy is also a radio galaxy is a steep function of the luminosity of the host @xcite . the more luminous , and therefore more massive , galaxies also tend to have the shortest cooling timescales for the ism at their centers , typically 10@xmath114 yrs @xcite . we have presented the results from one _ xmm - newton _ and two _ chandra _ observations of the hot ism and radio lobes of the nearby radio galaxy centaurus a. we find that :

1 . the temperature of the ism beyond 2 kpc from the nucleus is approximately 0.29 kev , with a small decrease in temperature as a function of distance from the nucleus . the average radial surface brightness profile is well described by a @xmath0-model with an index of [email protected] . there is , however , some azimuthal structure in both the temperature and surface brightness profile , most likely related to a recent merger with a small spiral galaxy .

2 . x - ray emission coincident with the southwest radio lobe is detected . a sharp x - ray enhancement along the edge of the lobe is also observed . based on arguments about the energetics , the spectrum , the electron energy distributions , and the observed morphology , we reject a non - thermal ( i.e. synchrotron or inverse - compton scattering ) origin for the emission . we model this emission as a shell or cap of hot plasma that surrounds the radio lobe . the gas parameters of this shell were estimated using a simple model for the observed width of the enhancement along the edge of the lobe and an estimate of the angle made by the jet / counterjet with respect to the line of sight .

3 . based on spectral analysis , the temperature and density of the shell are much larger ( factors of 10 and 11.8 , respectively ) than those of the ambient medium . the shell is enormously overpressurized , and this requires that the lobe and the shell are expanding supersonically into the ambient ism . this conclusion is supported by the small linear extent of the x - ray enhancement along the edge of the lobe . we estimate a mach number of about 8.5 , or a velocity of 2400 km / s .

4 . the density ratio between the material in the shell and that of the ambient ism is too large to be explained in terms of the canonical rankine - hugoniot shock conditions .
we suggest that the appearance of the additional compression is in fact due to the supersonic expansion of the lobe into a medium with a steep pressure and density gradient . recent hydrodynamical simulations of the expansion of the lobes of fr ii galaxies into the icm support this conclusion . hydrodynamic simulation is required to quantitatively understand this phenomenon and will be the subject of a future paper . the x - ray shell is also enormously overpressurized relative to the equipartition pressure of the radio lobe . an additional component , perhaps protons or lower energy relativistic electrons , must be providing pressure support in the lobe . the amount of energy transferred to the ism by expansion / inflation of the radio lobe is a significant fraction of its total thermal energy , demonstrating the complex , and perhaps cyclical , link between the ism on the one hand and nuclear activity and outflows on the other . this last point could provide a partial answer to one of the long - standing puzzles in x - ray astronomy , namely why is there such a large variance in the x - ray luminosity of early - type galaxies of a given optical luminosity @xcite . the environment and depth of the dark matter potential play a key role in this , but the cyclical interaction between the ism and nuclear activity can contribute to this variance as well . that is , the x - ray luminosity of an early galaxy will depend where in this cooling / reheating cycle we happen to be observing it . _ chandra _ and _ xmm - newton _ have observed ( and will continue to observe ) a large number of early - type galaxies with a wide range of nuclear activities and radio powers . perhaps once a large enough sample has been observed and analyzed , a trend can be developed to quantify this relationship . cen a is , in fact , considerably underluminous for its optical luminosity which would support the idea that it is at the end of its cooling cycle . we are just catching cen a as it starts to reheat its ism . we would like to thank mark birkinshaw , torsten enlin , sebastian heinz , and francesco minitti for many stimulating and helpful discussions . we would also like to thank dan harris and the anonymous referee for their detailed comments about this paper . this work was supported by nasa contracts nas8 - 38248 , nas8 - 39073 , the chandra x - ray center , and the smithsonian institution . 
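two of the numbers used in this argument are easy to reproduce . the expansion velocity is the adiabatic sound speed of the 0.29 kev ambient ism multiplied by the estimated mach number , and the rankine - hugoniot jump conditions show why a single adiabatic shock can not supply the quoted density contrast . a minimal sketch , assuming an adiabatic index of 5/3 and a mean molecular weight of about 0.62 :

```python
import math

def sound_speed_km_s(kT_keV, gamma=5.0 / 3.0, mu=0.62):
    """Adiabatic sound speed c_s = sqrt(gamma * kT / (mu * m_p)), in km/s."""
    K_B = 1.381e-16      # erg / K
    KEV_IN_K = 1.161e7   # kelvin per keV
    M_P = 1.673e-24      # g
    kT_erg = kT_keV * KEV_IN_K * K_B
    return math.sqrt(gamma * kT_erg / (mu * M_P)) / 1.0e5

def rh_density_jump(mach, gamma=5.0 / 3.0):
    """Rankine-Hugoniot density jump rho2/rho1 across a normal adiabatic shock."""
    m2 = mach * mach
    return (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)

c_s = sound_speed_km_s(0.29)     # ambient ISM at kT = 0.29 keV
mach = 8.5                       # Mach number estimated in the text
print(f"c_s(ISM)         ~ {c_s:.0f} km/s")
print(f"expansion speed  ~ {mach * c_s:.0f} km/s")          # roughly 2300-2400 km/s
print(f"RH density jump  ~ {rh_density_jump(mach):.2f}")    # -> 3.84
print(f"strong-shock cap = {rh_density_jump(1e6):.2f}")     # -> 4 for gamma = 5/3
```

the jump saturates at ( gamma + 1 ) / ( gamma - 1 ) = 4 for a monatomic gas , well below the factor of ~11.8 contrast quoted above , which is why the additional compression has to come from the steep ambient pressure and density gradient rather than from the shock alone .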
observation log ( instrument , obsid , date , exposure ( ks ) , pointing ra , dec , y offset , z offset ) :
acis - i , 00316 , 5 dec 99 , 35.9 , 13:25:27.61 , -43:01:08.9 , @xmath147 , @xmath147
acis - i , 00962 , 17 may 00 , 36.5 , 13:25:27.61 , -43:01:08.9 , @xmath148 , @xmath149
xmm - newton , 93650201 , 2 feb 01 , 23.1 , 13:25:26.3 , -43:01:06
spectral fits , first block of five columns ( surviving header fragment : - 00962 ) :
rate ( 10@xmath46 cts s@xmath4 ) : [email protected] , [email protected] , [email protected] , [email protected] , [email protected]
index : 2.04@xmath155 , 1.80@xmath156 , 1.81@xmath157 , 1.85@xmath158 , 1.83@xmath155
@xmath159 : 1.73 , 1.11 , 1.02 , 1.50 , 0.97
@xmath160 : 2.83@xmath161 , 2.62@xmath162 , 3.70@xmath163 , 3.4@xmath164 , 3.4@xmath165
@xmath159 : 2.92 , 0.49 , 1.24 , 2.31 , 1.32
@xmath160 : 4.19@xmath166 , 2.90@xmath167 , 4.5@xmath168 , 5.1@xmath169 , 4.2@xmath170
@xmath118 : 0.33@xmath171 , 0.47@xmath172 , 0.21@xmath173 , 0.76@xmath174 , 0.71@xmath172
@xmath68 : 0.44 , 0.06 , 0.20 , 0.18 , 0.07
@xmath159 : 1.12 , 0.50 , 1.03 , 1.46 , 1.30
spectral fits , second block of three columns :
rate ( 10@xmath46 cts s@xmath4 ) : [email protected] , [email protected] , [email protected]
index : 2.00@xmath175 , 2.24@xmath176 , 2.08@xmath177
@xmath159 : 1.60 , 0.98 , 1.29
@xmath160 : 2.84@xmath178 , 2.03@xmath179 , 2.25@xmath180
@xmath159 : 1.82 , 1.47 , 1.59
@xmath160 : 3.43@xmath181 , 2.26@xmath182 , 3.47@xmath183
@xmath118 : 0.34@xmath184 , 0.30@xmath185 , 0.38@xmath186
@xmath68 : 0.29 , 0.35 , 0.375
@xmath159 : 1.54 , 1.15 , 1.08
derived gas parameters ( pressure , temperature ( kev ) , density ( @xmath9 ) ) :
ism ( region 4 ) : 1.0@xmath1610@xmath187 , 0.29 , 1.7@xmath1610@xmath47
shell ( region 2 ) : 2.1@xmath1610@xmath188 , 2.88 , 2.0@xmath1610@xmath46
lobe ( equipartition ) : 1.4@xmath1610@xmath189 ( pressure only )
we present results from two _ chandra_/acis - i observations and one _ xmm - newton _ observation of x - ray emission from the ism and the inner radio lobes of the nearby radio galaxy centaurus a. the ism has an average radial surface brightness profile that is well described by a @xmath0-model profile with index @[email protected] and a temperature of @xmath20.29 kev beyond 2 kpc from the nucleus . we find that diffuse x - ray emission is coincident with the outer half of the southwest radio lobe , and a bright x - ray enhancement is detected along the edge of the lobe . on the basis of energetic and lifetime arguments , we reject a nonthermal explanation for this emission . we model this emission as a thin , hot shell or cap of x - ray emitting plasma surrounding the radio lobe that was created by the supersonic inflation of the lobe . this plasma shell is both hotter than ( @xmath32.9 kev ) and greatly overpressurized relative to the ambient ism indicating supersonic expansion . we estimate that the lobe is expanding into the ism at approximately mach 8.5 or 2400 km s@xmath4 . we are not directly observing the bow shock , but rather the cooler , denser material that is accumulating ahead of the contact discontinuity . the thermal energy in the shell is a significant fraction of the thermal energy of the hot ism , demonstrating the possibility that the hot ism of early galaxies can be re - energized by outflows from nuclear activity . interestingly , no similarly bright x - ray emission is detected in or along the edge of the ne lobe , implying that there are differences in the dynamics and evolution of the kpc - scale radio components .
a 43-year - old female presented with recurrent palpitations , documented narrow qrs tachycardia and a normal electrocardiogram during sinus rhythm . during the electrophysiology study , single atrial extrastimuli were introduced from the proximal coronary sinus at a drive cycle length of 600 ms , starting at a coupling interval of 360 ms and decrementing by 10 ms . at longer coupling intervals , atrioventricular ( av ) conduction was seen with a narrow qrs . at a coupling interval of 330 ms ( figure 1 , panel a ) , conduction was seen with right bundle branch block . at a coupling interval of 320 ms ( panel b ) , av block occurred distal to the his . as the extrastimulus coupling interval was reduced to 260 ms , av conduction resumed ( panel c ) and at 250 ms , narrow complex tachycardia was induced ( panel d ) . what is the explanation for the responses seen ? the phenomenon where av block is seen during a window of coupling intervals , with intact conduction at both shorter and longer coupling intervals , has been designated a conduction gap and was initially described by moe et al . this constitutes a discontinuity in the av conduction curve and , in the classical explanation , is due to the functional refractory period of the proximal conducting tissue ( the av node in this case ) being shorter than the effective refractory period ( erp ) of the distal conducting tissue ( the his - purkinje system in this case ) . thus , in panel b , since the h1h2 interval is 342 ms and the erp of the his - purkinje system is longer than this , the impulse blocks in the his - purkinje system . in panel c , because the prolongation of a2h2 exceeds the decrease in a1a2 , the h1h2 is longer ( 375 ms ) and av conduction occurs . what is interesting in this case is the co - existence of dual av nodal physiology with an infranodal gap phenomenon . in the setting of dual av nodal physiology , a different mechanism for the gap phenomenon , due to collision of the two wavefronts , has also been described . a gap phenomenon in fast pathway conduction due to delay in the proximal av nodal region has also been described as another interaction between dual av nodal physiology and the conduction gap . from the h1h2 curve , it is seen that the extrastimuli that result in av block correspond to the nadir of the h1h2 curve , specifically intervals shorter than 350 ms . this is consistent with the classical explanation for the gap phenomenon in our patient , with the ah jump providing the increase in proximal conduction time needed to allow resumption of distal conduction . figure 2 . av conduction curves . ah interval of the extrastimulus ( a2h2 , denoted as dark circles ) and h1h2 intervals ( squares ) are plotted against the coupling intervals ( a1a2 ) . h1h2 intervals associated with conduction block after the his are denoted by an empty square while the other h1h2 intervals are filled .
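the classical mechanism just described reduces to a simple interval calculation : the his - purkinje system blocks whenever its input interval h1h2 = a1a2 + a2h2 - a1h1 falls below its erp , and a sufficiently short coupling interval paradoxically restores conduction because the ah jump onto the slow av nodal pathway lengthens h1h2 again . the toy sketch below only illustrates this logic ; the erp value and the shape of the av nodal conduction curve are invented for illustration and are not the measurements from this patient .

```python
ERP_HPS_MS = 340.0   # assumed His-Purkinje effective refractory period (illustrative)
A1H1_MS = 80.0       # AH interval of the drive beat (illustrative)

def a2h2_ms(a1a2_ms):
    """Toy AV-nodal conduction curve: gentle decremental delay on the fast
    pathway, then an abrupt 'AH jump' onto the slow pathway once the fast
    pathway is refractory (all numbers are illustrative)."""
    if a1a2_ms > 265.0:                       # fast pathway
        return A1H1_MS + 0.3 * (360.0 - a1a2_ms)
    return 230.0 + 1.0 * (265.0 - a1a2_ms)    # slow pathway (after the AH jump)

for a1a2 in (360, 350, 340, 330, 320, 300, 280, 260, 250):
    h1h2 = a1a2 + a2h2_ms(a1a2) - A1H1_MS     # H1H2 = A1A2 + A2H2 - A1H1
    status = "conducts" if h1h2 >= ERP_HPS_MS else "blocks below the His"
    print(f"A1A2 = {a1a2:3d} ms   H1H2 = {h1h2:5.1f} ms   {status}")
```

running the sketch prints a conduction - block - conduction sequence as the coupling interval shortens , i.e. , a gap , with the resumption of conduction driven entirely by the jump in a2h2 .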
the gap phenomenon is an interesting electrophysiologic finding arising from differences in refractory periods at two or more levels of the atrioventricular ( av ) conduction system . we present a patient with dual av nodal physiology in whom the ah jump mediates the gap phenomenon . we also briefly discuss other mechanisms of the gap phenomenon that have been described in this setting .
recently , the lhc experiments atlas and cms , have reported an excess of diphoton events at an invariant mass around 750 gev from data at lhc run 2 with @xmath6 collisions at @xmath7 tev @xcite . the absence of data on other channels such as @xmath8 and @xmath9 indicate that the interpretation of these events can not be accommodated within the standard model ( sm ) . it has been suggested that an interpretation requires new physics beyond the known sm context . in fact this effect hints to the existence of singlets and vector like states in the low energy spectrum of the theory . the observed resonance could be explained by a sm scalar or pseudoscalar singlet state @xmath10 @xcite with mass @xmath11 gev . this state could be generated by the gluon - gluon fusion mechanism while subsequently it decays to two photons . schematically , this is described as follows gg x[gxgamma ] in a renormalisable theory , the production and decay in this process can be realised through loops involving appropriate vector - like states . remarkably , a common phenomenon in string theory model building is the occurrence of new singlet fields and vector - like exotic states in the massless spectrum of the effective low energy models which can mediate such processes @xcite excess . for a comprehensive list of models and interpretations see @xcite . ] . f - theory models in particular offer a wide range of possibilities@xcite . unlike other string constructions , they admit exceptional gauge symmetries such as @xmath12 and its subgroups , which incorporate naturally the concept of gauge coupling unification . besides , when most of the successful old gut groups are realised in an f - theory background they naturally predict vector - like pairs of quarks and leptons in the light spectrum . in fact , when the gut symmetry is @xmath0 or higher the appearance of such states is unavoidable @xcite . inspired by the above facts , in this note we construct a flipped @xmath0 model embedded in an f - theory motivated @xmath1 unified gauge group @xcite . we show that this construction includes singlets as well as vector - like states which come with the quantum numbers of sm particles capable of mediating processes such as the diphoton production . in addition , we find that other vector - like states with exotic quantum numbers emerge from the adjoint decomposition . the study of this alternative embedding is well motivated in f - theory constructions where the gut symmetry can be as large as @xmath13 . indeed , in the restricted case of minimal @xmath14 , there is a unique assignment of the hypercharge generator in this group . however , there are many possibilities with a larger gut symmetry and includes additional @xmath15 factors . in the case of @xmath16 , with the standard hypercharge assignment the extra @xmath17 factor is treated as a spectator , but there is no compelling reason for this . similarly , in the @xmath18 case , there are two additional @xmath15 factors that could contribute to the hypercharge . thus , different embeddings lead to distinct phenomenological predictions . in this work we wish to consider an alternative embedding of the hypecharge generator and try to assess the model in terms of its low energy predictions . in order to obtain chiral matter we will assume the existence of a suitable four - form flux . 
of course , the flux depends on the choice of the four complex dimensional calabi - yau ( cy ) manifold and the geometric properties of the divisor supporting the specific singularity ( @xmath1 in the present case ) . for our present purposes however , we will work in the spectral cover approach where the properties of our local construction can be adequately described in the infinitesimal vicinity of the gut divisor , and therefore , we will rely on the assumption that such a manifold exists . the layout of the present paper is as follows . in the next section we present an @xmath0 flipped model embedded in the @xmath1 gauge symmetry . we discuss the basic properties of its spectrum and the predicted exotics . in section 3 we derive the superpotential of the effective model emerging under the action of a @xmath19 monodromy . next , in section 4 we focus on the existence of exotic vector - like pairs and singlet field which are suitable to contribute to the diphoton emission in @xmath6 collisions . we present our conclusions in section 5 . in f - theory the gauge symmetry of the effective theory is linked to the geometric singularity of the compactification manifold . in the elliptic fibration these singularities are described by the sequence of the subgroups of the exceptional group @xmath12 . in the present f - theory construction we will analyse an @xmath20 gauge symmetry which admits a natural embedding in the exceptional group @xmath1 . therefore , with respect to the @xmath12 we have the following breaking pattern : @xmath21 where , in accordance to the standard terminology , the @xmath22 factor is considered as the group ` perpendicular ' to @xmath1 gut divisor . we will assume a semilocal approach where the @xmath1 representations transform non - trivially under @xmath22 . the matter content arises from the decomposition of the @xmath12 adjoint ( @xmath23 ) @xmath24 in the spectral cover approach the @xmath1 representations are distinguised by the ` weights ' @xmath25 of the @xmath22 cartan subalgebra subject to @xmath26 , while the @xmath22 adjoint ` decomposes ' into singlets @xmath27 . we introduce the notation @xmath28 while the @xmath1 adjoint @xmath29 is an @xmath22 singlet and therefore carries no @xmath30 index . since we are interested in a flipped @xmath0 model , in the subsequent analysis we choose to accommodate the ordinary fermionic states and higgs in the @xmath31 . we further assume that the symmetry breaks though a non - trivial abelian flux which , at the same time , determines the chirality of the complete @xmath1 representations @xmath32 . we start with the derivation of the flipped @xmath0 model see for example @xcite and references therein . ] in an f - theory inspired context . we will assume that the bulk gauge group is @xmath1 , which breaks to @xmath0 by turning on a @xmath33 gauge field configuration , where the particular @xmath33 is embedded in @xmath1 . under the decomposition @xmath34 , the relevant @xmath1 representations decompose as follows _ 6 & & so(10)u(1)_x + 78 & & * 45*_0+*1*_0+*16*_-3+_3[78 ] + 27&&*16*_1+*10*_-2+*1*_4[27 ] + & & _ -1+_2+*1*_-4[27n ] in principle , there are @xmath0 zero modes in the adjoint @xmath35 and @xmath36 as well as in @xmath37 , which might accommodate chiral matter provided that @xmath38 . in the next step , we break the @xmath0 symmetry down to @xmath39 by turning on a flux along @xmath40 , so that at this stage the symmetry breaking chain is _ 6 so(10)u(1)_x u(1)_x. 
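for bookkeeping , the so(10 ) x u(1)_x branchings used above are the standard ones , 78 -> 45_0 + 1_0 + 16_{-3} + 16bar_{+3} and 27 -> 16_{+1} + 10_{-2} + 1_{+4} , with conjugate charges for 27bar ; this is a group - theory fact consistent with the pattern shown above . a trivial dimension check :

```python
# dimension bookkeeping for the E6 -> SO(10) x U(1)_X branchings quoted above
branchings = {
    # parent irrep: list of (SO(10) dimension, U(1)_X charge) pieces
    "78 (E6 adjoint)": [(45, 0), (1, 0), (16, -3), (16, +3)],   # the 16 with +3 is the 16-bar
    "27 (E6 fund.)":   [(16, +1), (10, -2), (1, +4)],
    "27-bar":          [(16, -1), (10, +2), (1, -4)],           # conjugate charges
}

for parent, pieces in branchings.items():
    total = sum(dim for dim, _ in pieces)
    terms = " + ".join(f"{dim}_{charge:+d}" for dim, charge in pieces)
    print(f"{parent:16s} -> {terms}   (dims sum to {total})")
```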
if we denote with @xmath41 the corresponding abelian charges , for the flipped @xmath0 case we define the following combination charge assignment , we adopt the convention @xmath42 , @xmath43 and @xmath44 . ] z=-14 ( x+5 x)[zdef ] under the above symmetry breaking , the @xmath0 representations decompose to various @xmath14 multiplets . with respect to @xmath45 , these have the following ` charge ' assignments : 27 & & \{10_*-1*+_*-2*+1_*0*}+\{5_*2*+|5_*3*}+*1*_*-5*[27dec ] + 78 & & \{24_*0*+10_*-1*+_*1*+1_*0 * } + & & + \{10_*4*+|5_*3*+1_*5 * } + & & + \{_*-4*+ 5_*-3*+1_*-5 * } + & & + * 1*_*0*[78dec ] at the final stage , we break @xmath46 , where the hypercharge is defined to be the linear combination of the three abelian factors @xmath47 given by : y = -15 ( z+y6 ) [ zydef ] we will require that the sm fermions and higgs doublets reside on matter curves @xmath48 formed at the intersections of the gut surface with other 7-branes . employing the above hypercharge definition , the embedding of the sm states in the 27-representation of @xmath18 is as follows : & = & \ { lll*16*_1 + + * 10*_-2 + * 1*_4e^c . , the symbol @xmath49 stands for a @xmath14 singlet field while for all other sm states , we use the standard notation . as can be seen , compared to the standard @xmath0 embedding , here we obtain a ` flipped ' picture of the @xmath50-plet and singlet representations , i.e. , @xmath51 and @xmath52 . more precisely , compared to flipped @xmath14 , this @xmath53 definition flips @xmath54 with @xmath55 and @xmath56 with @xmath57 . the fermion component @xmath58 in this case is part of the @xmath59-plet ( @xmath60 ) , and the higgs @xmath61 is part of @xmath62 . furthermore , the @xmath14 singlet @xmath49 is electrically neutral and the right - handed electron is found in the @xmath0 singlet @xmath63 . in addition to the sm fields residing in the @xmath48 matter curves , there is also bulk matter emerging from the docomposition of the @xmath29 representation . namely : & = & \ { lll * 45*_0g_0+t_0+s_0+q+ + + \{(q+d^c+n^c)+c.c.}+_0 + * 16*_-3+|e^c + _ + 3 + e^c + * 1*_0 .. [ 78com ] we have used the symbols @xmath64 for the two neutral singlets , while for the remaining content arising from the decomposition of @xmath65 , we have introduced the notation g_0+t_0+s_0&= & ( 8,1)_0+(1,3)_0+(1,1)_0 + q+|q&= & ( 3,2)_16+(|3,2)_-16 + q+|q&= & ( 3,2)_-56+(|3,2)_56 . in the above , @xmath66 has the standard quark doublet quantum numbers and @xmath67 is its complex conjugate , while @xmath68 have exotic charges . in the standard ( non - flipped ) @xmath14 theory , the @xmath68 exotics emerge from the decomposition of the @xmath69-adjoint . hence , we observe that the flipped case interchanges @xmath68 exotics in the adjoint of the standard @xmath14 , with the ordinary @xmath70 within the @xmath71 of @xmath0 . if some of these bulk states remain in the light spectrum , they could contribute to new physics phenomena with possible signatures in future experiments . we will comment on these issues in the next section . having determined the particle spectrum of the effective field theory model , we proceed now to the superpotential . we find it convenient to perform the analysis using the spectral cover approach . given that there are two non - trivial @xmath1 representations available , namely @xmath37 and @xmath29 , the only possible tree level terms are @xmath72 , @xmath73 and @xmath74 where @xmath75 is a singlet embedded in the @xmath12 adjoint . 
we have explained in the introductory section that in the context of the @xmath76 spectral cover the fundamental representation is characterised by the corresponding weights @xmath30 and the yukawa couplings should respect the requirement @xmath26 . furthermore , as is well known , a monodromy action is required to ensure a top yukawa coupling at the tree - level . we choose a @xmath19 monodromy , which identifies the two weights @xmath77 . we accommodate the fermion families in @xmath78 and the higgs in @xmath79 , so that a diagonal yukawa term @xmath80 is allowed . after the implementation of the @xmath19 monodromy , the condition for the weights @xmath30 becomes @xmath81 . it is also worth observing that the spectral cover symmetry reduces essentially to a @xmath82 symmetry in the effective field theory model where the @xmath82 charges of the two matter curves are @xmath83 and @xmath84 . therefore , the symmetry of the effective model is in fact @xmath85 to define the homological properties of the matter curves , we recall that in the case of @xmath22 the spectral cover is described by a cubic polynomial whose roots the @xmath30 . for the case of @xmath19 monodromy we assume the factorisation . ] b_0s^3+b_2s+b_3 = ( a_1+a_2s+a_3s^2 ) ( a_4+a_5 s),[sce ] where the @xmath86 s homologies are @xmath87=\eta - kc_1 $ ] and @xmath88=c_1 $ ] . here @xmath89 , where @xmath90 is the first chern class of the gut `` surface '' @xmath91 and , @xmath92 that of the normal bundle . the second degree polynomial of the right part of the above equation means that two roots are not separable within the field of holomorphic functions , and as a result , a @xmath19 monodromy identifies the two weights @xmath93 in accordance with our assumptions stated above . moreover , equation ( [ sce ] ) implies the following relations @xmath94 between the coefficients b_0=a_3a_5 , b_1=a_2a_5+a_3a_4=0 , b_2=a_1a_5+a_2a_4 , b_3=a_1a_4 , [ bacoefs ] which can be used to determine the homologies of @xmath95 s . furhermore , the equation @xmath96 of the @xmath37 , implies that the two matter curves @xmath97 and @xmath79 are associated with the defining equations @xmath98 and @xmath99 respectively . from ( [ bacoefs ] ) , we infer that the homologies satisfy relations of the form @xmath87=[a_l]+[a_{8-l - k}]$ ] so that it can be readily found @xcite that the @xmath97 and @xmath79 homologies are @xmath100 and @xmath101 respectively where @xmath102 is left unspecified . then , assuming a @xmath15 flux piercing these matter curves , the multiplicities of @xmath103 are given by the restrictions @xmath104 and @xmath105 ( with @xmath106 denoting the abelian flux ) . from this , we deduce that the chiral states of the model are given by @xmath107 and therefore , the unknown homology @xmath102 does not play any rle in the determination of the chiral spectrum . hence , to obtain three chiral families we impose @xmath108 . .@xmath1 matter curves , their defining equations , the homology classes and the multiplicities in terms of the flux restrictions . [ cols="^,^,<,<,<",options="header " , ] as a result , they generate the superpotential terms ( |e^c)_-t_3(e^c)_t_1_31 [ t13 ] . we identify the @xmath109 gev resonance @xmath10 with the singlet @xmath110 which has the appopriate couplings to give rise to the diphoton diagram shown in figure [ ggxgg ] . note that a mixing term @xmath111 would allow an additional channel for the resonance to decay into diphotons through the last coupling in equation ( [ t13 ] ) . 
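as an aside on the factorised spectral - cover polynomial used earlier in this section , the quoted relations among the coefficients follow from simply expanding the product and matching powers of s ; the absence of an s^2 term on the left - hand side is what enforces b_1 = a_2 a_5 + a_3 a_4 = 0 . a quick symbolic check with sympy :

```python
import sympy as sp

s, a1, a2, a3, a4, a5 = sp.symbols("s a1 a2 a3 a4 a5")

# factorised spectral cover: (a1 + a2*s + a3*s^2) * (a4 + a5*s)
poly = sp.expand((a1 + a2 * s + a3 * s**2) * (a4 + a5 * s))
coeffs = sp.Poly(poly, s).all_coeffs()     # coefficients of s^3, s^2, s^1, s^0

b0, b1, b2, b3 = coeffs
print("b0 =", b0)   # a3*a5
print("b1 =", b1)   # a2*a5 + a3*a4  (set to zero because the s^2 term is absent)
print("b2 =", b2)   # a1*a5 + a2*a4
print("b3 =", b3)   # a1*a4
```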
after supersymmetry breaking in the presence of the scalar @xmath10 , the effective lagrangian contains the terms & = & _ d|ddx+_e|e^ce^cx+12m_x^2xx+a_dxd^*d+ a_exe^c*e^c+ . we assume here that the singlet field receives a soft mass @xmath112 of the order of the susy breaking scale , @xmath113 are yukawa couplings of order one , and @xmath114 are the trilinear scalar parameters . for a pseudoscalar interaction we should replace the yukawa coupling according to @xmath115 . next we provide an estimate of the contributions of the above exotics to the diphoton excess . we assume that the production mechanism of the scalar resonance is mainly from gluon fusion , mediated by loops of the colour triplets , while its decay is mediated by triplets and ( @xmath116)-pairs as shown in the figure . the cross section for the scalar mediated process is @xmath117 where @xmath118 are the total width and the center of mass energy ( @xmath119tev ) respectively , and @xmath120 is the parton integral @xcite @xmath121 where @xmath122 is the function representing the gluon distribution inside the proton . the integral is computed using mstw2008nnlo @xcite and its numerical value at @xmath123 tev is estimated @xcite to be @xmath124 . the partial widths @xmath125 from loops involving fermions and scalars are given by @xcite(see also@xcite ) & = & & = & |_fd_r_fq_f^2_fs(_f ) + _ s d_r_sq_s^2 p(_s)|^2 , where @xmath126 is the dynkin index of the colour representation @xmath127 for the triplet ) , @xmath128 is its dimension , @xmath129 the charge and @xmath130 , with @xmath131 for the fermion and scalar masses respectively . the functions @xmath132 are @xmath133 where @xcite f()&=&\ { ll ^2&>1 + -14(-i)^2;&1 .. for the pseudoscalar contribution , in the above formulae we make the replacements @xmath134 and @xmath135 @xcite . for a numerical application , we first consider the existence of only one singlet field @xmath10 with mass @xmath136 gev and , for the sake of simplicity , we take a common mass for the various fermion - pairs contributing in the loops . since the scalar components are expected to be much heavier than the fermions , at this level of approximation their contributions are ignored . in figure [ xggxgamma ] we plot the widths @xmath137 and @xmath138 as a function of the mass of the fermion - pairs for two sets of fermion multiplicities for the scalar as well as the pseudoscalar case . if we ignore the large width suggested by the atlas data , we observe that there are regions of the fermion mass range where @xmath139 and @xmath140 , which are sufficient to interpret the data . we note in passing that a large decay width allows the exciting possibility of other decay channels including dark matter . in general , however , we expect more than one singlet field with approximately degenerate masses , so that the atlas large width could be explained as an unresolved resonance . another possibility is to invoke additional couplings in the superpotential such as @xmath141 which permit the resonance to decay into higgsinos , if kinematically possible , or sm higgs via the soft trilinear terms . before closing , we would like to make a final comment on the possible existence of additional ` exotic ' matter interactions . as we have pointed out , exotic matter arises from the decomposition @xmath142 with respect to @xmath143 ( the indices now refer to @xmath33 ) . we recall that in the twisted model the sm states are in @xmath144 while bulk states are the @xmath35 , and as such they have exotic charges . 
such states could pick up masses at a high scale . in case some of them remain light . due to their large @xmath145-hypercharge , they can in principle make a significant contribution to the production and decay of the resonance . as can be observed , all these states come in vector - like pairs , and therefore a possible coupling that could make them massive is m_q q|q + . [ m78 ] since these states carry non - zero ` charges ' under the three @xmath15 s , in principle , non - trivial fluxes might lead to additional chiral states . nevertheless , a solution to this problem is feasible if certain topological properties are assumed . indeed , we first recall that the number of states is given by the euler character @xmath102 . if @xmath146 is the dual representation of @xmath147 , @xmath148 is the bundle transforming in the representation @xmath149 , the net number of chiral minus anti - chiral states is given in terms of the formula @xcite , @xmath150 where we assume @xmath91 to be a del pezzo surface associated with the gauge group @xmath151 . if we designate with @xmath152 a line bundle over @xmath91 , the euler character is ( s , l_j)&=&1 + 12 c_1(l_j)c_1(l_j)+12 c_1(l_j)c_1(s ) + ( s , l_j^*)&=&1 + 12 c_1(l_j)c_1(l_j)-12 c_1(l_j)c_1(s ) , so that the difference counting the number of chiral states is ( s , l_j^*)-(s , l_j)=- c_1(l_j)c_1(s)[nochiral ] we can ensure the vector - like nature of the corresponding states by simply demanding c_1(l_j)c_1(s)=0[cond4vpairs ] for the particular line bundle . focusing now on the @xmath18 case , recall that under the successive breaking we have considered @xmath153 while the quantum numbers of the bulk states are 78 & & ( 1,1)_(0,0,0)+\{(1,1)_(0,0,0)+(1,1)_(0,0,0)+(8,1)_(0,0,0)+(1,3)_(0,0,0)+(3,2)_(-5,0,0)+(|3,2)_(5,0,0 ) . + & + & .(3,2)_(1,4,0)+(|3,2)_(-1,-4,0)+(|3,1)_(-4,4,0)+(3,1)_(4,-4,0)+(1,1)_(6,4,0)+(1,1)_(-6,-4,0 ) } + & + & \ { ( 1,1)_(0,-5,-3)+(|3,1)_(2,3,-3)+(1,2)_(-3,3,-3)+(1,1)_(6,-1,-3)+(3,2)_(1,-1,-3)+(|3,1)_(-4,-1,-3 ) } + & + & \ { ( 1,1)_(0,5,3)+(3,1)_(-2,-3,3)+(1,2)_(3,-3,3)+(1,1)_(-6,1,3)+(|3,2)_(-1,1,3)+(3,1)_(4,1,3 ) } we can express all the exotics obtained from the decomposition of the @xmath18-adjoint @xmath29 in terms of the following three line bundles : @xmath154 it can be shown that by imposing relations analogous to ( [ cond4vpairs ] ) for the three line bundles , all exotic states appear in vector - like pairs and hence , no chiral matter arises from the bulk modes . moreover , in the minimal case the extra states emerging from @xmath29 can assemble in a @xmath155 pair @xmath156 as already stated , these can receive a large mass from terms such as ( [ m78 ] ) , so that gauge coupling unification is not affected . and @xmath157 as a function of the masses of the fermion - pairs circulating in the loops of figure [ ggxgg].,title="fig : " ] and @xmath157 as a function of the masses of the fermion - pairs circulating in the loops of figure [ ggxgg].,title="fig : " ] * note added * : after this paper was submitted for publication the atlas and cms experiments have reported results based on updated analysis including data collected during 2016 . this data does not support the presence of the 750 gev resonance previously reported in 2015 and in moriond 2016 . we should emphasize that our string inspired @xmath18 model predicts the existence of the diphoton resonance as well as vectorlike fields . hopefully , some of these states can be discovered at the lhc and future colliders . 
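for orientation , the gluon - fusion production estimate discussed in the previous section can be evaluated with a few lines of code . the sketch below uses the standard narrow - resonance expression sigma ( pp -> x -> gamma gamma ) = c_gg gamma_gg gamma_aa / ( m_x s gamma_tot ) with c_gg ~ 2137 at 13 tev , the value commonly quoted in the literature cited here ; the exact expression and parton - integral value in the text are hidden behind placeholders , and the partial widths fed in are illustrative assumptions rather than fitted numbers .

```python
GEV2_TO_PB = 3.894e8          # conversion: 1 GeV^-2 = 3.894e8 pb

def sigma_gg_to_x_to_aa_pb(m_x, sqrt_s, gamma_gg, gamma_aa, gamma_tot, c_gg=2137.0):
    """Narrow-resonance gluon-fusion cross section
    sigma(pp -> X -> aa) = C_gg * Gamma_gg * Gamma_aa / (m_X * s * Gamma_tot).
    Masses and widths in GeV; returns picobarns."""
    s = sqrt_s**2
    sigma_gev2 = c_gg * gamma_gg * gamma_aa / (m_x * s * gamma_tot)
    return sigma_gev2 * GEV2_TO_PB

# illustrative benchmark: m_X = 750 GeV, total width 45 GeV,
# Gamma_gg of a few GeV, Gamma_aa ~ 1e-5 * m_X  (assumed numbers, not fits)
m_x, sqrt_s = 750.0, 13000.0
sigma = sigma_gg_to_x_to_aa_pb(m_x, sqrt_s, gamma_gg=3.0,
                               gamma_aa=1.0e-5 * m_x, gamma_tot=45.0)
print(f"sigma(pp -> X -> gamma gamma) ~ {sigma * 1.0e3:.1f} fb")
```

with a total width of 45 gev , partial widths of this size give a diphoton cross section of a few fb , which is the ballpark of the excess that motivated this analysis .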
in this work we have constructed a flipped @xmath20 model fully embedded in an @xmath18 gut symmetry within an f - theory context . we introduced abelian fluxes along @xmath15 s inside @xmath1 to realise the symmetry breaking and generate the chiral families in the low energy spectrum of the effective theory . we have presented simple cases that contain the three chiral families of quarks and leptons . furthermore , motivated by the 750 gev diphoton resonance reported by the atlas and cms experiments , we have given examples where the low energy spectrum consists of vector - like fields with a variety of mssm quantum numbers containing both coloured and leptonic states , as well as gauge singlets . the flipped so(10 ) model yields several vector - like @xmath3-pairs whose presence could enhance the diphoton decay mode of the scalar resonance . . _ g.k.l . would like to thank the physics and astronomy department and bartol research institute of the university of delaware for kind hospitality . q.s . is supported in part by the doe grant `` doe - sc-0013880 '' . _ cms collaboration [ cms collaboration ] , `` search for new physics in high mass diphoton events in proton - proton collisions at 13tev , '' cms - pas - exo-15 - 004 . r. franceschini _ et al . _ , `` what is the @xmath160 resonance at 750 gev ? , '' jhep * 1603 * ( 2016 ) 144 doi:10.1007/jhep03(2016)144 [ arxiv:1512.04933 ] . j. ellis , s. a. r. ellis , j. quevillon , v. sanz and t. you , jhep * 1603 * ( 2016 ) 176 doi:10.1007/jhep03(2016)176 [ arxiv:1512.05327 [ hep - ph ] ] + a. djouadi , j. ellis , r. godbole and j. quevillon , jhep * 1603 * ( 2016 ) 205 doi:10.1007/jhep03(2016)205 [ arxiv:1601.03696 [ hep - ph ] ] . y. mambrini , g. arcadi and a. djouadi , `` the lhc diphoton resonance and dark matter , '' phys . b * 755 * , 426 ( 2016 ) doi:10.1016/j.physletb.2016.02.049 [ arxiv:1512.04913 [ hep - ph ] ] . j. j. heckman , `` 750 gev diphotons from a d3-brane , '' nucl . b * 906 * ( 2016 ) 231 doi:10.1016/j.nuclphysb.2016.02.031 [ arxiv:1512.06773 [ hep - ph ] ] . m. cvetic , j. halverson and p. langacker , arxiv:1512.07622 [ hep - ph ] . l. a. anchordoqui , i. antoniadis , h. goldberg , x. huang , d. lust and t. r. taylor , `` 750 gev diphotons from closed string states , '' doi:10.1016/j.physletb.2016.02.024 arxiv:1512.08502 [ hep - ph ] . l. e. ibanez and v. martin - lozano , jhep * 1607 * ( 2016 ) 021 doi:10.1007/jhep07(2016)021 [ arxiv:1512.08777 [ hep - ph ] ] . e. palti , nucl . b * 907 * ( 2016 ) 597 doi:10.1016/j.nuclphysb.2016.04.026 [ arxiv:1601.00285 [ hep - ph ] ] . a. karozas et al , `` 750 gev diphoton excess from @xmath18 in f - theory guts , '' phys . b * 757 * ( 2016 ) 73 doi:10.1016/j.physletb.2016.03.054 [ arxiv:1601.00640 [ hep - ph ] ] . p. anastasopoulos and m. bianchi , `` revisiting light stringy states in view of the 750 gev diphoton excess , '' arxiv:1601.07584 [ hep - th ] . g. lazarides and q. shafi , phys . d * 93 * ( 2016 ) no.11 , 111702 doi:10.1103/physrevd.93.111702 [ arxiv:1602.07866 [ hep - ph ] ] . h. ito , t. moroi and y. takaesu , `` studying 750 gev di - photon resonance at photon?photon collider , '' phys . b * 756 * ( 2016 ) 147 doi:10.1016/j.physletb.2016.03.008 [ arxiv:1601.01144 [ hep - ph ] ] . t. li , j. a. maxin , v. e. mayes and d. v. nanopoulos , `` a flippon related singlet at the lhc ii , '' arxiv:1602.01377 [ hep - ph ] . y. kats and m. j. 
strassler , jhep * 1605 * ( 2016 ) 092 erratum : [ jhep * 1607 * ( 2016 ) 044 ] doi:10.1007/jhep05(2016)092 , 10.1007/jhep07(2016)044 [ arxiv:1602.08819 [ hep - ph ] ] . m. badziak , m. olechowski , s. pokorski and k. sakurai , phys . b * 760 * ( 2016 ) 228 doi:10.1016/j.physletb.2016.06.057 [ arxiv:1603.02203 [ hep - ph ] ] . l. aparicio , a. azatov , e. hardy and a. romanino , jhep * 1605 * ( 2016 ) 077 doi:10.1007/jhep05(2016)077 [ arxiv:1602.00949 [ hep - ph ] ] . y. hamada , h. kawai , k. kawana and k. tsumura , phys . d * 94 * ( 2016 ) no.1 , 014007 doi:10.1103/physrevd.94.014007 [ arxiv:1602.04170 [ hep - ph ] ] . s. f. ge , h. j. he , j. ren and z. z. xianyu , `` realizing dark matter and higgs inflation in light of lhc diphoton excess , '' arxiv:1602.01801 [ hep - ph ] . f. staub _ et al . _ , arxiv:1602.05581 [ hep - ph ] . c. beasley , j. j. heckman and c. vafa , `` guts and exceptional branes in f - theory - ii : experimental predictions , '' jhep * 0901 * ( 2009 ) 059 doi:10.1088/1126 - 6708/2009/01/059 [ arxiv:0806.0102 [ hep - th ] ] . r. donagi and m. wijnholt , `` model building with f - theory , '' adv . theor . math . phys . * 15 * ( 2011 ) 5 , 1237 doi:10.4310/atmp.2011.v15.n5.a2 [ arxiv:0802.2969 [ hep - th ] ] . f. gursey , p. ramond and p. sikivie , `` a universal gauge theory model based on e6 , '' phys . b * 60 * ( 1976 ) 177 . doi:10.1016/0370 - 2693(76)90417 - 2 y. achiman and b. stech , `` quark lepton symmetry and mass scales in an e6 unified gauge model , '' phys . lett . b * 77 * ( 1978 ) 389 . doi:10.1016/0370 - 2693(78)90584 - 1 q. shafi , `` e(6 ) as a unifying gauge symmetry , '' phys . b * 79 * ( 1978 ) 301 . doi:10.1016/0370 - 2693(78)90248 - 4 n. maekawa and t. yamashita , `` flipped so(10 ) model , '' phys . b * 567 * ( 2003 ) 330 doi:10.1016/j.physletb.2003.06.054 [ hep - ph/0304293 ] . @xcite j. c. callaghan , s. f. king , g. k. leontaris and g. g. ross , jhep * 1204 * ( 2012 ) 094 doi:10.1007/jhep04(2012)094 [ arxiv:1109.1399 [ hep - ph ] ] . j. c. callaghan , s. f. king and g. k. leontaris , jhep * 1312 * ( 2013 ) 037 doi:10.1007/jhep12(2013)037 [ arxiv:1307.4593 [ hep - ph ] ] . v. bouchard , j. j. heckman , j. seo and c. vafa , `` f - theory and neutrinos : kaluza - klein dilution of flavor hierarchy , '' jhep * 1001 * ( 2010 ) 061 doi:10.1007/jhep01(2010)061 [ arxiv:0904.1419 [ hep - ph ] ] . j. marsano , `` hypercharge flux , exotics , and anomaly cancellation in f - theory guts , '' phys . * 106 * ( 2011 ) 081601 doi:10.1103/physrevlett.106.081601 [ arxiv:1011.2212 [ hep - th ] ] . e. palti , `` a note on hypercharge flux , anomalies , and u(1)s in f - theory guts , '' phys . d * 87 * ( 2013 ) 8 , 085036 doi:10.1103/physrevd.87.085036 [ arxiv:1209.4421 [ hep - th ] ] . m. cvetic , t. w. grimm and d. klevers , `` anomaly cancellation and abelian gauge symmetries in f - theory , '' jhep * 1302 * ( 2013 ) 101 doi:10.1007/jhep02(2013)101 [ arxiv:1210.6034 [ hep - th ] ] . s. m. barr , `` a new symmetry breaking pattern for so(10 ) and proton decay , '' phys . b * 112 * ( 1982 ) 219 . doi:10.1016/0370 - 2693(82)90966 - 2 i. antoniadis , j. r. ellis , j. s. hagelin and d. v. nanopoulos , `` supersymmetric flipped su(5 ) revitalized , '' phys . b * 194 * ( 1987 ) 231 . doi:10.1016/0370 - 2693(87)90533 - 8 g. f. giudice and a. masiero , `` a natural solution to the mu problem in supergravity theories , '' phys . b * 206 * ( 1988 ) 480 . doi:10.1016/0370 - 2693(88)91613 - 9 r. 
blumenhagen , `` gauge coupling unification in f - theory grand unified theories , '' phys . lett . * 102 * , 071601 ( 2009 ) doi:10.1103/physrevlett.102.071601 [ arxiv:0812.0248 [ hep - th ] ] . g. k. leontaris and n. d. tracas , eur . j. c * 67 * ( 2010 ) 489 doi:10.1140/epjc / s10052 - 010 - 1298 - 2 [ arxiv:0912.1557 [ hep - ph ] ] . a. d. martin , w. j. stirling , r. s. thorne and g. watt , `` parton distributions for the lhc , '' eur . j. c * 63 * ( 2009 ) 189 doi:10.1140/epjc / s10052 - 009 - 1072 - 5 [ arxiv:0901.0002 [ hep - ph ] ] . a. djouadi , `` the anatomy of electro - weak symmetry breaking . i : the higgs boson in the standard model , '' phys . * 457 * ( 2008 ) 1 doi:10.1016/j.physrep.2007.10.004 [ hep - ph/0503172 ] . m. a. shifman , a. i. vainshtein , m. b. voloshin and v. i. zakharov , `` low - energy theorems for higgs boson couplings to photons , '' sov . j. nucl . phys . * 30 * ( 1979 ) 711 [ yad . fiz . * 30 * ( 1979 ) 1368 ] .
motivated by the diphoton excess at 750 gev reported by the atlas and cms experiments , we present an f - theory inspired flipped @xmath0 model embedded in @xmath1 . the low energy spectrum includes the three mssm chiral families , vectorlike color triplets , several pairs of charged @xmath2 singlet fields @xmath3 , as well as mssm singlets , one or more of which could contribute to the diphoton resonance . a total decay width in the multi - gev range can arise from couplings involving the singlet and mssm fields .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Job Creation Economic Stimulus Act of 2008''. SEC. 2. ADOPTION OF THE HIGH PRODUCTIVITY INVESTMENT DEDUCTION. (a) In General.--Part VI of subchapter B of chapter 1 of the Internal Revenue Code of 1986 (relating to itemized deductions for individuals and corporations) is amended by inserting after section 168 the following new section: ``SEC. 168A. HIGH PRODUCTIVITY INVESTMENT DEDUCTION. ``(a) Treatment as Expenses.--A taxpayer may elect to treat the cost of any high productivity property as an expense not chargeable to capital account. Any cost so treated shall be allowed as a deduction in the taxable year in which the high productivity property is placed in service. ``(b) Definition of High Productivity Property.-- ``(1) In general.--Except as provided in paragraph (3), the term `high productivity property' means any-- ``(A) computer, ``(B) computer related peripheral equipment, ``(C) computer based machinery, ``(D) electronic diagnostic equipment, ``(E) electronic control equipment, ``(F) other electronic, electromechanical, laser or computer based equipment, ``(G) computer software, ``(H) equipment used in the manufacture of semiconductors, ``(I) high technology medical equipment, ``(J) advanced technology communications equipment, ``(K) optical fiber and photonics equipment, ``(L) advanced environmental products, ``(M) advanced life science products, or ``(N) new high productivity assets. ``(2) Definitions.--For purposes of this subsection: ``(A) Computer.--The term `computer' means a programmable electronically activated device which-- ``(i) is capable of accepting information, applying prescribed processes to the information, and supplying the results of those processes, and ``(ii) consists of a central processing unit containing extensive storage, logic, arithmetic and control capabilities. ``(B) Computer related peripheral equipment.--The term `computer related peripheral equipment' means any auxiliary machine or other equipment (whether on-line or off-line) which is designed to be placed under the control of the central processing unit of a computer (as determined without regard to whether such machine or equipment is an integral part of other property which is not a computer). ``(C) Computer based machinery.--The term `computer based machinery' means any machine which-- ``(i) cuts, forms, shapes, drills, bores, mixes, paints, seals, welds, or otherwise transforms material, or ``(ii) handles, conveys, assembles, or packages materials or products, by responding to electronically stored information and programmed commands. ``(D) Electronic diagnostic equipment.--The term `electronic diagnostic equipment' means equipment that uses electronic components to sense or monitor location, size, volume, surface characteristics, pressure, temperature, speed, chemical composition, or other similar characteristics. ``(E) Electronic control equipment.--The term `electronic control equipment' means equipment that electronically controls pressure, temperature, size, volume, composition purity or other similar characteristics. ``(F) High technology medical equipment.--The term `high technology medical equipment' means any electronic, electromechanical, or computer-based high technology equipment used in the screening, monitoring, observation, diagnosis, or treatment of patients in a laboratory, medical, or hospital environment. 
``(G) Advanced technology communications equipment.--The term `advanced technology communications equipment' means equipment used in the transmission or reception of voice, data, video, paging, messaging, or other communications services that are delivered using packet technology. A packet is a unit of data, or sequence of binary digits, that is routed between an origin and a destination on a packet- switched network. ``(H) Optical fiber and photonics equipment.--The term `optical fiber and photonics equipment' means optical fiber and the equipment and materials used to generate, manipulate and direct light particles over such fiber. ``(I) Advanced environmental products.--The term `advanced environmental product' means any high cell density ceramic or other device used for the control of nitrogen oxide and particulate emissions. ``(J) Advanced life sciences products.--The term `advanced life sciences product' means any polymer, ceramic or high-purity glass product used in biological research. ``(K) New high productivity assets.-- ``(i) In general.--The term `new high productivity assets' means any asset utilizing 1 or more technological or scientific processes which were not in common commercial use before January 1, 2007. ``(ii) Determinations.--The Secretary shall establish procedures pursuant to which taxpayers can seek a public ruling that a particular class of assets qualifies as new high productivity assets. The procedures shall require the Secretary to provide a determination within 90 days of receipt of a properly completed request for a public ruling. ``(3) Excluded property.--The term `high productivity property' shall not include-- ``(A) an entire car, locomotive, aircraft, ship or other vehicle solely because the vehicle is controlled in whole or part by a computer or other electronic equipment, ``(B) any equipment of a kind used primarily for entertainment or amusement of the user, and ``(C) typewriters, calculators, copiers, duplication equipment, and other similar equipment. ``(c) Election.--An election under this section for any taxable year shall-- ``(1) be made on an asset by asset basis, and ``(2) be made on the taxpayer's return of the tax imposed by this chapter for the taxable year. ``(d) Special Rules.-- ``(1) Cost.--For purposes of this section, the cost of property does not include so much of the basis of such property as is determined by reference to the basis of other property held at any time by the person acquiring such property. ``(2) Antichurning rules.-- ``(A) In general.--This section shall not apply to any property acquired by the taxpayer after December 31, 2007, if-- ``(i) the property was owned or used at any time during the period beginning on January 1, 2007, and ending on December 31, 2007, by the taxpayer or a related person, ``(ii) the property was owned or used at any time during the period described in clause (i), and, as part of the transaction, the user of the property does not change, ``(iii) the taxpayer leases such property to a person (or a person related to such person) who owned or used such property at any time during the period described in clause (i), or ``(iv) the property is acquired in a transaction as part of which the user of such property does not change and the property was acquired from a person to which clause (ii) or clause (iii) applies. ``(B) Applicable cost recovery rules.--Section 168 shall apply to any property to which this section does not apply by reason of this paragraph. 
``(C) Special rules.--For purposes of this paragraph-- ``(i) property shall not be treated as owned before it is placed in service, and ``(ii) whether the user of a property changes will be determined in accordance with regulations prescribed by the Secretary. ``(3) Recapture in certain cases.--The Secretary shall, by regulations, provide for the recapturing the benefit under any deduction allowable under subsection (a) with respect to any property which is not used predominantly in a trade or business at any time. ``(4) Alternative depreciation system applies.--The election under subsection (a) may not be made with respect to property which at any time during the taxable year in which such property is placed in service is-- ``(A) described in paragraph (1) of section 168A(g), or ``(B) `listed property' `not predominantly used in a qualified business use' as such terms apply for purposes of paragraph (1) of 280F(b). ``(e) Termination.--This section shall only apply to property which is-- ``(1) acquired by the taxpayer after December 31, 2007, and before January 1, 2009, but only if no written binding contract for the acquisition was in effect before January 1, 2008, or ``(2)(A) acquired by the taxpayer pursuant to a written binding contract which was entered into after December 31, 2007, and before January 1, 2009, and ``(B) placed in service in taxable years beginning after December 31, 2009.''. (b) Conforming Amendment.--The table of sections for part VI of subchapter B of chapter 1 of such Code is amended by adding after section 168 the following new item: ``Sec. 168A. High productivity investment deduction.''. (c) Effective Date.--The amendments made by this section shall apply to property placed in service after December 31, 2007, with respect to taxable years beginning after such date. SEC. 3. 50 PERCENT ALLOWANCE FOR DEPRECIATION FOR CERTAIN PROPERTY ACQUIRED DURING 2008. (a) In General.--Paragraph (4) of section 168(k) of the Internal Revenue Code of 1986 (relating to 50-percent bonus for certain property) is amended-- (1) by striking ``May 5, 2003'' each place it appears and inserting ``December 31, 2007'', (2) by striking ``January 1, 2005'' each place it appears and inserting ``January 1, 2009'', (3) by striking ``May 6, 2003'' in subparagraph (B)(ii)(I) and inserting ``January 1, 2008'', (4) by striking ``January 1, 2006'' in subparagraph (B)(iii) and inserting ``January 1, 2010'', and (5) by striking ``of 30-percent bonus'' in the heading for subparagraph (E). (b) Repeal of Basis Limitation for Certain Property.--Subparagraph (B) of section 168(k)(2) of such Code is amended by striking clause (ii) and redesignating clause (iii) as clause (ii). (c) Syndications.--Paragraph (4) of section 168(k) of such Code (relating to 50-percent depreciation for certain property) is amended by adding at the end the following: ``(F) Syndications.--For purposes of applying paragraph (2)(A)(ii) by reason of this paragraph, if property-- ``(i) is treated as originally placed in service after December 31, 2007, either directly or by a lessor of such property or pursuant to paragraph (2)(D)(ii), and ``(ii) is sold within 6 months after such property is so placed in service, such property shall be treated as originally placed in service not earlier than the date of such sale.''. (d) Effective Date.-- (1) In general.--The amendments made by this section shall apply to property placed in service in taxable years beginning after December 31, 2007. 
(2) Exception for certain property.--The amendments made by this section shall not apply to any property to which section 105 of the Gulf Opportunity Zone Act of 2005 applies. SEC. 4. DEPRECIATION RULES NOT MODIFIED FOR PURPOSES OF ALTERNATIVE MINIMUM TAX. (a) Determination of Alternative Taxable Income.--Paragraph (1) of section 56(a) of the Internal Revenue Code of 1986 (relating to depreciation) is amended by adding at the end the following new subparagraph: ``(E) Termination.--This paragraph shall not apply to property placed in service in a taxable year beginning in 2008 or 2009.''. (b) Determination of Adjusted Current Earnings.--Subparagraph (A) of section 56(g)(4) of such Code (relating to depreciation) is amended by adding at the end the following new clause: ``(vi) Termination.--This subparagraph shall not apply to property placed in service in a taxable year beginning in 2008 or 2009.''. (c) Effective Date.--The amendments made by this section shall apply to property placed in service after December 31, 2007, in taxable years beginning after such date. SEC. 5. LONG-TERM CONTRACT ACCOUNTING. (a) In General.--Section 168(k)(2) of the Internal Revenue Code of 1986 is amended by adding after subparagraph (G) the following new subparagraph: ``(H) Long-term contract accounting.--The percentage of completion method under section 460 shall be applied as if this subsection had not been enacted.''. (b) Effective Date.--The amendment made by subsection (a) shall apply to property placed in service after the date of the enactment of this Act in taxable years ending after such date. SEC. 6. LONG-TERM UNUSED CREDITS ALLOWED AGAINST MINIMUM TAX. (a) In General.--Subsection (c) of section 53 of the Internal Revenue Code of 1986 (relating to limitation) is amended by adding at the end the following new paragraph: ``(2) Special rule for corporations with long-term unused credits.-- ``(A) In general.--If a corporation to which section 56(g) applies has a long-term unused minimum tax credit for a taxable year, the credit allowable under subsection (a) for the taxable year shall not exceed the greater of-- ``(i) the limitation determined under paragraph (1) for the taxable year, or ``(ii) the least of the following for the taxable year: ``(I) The sum of the tax imposed by section 55 and the regular tax reduced by the sum of the credits allowed under subparts A, B, D, E, and F of this part. ``(II) The long-term unused minimum tax credit. ``(III) The sum of-- ``(aa) 50 percent of qualified investment, plus ``(bb) the qualified investment carryover to the taxable year. ``(B) Long-term unused minimum tax credit.--For purposes of this paragraph-- ``(i) In general.--The long-term unused minimum tax credit for any taxable year is the portion of the minimum tax credit determined under subsection (b) attributable to the adjusted net minimum tax for taxable years beginning after 1986 and ending before the 3rd taxable year immediately preceding the taxable year for which the determination is being made. ``(ii) First-in, first-out ordering rule.-- For purposes of clause (i), credits shall be treated as allowed under subsection (a) on a first-in, first-out basis. ``(C) Qualified investment and qualified investment carryover.--For purposes of this paragraph-- ``(i) Qualified investment.--Qualified investment is property described in section 1245(a)(3) placed in service in the taxable year. 
``(ii) Qualified investment carryover.--The qualified investment carryover is the amount by which 50 percent of qualified investment exceeds the amount of tax in paragraph (2)(A)(ii)(I). The qualified investment carryover may be carried only to the first taxable year following the current year. ``(D) Termination.--Subparagraph (A) shall not apply to any taxable year beginning after December 31, 2008.''. (b) Conforming Amendments.--Section 53(c) of such Code is amended-- (1) by striking ``The'' and inserting the following: ``(1) In general.--The''; and (2) by redesignating paragraphs (1) and (2) as subparagraphs (A) and (B), respectively.
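Section 6 replaces the minimum-tax-credit limitation with a "greater of / least of" structure that is easier to see as arithmetic. The sketch below encodes that structure directly from the statutory text; the dollar amounts passed in are invented placeholders, and the snippet is an illustration of the formula only, not tax guidance.

```python
def allowed_credit(
    regular_limitation,        # limitation under section 53(c)(1)
    sec55_plus_regular_tax,    # sec. 55 tax + regular tax, net of subpart A/B/D/E/F credits
    long_term_unused_credit,   # portion of the minimum tax credit from pre-cutoff years
    qualified_investment,      # sec. 1245(a)(3) property placed in service this year
    qi_carryover=0.0,          # qualified investment carryover from the prior year
):
    """Credit allowable under proposed sec. 53(c)(2)(A): the greater of the
    ordinary limitation and the least of clauses (I), (II), and (III)."""
    clause_i = sec55_plus_regular_tax
    clause_ii = long_term_unused_credit
    clause_iii = 0.5 * qualified_investment + qi_carryover
    return max(regular_limitation, min(clause_i, clause_ii, clause_iii))

# placeholder numbers purely for illustration
print(allowed_credit(
    regular_limitation=100_000,
    sec55_plus_regular_tax=400_000,
    long_term_unused_credit=350_000,
    qualified_investment=500_000,
    qi_carryover=20_000,
))   # -> 270000.0  (clause (III) = 0.5 * 500k + 20k governs here)
```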
Job Creation Economic Stimulus Act of 2008 - Amends the Internal Revenue Code to allow the expensing of the cost of certain high productivity property placed in service in 2008, including computer and computer-related peripheral equipment, electronic equipment, software, high technology medical equipment, and advanced environmental and life science products. Allows a 50% depreciation allowance for certain business equipment acquired in 2008. Exempts accelerated depreciation amounts related to property placed in service in 2008 or 2009 from adjustments in computing alternative minimum taxable income. Allows an offset in 2008 against the alternative minimum tax liability of corporations for their long-term unused tax credits.
it is widely accepted that active galactic nuclei ( agn ) are powered by accretion of matter onto massive black holes . agn activity peaked at @xmath02 ( e.g. , maloney & petrosian 1999 ) and the high ( @xmath1l@xmath2 ) luminosities of quasi stellar objects ( qsos ) are explained by accretion onto super massive ( @xmath3 m@xmath2 ) black holes at or close to the eddington limit . the observed evolution of the space density of agn ( chokshi & turner 1992 , faber et al . 1997 , marconi & salvati 2001 ) implies that a significant fraction of luminous galaxies must host black holes , relics of past activity . indeed , it is now clear that a large fraction of hot spheroids contain a massive bh ( e.g. , magorrian et al . 1998 ; van der marel 1999 ) , and it appears that the bh mass is proportional to both the mass of the host spheroid ( kormendy & richstone 1995 ) and its velocity dispersion ( ferrarese & merritt 2001 , gebhardt et al . 2000 , merritt & ferrarese 2001 ) . several radio - galaxies , all associated with giant elliptical galaxies , like m87 ( macchetto et al . 1997 ) , m84 ( bower et al . 1998 ) , ngc 7052 ( van der marel & van den bosch 1998 ) and centaurus a ( marconi et al . 2001 ) , are now known to host supermassive ( @xmath4@xmath5 m@xmath2 ) bhs in their nuclei . the luminosity of their optical nuclei indicates that they are accreting at a low rate and/or low accretion efficiency ( chiaberge , capetti & celotti 1999 ) . they presumably sustained quasar activity in the past but at the present epoch are emitting much below their eddington limits ( l / l@xmath6@xmath7 ) . the study of the seyfert bh mass distribution provides a statistical method of investigating the interplay between accretion rate and bh growth . in order to achieve this it is necessary to directly measure the bh masses in seyfert galaxies and to compare their eddington and bolometric luminosities using the hard x - ray luminosities . similarly important will be the comparison between the bh masses found in seyfert galaxies with those of non active galaxies . however , to date , there are very few secure bh measurements or upper limits in spiral galaxies . it is therefore important to directly establish how common are bhs in spiral galaxies and whether they follow the same @xmath8-@xmath9 , @xmath8-@xmath10 correlations as elliptical galaxies . to detect and measure the masses of massive bhs requires spectral information at the highest possible angular resolution the sphere of influence " of massive bhs is typically @xmath11 in radius even in the closest galaxies . nuclear absorption line spectra can be used to demonstrate the presence of a bh , but the interpretation of the data is complex because it involves stellar - dynamical models that have many degrees of freedom . in seyfert galaxies the problems are compounded by the copious light from the agn . studies at _ hst _ resolution of ordinary optical emission lines from gas disks in principle provide a more widely applicable and readily interpreted way of detecting bhs ( cf . m87 , macchetto et al . 1997 , barth et al . 2001 ) provided that the gas velocity field are not dominated by non gravitational motions . prompted by these considerations , we have undertaken a spectroscopic survey of 54 spirals using stis on the _ hubble space telescope_. our sample was extracted from a comprehensive ground - based study by axon et al . 
who obtained h@xmath12 and nii rotation curves at a seeing - limited resolution of @xmath13 , of 128 sb , sbb , sc , and sbc spiral galaxies from rc3 . by restricting ourselves to galaxies with recession velocities @xmath14km / s , we obtained a volume - limited sample of 54 spirals that are known to have nuclear gas disks and span wide ranges in bulge mass and concentration . the systemic velocity cut - off was chosen so that we can probe close to the nuclei of these galaxies , and detect even lower - mass black holes . the frequency of agn in our sample is typical of that found in other surveys of nearby spirals , with comparable numbers of weak nuclear radio sources and liners . the observational strategy , used for all the galaxies in our sample , consisted of obtaining spectra at three parallel positions with the central slit centered on the nucleus and the flanking ones at a distance of 0.2 arcsec . at each slit position we obtained two spectra with the g750m grating centered at h@xmath12 , with the second spectrum shifted along the slit by an integer number of detector pixels in order to remove cosmic - ray hits and hot pixels . the nuclear spectrum ( nuc ) was obtained with the 0.1 arcsec slit and no binning of the detector pixels , yielding a spatial scale of 0.0507 arcsec / pix along the slit , a dispersion per pixel of @xmath15 and a spectral resolution of @xmath16 . the off - nuclear spectra ( pos1 and pos2 ) were obtained with the 0.2 arcsec slit and @xmath17 on - chip binning of the detector pixels , yielding 0.101 arcsec / pix along the slit , 1.108 / pix along the dispersion direction and @xmath18 . the raw spectra were processed with the standard pipeline reduction software . we derived rotation curves for each of the observed slit positions and applied our modeling code , described in detail in marconi et al . ( 2003 ) , to fit the observed rotation curves . briefly , the code computes the rotation curves of the gas assuming that the gas is rotating in circular orbits within a thin disk in the galaxy potential . the gravitational potential has two components : the stellar potential , determined from wfpc or nicmos observations and characterized by its mass - to - light ratio , and a dark mass concentration ( the black hole ) , spatially unresolved at _ _ hst__+stis resolution and characterized by its total mass m@xmath19 . in computing the rotation curves we take into account the finite spatial resolution of _ _ hst__+stis and we integrate over the slit and pixel area . the @xmath20 is minimized to determine the free parameters using a downhill simplex algorithm . the emission line surface brightness is modeled with a composition of two gaussians , the first reproducing the central emission peak while the second accounts for the brightness behaviour at large radii . having fixed the line brightness distribution , the free parameters of the fit are the systemic velocity , @xmath21 , the impact parameter ( i.e. , the distance between the slit center and the center of rotation ) @xmath22 , the position of the galaxy center along the slit @xmath23 , the angle between the slit and the line of nodes , @xmath24 , the disk inclination @xmath25 , the mass - to - light ratio , @xmath26 and the black hole mass @xmath8 . we perform a @xmath20 minimization allowing all parameters to vary freely . in this review i discuss the key results for two galaxies in our sample , namely ngc 4041 and ngc 5252 , while in fig . 1 i show a sample of the images and spectra for a subset of the other galaxies that we have observed . 
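to make the fitting procedure sketched above concrete , here is a minimal illustration ( not the authors ' code ) : a thin - disk circular - velocity model with a point - mass black hole plus a stellar component scaled by its mass - to - light ratio , compared with an observed rotation curve through a @xmath20 statistic and minimized with a downhill simplex ( nelder - mead ) . the integration over the slit and pixel area , the psf , and the geometric free parameters are omitted , and all function and variable names are hypothetical .

```python
# Minimal illustration (not the authors' code): thin-disk circular velocity
# from a point-mass black hole plus a stellar component scaled by its
# mass-to-light ratio, fit to a rotation curve by chi^2 minimization with a
# downhill simplex.  Slit/pixel/PSF integration and the geometric free
# parameters of the real modelling are omitted; all names are hypothetical.
import numpy as np
from scipy.optimize import minimize

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ(r_kpc, m_bh, ml_ratio, stellar_mass_enclosed):
    """Circular speed from an unresolved point mass plus the stellar potential.

    stellar_mass_enclosed(r) is the enclosed stellar mass for M/L = 1 (e.g.
    derived from WFPC2/NICMOS photometry); it is rescaled by ml_ratio.
    """
    m_enc = m_bh + ml_ratio * stellar_mass_enclosed(r_kpc)
    return np.sqrt(G * m_enc / r_kpc)

def chi2(params, r, v_obs, v_err, incl_deg, stellar_mass_enclosed):
    log_mbh, ml_ratio = params
    v_model = v_circ(r, 10.0**log_mbh, ml_ratio, stellar_mass_enclosed)
    v_model = v_model * np.sin(np.radians(incl_deg))   # project onto the line of sight
    return np.sum(((v_obs - v_model) / v_err) ** 2)

# toy usage with made-up data
stellar = lambda r: 1e9 * r                            # toy enclosed stellar mass (Msun)
r = np.linspace(0.01, 0.3, 20)                         # kpc
rng = np.random.default_rng(0)
v_obs = v_circ(r, 1e7, 2.0, stellar) * np.sin(np.radians(20.0)) + rng.normal(0, 5, r.size)
best = minimize(chi2, x0=[6.5, 1.0], args=(r, v_obs, 5.0, 20.0, stellar),
                method="Nelder-Mead")                  # downhill simplex
print("best-fit log10(M_BH), M/L:", best.x)
```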
ngc 4041 is classified as an sbc spiral galaxy with no detected agn activity . its average heliocentric radial velocity from radio measurements is @xmath27@xmath28 , becoming @xmath29@xmath28 after correction for local group infall onto virgo . with @xmath30=75@xmath28 @xmath31@xmath32 this corresponds to a distance of @xmath33 and to a scale of 95@xmath34/@xmath35 . slit positions overlaid on the acquisition image . the 0,0 position is the position of the target derived from the stis acq procedure . the white cross is the kinematic center derived from the fitting of the rotation curves . ] _ hst_/stis spectra were used to map the velocity field of the gas in its nuclear region . we detected the presence of a compact ( @xmath36 ) , high surface brightness , circularly rotating nuclear disk cospatial with a nuclear star cluster . this disk is characterized by a rotation curve with a peak to peak amplitude of @xmath37@xmath28 and is systematically blueshifted by @xmath3820@xmath28 with respect to the galaxy systemic velocity . best fit standard model of the observed rotation curves ( solid line ) compared with the data . the dotted line is the best fit model without a black hole . the model values are connected by straight lines in order to guide the eye . note that points from external and nuclear regions are not connected because they are kinematically decoupled . the right panel is a zoom on the nuclear disk region . , title="fig : " ] the standard approach followed in gas kinematical analysis is to assume that ( i ) gas disks around black holes are not warped , i.e. , they have the same line of nodes and inclinations as the more extended components , and ( ii ) the stellar population has a constant mass - to - light ratio with radius ( e.g. , van der marel & van den bosch 1998 ; barth et al . 2001 ) . using the emission line flux distribution derived from the imaging data , the inclination of the galactic disk can be fixed to @xmath25 = 20 ; the remaining free parameters can then be fit with the procedure described earlier , and we find that , in order to reproduce the observed rotation curve , a dark point mass [ supermassive bh ] of @xmath39 m@xmath2 is needed . however , the blueshift of the inner disk suggests the possibility that the nuclear disk could be dynamically independent . following this line of reasoning , we have relaxed the standard assumptions and model the curves by allowing variations in the stellar mass - to - light ratio and the disk inclination . we have found that the kinematical data can be accounted for by the stellar mass provided that either the mass - to - light ratio is increased by a factor of @xmath40 or the inclination is allowed to vary . this model resulted in a @xmath41 upper limit of @xmath42 m@xmath2 for the mass of any nuclear black hole . combining the results from the standard and alternative models , the present data only allow us to set an upper limit of @xmath43 m@xmath2 to the mass of the nuclear bh . 
if this upper limit is taken in conjunction with an estimated bulge b magnitude of @xmath44 and with a central stellar velocity dispersion of @xmath45@xmath28 , the putative black hole in ngc 4041 is not inconsistent with both the @xmath8-@xmath46 and the @xmath8-@xmath10 correlations . ngc 5252 is an early type ( s0 ) seyfert 2 galaxy at a redshift @xmath47 whose line emission shows a biconical morphology ( tadhunter & tsvetanov 1989 ) extending out to 20 kpc from the nucleus along pa -15@xmath48 . on a sub - arcsec scale three emission line knots form a linear structure oriented at pa @xmath49 , close to the bulge major axis , suggestive of a small scale gas disk . for h@xmath50 km s@xmath32 mpc@xmath32 at the distance of ngc 5252 ( 92 mpc ) , @xmath13 corresponds to 450 pc . figure [ nuc ] shows the line central velocity , flux and fwhm for the central slit of our stis observations . emission , which is detected out to a radius of @xmath516 corresponding to @xmath52 pc , is strongly concentrated , showing a bright compact knot cospatial with the continuum peak . two secondary emission line maxima are also present at @xmath53 from the main peak . they represent the intersection of the slit with emission line knots seen in our wfpc and stis images . two different gas systems are present in the nuclear regions of ngc 5252 : the first shows a symmetric velocity field , with decreasing line width , and can be interpreted as being produced by gas rotating around the nucleus . the second component , showing significant non - circular motions , is found to be associated exclusively with the off - nuclear blobs . following the fitting procedure described earlier , the best fit to our data is obtained for a black hole mass @xmath54 m@xmath2 . the model fitting of the nuclear rotation curve of ngc 5252 shows that the kinematics of gas in its innermost regions can be successfully accounted for by circular motions in a thin disk when a point - like dark mass ( presumably a supermassive black hole ) is added to the galaxy potential . left , m@xmath19 vs. bulge mass and right , m@xmath19 vs. stellar velocity dispersion @xmath55 with the best fits obtained from a bisector linear regression analysis ( solid line ) and ordinary least - square ( dashed line).,title="fig : " ] the central velocity dispersion of ngc 5252 ( nelson & whittle , 1995 ) is @xmath56 @xmath28 . the correlation between velocity dispersion and black hole mass predicts a mass of @xmath57 m@xmath2 , where the error is dominated by the uncertainty in @xmath58 . therefore , the black hole mass we derived for ngc 5252 is larger by a factor @xmath38 than the value expected from this correlation ! ( see fig . [ corr ] ) . this value , however , is in good agreement with the correlation between bulge and bh mass . as for its active nucleus , ngc 5252 is an outlier when compared to the available data for seyfert galaxies , not only because it harbours a black hole larger than typical for these objects , but also because its host galaxy is substantially brighter than average for seyfert galaxies . on the other hand , both the black hole mass and the bulge mass are typical of the range for radio - quiet quasars . 
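as an illustration of the kind of consistency check quoted above , the sketch below predicts a black hole mass from a stellar velocity dispersion using one published calibration of the @xmath8-@xmath10 ( black hole mass versus velocity dispersion ) relation . the tremaine et al . ( 2002 ) coefficients are used here purely for illustration ; the comparison in the text relies on the ferrarese & merritt ( 2001 ) and gebhardt et al . ( 2000 ) calibrations , whose coefficients differ , and the example dispersion is a placeholder rather than the measured value for ngc 5252 .

```python
# Illustrative only: predict M_BH from a stellar velocity dispersion via one
# published calibration of the M-sigma relation,
#   log10(M_BH / Msun) = a + b * log10(sigma / 200 km/s).
# The coefficients below are the Tremaine et al. (2002) values; the comparison
# in the text relies on earlier calibrations with different coefficients.
import math

def mbh_from_sigma(sigma_kms, a=8.13, b=4.02):
    return 10.0 ** (a + b * math.log10(sigma_kms / 200.0))

# example dispersion (a placeholder, not the measured value for NGC 5252)
print(f"sigma = 200 km/s  ->  M_BH ~ {mbh_from_sigma(200.0):.1e} Msun")
```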
combining the determined bh mass with the hard x - ray luminosity , we estimate that ngc 5252 is emitting at a fraction @xmath59 of l@xmath60 . this active nucleus thus appears to be a quasar relic , now probably accreting at a relatively low rate , rather than a low black hole mass counterpart of qsos .
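the estimate quoted above can be reproduced at the back - of - the - envelope level as follows . the sketch assumes the standard eddington luminosity for ionized hydrogen , about 1.26e38 ( m / m@xmath2 ) erg / s , and an assumed bolometric correction of order 10 - 30 applied to a hard x - ray luminosity ; the numbers plugged in are placeholders , not the measured values for ngc 5252 .

```python
# Back-of-the-envelope Eddington ratio, as in the estimate quoted above.
# Assumptions: L_Edd ~ 1.26e38 (M_BH / Msun) erg/s (electron scattering,
# hydrogen), and a bolometric correction L_bol ~ k * L_X with k of order 10-30
# for the hard X-ray band.  The inputs are placeholders, not measured values.
def eddington_ratio(m_bh_msun, l_x_erg_s, bol_correction=20.0):
    l_edd = 1.26e38 * m_bh_msun
    return bol_correction * l_x_erg_s / l_edd

print(f"L_bol / L_Edd ~ {eddington_ratio(1e9, 1e43):.1e}")
```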
we have embarked on an _ hst _ program to determine the masses of black holes in spiral galaxies directly by measuring the line emission arising from an extended accretion disk . for each of the galaxies in our sample we have measured the rotation curve and determined the mass distribution within the inner 550 pc . we have modeled the stellar mass component using the photometric data from existing _ hst _ images and , using both data sets , we have derived the masses of the black holes in each galaxy . these results will be very important in clarifying the role of the black hole in powering the agn , will shed light on the effectiveness of the accretion mechanisms and , finally , will help address the fundamental issue of unification for seyfert 1 and seyfert 2 galaxies .
the number of known extrasolar planets has exploded in the last two decades . this has been driven by improvements in all of the different techniques used to detect and characterise exoplanets , including the radial velocity ( rv ) method ( e.g. * ? ? ? * ) , the transit method ( e.g. * ? ? ? * ) , and gravitational microlensing ( e.g. * ? ? ? * ; * ? ? ? * ) . the problem of inferring the properties of an exoplanetary system from observational data can be challenging . in the case of radial velocity data , the expected signal due to an exoplanet is periodic , and the goal is to infer the number of planets in the system , as well as their properties such as orbital periods and eccentricities . many different techniques have been proposed for doing this . these techniques fall into two main classes : i ) those based on periodograms , and ii ) those based on model fitting in the bayesian inference framework , which describes the uncertainties probabilistically . bayesian model fitting via markov chain monte carlo ( mcmc ) tends to be computationally intensive , especially if we want to calculate the posterior distribution for @xmath0 , the number of planets . it is well known that rv datasets can contain periodic signals resulting from stellar activity rather than planets , which can affect the conclusions we draw about exoplanet systems . therefore , it is important to develop models which attempt to distinguish stellar activity signals from keplerian planet signals based on the shape of the oscillations and/or additional data constraining the periods of any stellar activity signals . we do not address this important challenge in the present paper . rather , we consider the problem of inferring the number @xmath0 of keplerian signals in an rv dataset in a computationally efficient way , under the simplifying assumption that only keplerian signals are present in the data . we introduce a trans - dimensional birth - death mcmc approach @xcite to inferring @xmath0 . when @xmath0 is treated as just another model parameter , we can obtain its posterior distribution in a single run . in addition , rather than trying to sample the posterior distribution , we use diffusive nested sampling ( dns * ? ? ? * ) , which replaces the posterior distribution with an alternative _ mixture of constrained priors _ , allowing mixing between separated modes . as a result , we are able to sample the posterior distribution for @xmath0 , and evaluate the marginal likelihood ( including the sum over @xmath0 ) in a single run which takes about 10 minutes on a 2 - 3 planet system . on the other hand , the approach of @xcite takes approximately 30 minutes per planet ( gregory , priv . comm . ) . a c++ implementation of our method is available online at https://github.com/eggplantbren/exoplanet under the terms of the gnu general public licence . bayesian inference is the use of probability theory to describe uncertainty @xcite . in this framework , we approach data analysis problems by first constructing a _ hypothesis space _ , which is the set of possible answers to the problem we are considering . normally , this is the set of possible values of a vector of parameters @xmath3 whose values we want to know . we then assign probability distributions called the _ prior _ and the _ sampling distribution_. 
the prior distribution @xmath4 describes our initial uncertainty about which values of the parameters @xmath3 are plausible , and the sampling distribution @xmath5 describes our initial uncertainty about the data set we re going to observe , as a function of the unknown parameters @xmath3 . when the data is known , our state of knowledge about the parameter is updated from the prior @xmath4 to the posterior distribution given by bayes rule : @xmath6 where @xmath5 as a function of @xmath3 is called the likelihood , once the actual dataset has been substituted in . note that some authors do not distinguish between a sampling distribution and a likelihood . throughout this paper we use the term sampling distribution for @xmath7 if we are discussing a probability distribution ( actually a family of them , indexed by @xmath3 ) over the set of possible datasets . we use the term _ likelihood _ when the actual dataset has been plugged in , when @xmath7 becomes a scalar function ( not a probability distribution ) over the parameter space . the denominator , often called the _ evidence _ or _ marginal likelihood _ , is given by the expected value of the likelihood with respect to the prior : @xmath8 where the integral is over the entire @xmath9-dimensional parameter space . in the context of bayesian computation , the prior is often denoted @xmath10 , the likelihood @xmath11 , and the marginal likelihood @xmath12 . the number of orbiting planets , @xmath0 , is an important parameter . to calculate the posterior distribution for @xmath0 , most authors consider various trial values of @xmath0 , and calculate the marginal likelihood @xmath13 for each possible value of @xmath0 ( e.g. * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) , marginalising over all other model parameters . the posterior distribution for @xmath0 can then be found straightforwardly by using bayes rule with @xmath0 as the only unknown parameter : @xmath14 popular methods for calculating the marginal likelihood are nested sampling @xcite and ideas related to thermodynamic integration ( e.g. * ? ? ? * ) . relationships between these methods are discussed by @xcite and @xcite . this traditional approach can be very time consuming . methods for calculating the marginal likelihood are already more intensive than standard mcmc methods for sampling the posterior , because they usually involve a sequence of probability distributions ( e.g. the constrained priors in nested sampling , or the annealed distributions in thermodynamic integration ) rather than a single distribution ( the posterior ) . this intensive process needs to be run many times , for @xmath15 , @xmath16 , @xmath17 , and so on . the traditional approach to inferring @xmath0 also contradicts fundamental ideas in bayesian computation . imagine we are trying to compute the posterior distribution for a parameter @xmath18 in the presence of a nuisance parameter @xmath19 . this is usually solved by exploring the joint posterior for @xmath18 and @xmath19 , and then only looking at the generated values of @xmath18 . nobody would suggest the wasteful alternative of using a discrete grid of possible @xmath18 values and doing an entire nested sampling run for each , to get the marginal likelihood as a function of @xmath18 . when the hypothesis space for @xmath18 is discrete , mcmc is still possible and there is no reason to switch to the wasteful alternative . 
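the bookkeeping behind this traditional approach is simple once the per-@xmath0 marginal likelihoods are in hand ; the sketch below combines a set of ( invented ) log - evidence values with a uniform prior over @xmath0 to obtain the posterior for @xmath0 . the point of the present paper is to avoid the separate runs , but the final arithmetic is the same .

```python
# Combine per-N marginal likelihoods (from separate fixed-N runs) with a prior
# over N, as in the "traditional approach" described above.  The log-evidence
# values are invented purely for illustration.
import numpy as np

log_Z = np.array([-310.0, -265.4, -250.1, -249.8, -250.3])   # Z_N for N = 0..4 (made up)
prior_N = np.full(log_Z.size, 1.0 / log_Z.size)              # uniform prior over N

log_post = np.log(prior_N) + log_Z
log_post -= log_post.max()            # subtract the max for numerical stability
post_N = np.exp(log_post)
post_N /= post_N.sum()

for n, p in enumerate(post_N):
    print(f"p(N = {n} | D) = {p:.3f}")
```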
trans - dimensional mcmc methods such as birth - death mcmc @xcite or the more general reversible jump mcmc @xcite treat the model dimension @xmath0 as just another model parameter . at fixed @xmath0 , standard techniques such as the metropolis algorithm can be used to explore the posterior distribution . additional moves that propose to change the value of @xmath0 are also defined . the simplest of these are birth - death moves . more complicated moves , such as split - and - merge , are possible but not always necessary . trans - dimensional mcmc is a natural tool for a wide range of astronomical data analysis problems ( e.g. * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) . in the exoplanet context , a birth move proposes to add one more planet to the model . the new planet 's properties ( period , amplitude , eccentricity , etc . ) are drawn from their prior distribution , which may depend on other model parameters or hyperparameters . the corresponding death move simply chooses a planet currently in the model , and removes it . the acceptance probability for these moves is 1 if we want to explore the prior . to implement these moves in nested sampling @xcite , where the target distribution is proportional to the prior but with a hard likelihood constraint , the acceptance probability is 1 if the proposed move satisfies the likelihood constraint , and 0 if it does not . a recent paper @xcite introduced a general approach to implementing trans - dimensional models within diffusive nested sampling @xcite , a general mcmc algorithm . the @xcite software predefines the metropolis proposals for exploring trans - dimensional target distributions , including when the prior for the properties of each model component ( i.e. each planet ) is defined hierarchically . it is well known that bayesian computation ( using mcmc for example ) can be difficult when the posterior distribution is multimodal or has strong dependencies between parameters . an uncommon and less well - known difficulty is the existence of _ phase transitions _ @xcite . imagine a high - dimensional unimodal posterior distribution that is composed of a broad , high volume but low density `` slab '' with a narrow , low volume but high density `` spike '' on top of it . an example is a mixture of two concentric high - dimensional gaussians with different widths . if you ran mcmc on such a posterior , it would be difficult to jump between the slab and the spike components . if the mcmc is currently in the spike region ( or phase ) it will be unable to escape : a proposed move into the slab will be rejected because of the ratio of densities . conversely , if the mcmc was in the slab region , it would be unlikely to go into the spike region , because its volume is so small : it would be very unlikely to _ propose _ to move into the spike . thus , the situation behaves much like a multimodal posterior , despite only being unimodal . if the slab contains a very small amount of posterior probability , it is not a problem if an mcmc algorithm spends all its time in the spike . however , this situation could still cause problems with the calculation of the marginal likelihood if annealing methods are used . the thermodynamic integral formula gives the log of the marginal likelihood @xmath12 as an average of log likelihoods , $\log Z = \int_0^1 \big\langle \log\left[ p(D \mid \theta) \right] \big\rangle_\beta \, d\beta$ , where the expectation is taken with respect to the distribution with `` inverse temperature '' @xmath21 , proportional to @xmath22 . 
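to make the formula above concrete , the sketch below estimates the log - evidence by thermodynamic integration on a toy one - parameter gaussian model where the tempered expectations can be sampled exactly , so the answer can be checked against the closed - form evidence . in a real problem the tempered averages would come from mcmc runs at each @xmath21 , which is exactly where the mixing failures discussed next become fatal .

```python
# Sketch of the thermodynamic integration formula above, on a toy model where
# everything is known exactly: prior theta ~ N(0, 10^2), a single datum x = 0
# with likelihood N(x | theta, 1).  The tempered posterior (prior * L^beta) is
# Gaussian, so <log L>_beta can be sampled directly; in a real problem these
# averages would come from MCMC runs at each beta.
import numpy as np

prior_sd, like_sd = 10.0, 1.0
rng = np.random.default_rng(1)

def mean_logL_at_beta(beta, n_samples=20000):
    var = 1.0 / (1.0 / prior_sd**2 + beta / like_sd**2)  # tempered posterior variance
    theta = rng.normal(0.0, np.sqrt(var), n_samples)
    logL = -0.5 * np.log(2 * np.pi * like_sd**2) - 0.5 * (theta / like_sd) ** 2
    return logL.mean()

# coarse uniform beta grid; a finer spacing near beta = 0 would improve the estimate
betas = np.linspace(0.0, 1.0, 101)
means = np.array([mean_logL_at_beta(b) for b in betas])
log_Z = np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(betas))  # trapezoid rule

exact = -0.5 * np.log(2 * np.pi * (prior_sd**2 + like_sd**2))    # closed-form evidence
print(f"thermodynamic integration: {log_Z:.3f}   exact: {exact:.3f}")
```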
even if the slab contains virtually zero probability when @xmath23 ( i.e. the posterior ) , for some values of the inverse temperature @xmath21 the slab and the spike will both be important . at these temperatures the mcmc will fail to mix ( it will incorrectly spend all its time in either the slab or the spike , rather than mixing between the two ) and will give a misleading estimate of the average log likelihood at that temperature and therefore an incorrect marginal likelihood estimate . phase transitions are well known in statistical mechanics , but can also occur in bayesian data analysis . typically , this occurs when the data contains a `` big '' effect which provides a lot of information about some parameters , and a `` small '' subtle effect as well . nested sampling , and variants such as dns , are not affected by phase transitions because the exploration only makes use of likelihood _ rankings _ , rather than likelihood values themselves , and are therefore invariant under monotonic transformations of the likelihood function . part of their output , the relationship between the likelihood @xmath24 and the enclosed prior mass $X(\ell) = \int_{L(\theta) > \ell} \pi(\theta) \, d^N\theta$ , can be used to diagnose whether the problem contains a phase transition . in particular , if the graph of @xmath26 vs. @xmath27 is convex at some point , then a phase transition exists @xcite . to fit a planet model to rv data , we need parameters to describe the properties of each planet . for simplicity , we describe each planet by five parameters : the orbital period @xmath28 , the semi - amplitude ( in metres per second ) of the rv signal @xmath29 , the phase of the signal @xmath30 ( defined such that @xmath31 gives an rv signal whose maximum is at @xmath32 ) , the eccentricity @xmath33 , and the `` viewing angle '' @xmath34 ( also known as the longitude of the line of sight ) . we defined our parameters such that in the limit of zero eccentricity , the rv signal of a planet reduces to @xmath35 . the unknown parameters are : @xmath36 where @xmath0 is the number of planets , @xmath37 are hyperparameters used to define the prior for the properties of the planets , and @xmath38 are the properties of planet @xmath39 . the parameter @xmath40 describes a dc offset in the data , and @xmath41 and @xmath1 are parameters of the noise distribution which are discussed further below . note that our parameter @xmath34 is standard ; however , @xmath30 is non - standard because we assert that @xmath42 always implies the signal is at its maximum at @xmath32 . our parameter space is equivalent to the standard one ; we are just using a different coordinate system . a standard assumption for the probability distribution of the data given the parameters ( known as the sampling distribution , which becomes the likelihood function when the dataset is known ) is a normal distribution with standard deviation @xmath43 known from the error bars in the data set . however , it is usually recommended to put in `` safety features '' , in case the data set contains any discrepant measurements , or in case the error bars in the data set are underestimated . to achieve this , we used a student-@xmath44 distribution instead of a normal distribution , with scale parameter @xmath45 and shape parameter @xmath1 . the parameter @xmath41 is an `` extra noise '' parameter that effectively increases the size of the error bars , and the shape parameter @xmath1 allows for heavier tails than a normal distribution . 
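a sketch of the robust sampling distribution just described : each residual is given a student-@xmath44 density whose scale is the quoted error bar combined with the extra noise parameter , and whose shape parameter controls the tail weight . the quadrature combination and the variable names are our reading of the text , not code from the paper 's implementation .

```python
# Student-t sampling distribution with an "extra noise" term added in
# quadrature to the quoted error bars.  Variable names (and the quadrature
# combination itself) are our reading of the text, not the paper's code.
import numpy as np
from scipy.stats import t as student_t

def log_likelihood(v_obs, v_model, sigma_quoted, extra_noise, nu):
    """Sum of Student-t log densities; large nu ~ Gaussian, nu = 1 ~ Cauchy."""
    scale = np.sqrt(sigma_quoted**2 + extra_noise**2)
    return np.sum(student_t.logpdf(v_obs - v_model, df=nu, loc=0.0, scale=scale))

# quick check: for huge nu the result approaches the Gaussian log-likelihood
resid = np.array([0.3, -1.2, 0.7])
sig = np.full(3, 1.0)
print(log_likelihood(resid, 0.0, sig, extra_noise=0.5, nu=1e6))
print(np.sum(-0.5 * np.log(2 * np.pi * (sig**2 + 0.25))
             - 0.5 * resid**2 / (sig**2 + 0.25)))
```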
if @xmath1 is large , the student-@xmath44 distribution is approximately a normal distribution , and if @xmath1 is small the noise distribution has much heavier tails . for instance , when @xmath46 the student-@xmath44 distribution becomes a cauchy distribution . all of the model assumptions are specified in detail in table [ tab : priors ] . we assigned hierarchical priors to some of the planets ' parameters ( i.e. , the prior for the planets ' parameters is defined conditional on some hyperparameters ) . this allows the model to capture the idea that knowing the values of some planet 's parameters provides some information about the parameters of another planet . not using hierarchical priors usually implies a strong prior commitment to the hypothesis that the properties of the planets are spread out across the whole domain of possible values , which is not necessarily the case . most of our priors were chosen to represent vague prior knowledge , rather than the judgement of an informed expert on extrasolar planets . uniform distributions were used for parameters such as phases , where time - translation symmetry seems plausible . for some parameters we assigned the distribution in terms of the log of the parameter , rather than the parameter itself , when the parameter is positive and uncertain by orders of magnitude . truncated cauchy distributions were used when there is a preferred value , but since these have very heavy tails , the assumption is quite fail - safe relative to other possible `` informative '' assignments such as normal distributions . for example , the prior for @xmath47 , the typical orbital period , is centered around 1 year but could be as low as @xmath48 years or as high as @xmath49 years , a very generous range . a uniform distribution for @xmath50 would have been more conventional , whereas the cauchy distribution expresses a slight preference for @xmath47 being of order one year . an apparently strange choice is the conditional prior for the logarithms of the orbital periods , which is a biexponential distribution given a location parameter @xmath47 and a scale parameter @xmath51 that determines the width of the distribution ( a biexponential distribution with location parameter @xmath47 and scale parameter @xmath52 has probability density function @xmath53 ) . rather than assigning independent priors to the log periods , the hierarchical model allows for the periods to `` cluster around '' a typical period @xmath47 if there is evidence for this . on the other hand , independent priors for the periods would imply a strong prior commitment to the hypothesis that the periods are spread out across the whole prior volume ( equivalent to assuming a fixed large value for @xmath51 ) . a more conventional choice for the conditional prior given @xmath47 and @xmath51 would have been a normal distribution . however , the @xcite software needs to know the corresponding cumulative distribution and its inverse , which are not available in closed form for the normal distribution . our prior for @xmath51 , which controls the diversity of the log - periods , was uniform between 0.1 and 3 , since it is unlikely that many planets have extremely similar or extremely different ( over several orders of magnitude ) orbital periods . for the velocity semi - amplitudes @xmath54 , we chose an exponential distribution given the hyperparameter @xmath55 which sets the mean of the exponential distribution . 
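drawing from the hierarchical prior described above looks schematically like the following . the laplace ( biexponential ) conditional for the log - periods , the uniform prior on the diversity parameter , and the exponential conditional for the semi - amplitudes follow the text ; the cauchy centre , width and truncation limits for the typical log - period are illustrative guesses , since the full prior table is not reproduced here , and the mean amplitude is passed in as a given value ( its own heavy - tailed hyperprior is described next ) .

```python
# Sketch of the hierarchical prior described above.  The Laplace (biexponential)
# and exponential conditionals follow the text; the Cauchy centre/width and
# truncation limits for the typical log-period are illustrative guesses, since
# the paper's prior table is not reproduced here.
import numpy as np
rng = np.random.default_rng(0)

def truncated_cauchy(loc, scale, low, high):
    while True:
        x = loc + scale * np.tan(np.pi * (rng.random() - 0.5))  # Cauchy quantile
        if low < x < high:
            return x

def draw_hyperparameters():
    mu = truncated_cauchy(np.log(365.25), 1.0, np.log(0.1), np.log(1e5))  # typical log-period
    s = rng.uniform(0.1, 3.0)                                             # diversity of log-periods
    return mu, s

def draw_planet(mu, s, mu_K):
    """mu_K is the mean amplitude; its heavy-tailed hyperprior is described in the text."""
    logP = rng.laplace(mu, s)            # biexponential prior for the log-period
    K = rng.exponential(mu_K)            # exponential prior for the semi-amplitude
    phi = rng.uniform(0.0, 2.0 * np.pi)  # uniform phase
    return {"P_days": float(np.exp(logP)), "K_ms": float(K), "phase": float(phi)}

mu, s = draw_hyperparameters()
print([draw_planet(mu, s, mu_K=3.0) for _ in range(3)])
```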
our prior for @xmath55 spans many orders of magnitude but expresses a slight preference for @xmath55 being of order unity , using a cauchy distribution . the prior for the semi - amplitudes will influence how many low amplitude planets will be inferred : if we believe there are many , and the data are uninformative about low amplitude planets , then the posterior distribution for @xmath0 will also indicate that there may be many low amplitude planets . however , their other properties , such as their orbital periods , will not be well determined . the beta prior for eccentricity was suggested by gregory ( priv . comm . ) and is an approximation to the inferred frequency distribution of eccentricities in the population @xcite . the expected ( noise - free ) signal due to an exoplanet is periodic , but non - sinusoidal when the orbit is not perfectly circular . the expected shape @xmath56 of the variations is needed in order to evaluate the likelihood function for any proposed setting of the parameters . to save time , we pre - computed the properties of orbits as a function of eccentricity . we also made the standard assumption that the planets do not interact , so the expected signal due to several planets is the sum of the contributions of each planet . consider a test particle moving in the @xmath57-@xmath58 plane under the influence of a point mass at the origin . the motion of the test particle represents the reflex motion of the host star orbiting around the center of mass of the system . the equations of motion for the particle are : @xmath59 where @xmath60 . the solutions to this system of equations are elliptical orbits with the focus at the origin . we set the initial position to @xmath61 , and the initial velocity to @xmath62 , where @xmath63 . if @xmath64 , the orbit is circular , and as @xmath65 decreases the orbit becomes more elliptical . for trial values of @xmath65 ranging from 0.4 to 1 in steps of 0.005 , we calculated the orbit , and saved the velocities @xmath66 and @xmath67 as a function of time to disk . these saved orbits were used as a lookup table for constructing the expected signal @xmath68 due to a single planet . because of the initial conditions , the simulated orbits were all horizontally aligned . if the observer is located on the @xmath57-axis a large distance from the origin , they will measure @xmath69 . however , if the observer is located at an angle @xmath70 with respect to the @xmath57-axis , then the radial velocity measured will instead be @xmath71 . since our orientation with respect to the orbits is unknown , each planet requires a `` viewing angle '' parameter @xmath70 , also known as the longitude of the line of sight @xcite . the eccentricity of the orbit , in terms of @xmath65 , is @xmath72 . by precomputing a set of orbits before running the mcmc , we are able to do @xmath73 15,000 likelihood evaluations per second per cpu core . to test our proposed methodology , we generated a simulated dataset for a system with @xmath74 planets . the dataset was `` inspired by '' the @xmath1 oph dataset ( section [ sec : nu_oph ] ) , and contains two large signals with periods of 530 and 3120 days , whose semi - amplitudes are 291 m s@xmath75 and 181 m s@xmath75 respectively . the other five planets have much lower semi - amplitudes , ranging from 4 - 30 m s@xmath75 . the standard deviation for the noise in the data was 5 m s@xmath75 , so some of these low - amplitude signals should be detectable . 
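before moving on to the tests , here is a sketch of the orbit pre - computation and viewing - angle projection described above . the integrator , step sizes and the coarser grid of initial speeds ( 0.1 rather than 0.005 ) are chosen only to keep the example fast , and the projection is written out explicitly as @xmath66 cos(@xmath70 ) + @xmath67 sin(@xmath70 ) , which follows from the geometry described in the text .

```python
# Pre-compute orbits on a grid of initial speeds v0 and use them as a lookup
# table; project the stored velocity components onto the line of sight with a
# viewing angle theta.  Integrator, step sizes and the coarse 0.1 grid (the
# paper uses steps of 0.005) are chosen only to keep the example fast.
import numpy as np

def integrate_orbit(v0, n_steps=20000, dt=0.001):
    """Leapfrog integration of x'' = -x / r^3 (G = M = 1 units).

    Initial position (1, 0), initial velocity (0, v0); for these initial
    conditions the eccentricity is 1 - v0**2 (for v0 <= 1).
    """
    pos = np.array([1.0, 0.0])
    vel = np.array([0.0, v0])
    vx, vy = np.empty(n_steps), np.empty(n_steps)
    for i in range(n_steps):
        acc = -pos / np.sum(pos**2) ** 1.5
        vel_half = vel + 0.5 * dt * acc
        pos = pos + dt * vel_half
        acc = -pos / np.sum(pos**2) ** 1.5
        vel = vel_half + 0.5 * dt * acc
        vx[i], vy[i] = vel
    return vx, vy

# lookup table over a grid of initial speeds between 0.4 and 1
lookup = {round(v0, 3): integrate_orbit(v0) for v0 in np.arange(0.4, 1.001, 0.1)}

def radial_velocity(v0, theta):
    """Line-of-sight velocity for viewing angle theta: vx*cos(theta) + vy*sin(theta)."""
    vx, vy = lookup[round(v0, 3)]
    return vx * np.cos(theta) + vy * np.sin(theta)

print(radial_velocity(0.7, theta=0.3)[:5])
```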
the simulated data are shown in figure [ fig : fake_data ] , along with the true radial velocity curve @xmath56 that was used to generate the data . simulated dataset inspired by the @xmath1 oph dataset , which we used to test our methodology . the dominant signal is from two large planets with periods of 530 and 3120 days and semi - amplitudes of 291 m s@xmath75 and 181 m s@xmath75 respectively . there are also five much smaller planets which contribute small additional effects to the data . [ fig : fake_data ] ] we ran our algorithm on the simulated dataset to obtain samples from the posterior distribution . we obtained 520 posterior samples . the posterior distribution for @xmath0 , the number of planets , is shown in figure [ fig : fake_data_n ] . the true number of planets , 7 , is not the most probable value , but it does have substantial probability . the posterior distribution suggests that @xmath0 could be anywhere from 6 to 10 . posterior distribution for the number of planets given the simulated dataset . the true number of planets was 7 . [ fig : fake_data_n ] ] the posterior distribution for the periods @xmath76 is shown in figure [ fig : fake_data_periods ] . because of the label switching degeneracy , the posterior distribution for each period is identical , so we pooled the samples for all periods . defining the log - periods by @xmath77 , figure [ fig : fake_data_periods ] is a monte carlo representation of the mixture distribution @xmath78 . if a certain period is accurately measured ( i.e. it appears in close to 100% of the posterior samples and the distribution for its period is very narrow ) then it will appear in figure 3 with a height of @xmath79 . if the uncertainty in the period is larger than the histogram bin width then the peak will be spread over several bins . the posterior distribution for the periods , shown in figure [ fig : fake_data_periods ] , shows that six of the true periods were recovered , with probability close to 1 . one period ( with @xmath80 ) which was actually present was not `` detected '' because it had a very small amplitude . we note that the posterior probability near this period should not be precisely zero . there is also some evidence for periods which did not actually exist ; however , the posterior probabilities for these peaks are not close to 1 . ] the joint posterior distribution for the periods and the amplitudes of the signals is shown in figure [ fig : fake_data_posterior ] along with the eccentricities . as with figure [ fig : fake_data_periods ] , the samples for all planets were combined . the true values are also plotted as circles . clearly , the reason the period of @xmath81 was not `` detected '' was that it had a very low amplitude of approximately 4 m s@xmath75 , which is below the noise level . ] the @xmath1 oph system is generally accepted to have two confirmed planets ( e.g. * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) , with periods of @xmath82 and @xmath83 days . to test our approach , we applied it to the rv data from @xcite . the posterior distribution for @xmath0 , the number of planets , is shown in figure [ fig : nu_oph_n ] , showing that @xmath0 could be anywhere from 2 to 10 , and the posterior probability that @xmath84 is about 88% . of course , these extra possible signals are not necessarily planets , but rather features in the data which are better explained by a periodic signal than by noise ( and may have been explained by correlated noise , had we included it ) . the posterior distribution for the logarithms of the periods is shown in figure [ fig : nu_oph_periods ] . 
as in section [ sec : fake_data ] , the posterior samples for all periods were combined to make this figure , which shows several prominent peaks . the two peaks with vertical dashed lines are the commonly accepted periods of 530 and 3190 days , and the other prominent peaks ( i.e. signals which have a moderate probability of existence ) have periods of 36.11 @xmath2 0.034 days , 75.58 @xmath2 0.80 days , and 1709 @xmath2 183 days . as with any mcmc output , if we are interested in the probability of any proposition @xmath85 ( for example , `` @xmath86 a planet exists with period between 35 and 37 days '' ) , we can calculate the proportion of the posterior samples for which @xmath85 is true , which ( if we have a lot of samples ) is a monte carlo estimate of the posterior probability of @xmath85 . for @xmath1 oph , we calculated the probability that at least one of these `` extra '' signals ( beyond the two commonly accepted ones ) exists to be 85% . given that they exist , their amplitudes are low , around 5 - 40 metres per second , which we note is above the noise level . an example model fit to the data is shown in figure [ fig : nuoph ] . posterior distribution for the number of planets orbiting @xmath1 oph . the posterior probability of @xmath87 is about 88% ; however , the prior probability of @xmath87 was already high due to the uniform prior for @xmath0 . [ fig : nu_oph_n ] ] posterior distribution for the log - periods in the @xmath1 oph system . the two dashed vertical lines are the commonly accepted periods of @xmath82 and @xmath83 days . the next most prominent peaks with well determined periods are at log periods of around 1.6 , 2.4 , and 3.3 , corresponding to periods of 36.11 @xmath2 0.034 days , 75.58 @xmath2 0.80 days , and 1709 @xmath2 183 days . [ fig : nu_oph_periods ] ] the @xmath1 oph radial velocity data and an example model fit which includes a third period . the amplitude of this additional signal is low but is about twice the reported error bars on the measurements . [ fig : nuoph ] ] likelihood versus enclosed prior mass for the @xmath1 oph analysis . there are several phase transitions ( concave - up regions ) present . the small one at @xmath88 separates models which contain additional signals from models that do not . without using nested sampling it would be more difficult to mix between these two situations and calculate the posterior probability for the existence of the additional signals . [ fig : logl0 ] ] figure [ fig : logl0 ] shows the relationship between the likelihood @xmath24 and the enclosed prior mass @xmath89 for the @xmath1 oph analysis . these plots are a standard output of nested sampling analyses , and provide insights into the structure of the problem . concave - up regions of this curve indicate phase transitions , which can cause severe problems for annealing - based methods , and sometimes even for sampling the posterior distribution . in this analysis , the models with a third signal exist to the left of the phase transition at @xmath88 , and models without the signal exist to the right of the phase transition . mixing between these two phases is crucial for accurately computing the posterior probability that extra signals exist . the marginal likelihood for our model was @xmath90 . 
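the `` fraction of posterior samples '' calculation used above for the 85% figure looks like this in sketch form ; the samples are invented for illustration , with each sample holding the periods of the planets present in that sample .

```python
# Monte Carlo estimate of the posterior probability of a proposition, as the
# fraction of posterior samples in which it holds.  The samples below are
# invented; each one lists the periods (days) of the planets in that sample.
import numpy as np

posterior_samples = [
    [530.1, 3185.0, 36.10],
    [529.8, 3190.2],
    [530.4, 3195.7, 36.12, 75.3],
    [530.0, 3188.9, 36.11],
]

def prob_of(proposition, samples):
    return float(np.mean([proposition(s) for s in samples]))

p = prob_of(lambda periods: any(35.0 < P < 37.0 for P in periods), posterior_samples)
print(f"P(a signal exists with period between 35 and 37 days) ~ {p:.2f}")
```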
nested sampling also allows for calculation of the `` information '' , or kullback - leibler divergence ( a quantity in information theory ) from the prior to the posterior , which quantifies how much we learned about the parameters : $\mathcal{H} = \int p(\theta \mid D) \, \log\!\left[ \frac{p(\theta \mid D)}{p(\theta)} \right] d\theta$ . an intuitive interpretation of this quantity is the number of times the prior distribution had to be compressed by a factor of @xmath92 ( if the logarithm in the formula is a natural logarithm ) to get to the posterior distribution . for the @xmath1 oph data the information was @xmath93 nats ( natural units ) or 111 bits , so the posterior occupies about @xmath94 times the prior volume . the red dwarf star gliese 581 is thought to host several planets . exactly how many is a matter of considerable debate . according to @xcite , there are two planets ( b and c , with periods 5.36 and 12.91 days respectively ) whose existence is generally accepted , two more ( d and e , with periods 66 and 3.15 days respectively ) whose existence was mostly accepted , and another two ( f and g , with periods 433 and 36.5 days respectively ) whose existence was generally doubted . however , @xcite found that planets d and g do not exist but are signals due to stellar activity . while our model can not account for stellar variability and therefore can not contribute to that particular discussion , it is a challenging and interesting dataset from an inference point of view . to run our code on the combined dataset from the harps and hires spectrographs , we extended the model to include separate dc offsets for each instrument , as well as separate `` extra noise '' parameters @xmath95 and @xmath1 . we also increased @xmath96 from 10 to 15 for this system . the posterior distribution for @xmath0 is shown in figure [ fig : gliese581_n ] , and shows strong evidence for at least eight periods ( @xmath97 ) . some authors recommend that the probability of @xmath0 planets should be @xmath98 150 times greater than the probability of @xmath99 planets existing before making a claim that @xmath0 planets have been definitively detected . such a decision rule is presumably equivalent to a utility function where false positives are much worse than false negatives . we note that , applying this rule to our results , we would assert @xmath100 , even though this has a very small posterior probability . it is now recognised that there are many possible sources for oscillations in a data set and not all such oscillations should be claimed as planets . our model can not distinguish between oscillations due to planets and oscillations due to stellar activity : any oscillations found in the dataset will be described as `` planets '' by the model . another important consideration is the physical stability of the orbital system , which is ignored by this type of analysis @xcite . however , it is interesting that we find many more signals in the data than previous authors . by inspecting the posterior distribution for the periods ( figure [ fig : gliese581_periods ] ) , we see that only 4 of the periods are well determined and have a posterior probability close to 1 ( i.e. they are present in all samples ) , corresponding to the known periods of gliese 581 b , c , d , and e. the other `` periods '' are more uncertain . as with @xmath1 oph , we can calculate the posterior probability of any hypothesis about gliese 581 by computing the fraction of the posterior samples that have that property . the posterior probabilities for planets b , c , d , and e , are close to 1 . 
the posterior probability a signal exists with @xmath101 is 88% , and the probability for a signal with @xmath102 is 85% . one possible explanation for the large number of inferred signals is that a non - sinusoidal signal due to stellar activity is being modelled as several periods ( e.g. as happens in asteroseismology when sinusoidal models are used ; * ? * ) , and if the model were extended to include a `` stochastic '' oscillation ( e.g. * ? ? ? * ) , the number of periods detected may be reduced substantially . another contributing factor is the prior for the amplitudes . with these kinds of models , the posterior distribution for @xmath0 can be influenced by the prior for the amplitudes @xmath29 . many authors assign independent broad priors to the amplitudes , and this causes the `` occam 's razor '' penalty for adding extra signals to be quite strong . since we use a hierarchical prior for the amplitudes , if some amplitudes are found to be low , @xmath55 will become small . when @xmath55 is small , it is likely that any extra signals will have small amplitudes , so the `` occam 's razor '' effect is weaker . an example model fit for gliese 581 is shown in figure [ fig : gliese581 ] . posterior distribution for the number of planets orbiting gliese 581 . [ fig : gliese581_n ] ] the marginal likelihood was @xmath103 and the information was @xmath104 nats . this compares favourably to the marginal likelihood of @xmath105 ( for a 6-planet model ) found by @xcite , although it is unclear whether we used exactly the same dataset . interestingly , the log - likelihood curve ( figure [ fig : logl ] ) shows this problem has two phase transitions . while these do not affect the posterior distribution ( as they did for @xmath1 oph ) , they would cause difficulties if we tried to calculate the marginal likelihood using annealing . the concave - up regions of the curve ( one of them near -45 ) correspond to phase transitions . thermal approaches to this problem would produce misleading estimates of the marginal likelihood because they would mix poorly at temperatures around 11 and 4 . [ fig : logl ] ] in this paper we introduced a trans - dimensional mcmc approach to inferring the number of planets @xmath0 in an exoplanetary system from radial velocity data . the mcmc was implemented using the framework of @xcite , which defines trans - dimensional birth and death moves , and does the sampling with respect to a nested sampling target distribution , rather than directly sampling the posterior . this approach allows us to compute the results in a single run , which provides posterior samples and an estimate of the marginal likelihood . by using diffusive nested sampling , instead of directly trying to sample the posterior distribution , we can overcome difficult features in the problem , such as phase transitions and ( to some extent ) multiple modes . we applied the code to two well - studied rv datasets , @xmath1 oph and gliese 581 . in @xmath1 oph , we found some evidence for additional signals with low amplitude , but with several possible solutions for their periods . given our modelling assumptions , the posterior probability that at least one of these additional signals is real is 85% . the posterior distribution contains models both with and without these additional signals ; however , these are separated by a phase transition . therefore , mixing between the two situations would be infrequent if we simply tried to sample the posterior distribution . 
with the combined hires+harps dataset from gliese 581 , we found evidence for a large number of `` planets '' , although only four have well determined periods , corresponding to the gliese 581 b , c , d , and e. since our model does not include any possibility of stellar variability , any such periodic signals will be attributed to `` planets '' . including non - planetary stellar variability is a crucial next step . it is a pleasure to thank fengji hou and david hogg ( nyu ) for inspiring me ( bjb ) to finally work on this problem , and phil gregory ( ubc ) for writing so many interesting papers on it . we also thank tom loredo ( cornell ) and dan foreman - mackey ( nyu ) for interesting conversations and feedback , and ben montet ( caltech ) and geraint lewis ( sydney ) for helpful discussions . the referee also provided excellent suggestions for improving the manuscript .
inferring the number of planets @xmath0 in an exoplanetary system from radial velocity ( rv ) data is a challenging task . recently , it has become clear that rv data can contain periodic signals due to stellar activity , which can be difficult to distinguish from planetary signals . however , even doing the inference under a given set of simplifying assumptions ( e.g. no stellar activity ) can be difficult . it is common for the posterior distribution for the planet parameters , such as orbital periods , to be multimodal and to have other awkward features . in addition , when @xmath0 is unknown , the marginal likelihood ( or evidence ) as a function of @xmath0 is required . rather than doing separate runs with different trial values of @xmath0 , we propose an alternative approach using a trans - dimensional markov chain monte carlo method within nested sampling . the posterior distribution for @xmath0 can be obtained with a single run . we apply the method to @xmath1 oph and gliese 581 , finding moderate evidence for additional signals in @xmath1 oph with periods of 36.11 @xmath2 0.034 days , 75.58 @xmath2 0.80 days , and 1709 @xmath2 183 days ; the posterior probability that at least one of these exists is 85% . the results also suggest gliese 581 hosts many ( 7 - 15 ) `` planets '' ( or other causes of periodic signals ) , but only 4 - 6 have well determined periods . the analysis of both of these datasets shows that phase transitions exist which are difficult to negotiate without nested sampling . [ firstpage ] stars : planetary systems ; techniques : radial velocities ; methods : data analysis ; methods : statistical
The Federation of Sovereign Indigenous Nations continues to face "a state of crisis" after a sixth girl became the most recent suicide in northern Saskatchewan in less than a month. "This is heartbreaking and shocking," said Federation of Sovereign Indigenous Nations vice-chief Kimberly Jonathan. "Our youth ought to be planning their future and celebrating their successes; instead, there's despair and hopelessness." On Sunday, a 13-year-old girl from La Ronge, Sask., took her own life. Earlier in October, three girls aged 12 to 14 from Stanley Mission, Sask., and La Ronge also killed themselves in the span of four days. A week later, a 10-year-old girl in Deschambault Lake, Sask., took her own life. Then last Friday, a 13-year-old girl killed herself on the Makwa Sahgaiehcan First Nation in Saskatchewan. "They're not just statistics," said Jonathan. "Our little girls are dying. It isn't about this being No. 6." Jonathan said she had been talking to a number of Indigenous and non-Indigenous leaders from across the country Monday. Many expressed shock and sadness over this spate of suicides, she said. The heartache, though, is mixed with frustration. FSIN vice-chief Kimberly Jonathan and her daughter. Jonathan says as a mother of three girls she's horrified at what's taking place in northern Saskatchewan. (Submitted by Kimberly Jonathan) "It's more than the pit-of-my-stomach anger," she said. "The pit-of-my-soul pain. As a life-giver of three Indigenous girls, I just cannot fathom having to write another proposal for help." Jonathan said she doesn't know what more to do at this point. She said she's tired of Indigenous people being treated like beggars, having to plead their case for help in the midst of a crisis. She's once again calling on Prime Minister Justin Trudeau to visit northern Saskatchewan and provide the necessary support. "Condolences: Thank you for them," said Jonathan. "We need action. We need to see resources that our leadership have been asking for years." More than anything, Jonathan is stressing the importance of this being a provincial and national issue. She is calling on people everywhere to be a part of action that makes elected officials step up. "We don't want photo [opportunities], we don't want pretty speeches," she said. "Pretty speeches are not going to save our children." Education director responds Northern Lights School Division education director Ken Ladouceur said teachers and students in these affected communities are being given all the support they need right now. "Words escape you," he said. "Our hearts are breaking for the parents, families and Indigenous people everywhere." This school division is not new to tragedy. Most recently, Ladouceur helped guide staff and students through the school shooting in La Loche. Now, Ladouceur is trying to be a leader in the face of yet another tragedy. People came together in La Ronge, Sask. for a candlelight vigil in memory of three young girls. (Don Somers/CBC) "We are no stranger to suicide within our schools and across our Indigenous populations in the north," he said. "It is something we are always aware of and trying to support as much as we can." Ladouceur knows more work can be done, though. "Prevention programs are in all of our schools," he said. "The age of these students tells us we can't put enough interventions and support in for these youth." Staff and administration are working with local health districts to provide all the help they can. 
Ladouceur said he knows how difficult this is on the teachers right now. "The students are as close to them as their own family." Leaders speak out "Research and experience shows that the connection between youth suicide and the autonomy of Indigenous communities, working on reconciliation and empowering those communities is a large part of that solution," said Buckley Belanger, MLA for Athabasca. Belanger also took issue with comments made earlier in the year by health minister Jim Reiter — when he was the minister responsible for First Nations, Métis and northern affairs — and said the government would look at the Truth and Reconciliation Commission's calls to action that made sense and could be done quickly. Belanger mentioned the years of work which went into research, interviews and consultations before the final report was released. "They were not done so provincial ministers could decide what made sense to them," Belanger said. "If this government really isn't willing to listen, if they aren't willing to work with the Indigenous communities, if they are only going to do what is quick and easy for them, then how does this government expect anything to change?" NDP Opposition leader Trent Wotherspoon said the supports offered to northern communities after the first three youths took their own lives haven't been enough. He mentioned long-standing inequities and inadequacies in the north. "We've got a sixth suicide," he said. "What we're doing just isn't working. The supports just haven't been there." Wotherspoon said long-term commitments need to be made to address issues such as addictions and housing. "We've got a real shortfall to make up for in the long-term." He said it takes resources to bolster basic things such as evening programs, and to continue working with northern leadership, providing the resources to help healing. "This is unspeakably tragic," said Premier Brad Wall. Wall said suicide prevention strategies have been developing in collaboration with school divisions and health regions. "Obviously we need to continue to do more," he said. Wall said the government is looking at all options to address the issue, noting the pattern of all six lives lost being young girls. "Everything's on the table. It's an all-of-the-above approach we need to take for this because we just can't afford to lose any young girls, or any young people period," he said. MP Georgina Jolibois called on the federal government to address the immediate needs of Indigenous mental health in northern communities. "The government needs to end the Band-Aid strategy and commit to a culturally appropriate long-term approach to mental wellness," Jolibois said during Monday's question period in the House of Commons. "How much louder do our kids need to be?" ||||| The first order of business for Saskatchewan's new children's advocate will be to address the unfolding suicide crisis in the north. Premier Brad Wall rose in the Legislature on Monday to say he supports Corey O'Soup — who starts his job Nov. 1 — focussing first on "the plight of northern youth." The announcement came a day after yet another young person took her life in La Ronge over the weekend — becoming the sixth young girl in northern Saskatchewan to do so in a month. Five other girls between the ages of 10 and 14 from the communities of La Ronge, Stanley Mission, Deschambault Lake and Loon Lake also committed suicide in October. "This is unspeakably tragic," Wall told media. 
“Each one of these losses — and to have them one after the other — it has the undivided attention of northern leaders, it has the undivided attention of our northern health region, of government offices in social services and in justice and every elected person in the house on either side.” Wall told the Legislature many initiatives are underway in the north to assist people struggling with depression and suicidal thoughts. Most recently, regional emergency operations centres were established in La Ronge and Stanley Mission in mid-October. NDP Leader Trent Wotherspoon said more needs to be done. “That is important, but it’s not the entire solution,” Wotherspoon said in the Legislature. “There are long-standing inequities and inadequacies too often dismissed by government that require immediate, long-term action to address this epidemic through education, through justice, through community health and recreation, through the economy.” Speaking with reporters following Question Period, Wall said his cabinet has been discussing what more can be done. Among the ideas being floated is bringing self-esteem workshops — which are already offered for girls in southern Saskatchewan — to the north. “Everything is on the table. It’s an all-of-the-above approach we need to take on this because we just can’t afford to lose any young girls or any young people, period, to this,” he said. Wall said he plans to visit La Ronge this month to offer support and hear about what’s working and what’s still needed. — With files from D.C. Fraser [email protected] Twitter.com/MsAndreaHill
– "Our little girls are dying," Kimberly Jonathan, vice-chief of the Federation of Sovereign Indigenous Nations, tells the CBC. On Sunday, a 13-year-old girl killed herself in an Indigenous community in northern Canada, the Canadian Press reports. She was the sixth Indigenous girl to commit suicide in the province of Saskatchewan last month. The other five girls were between the ages of 10 and 14. "Our youth ought to be planning their future and celebrating their successes; instead, there's despair and hopelessness," Jonathan tells CBC. The deaths have left politicians and advocates scrambling to find answers. “There are long-standing inequities and inadequacies too often dismissed by government that require immediate, long-term action," Trent Wotherspoon, leader of the Saskatchewan New Democratic Party, tells the Saskatoon Star Phoenix. He says Indigenous communities need everything from better classrooms to improved addiction services. A new children's advocate starting Tuesday in Saskatchewan will make "the plight of northern youth" a focus. And there is talk about launching self-esteem workshops for girls in northern parts of the province. Prime Minister Justin Trudeau has said the federal government will work with Indigenous communities to solve this problem. (One Indigenous community in Ontario had 11 suicide attempts in a single day.)
SECTION 1. SHORT TITLE. This Act may be cited as the ``Comprehensive Wildlife Disease Testing Acceleration Act of 2002''. SEC. 2. DEFINITIONS. In this Act: (1) Chronic wasting disease.--The term ``chronic wasting disease'' means the animal disease that afflicts deer and elk-- (A) that is a transmissible disease of the nervous system resulting in distinctive lesions in the brain; and (B) that belongs to the group of diseases-- (i) that is known as transmissible spongiform encephalopathies; and (ii) that includes scrapie, bovine spongiform encephalopathy, and Creutzfeldt-Jakob disease. (2) Epizootic hemorrhagic disease.--The term ``epizootic hemorrhagic disease'' means the animal disease afflicting deer and other wild ruminants-- (A) that is an insect-borne transmissible viral disease; and (B) that results in spontaneous hemorrhaging in the muscles and organs of the afflicted animals. (3) Secretary.--The term ``Secretary'' means the Secretary of Agriculture. (4) Task force.--The term ``Task Force'' means the Interagency Task Force on Epizootic Hemorrhagic Disease established by section 4(a). SEC. 3. CHRONIC WASTING DISEASE SAMPLING GUIDELINES AND TESTING PROTOCOL. (a) Sampling Guidelines.-- (1) In general.--Not later than 30 days after the date of enactment of this Act, the Secretary shall issue guidelines for the collection of animal tissue by Federal, State, tribal, and local agencies for testing for chronic wasting disease. (2) Requirements.--Guidelines issued under paragraph (1) shall-- (A) include procedures for the stabilization of tissue samples for transport to a laboratory for assessment; and (B) be updated as the Secretary determines to be appropriate. (b) Testing Protocol.--Not later than 30 days after the date of enactment of this Act, the Secretary shall issue a protocol to be used in the laboratory assessment of samples of animal tissue that may be contaminated with chronic wasting disease. (c) Laboratory Certification and Inspection Program.-- (1) In general.--Not later than 30 days after the date of enactment of this Act, the Secretary shall establish a program for the certification and inspection of Federal and non-Federal laboratories (including private laboratories) under which the Secretary shall authorize laboratories certified under the program to conduct tests for chronic wasting disease. (2) Verification.--In carrying out the program established under paragraph (1), the Secretary may require that the results of any tests conducted by private laboratories shall be verified by Federal laboratories. (d) Development of New Tests.--Not later than 45 days after the date of enactment of this Act, the Secretary shall accelerate research into-- (1) the development of animal tests for chronic wasting disease, including-- (A) tests for live animals; and (B) field diagnostic tests; and (2) the development of testing protocols that reduce laboratory test processing time. SEC. 4. INTERAGENCY TASK FORCE ON EPIZOOTIC HEMORRHAGIC DISEASE. (a) In General.--There is established a Federal interagency task force to be known as the ``Interagency Task Force on Epizootic Hemorrhagic Disease'' to coordinate activities to prevent the outbreak of epizootic hemorrhagic disease and related diseases in the United States.
(b) Membership.--The Task Force shall be composed of-- (1) the Secretary, who shall serve as the chairperson of the Task Force; (2) the Secretary of the Interior; (3) the Secretary of Commerce; (4) the Secretary of Health and Human Services; (5) the Secretary of the Treasury; (6) the Commissioner of Food and Drugs; (7) the Director of the National Institutes of Health; (8) the Director of the Centers for Disease Control and Prevention; (9) the Commissioner of Customs; and (10) the heads of any other Federal agencies that the President determines to be appropriate. (c) Report.--Not later than 60 days after the date of enactment of this Act, the Task Force shall submit to Congress a report that-- (1) describes any activities that are being carried out, or that will be carried out, to prevent-- (A) the outbreak of epizootic hemorrhagic disease and related diseases in the United States; and (B) the spread or transmission of epizootic hemorrhagic disease and related diseases to dairy cattle or other livestock; and (2) includes recommendations for-- (A) legislation that should be enacted or regulations that should be promulgated to prevent the outbreak of epizootic hemorrhagic disease and related diseases in the United States; and (B) coordination of the surveillance of and diagnostic testing for epizootic hemorrhagic disease, chronic wasting disease, and related diseases. SEC. 5. FUNDING. To carry out this Act, the Secretary may use funds made available to the Secretary for administrative purposes.
Comprehensive Wildlife Disease Testing Acceleration Act of 2002 - Directs the Secretary of Agriculture, with respect to chronic wasting disease (a disease affecting deer and elk), to: (1) issue guidelines for animal tissue collecting and laboratory testing; (2) establish a laboratory certification and inspection program; and (3) accelerate testing research. Establishes the Interagency Task Force on Epizootic Hemorrhagic Disease to coordinate epizootic hemorrhagic disease (a disease affecting deer and other wild ruminants) prevention activities.
we use micropatterning and strain engineering to encapsulate single living mammalian cells into transparent tubular architectures consisting of three - dimensional ( 3d ) rolled - up nanomembranes . by using optical microscopy , we demonstrate that these structures are suitable for the scrutiny of cellular dynamics within confined 3d - microenvironments . we show that spatial confinement of mitotic mammalian cells inside tubular architectures can perturb metaphase plate formation , delay mitotic progression , and cause chromosomal instability in both a transformed and nontransformed human cell line . these findings could provide important clues into how spatial constraints dictate cellular behavior and function .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Burt Lake Band of Ottawa and Chippewa Indians Reaffirmation Act''. SEC. 2. FINDINGS. Congress finds as follows: (1) The members of the Burt Lake Band of Ottawa and Chippewa Indians, whose historic name is the Cheboigan (or Cheboygan) Band, are descendants and political successors to signatories of the 1836 Treaty of Washington and the 1855 Treaty of Detroit. The Band was twice recognized by the United States, on a government-to-government relationship basis, through the execution and ratification of those treaties. (2) The 1836 Treaty of Washington provided that the Cheboigan Band would receive a reservation of 1,000 acres on the Cheboigan, within its aboriginal territory, but the United States failed to provide that reservation. The 1855 Treaty of Detroit provided for the withdrawal of unsold lands in 2 Michigan townships 35 North and 36 North Range 3 West for the use of the Cheboygan Band, but due to the Federal Government's failure to act, those members who selected allotments within that area were not awarded those individual land holdings until 3 years after a special Act of Congress was passed in 1872. (3) Between 1845 and 1850 the Band's members used treaty annuity payments to purchase land for the Band in Burt Township, Cheboygan County, Michigan. That land, called Colonial Point, was placed in trust with the Governor of Michigan on the advice of Federal Indian agents. (4) During the next 50 years, questions arose regarding the taxability of the property, and the acreage was ultimately sold for back taxes in 1900. (5) After the Band was forcibly evicted from Colonial Point and its village was burned to the ground by its new owner, John McGinn, the majority of the Band's families took up residency on nearby Indian Road on lands which other Band members had purchased or received as treaty allotments or homesteads. (6) In 1911, the United States filed suit in the United States Federal District Court for Eastern Michigan seeking to regain possession of the Colonial Point Lands (United States v. McGinn, Equity No. 94, filed June 11, 1911). In its complaint, the United States advised the Court that it was suing on behalf of the: ``Cheboygan band of Indians [which] is now and was at all the times mentioned in this bill of complaint a tribe of indians [sic] under the care, control, and guardianship of the plaintiff and said band is now and was at all times mentioned in this bill of complaint recognized by the plaintiff through its chiefs or head men which it annually elects.''. (7) In 1917, the Federal District Court decided the McGinn case against the United States finding that the language in the Colonial Point deeds did not prevent the Colonial Point land from being taxed. (8) Over the next 20 years, members of the Band asked the United States to appeal, or otherwise rectify the District Court's decision, but no Federal action was taken. Throughout this period, the United States continued to provide the Band and its members with many of the same Federal services that were being provided to other Indian tribes in Michigan. (9) The Act of June 18, 1934 (hereafter in this Act referred to as the ``Indian Reorganization Act''), authorized and directed the Bureau of Indian Affairs to provide technical assistance and Federal funds to petitioning tribes to assist them in reorganizing their governments and improving their economies. 
Members of the Cheboigan Band, as well as members of other landless treaty Tribes in Michigan, submitted petitions to receive that assistance. Similar petitions were also submitted by 4 Michigan bands that still held communal lands. Possession of a tribal land base was a prerequisite to the receipt of most of the Federal funds and services provided for in the Indian Reorganization Act. (10) While the Indian Reorganization Act directed the Secretary to assist landless bands, like Burt Lake, and authorized Federal funds to acquire land for landless tribes, no Federal funds were appropriated to acquire new tribal lands for any of the landless bands in Michigan. After struggling with this dilemma, the Bureau of Indian Affairs extended the benefits of the Indian Reorganization Act to only those 4 Michigan tribes that had an existing land base on the date of the enactment of the Indian Reorganization Act. Of the Ottawa and Chippewa Tribes who signed the 1836 and 1855 Treaties, only 1 group, the Bay Mills Indian Community, was reaffirmed. (11) The failure of the Bureau of Indian Affairs to grant Indian Reorganization Act benefits to the Cheboigan Band did not terminate the band's government-to-government relationship with the United States, and Congress has never taken any action to terminate the Federal acknowledgment of the Burt Lake Band. (12) The Bureau of Indian Affairs lacked and lacks the legal authority to terminate a tribe that has been acknowledged by an Act of Congress. (13) In recent years, the Federal recognition of the following Michigan tribes, who were also denied the benefits of the Indian Reorganization Act, has been reaffirmed: (A) The Sault Ste. Marie Tribe of Chippewa was reaffirmed by a Memorandum of the Commissioner of Indian Affairs on September 7, 1972. (B) The Grand Traverse Band of Ottawa and Chippewa Indians was reaffirmed by the Bureau of Indian Affairs Branch of Acknowledgment on May 27, 1980. (C) The Little Traverse Bay Bands of Odawa Indians and the Little River Band of Ottawa Indians each had its Federal status reaffirmed by an Act of Congress on September 21, 1994. (D) The Lac Vieux Desert Band of Lake Superior Chippewa Indians had its Federal status reaffirmed by an Act of Congress at the request of the Administration on September 8, 1988. (E) The Pokagon Indian Nation had its Federal status reaffirmed by an Act of Congress on September 21, 1994. (F) The Huron Potawatomi Nation had its Federal status reaffirmed by the Bureau of Indian Affairs' Branch of Acknowledgment and Research on March 17, 1996. (G) The Gun Lake Tribe (Match-She-Be-Nash-She-Wish) had its Federal status reaffirmed by the Bureau of Indian Affairs' Office of Federal Acknowledgment on August 23, 1999. (14) The Band has been consistently recognized by third parties as a distinct Indian community since well before 1900. (15) All of the Band's adult members are the children, grandchildren, or great grandchildren of Indian persons who resided on or near Colonial Point or Indian Road at the time of the Burn Out. Most of the Band's adult members grew up on or near Indian Road or had an immediate family member who did. As a result, the Band's members have maintained very close social and political ties.
(16) The Band's families have provided and continue to provide mutual aid to each other, visit each other regularly, mobilize to assist each other in times of need, practice traditional arts and crafts, gather for Ghost Suppers, decorate the graves of their ancestors, and participate in other traditional tribal ceremonies and events. (17) Since 1829 the Band's members have attended and consistently mobilized to maintain the Indian Mission Church of St. Mary's, first on Colonial Point and later on Indian Road. The Band's members have also worked together to maintain the Tribe's 2 Indian cemeteries. They have also dug the graves and buried their relatives in those 2 Indian cemeteries for almost 200 years. (18) The Band's members have throughout time made formal and informal decisions for the community. The Band has also organized its own modern tribal government without the assistance of the Bureau of Indian Affairs. (19) The majority of the Band's elders have a high degree of Indian blood and continue to speak the Ottawa language when they gather with each other. Before World War II, more than 50 percent of the Burt Lake families were still speaking the traditional language in their homes, and more than 50 percent of those tribal members who were married were married to other Ottawa and Chippewa individuals. SEC. 3. DEFINITIONS. For purposes of this Act-- (1) the term ``Band'' or ``Tribe'' means the Burt Lake Band of Ottawa and Chippewa Indians which was previously called the Cheboigan or Cheboygan Band of Ottawa and Chippewa Indians; (2) the term ``Burn Out'' means the destruction of the Colonial Point Indian Village of the Burt Lake Band in 1900; (3) the term ``OFA'' means the Office of Federal Acknowledgment, a Branch of the United States Department of the Interior's Bureau of Indian Affairs; and (4) the term ``Secretary'' means the Secretary of the Interior. SEC. 4. FEDERAL RECOGNITION. (a) Federal Recognition.--Federal recognition of the Burt Lake Band of Ottawa and Chippewa Indians is hereby reaffirmed. All laws and regulations of the United States of general application to Indians or nations, tribes, or bands of Indians including the Act of June 18, 1934 (25 U.S.C. 461 et seq., commonly referred to as the ``Indian Reorganization Act''), which are inconsistent with any specific provision of this Act shall not be applicable to the Band and its members. (b) Federal Services and Benefits.-- (1) In general.--Notwithstanding any other provision of law, after the date of the enactment of this Act, the Band and its members shall be eligible for all services and benefits provided by the Federal Government to Indians because of their status as federally recognized Indians without regard to the existence of a reservation or the location of the residence of any member on or near any Indian reservation. (2) Service area.--For purposes of the delivery of Federal services to the enrolled members of the Band and to other Indians, all of Cheboygan County, Michigan, and any area in the State of Michigan that is outside of Cheboygan County, but located within 25 miles of the Tribe's Cemetery at the St. Mary's Indian Mission Church, shall be deemed to be within the Service Area of the Burt Lake Band. Nothing contained herein shall prohibit the Federal Government from providing services to members of the Band who reside or are domiciled outside this Service Area, or from otherwise expanding the Band's Service Area in compliance with applicable Federal law and policy.
If any part of the Band's service area overlaps with the service area of another federally recognized Indian tribe, that overlap shall be addressed in compliance with existing Federal policies and regulations. SEC. 5. REAFFIRMATION OF RIGHTS. (a) In General.--All rights and privileges of the Band and its members, which may have been abrogated or diminished before the date of the enactment of this Act are hereby reaffirmed. (b) Existing Rights of Tribe.--Nothing in this Act shall be construed to diminish any right or privilege of the Band or of its members that existed before the date of the enactment of this Act. Except as otherwise specifically provided in any other provision of this Act, nothing in this Act shall be construed as altering or affecting any legal or equitable claim the Band may have to enforce any right or privilege reserved by or granted to the Band which was wrongfully denied to or taken from the Band before the enactment of this Act. SEC. 6. TRIBAL LANDS. The Secretary shall acquire real property in Cheboygan County in trust for the benefit of the Burt Lake Band of Ottawa and Chippewa Indians, if at the time of such acceptance by the Secretary, there are no adverse legal claims on such property including outstanding liens, mortgages or taxes owed. Such lands shall become part of the initial reservation of the Band at the request of the Band. The Secretary is also authorized to acquire and accept real property in other geographic areas into trust for the benefit of the Band and to declare those lands to be a part of the Band's Reservation or Initial Reservation to the full extent otherwise authorized by applicable law. SEC. 7. MEMBERSHIP. (a) In General.--Membership in the Burt Lake Band of Ottawa and Chippewa Indians shall consist of persons who can present evidence, acceptable to the Tribe, showing that they meet the requirements of subsection (b), and persons who meet such other requirements as are specified by the Tribe in the Tribe's Constitution and Enrollment Ordinance as the same may be from time to time amended. (b) Membership Criteria.-- (1) To qualify for membership in the Burt Lake Band of Ottawa and Chippewa Indians, a person must be able to demonstrate through evidence acceptable to the Tribe that the person meets at least one of the following requirements: (A) The person descends from one or more tribal members who were domiciled at Colonial Point, Burt Township, Cheboygan County, Michigan before or at the time that the Tribe's village was burned in October 1900, as said tribal members are identified in the United States v. McGinn litigation and related documents, and/or the 1950 Albert Shananaquet list of Colonial Point Residents. (B) The person descends from one or more tribal members who are listed on the 1900 and/or the 1910 Burt Lake Township Federal Census, Indian Enumeration Schedule. (C) The person has an Indian ancestor who was, prior to 1910, living in tribal relations with the Burt Lake Band of Ottawa and Chippewa Indians as the Burt Lake Band is defined in this Act. (D) The person descends from Rose Midwagon Moses. (2) In addition to the requirements under paragraph (1), to qualify for membership in the Burt Lake Band of Ottawa and Chippewa Indians, a person must be able to demonstrate through evidence acceptable to the Tribe that the person meets all of the following criteria: (A) That the person is in tribal relations with other Burt Lake Band members.
(B) That the person's ancestors have lived in tribal relations with other Burt Lake Band members on a substantially continuous basis from 1910 to the present. (C) That the person has a completed tribal membership enrollment file as prescribed by the Tribal Enrollment Ordinance. (D) That the person's membership application has been processed and that the person has been approved for membership in the Burt Lake Band in the manner prescribed by the Tribal Enrollment Ordinance. (c) Base Roll.--The base roll of the Burt Lake Band of Ottawa and Chippewa Indians shall consist of the 320 persons whose names were listed on the official roll of the Burt Lake Band which was submitted by the Band to the Bureau of Indian Affairs' Office of Federal Acknowledgment on May 2, 2005, and shall also include the biological sons and daughters who were born to those members between the submission of that list and the enactment of this Act. SEC. 8. CONSTITUTION. The initial Constitution of the Burt Lake Band of Ottawa and Chippewa Indians shall be the Constitution which the Band submitted to the Bureau of Indian Affairs' Office of Federal Acknowledgment on May 2, 2005.
Burt Lake Band of Ottawa and Chippewa Indians Reaffirmation Act - Reaffirms federal recognition and the rights and privileges of the Burt Lake Band of Ottawa and Chippewa Indians (Cheboigan or Cheboygan Band, in Michigan). Entitles such Band to the federal services and benefits provided to recognized Indians. Provides for lands to be acquired and held in trust for the Band by the Secretary of the Interior.
macrophage inflammatory protein-1α ( mip-1α , also known as ccl3 ) is a member of the cc chemokine family . mip-1α cdna was originally cloned from lipopolysaccharide ( lps)-activated raw264.7 mouse macrophage cells as a gene encoding an endogenous inflammatory mediator . ccl3 and related cc chemokines such as ccl4 and ccl5 are classified as inflammatory chemokines because of their ability to induce chemotactic mobilization of monocyte - lineage cells and lymphocytes into inflammatory tissues . ccl3 also regulates the proliferation of hematopoietic stem / progenitor cells ( hspcs ) in the bone marrow ( bm ) . however , the contribution of endogenous ccl3 in the bm to normal physiologic hematopoiesis is poorly understood . peripheral blood cells are continuously produced in adulthood by differentiation from a limited number of hematopoietic stem cells ( hscs ) present in the bm . the hsc pool in bm is stably maintained by an intricate balance between differentiation , self - renewal , and reversible quiescent cell cycle arrest of hscs . during the 1970s and 1980s , regulation of the steady - state quiescent status of hscs was proposed to involve an unknown regulatory factor present in the bm microenvironment termed a stem cell inhibitor ( sci ) . immediately after the cloning of ccl3 cdna , graham et al . observed that culture supernatant of the j774.2 macrophage cell line contained a factor exhibiting sci - like activity against colony formation of bm primitive cells . subsequent studies revealed that ccl3 could reversibly inhibit colony formation and proliferation of hspcs both in vitro and in vivo . intriguingly , ccl3 inhibits the proliferation of primitive progenitor cells but activates the proliferation of more mature progenitor cells . moreover , ccl3 can maintain a quiescent status in hscs by blocking cell cycle entry , thereby exhibiting a myeloprotective effect against cell cycle - specific anticancer drugs . furthermore , administration of a high dose of ccl3 rapidly induces mobilization of mouse and human hspcs from bm to the peripheral blood . however , the bm of ccl3-deficient mice does not exhibit any obvious hematopoietic abnormalities , and a major cellular source of ccl3 in steady - state bm has not yet been identified . thus , the precise regulatory functions of ccl3 in hematopoiesis under physiologic and various pathologic conditions remain elusive . chemokines execute their biologic activities through binding to their corresponding receptors , which are g - protein coupled receptors ( gpcrs ) with seven - span transmembrane ( 7-tm ) portions . human and mouse ccl3 bind to ccr1 , ccr5 , and d6 receptors , and mouse ccl3 can additionally bind to ccr3 . among these receptors , the one responsible for the sci activity of ccl3 has not been identified ; moreover , bm cells derived from ccr1- , ccr3- , ccr5- , and d6-deficient mice do not exhibit reduced sci activity in vitro . further investigation of the molecular mechanisms underlying ccl3-mediated sci activity , in particular the identity of the receptor involved in this process , is required . leukemia is a hematopoietic neoplasm arising from neoplastic transformation of hspcs and is assumed to involve oligoclonal or heteroclonal cells . expansion of leukemia cells can arise from a small number of specialized leukemia cells called leukemia initiating cells ( lics ) . furthermore , lics exhibit phenotypes similar to those of normal hspcs , such as self - renewal and cellular quiescence .
the control of lics is therefore presumed to involve cellular and molecular mechanisms similar to those regulating normal hspcs . thus , ccl3 , which has biologic activities in normal hspc function , might also have a role in leukemogenesis . in this review article , we discuss the roles of ccl3 in leukemogenesis . more than 90% of cases of chronic myeloid leukemia ( cml ) are associated with the presence of the philadelphia chromosome that arises from a reciprocal translocation between chromosomes 9 and 22 . this chromosomal translocation results in the formation of a breakpoint cluster region and a constitutively activated tyrosine kinase , the bcr - abl fusion protein . bcr - abl is a pathognomonic protein for cml and its expression transforms hscs into lics whose maintenance in bm is indispensable for cml leukemogenesis . cml lics share characteristic capabilities with normal hscs , including self - renewal and cellular quiescence . however , in contrast to normal hscs , lics are resistant to the sci activity of ccl3 during in vitro proliferation . wark et al . revealed that forced activation of the abl tyrosine kinase in a multipotent stem cell line directly represses the ccl3-mediated increase in cytosolic ca2+ concentration . in contrast , this treatment did not affect expression of the ccl3 receptor or its affinity for ccl3 . consistent with these findings , other independent groups have reported that ccl3 receptors are expressed at similar levels in normal and cml progenitor cells . based on this observation , it has been proposed that this abl tyrosine kinase - mediated unresponsiveness to ccl3 contributes to the preferential expansion of cml lics in the leukemic bm microenvironment . in contrast , we and other groups have observed decreased expression of ccl3 receptors , especially ccr5 , in cml progenitor cells . thus , it remains controversial whether the unresponsiveness of lics to ccl3 arises from decreased ccl3 receptor expression and/or functionality . nevertheless , ccl3 exposure can preferentially induce quiescent cell cycle arrest in normal hspcs but not in lics . moreover , pretreatment of normal hspcs with ccl3 confers resistance to cell cycle - specific anti - leukemia drugs such as cytosine arabinoside ( ara - c ) , whereas cytotoxicity is preserved in lics . thus , combined administration of ccl3 with antileukemia drugs was tested as an approach to selectively kill cml cells before the advent of molecularly targeted drugs such as the tyrosine kinase inhibitor imatinib . in the early stages of cml a small number of lics coexists with a large number of normal hematopoietic cells but over time the lics gradually accumulate and eventually predominate in the limited space of the bm microenvironment . several lines of evidence indicate a crucial role of ccl3 in the initiation and progression of cml . zhang et al . demonstrated that cml cells produce high levels of ccl3 , which , in combination with other cytokines and chemokines , might confer a growth advantage to lics over normal hspcs . schepers et al . further demonstrated that ccl3 induces remodeling of the leukemia niche cell to effectively support lic proliferation . we observed that bcr - abl+ lineage- c - kit+ immature leukemia cells are a main source of ccl3 . moreover , ccl3 can induce the mobilization of normal hspcs from bm to peripheral blood and promotes the maintenance of lics in bm of cml mice . furthermore , ccl3-mediated maintenance of lics was also observed in the setting of recurrence after cessation of imatinib treatment .
conversely , normal hspcs can directly impede the maintenance of lics in bm when the cml cells lack ccl3 or the normal hspcs lack ccl3-binding receptors including ccr1 and ccr5 . thus , ccl3 might act to expel normal ccr1- or ccr5-expressing hspcs from the bm , making space available for lic expansion . it was previously assumed that the outgrowth of acute myeloid leukemia ( aml ) cells is initiated and maintained from lics with properties similar to those of normal hscs , as in cml . based on detailed investigation of bm samples of human aml patients , goardon et al . recently demonstrated that a large number of patients with aml exhibit expansion of cells with lic potential that are differentiated from hscs and resemble lymphoid - primed multipotential progenitors or granulocyte - macrophage progenitors . additionally , there is accumulating evidence that lics are present among cd34 aml blast cells in human patients with aml . cd34 aml cells are resistant to the sci activity of ccl3 when cultured in the presence of ccl3 , although the underlying molecular mechanism remains elusive . moreover , as in cml , ccl3 is produced abundantly in the bm of aml patients and contributes to remodeling of the microenvironment . it has been proposed that leukemia cell - derived ccl3 inhibits osteoblastic cell functions , thus accelerating the disruption of the normal bm microenvironment and hematopoiesis . in the nup98-hoxd13 - mediated mouse aml model , the proto - oncogene meis1 accelerates aml development and induces ccl3 expression , which is at least partially responsible for the intra - bm survival of lics by potentiating their repopulation capacity . moreover , binding of the homeodomain of meis1 to the regulatory sequence of the ccl3 gene is required for ccl3 expression . however , the detailed molecular and cellular mechanisms by which ccl3 contributes to aml pathophysiology remain unknown . chronic lymphocytic leukemia ( cll ) is a lymphoproliferative disorder in which neoplastic cd5+ b cells clonally expand in the peripheral blood , secondary lymphoid tissues , and bm . burger et al . recently demonstrated that cll b cells produce high levels of ccl3 and its related chemokine ccl4 in co - culture with cd68+ nurselike cells and after b - cell receptor stimulation . they further revealed that the plasma ccl3 level is a reliable prognostic marker in cll patients . concomitantly , zucchetto et al . demonstrated that cd38+cd49d+ cll cells selectively and aberrantly express ccl3 through interaction with cd38 and cd31 expressed on stromal cells in the bm microenvironment . the recruited cells produce inflammatory factors including tnf-α that eventually activate stromal cells to express vcam-1 , a ligand for cd49d that delivers pro - survival signals to cll cells through its interaction with cd49d . thus , ccl3 can promote establishment of the leukemia niche , which is essential for cll cell survival . the above experimental and clinical evidence indicates that ccl3 is crucially involved in multiple pathophysiologic processes of leukemogenesis in various types of leukemia . these processes include preferential proliferation of lics , expulsion of normal hspcs , and/or establishment of a leukemia - adapted niche in the bm ( fig . 1 ) . moreover , high levels of ccl3 expression are observed in other malignant hematopoietic neoplasms , such as adult t - cell leukemia and multiple myeloma .
myeloma cell - derived ccl3 can support the proliferation of myeloma cells directly or indirectly through the accelerated formation of osteoclasts , which can provide a pro - myeloma niche within the bm . thus , ccl3 overexpression may have an important role in the progression of hematopoietic malignancies in general , although malignant transformation of each hematologic disease also involves individual and diverse intrinsic events . given the crucial role of ccl3 in the leukemic bm microenvironment , we assume that therapeutic blockade of ccl3 activity in leukemia patients would correct the bm microenvironment and eventually inhibit the dominant proliferation of lics through effects on normal bm cells , rather than through direct killing of lics .
fig . 1 . function of ccl3 in the leukemic bone marrow microenvironment . in the leukemic bm microenvironment ( blue shaded area ) , ccl3 can induce multiple processes that support the dominant proliferation of leukemia cells : ( 1 ) conversion of normal niche cells to leukemia - adapted cells ; ( 2 ) selective inhibition of normal hspcs ; ( 3 ) mobilization of normal hspcs from bm . abbreviations : bm , bone marrow ; hspc , hematopoietic stem / progenitor cell .
the development of antineoplastic drugs , including antileukemic drugs , requires identification of causative oncogenes and appropriate molecular targets . however , leukemia cells , especially lics , crucially depend on the appropriate microenvironment for expansion in the bm , particularly during the initiation phase or chemotherapy - induced remission , when only a small number of lics is present in the bm . moreover , the creation of a favorable niche for normal hspcs may create a disadvantage for growth of leukemia cells . thus , in parallel with the pursuit of molecular targeted therapy , it is necessary to investigate the pathophysiologic roles of endogenous mediators such as ccl3 that can profoundly affect the proleukemic niche . this may lead to the development of novel antileukemic therapies that supplement molecular targeted therapy .
the biologic function of the cc chemokine macrophage inflammatory protein-1α ( mip-1α/ccl3 ) has been extensively studied since its initial identification as a macrophage - derived inflammatory mediator . in addition to its proinflammatory activities , ccl3 negatively regulates the proliferation of hematopoietic stem / progenitor cells ( hspcs ) . on the basis of this unique function , ccl3 is alternatively referred to as a stem cell inhibitor . this property has prompted many researchers to investigate the effects of ccl3 on normal physiologic hematopoiesis and pathophysiologic processes of hematopoietic malignancies . consequently , there is accumulating evidence supporting a crucial involvement of ccl3 in the pathophysiology of several types of leukemia arising from neoplastic transformation of hspcs . in this review we discuss the roles of ccl3 in leukemogenesis and its potential value as a target in a novel therapeutic strategy for the treatment of leukemia .
Aerosmith’s Steven Tyler has again sent President Trump a cease-and-desist letter for using the band’s music without permission at a political rally. In 2015, the singer’s legal team warned the then-Republican presidential candidate over his use of “Dream On” on the campaign trail. Three years later, a Trump rally Tuesday at West Virginia’s Charleston Civic Center featured Aerosmith’s 1993 hit “Livin’ on the Edge,” resulting in another cease-and-desist letter from Tyler, Variety reports. “By using ‘Livin’ On The Edge’ without our client’s permission, Mr. Trump is falsely implying that our client, once again, endorses his campaign and/or his presidency, as evidenced by actual confusion seen from the reactions of our client’s fans all over social media,” the cease-and-desist letter stated. “This specifically violates Section 43 of the Lanham Act, as it ‘is likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection, or association of such person with another person.'” The letter also notes that Trump’s team has ignored the previous cease-and-desist from 2015, making this latest violation a “willful infringement.” “What makes this violation even more egregious is that Mr. Trump’s use of our client’s music was previously shut down, not once, but two times, during his campaign for presidency in 2015,” the letter continued. The scene in WV before Trump’s rally. Aerosmith’s “Livin’ on the edge” playing. pic.twitter.com/HW1qr9TBgE — Jim Acosta (@Acosta) August 21, 2018 Tyler is a co-writer on “Livin’ on the Edge” alongside band mate Joe Perry and songwriter Mark Hudson. Like in 2015, it is Tyler – and not Aerosmith – taking legal action against Trump: Both Perry and drummer Joey Kramer are avowed Republicans, with Kramer especially vocal about his support for Trump. On Monday, the night before the West Virginia rally, Aerosmith performed alongside Post Malone at the MTV Video Music Awards. The band will begin a Las Vegas residency in April 2019. ||||| Aerosmith singer Steven Tyler is demanding President Donald Trump stop using the band’s songs at rallies, like the one held at the Charleston Civic Center in West Virginia on Tuesday (August 21). The band’s 1993 hit “Livin’ on the Edge” was played as Trump devotees entered the venue, which has a capacity of 13,500. Tyler has in turn sent a “cease and desist” letter through his attorney Dina LaPolt to the White House accusing the President of willful infringement in broadcasting the song, which was written by Tyler, Joe Perry and Mark Hudson. Citing the Lanham Act, which prohibits “any false designation or misleading description or representation of fact … likely to cause confusion … as to the affiliation, connection, or association of such person with another person,” Tyler’s attorney contends that playing an Aerosmith song in a public arena gives the false impression that Tyler is endorsing Trump’s presidency. The matter has come up previously with another Aerosmith song, “Dream On,” which Trump used during his 2015 election campaign. Following a similar letter stating, “Trump for President needs our client’s express written permission in order to use his music” and that the campaign “was violating Mr. Tyler’s copyright,” BMI drove the point home and pulled the public performance rights for the song. Public performance rights for “Livin’ on the Edge” are administered by ASCAP.
During the rally, President Trump spoke about immigration, trade and politics, peppered with his usual banter about Special Counsel Robert Mueller’s investigation into Russian interference in the 2016 election. Earlier in the day, Michael Cohen, Trump’s longtime former personal attorney, pleaded guilty to eight criminal counts in federal court on Tuesday, including campaign finance violations related to payments made to women who claim to have had affairs with Trump. Paul Manafort, Trump’s former campaign chairman, was also found guilty Tuesday on eight of 18 counts in his federal trial over fraud charges. The case involved work Manafort did on behalf of a pro-Russian government in Ukraine. Shortly after the verdicts were announced, President Trump told reporters: “I feel badly for Paul Manafort” and called him “a good man.” On Sunday, Aerosmith was among the top-billed acts on the 2018 MTV Video Music Awards, joining Post Malone and 21 Savage for the show’s closing performance, a medley of “Dream On” and “Toys in the Attic.” Read portions of Tyler’s letter to the White House below: It has come to our attention that President Donald J. Trump and/or The Trump Organization (collectively, “Mr. Trump”) have been using our client’s song “Livin’ On The Edge” in connection with political rally events (the “Rallies”), including at an event held yesterday at the Charleston Civic Center in Charleston, West Virginia on August 21, 2018. As expressly outlined in the Previous Letters, Mr. Trump does not have our client’s permission to use any of our client’s music, including “Livin’ On The Edge”. What makes this violation even more egregious is that Mr. Trump’s use of our client’s music was previously shut down, not once, but two times, during his campaign for presidency in 2015. Please see the Previous Letters sent on behalf of our client attached here as Exhibit A. Due to your receipt of the Previous Letters, such conduct is clearly willful, subjecting Mr. Trump to the maximum penalty under the law. As we have made clear numerous times, Mr. Trump is creating the false impression that our client has given his consent for the use of his music, and even that he endorses the presidency of Mr. Trump. By using “Livin’ On The Edge” without our client’s permission, Mr. Trump is falsely implying that our client, once again, endorses his campaign and/or his presidency, as evidenced by actual confusion seen from the reactions of our client’s fans all over social media. This specifically violates Section 43 of the Lanham Act, as it “is likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection, or association of such person with another person.” Further, as we have also made clear, Mr. Trump needs our client’s express written permission in order to use his music. We demanded Mr. Tyler’s public performance societies terminate their licenses with you in 2015 in connection with “Dream On” and any other musical compositions written or co-written by Mr. Tyler. As such, we are unaware of any remaining public performance license still in existence which grants Mr. Trump the right to use his music in connection with the Rallies or any other purpose. If Mr. Trump has any such license, please forward it to our attention immediately. In addition, Mr. Tyler’s voice is easily recognizable and central to his identity, and any use thereof wrongfully misappropriates his rights of publicity. Mr.
Trump does not have any right to use the name, image, voice or likeness of our client, without his express written permission.
– Steven Tyler is not President Trump's biggest fan. Case in point: The Aerosmith frontman's attorney has sent a cease-and-desist letter to the White House demanding that Trump stop using the band's song "Livin' on the Edge" at rallies, Variety reports. Trump's team had just blared the 1993 Aerosmith song at a Trump rally at West Virginia's Charleston Civic Center on Tuesday (as captured in this video tweeted by CNN's Jim Acosta). "Mr. Trump is creating the false impression that our client has given his consent for the use of his music, and even that he endorses the presidency of Mr. Trump," the letter reads. "... What makes this violation even more egregious is that Mr. Trump’s use of our client’s music was previously shut down, not once, but two times, during his campaign for presidency in 2015," the letter adds. Indeed, it's not the first such cease-and-desist from Tyler to Trump: Tyler's legal team told the then-candidate not to use the song "Dream On" while campaigning, Rolling Stone reports. Yet Tyler appears to be alone behind the letters. His "Livin' on the Edge" co-writer, lead guitarist Joe Perry, and bandmate Joey Kramer are avid Republicans.